ADC and DMA

13 Mar 2011

Another question: Is it possible to set the PLL clock to 65 MHz without losing the timers, USB, etc.?

At 65 MHz (or 130 MHz) you can time the DMA to exact 5 µs multiples, because a conversion takes 65 cycles. OK, this would sacrifice some processing power, but it would free the CPU from interrupt overhead.

Personally I would still like to use the DMA, as it's silent and runs in the background without interrupt overhead etc. The 96 MHz clock and the 65 cycles per conversion just make the DMA unusable from a timing perspective.
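To put numbers on that: the LPC17xx ADC takes 65 ADC-clock cycles per conversion, and the ADC clock must stay at or below 13 MHz (CLKDIV in AD0CR divides the ADC's PCLK). A small sketch of the arithmetic, assuming for illustration that the ADC's PCLK equals CCLK (helper names are mine):

```c
#include <stdint.h>

/* One LPC17xx A/D conversion takes 65 ADC clocks; the ADC clock
   tops out at 13 MHz.  CLKDIV in AD0CR divides PCLK by (CLKDIV + 1). */
enum { ADC_CYCLES = 65, ADC_CLK_MAX_HZ = 13000000 };

/* Smallest CLKDIV value that keeps the ADC clock legal for a given PCLK. */
static uint32_t adc_clkdiv(uint32_t pclk_hz)
{
    return (pclk_hz + ADC_CLK_MAX_HZ - 1) / ADC_CLK_MAX_HZ - 1;
}

/* Conversion time in nanoseconds for that divider setting. */
static uint32_t adc_conv_ns(uint32_t pclk_hz, uint32_t clkdiv)
{
    uint32_t adclk_hz = pclk_hz / (clkdiv + 1);
    return (uint32_t)(ADC_CYCLES * 1000000000ULL / adclk_hz);
}
```

At 65 MHz the divider lands on exactly 13 MHz and a 5000 ns conversion; at 96 MHz the best legal ADC clock is 12 MHz, giving roughly 5.42 µs, which is why the timing never comes out as a round figure.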

13 Mar 2011

From what I see in the manual it would involve using PLL1 to drive the 48 MHz USB clock. After that, reprogramming PLL0 should be possible (if the CPU does not grind to a halt).

I'm not sure what drives the CPU (if PLL0 is involved) and what happens to the USB connection if you swap clock sources.

13 Mar 2011

Page 597

Note: It is recommended that software and hardware peripheral requests are not used at the same time.

So you can start a burst with software, but then the peripheral isn't allowed to signal the GPDMA, and a transfer width of one is required. The DMA would just burst at maximum speed regardless of whether a conversion has actually completed or not. Result: garbage data.

Imho the information regarding DMACSoftBReq and DMACSync is a little too vague. And regarding DMACSoftBReq, I think you are confusing DMA bursting with ADC bursting. They aren't the same thing.

Anyway, see this post. It demos ADC->GPDMA triggered via TIMER1. I think this is ultimately what you want, and it's about the fastest you are going to get. If it's still not good enough you'll have to play with the init code a little; you may be able to tweak a bit more out of it.

But, at the end of the day, if this isn't fast enough for your application then it's time to sit down with your client (is that you as well?) and discuss the app specification. If the required acquisition speed exceeds what the hardware can deliver, you have to do one of two things: relax the spec, or use different hardware (such as a pair of ADC chips on SPI, allowing simultaneous sampling). There's a trade-off to be made somewhere.

13 Mar 2011

Altering the PLL will probably break the Mbed libraries. I'm not even sure you can do it, as the Mbed libs set up the PLL, and I believe (I can't remember where I saw this) that once you hit main() you can no longer change the PLL.

13 Mar 2011

It's not the speed I'm worried about (I need 1 ms multiples), but nested interrupts causing timing problems. I'm old-fashioned and like XTAL-stable timing, ADC conversions and nice round timing figures.

I have several alternatives to investigate now (DMA/TIMER0/RIT). But my app needs to do more than ADC conversions: it needs to drive two CAN-bus-connected motion controllers, control two heaters with a PID loop, and do some other stuff.

I'm thinking of moving the PID loops from Timer0 to the RIT interrupt, as they are not that demanding on timing. Then I have a higher-priority Timer0 or Timer1 dedicated to the ADC conversions. Logging the sysclock alongside the samples will give me enough info on timing (early results show Timer0 is remarkably stable). Enough to think about...

Btw, in the BLDC library PLL0 is stopped and restarted, suggesting it's not connected to anything. This is part of a workaround for one of the errata.

I'll have to dump the PLL and clock registers to see how things work (a side effect of not having the mbed lib sources, I guess). It will be a nice small project!

Btw thanks for clearing up the DMA riddle.

13 Mar 2011

Hi Wim,

Sounds like a cool project. Be sure to get some photos put up (and a nice write up too, it's always great to see what people are up to)!

Regarding the ADC->GPDMA example triggered via TIMER1 that I did, I can give you some timings from that example.

The TIMER1 ISR takes 8 µs to set up a DMA transfer and initiate a burst of 3 reads from the ADC using burst mode. For the DMA to grab the conversions and store them (i.e. the time before the GPDMA makes a callback to let you know you have 3 results) it takes 16 µs.

In order to ensure you grab your data within the 1 ms requirement, you would need to give TIMER1 the highest priority of all interrupts. I would suggest the GPDMA then gets the second highest. You could move the PID to the GPDMA callback (I am assuming the PID is driven by the ADC samples) and do away with the need for a separate ISR.
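For reference, the match value for such a periodic TIMER1 tick follows directly from the PCLK and the prescaler; here is a sketch of the arithmetic (the function name and the clock values in the examples are mine, not from the posted code):

```c
#include <stdint.h>

/* Match value for a periodic timer interrupt: with prescaler PR the
   timer counts at pclk/(PR+1), and MR0 must be one count less than the
   period because the count restarts at zero on the match. */
static uint32_t timer_match_for_us(uint32_t pclk_hz, uint32_t pr,
                                   uint32_t period_us)
{
    uint32_t tick_hz = pclk_hz / (pr + 1);
    return (uint32_t)((uint64_t)tick_hz * period_us / 1000000u) - 1u;
}
```

With a 24 MHz PCLK and a prescaler of 23, for example, the timer ticks at 1 MHz and a 1 ms period needs MR0 = 999.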

13 Mar 2011

Hi Andy,

Andy Kirkham wrote:

Altering the PLL will probably break the Mbed libraries. I'm not even sure you can do it, as the Mbed libs set up the PLL, and I believe (I can't remember where I saw this) that once you hit main() you can no longer change the PLL.

Did you read this code? It seems like it's possible to over- and underclock the mbed and even get the peripherals functioning again.

http://mbed.org/users/no2chem/notebook/mbed-clock-control--benchmarks/

13 Mar 2011

Andy Kirkham wrote:

Regarding the ADC->GPDMA example triggered via TIMER1 that I did, I can give you some timings from that example.

You don't happen to have a sample lying around that triggers the ADC from the pretty much unused match/capture timers?

As for the project, I hope it stays cool. The CAN bus worries me a bit, but I intend to make the mbed 'transparent' to messages and have it just relay stuff from the PC to the motion controllers. I did that years ago for Omron temperature controllers and it worked just fine. Besides, that way you can add features without altering the embedded software, so easy flashing does not mean flashing every week with an update.

The previous controller still runs its original firmware (and that was difficult to flash, so we tried to avoid it).

13 Mar 2011

Wim van der Vegt wrote:

You don't happen to have a sample lying around that triggers the ADC from the pretty much unused match/capture timers?

Not sure what you are asking. That example used a TIMER1 match compare to trigger the ADC->GPDMA process every second. It's just a case of editing #define SAMPLE_PERIOD 1000000 to #define SAMPLE_PERIOD 1000 to make it repeat every millisecond. Or am I misunderstanding?

[Edit: btw, the example I did triggers an ADC burst via GPDMA from a Timer1 match compare, but it can in fact be triggered from any sort of interrupt - GPIO, capture, etc.]

14 Mar 2011

Hi Andy

I missed the line

LPC_TIM1->MCR

So you use the TIMER1 interrupt to set up a small DMA transfer.
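That MCR line is what arms the periodic interrupt. For what it's worth, the match-control bits for MR0 on the LPC17xx timers are:

```c
#include <stdint.h>

/* TIMER MCR bits for match register 0 (UM10360): interrupt on match,
   reset the counter on match, stop the timer on match. */
enum {
    MR0I = 1u << 0,  /* raise an interrupt when TC == MR0 */
    MR0R = 1u << 1,  /* reset TC to zero when TC == MR0   */
    MR0S = 1u << 2   /* stop the timer when TC == MR0     */
};

/* Interrupt + reset gives a free-running periodic tick;
   on target this value would be written to LPC_TIM1->MCR. */
static uint32_t periodic_mcr(void)
{
    return MR0I | MR0R;
}
```

Interrupt plus reset (value 3) gives the repeating tick; adding MR0S instead would make it one-shot.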

Small question: why is adBuffer dimensioned as NUM_OF_SAMPLES * 3 (i.e. 6)?

I'll print the code and walk it line by line (and try it). I do not mind that the timing is shifted (the measurement is not that absolute; I only need the sample interval to be constant, the rest I can compensate for).

Thanks (I'm already starting to wonder what you will cook up next...)!

14 Mar 2011

Wim van der Vegt wrote:

Small question: why is adBuffer dimensioned as NUM_OF_SAMPLES * 3 (i.e. 6)?

Oh, looks like a typo (a hangover from messing about)! The "* 3" isn't required.

As for what's next? Sending a double-buffered scheme to the DAC (like an MP3 player, with the buffers flushed to the DAC via DMA at 44.1 kHz). Doing it that way leaves the CPU plenty of time to fill the alternate buffer.
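The bookkeeping for such a ping-pong scheme can be sketched host-side; the buffer size and names below are made up, and the DMA/DAC side is only described in the comments:

```c
#include <stdint.h>

#define HALF 64  /* samples per half-buffer; size is illustrative */

/* Ping-pong scheme: while the DMA drains one half toward the DAC, the
   CPU fills the other half; on the DMA terminal-count callback the
   halves swap.  This is a sketch of the bookkeeping only. */
typedef struct {
    uint16_t buf[2][HALF];
    int playing;              /* index of the half the DMA owns */
} pingpong_t;

/* Called from the (hypothetical) GPDMA TC callback: hand the freshly
   filled half to the DMA and return the half the CPU may now refill. */
static uint16_t *pingpong_swap(pingpong_t *p)
{
    p->playing ^= 1;
    return p->buf[p->playing ^ 1];
}
```

At 44.1 kHz a 64-sample half lasts about 1.45 ms, so the CPU has that long to refill the idle half before the next swap.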

14 Mar 2011

So, the next question about shady regions in the manual: ADC/DMA/LLs?

What do you put in the linked list when you use one with the ADC? The channel data registers AD0DR0..7? How else does it know which channels to pick (see page 581 of user.manual.lpc17xx.pdf)?

Don't DMA modes like p2m just mean that one address is incremented (memory) and the other isn't (the peripheral)?

When the DMA uses the global ADC data register (which is what creates the slip), it reads that one register a number of times. But with a linked list it can't (my guess), and maybe you can then start using the slip-less channel data registers (i.e. AD0DR0..7) in the LL. The DMA would even clear the DONE bit (just by reading).

I'm also starting to get the impression that the burst-mode requirement is only caused by triggering the ADC from the DMA. But if you can run the DMA itself with an LL on the AD0DR0..7 registers, you could use it to cherry-pick samples from a continuous stream if you let the ADC run free at a much higher rate. Then what is the difference between AD0DR0..7 and GPIO (both are just memory addresses)?

Looking at the LL stuff, it also seems (at least I get that impression from p. 613 of user.manual.lpc17xx.pdf) like a nice way to get rid of the interleaved ADC samples (from burst DMA) and store everything in separate arrays.

14 Mar 2011

Using LLs on a per-ADC-channel basis via the ADDRx registers sounds like a possible solution to the channel-information slippage problem. Just set the length to 1 and have an LL item for each of the remaining channels, each also with a length of 1. That would also mean you have a buffer per channel (that's how you identify it), and the buffer length can simply be 1. I'll have a play with that.

The idea of a continuous stream is more of a problem, however. The GPDMA still requires a terminal count; it can't run infinitely with the last LL pointing back to a previous one. So DMA transfers themselves must always be bursts (you can make them appear like a seamless stream by starting a new transfer when an old one finishes, much the same way as the sine-wave example I did). [Edit: actually, after re-reading the manual, it looks like you can stream with an LL forever.]
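The descriptor layout the GPDMA walks can be sketched like this. The AD0DRx addresses, the per-channel buffers and the ring option are my assumptions, to be checked against UM10360; on a host the next field is a plain pointer, whereas the hardware reads it as a 32-bit register word:

```c
#include <stdint.h>

/* PL080-style GPDMA linked-list item as the LPC17xx fetches it:
   source, destination, next-item pointer, control word. */
typedef struct lli {
    uint32_t src;        /* e.g. an AD0DRx register address     */
    uint32_t dst;        /* per-channel result buffer           */
    struct lli *next;    /* 0 terminates the chain              */
    uint32_t control;    /* transfer size in the low 12 bits    */
} lli_t;

#define NCHAN 3

/* One 1-word item per ADC channel, each landing in its own buffer, so
   the channel is identified by position and the slip problem goes away.
   Linking the last item back to the first would make the stream endless. */
static void build_chain(lli_t lli[NCHAN], uint32_t dst[NCHAN], int ring)
{
    for (int i = 0; i < NCHAN; i++) {
        lli[i].src     = 0x40034010u + 4u * (uint32_t)i; /* AD0DR0..2 (assumed) */
        lli[i].dst     = (uint32_t)(uintptr_t)&dst[i];
        lli[i].next    = (i + 1 < NCHAN) ? &lli[i + 1]
                                         : (ring ? &lli[0] : 0);
        lli[i].control = 1;                              /* one transfer */
    }
}
```

With ring set, the chain never reaches a terminal count, which matches the edit above about streaming with an LL forever.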

14 Mar 2011

Hi

Isn't the GPDMA terminal count determined by setting the transfer size to (a multiple of) the number of channels in the LL? But when doing an m2m transfer I'm under the impression it does one item at a time; this would rule out this approach for more than a single set of conversions (it would do all conversions for ADC1, then for ADC2, etc.).

So my guess is that you would have to kick the DMA into life using small transfers. The 'DMA Software Burst Request register (DMACSoftBReq - 0x5000 4020)' sounds like the one to use in a Timer/Ticker or ADC interrupt (and I think you do not have to configure/reload a new DMA request, as the memory addresses should keep increasing). There are two sets of registers for this (each set, Single & Burst, has one register with 'Last' in its name).

You could use the TC interrupt to start the next transfer.
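A sketch of that kick, mocked up host-side so it can be tried without hardware. I'm assuming the ADC sits on GPDMA request line 4; check the request-mux table in UM10360 before relying on that:

```c
#include <stdint.h>

/* Host-side stand-in for DMACSoftBReq (0x5000 4020): writing bit n
   issues a software burst request on GPDMA request line n. */
static uint32_t mock_softbreq;

enum { DMA_REQ_ADC = 4 };   /* assumed ADC request line, per UM10360 */

/* Called from a Timer/Ticker ISR: kick one burst.  The channel's
   address counters keep advancing on their own, so nothing needs
   reloading until the terminal count fires and a new transfer is
   set up.  On target this would write LPC_GPDMA->DMACSoftBReq. */
static void kick_adc_burst(void)
{
    mock_softbreq = 1u << DMA_REQ_ADC;
}
```

On real hardware the DMACSoftBReq write and the TC-interrupt restart together give the "appear continuous" behaviour discussed above.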