More on ADC burst mode issues

15 Mar 2011

This post is an extension of the forum thread titled “ADC and DMA” http://mbed.org/forum/mbed/topic/1798/?page=1. That thread discussed the anomalous behavior of the ADC when read in burst mode, particularly with respect to DMA operation.

Similar issues show up in interrupt or polled operation of the ADC in burst mode. This posting discusses the issue in that context.

Recap of the Problem

The problem, as reported by user Wim van der Vegt, is that one cannot rely on correct association of channel number and data value when data is read from the ADGDR “global data register” in burst mode.

Andy Kirkham confirmed that the values read via DMA were off by one index count. For example, the first data retrieved should have been channel 0, but was identified as channel 1 by the CHN bits. The ADC result value, however, was correct – it matched the voltage being applied to Input 0 of the ADC.

After further investigation, Andy concluded that the CHN bits were not latched, and thus were showing the currently-converting channel information.

One suggested work-around was to switch to an interrupt-based handler, and have it read the specific ADDR “data register” for each of the burst channels. This avoids the problem with the ADGDR.
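For reference, here is a minimal sketch of that work-around on the LPC1768, assuming the CMSIS LPC17xx headers and that burst conversion of channels 0–2 has already been configured; the setup function name is illustrative:

```cpp
#include "LPC17xx.h"

volatile uint16_t adc_result[3];

// Reading the per-channel ADDRn registers sidesteps the ADGDR CHN race.
extern "C" void ADC_IRQHandler(void) {
    uint32_t stat = LPC_ADC->ADSTAT;   // per-channel DONE flags, bits 7:0
    if (stat & (1 << 0)) adc_result[0] = (LPC_ADC->ADDR0 >> 4) & 0xFFF;
    if (stat & (1 << 1)) adc_result[1] = (LPC_ADC->ADDR1 >> 4) & 0xFFF;
    if (stat & (1 << 2)) adc_result[2] = (LPC_ADC->ADDR2 >> 4) & 0xFFF;
    // Reading an ADDRn register clears its DONE flag (and the interrupt).
}

void adc_irq_setup(void) {
    LPC_ADC->ADINTEN = 0x07;           // interrupt on channels 0-2; ADGINTEN off
    NVIC_EnableIRQ(ADC_IRQn);
}
```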

The other suggested work-around was to use a channel correction factor if using DMA, which, of course, must use the ADGDR as its source of data.
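In code, that correction amounts to re-mapping the CHN field of each ADGDR word the DMA delivers. A minimal sketch, assuming three burst channels (NUM_CHANNELS and fix_channel are illustrative names):

```cpp
#include <stdint.h>

static const int NUM_CHANNELS = 3;

// The CHN field (ADGDR bits 26:24) reads one channel ahead of the data,
// so map it back by one, modulo the number of channels in the burst.
static inline int fix_channel(uint32_t adgdr_word) {
    int chn = (adgdr_word >> 24) & 0x7;
    return (chn + NUM_CHANNELS - 1) % NUM_CHANNELS;
}
```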

Further Investigation

I set up a simple series of tests to profile the behavior of the bits in the ADGDR register. The approach was to take snapshots of the register contents at regular intervals, and observe the changes over time.

The contents of the ADGDR register were not processed in any way, but simply saved into an array. An mbed Timer was read immediately after the ADGDR register was polled, and its value also saved into an array.

After the desired number of entries had been collected, an output routine read the arrays and sent “printf” data to the PC for collection by a terminal program. That data was then imported into Excel and plotted.
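For reference, a minimal sketch of that harness, assuming the mbed (LPC1768) environment with burst conversion already running; the buffer size is illustrative:

```cpp
#include "mbed.h"

const int N = 512;
uint32_t snap[N];   // raw ADGDR snapshots, unprocessed
int      when[N];   // Timer reading (in us) taken right after each snapshot

int main() {
    Timer t;
    t.start();
    for (int i = 0; i < N; i++) {
        snap[i] = LPC_ADC->ADGDR;    // save the register contents as-is
        when[i] = t.read_us();
        wait_us(1);                  // ~1 us between samples
    }
    for (int i = 0; i < N; i++) {    // dump for the terminal program
        int done = (snap[i] >> 31) & 1;     // DONE flag
        int chn  = (snap[i] >> 24) & 0x7;   // CHN field, bits 26:24
        int val  = (snap[i] >>  4) & 0xFFF; // 12-bit result, bits 15:4
        printf("%d,%d,%d,%d\r\n", when[i], done, chn, val);
    }
}
```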

The first plot below shows normal operation of the ADC. The ADC clock period was set to 1 µs, i.e. 1 MHz (CLKDIV = 23). The data samples were taken every 1 µs as well. The ADC was set up to convert channels 0–2 in burst mode. The signals applied to the inputs were about 0.38, 1.4, and 2.5 volts for channels 0 to 2, respectively.
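For reference, a minimal sketch of that configuration, assuming the CMSIS LPC17xx headers and the default CCLK/4 (24 MHz) peripheral clock for the ADC; on the LPC1768, pins P0.23–P0.25 carry AD0.0–AD0.2:

```cpp
#include "LPC17xx.h"

void adc_burst_setup(void) {
    LPC_SC->PCONP |= (1 << 12);      // power up the ADC (PCADC)
    // Route P0.23-P0.25 to their AD0.0-AD0.2 functions.
    LPC_PINCON->PINSEL1 |= (1 << 14) | (1 << 16) | (1 << 18);
    LPC_ADC->ADCR = (0x07 << 0)      // SEL: channels 0-2
                  | (23   << 8)      // CLKDIV = 23 -> 24 MHz / 24 = 1 MHz
                  | (1    << 16)     // BURST: repeated conversions
                  | (1    << 21);    // PDN: ADC operational
}
```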

The top trace of the graph shows the state of the DONE bit. It is raised approximately every 65 µs (recall that the ADC takes 65 clocks to do a conversion).

The bottom trace shows the CHN field, which counts in sequence 0, 1, 2, 0, 1, 2, and so on.

The middle trace shows the data value at each sample point. The operation of the ADC's successive-approximation controller shows up as “oscillation” in the value before the final result is reached. This is expected behavior.

https://lh5.googleusercontent.com/_uPXih5V64C0/TX6tt9QLwRI/AAAAAAAAAFE/qnPabyY8oXw/s800/ADC%20Figure%201.JPG

The second plot shows a detailed area around the time that the second sample finishes its conversion. The middle trace shows the data value convergence, the top trace shows DONE rising once data is stable, and the lower trace shows CHN being incremented after DONE is raised. Again, this is expected behavior.

https://lh5.googleusercontent.com/_uPXih5V64C0/TX6ttr4gzDI/AAAAAAAAAFA/bpmAvLsmvEA/s800/ADC%20Figure%202.JPG

The results so far are well behaved because the ADC is operating with a slow, 1 MHz clock. Problems show up as the speed of the clock is increased. The third plot shows an example of the problem.

Here the ADC clock has been sped up modestly, to a 0.667 µs period (1.5 MHz; CLKDIV = 15). This time the detailed plot of the area around the end of the eighth sample shows the proverbial smoking gun: DONE is asserted with the wrong (the new) CHN value. This explains the channel offset mentioned in the introduction.

https://lh6.googleusercontent.com/_uPXih5V64C0/TX6tufG2THI/AAAAAAAAAFI/t-F9hEd5k_o/s800/ADC%20Figure%203.JPG

Speed Limit

The fact that the problem gets worse with a faster ADC clock suggests a race condition. I decided to expand the timing for a final test.

In this test, the ADC clock was slowed way down. I chose a 10 µs period, i.e. 100 kHz (CLKDIV = 239). This would allow my test program to sample ADGDR 10 times during every ADC clock period. I added a delay at the start of the test, so that only data around the end of a conversion would be sampled.

The plot below shows the results. The middle trace shows the ADC value converging on its final value, with intermediate results taking about 10 sample periods as expected. At 1276 µs, DONE is raised. The CHN bits still reflect the proper value.

At 1286 µs, the CHN bits increment. This confirms that they automatically change one ADC clock period after the previous conversion completes.

https://lh6.googleusercontent.com/_uPXih5V64C0/TX6ttW8msWI/AAAAAAAAAE8/jjMfhrZru1M/s800/ADC%20Figure%204.JPG

Conclusions

The ADC allows the user one ADC clock period to read valid data from the ADGDR result register. If more time than that elapses, the data will no longer be valid. The first thing to change is the CHN bits, as shown immediately above. Much later, the data value will bobble about (during the final 12 clocks before the next conversion ends).

This means that the latency of the routines that service the ADGDR must be much less than one ADC clock period. Consider what that means: the maximum ADC clock speed is specified as 13 MHz, which gives a whopping 77 ns to service the ADC.

Andy K. reported that the DMA could support an 8 MHz clock – so its response time must be better than 125 ns. Impressive.

Software polling will only work for slow ADC clock rates. That may not be an issue for quasi-static applications (polling temperature sensors, perhaps). Otherwise, one needs to use something like Simon Blandford’s interrupt-driven approach to burst mode support, which avoids use of the ADGDR altogether http://mbed.org/users/simonb/programs/ADC_test/5zlnn/.

I’m not sure this is a bug (could be just a lack of clear documentation), but it certainly doesn’t smell like a feature.

20 Mar 2011

Hi,

Nice measurements and explanation (I hope).

If the ADGDR is so sensitive to when it is read, I get the feeling the same 'feature' might also be responsible for the spikes we see at higher data rates.

If the ADC starts a new conversion before the old one is read and stored, you get spikes (without it having much to do with noise in the analogue circuitry).

Most of the ADC routines just let the ADC run free at high speed and cherry-pick a sample when needed, in contrast to converting at the desired interval.

Wim

20 Mar 2011

Hi Wim,

Cherry-picking values is fine if you just want to "keep a record". But if you are a control engineer implementing a PID or other closed-loop control system, the time between samples becomes very important, and with that comes the need to understand the samples you've got. If there's a library filter in front of you getting the data, you are going to want to know everything about that filter. Hexley is better qualified than me to explain this further, but trust me: a filter that makes the data "look nice" isn't always what end users really need.

26 Mar 2011

Hi

I had some strange observations when I was debugging code. I was trying to lower the peripheral ADC clock, derived from CCLK, to CCLK/8 instead of the usual CCLK/4 default most people use.

  • PCLK = 96/8 MHz + CLKDIV = 6 -> Slip at 2 MHz
  • PCLK = 96/4 MHz + CLKDIV = 12 -> No Slip at 2 MHz
  • PCLK = 96/4 MHz + CLKDIV = 6 -> Slip at 4 MHz
  • PCLK = 96/2 MHz + CLKDIV = 12 -> No Slip at 4 MHz

Somehow the raw peripheral clock has an influence, and not so much the actual conversion clock. It seems the peripheral clock is used inside the ADC converter too (and not only as the input to the ADC prescaler).

Basically it says that it is important to keep the peripheral clock as high as possible (96/1 is of course the max, but I stopped testing at 96/2 = 48 MHz) and use the ADCR divider to bring the actual ADC clock down below the maximum allowed value of 13 MHz.
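For reference, a minimal sketch of the two knobs involved, assuming the CMSIS LPC17xx headers, a 96 MHz CCLK, and an ADC that is already powered and configured (the function name is illustrative):

```cpp
#include "LPC17xx.h"

// PCLKSEL0 bits 25:24 select PCLK_ADC: 00 = CCLK/4, 01 = CCLK,
// 10 = CCLK/2, 11 = CCLK/8. ADCR bits 15:8 hold CLKDIV, and the
// ADC conversion clock is PCLK_ADC / (CLKDIV + 1).
void adc_clock_setup(uint32_t pclksel, uint32_t clkdiv) {
    LPC_SC->PCLKSEL0 = (LPC_SC->PCLKSEL0 & ~(3u << 24)) | ((pclksel & 3u) << 24);
    LPC_ADC->ADCR    = (LPC_ADC->ADCR & ~(0xFFu << 8)) | ((clkdiv & 0xFFu) << 8);
}

// e.g. keep PCLK_ADC high and divide down in the ADC itself:
//   adc_clock_setup(1, 11);   // 96 MHz / (11 + 1) = 8 MHz ADC clock
```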

But if I remember the ADC spike threads correctly, the higher the clock, the worse the spikes.

26 Mar 2011

Was this using DMA or polling?

27 Mar 2011

Hi Hexley,

Short DMA requests, set up from within a Timer1 interrupt that triggers on the capture value.

See the code from Andy at http://mbed.org/forum/mbed/topic/1965/?page=1#comment-10464 (second sample).

I modified this a bit so it suits my needs:

  • Inside the DMA completion interrupt I increment the destination address for the next request (see the sketch below).
  • I switched to 2 channels.
  • I added some arrays to store timer timestamps to see how performance is doing.
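A minimal sketch of that first modification, assuming the CMSIS LPC17xx headers and GPDMA channel 0 already configured for ADC-to-memory transfers as in Andy's example; the buffer and BURST_WORDS names are illustrative:

```cpp
#include "LPC17xx.h"

#define BURST_WORDS 2                    // two channels per DMA request
volatile uint32_t buffer[1024];
volatile uint32_t *dest = buffer;

extern "C" void DMA_IRQHandler(void) {
    if (LPC_GPDMA->DMACIntTCStat & 1) {  // channel 0 terminal count reached?
        LPC_GPDMA->DMACIntTCClear = 1;   // acknowledge it
        dest += BURST_WORDS;             // next request lands further along
        LPC_GPDMACH0->DMACCDestAddr = (uint32_t)dest;
    }
}
```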

27 Mar 2011

Hi,

I tested PCLK at 96 MHz and a CLKDIV of 11 to produce no channel number slip, so the ADC clock is running at 8 MHz (and in my code doing bursts of two channels).

Setting CLKDIV to 10 already produces channel number slip.