
Timer and wait in LPCXpresso1549

I am trying to generate a signal through the DAC pin of the LPCXpresso1549. The sampling frequency is 96000 Hz and I have 96001 samples, so I set up a timer of 0.00001 seconds (1/96000) to write out the samples one by one. However, no matter whether I use the Timer or the wait_us() function, I can never reach 0.00001 seconds: with t.start() and t.read() I get an interval of about 0.00007 seconds, and with wait_us() I get an interval of 0.0001 seconds. Here is my code:

  1. include "mbed.h"
  2. include "chirp.h" header file contents samples chirpData[]

Serial pc(USBTX, USBRX); Timer t; AnalogOut aout(P0_12);

int main() { int i = 0;

while(true) {

t.start(); if(t.read() >= 0.00001){ aout.write_u16(chirpData[i]); i++; wait_us(10); pc.printf("The time taken was %f seconds\n", t.read()); t.stop(); t.reset(); }

if(i > 96000) { i = 0; } } }

2 Answers

6 years ago.

Here's the code in code tags.

#include "mbed.h"
#include "chirp.h"
Serial pc(USBTX, USBRX); 
Timer t; 
AnalogOut aout(P0_12);

int main() {
    int i = 0;

    while(true) {

        t.start(); 
        if(t.read() >= 0.00001) { 
            aout.write_u16(chirpData[i]); 
            i++; 
            wait_us(10); 
            pc.printf("The time taken was %f seconds\n", t.read()); 
            t.stop(); 
            t.reset(); 
        }

        if(i > 96000) { 
            i = 0; 
        } 
    }
}

I don't know that I have an answer, but here are some observations. A 10us update rate is quite fast and running that reliably is going to require careful planning. The polling strategy will necessarily introduce some jitter. Perhaps use a 10us interrupt and do your aout.write from there (provided that function is not too long).

For a polling loop like this I would use the t.read_us() function and work with integer microseconds rather than the float value, which has the potential for a small amount of rounding error.

The body of your first if block (entered when t.read() >= 10 us) takes a very long time. You have a 10 us wait in there, and keep in mind that printf itself will block until the message has been sent. At the default baud rate of 9600, each character takes roughly 1 ms to transmit, which is a huge amount of time relative to the 10 us period.
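If the diagnostic printf has to stay for now, one small mitigation (not a fix) is to raise the baud rate once at the top of main(); a minimal sketch using the Serial::baud() call on the pc object already declared above, with 115200 chosen arbitrarily:

// roughly 12x faster than the default 9600, though a printf per
// 10 us sample is still far too slow; see the next point
pc.baud(115200);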

The pc.printf() call will also print a different value than the one you tested, because t.read() is called once in the if() check and then again inside the printf(). More importantly, you probably need to avoid calling printf on every pass (i.e. up to 100,000 times per second). Perhaps add a second timer and do a single printf after the 96000 samples to confirm the total time; a minimal sketch follows.
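A sketch of that measurement idea, folding in the other suggestions in this answer (the second Timer named total is just an illustrative name):

#include "mbed.h"
#include "chirp.h"   // chirpData[] sample table, as in the question

Serial pc(USBTX, USBRX);
Timer t;        // paces the 10 us sample period
Timer total;    // second timer, only used for the measurement
AnalogOut aout(P0_12);

int main() {
    int i = 0;

    t.start();
    total.start();
    while(true) {
        if(t.read_us() >= 10) {     // time for the next sample?
            t.reset();
            aout.write_u16(chirpData[i]);
            i++;
        }

        if(i > 96000) {
            // one printf per full sweep of the table, not one per sample
            pc.printf("96001 samples took %d us\n", total.read_us());
            total.reset();
            i = 0;
        }
    }
}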

It's also slightly odd to call t.start() every time around the loop. For continuous timing, call t.start() once before the while() loop and never call t.stop(); t.reset() will start the count over again all by itself.

As for your if condition: you often see this type of thing implemented as a blocking while() loop, i.e. you continuously read the timer and compare the value in a tight loop, then break out of it when the target is reached.

#include "mbed.h"
#include "chirp.h"
Serial pc(USBTX, USBRX); 
Timer t; 
AnalogOut aout(P0_12);

int main() {
    int i = 0;

    t.reset();
    t.start();              // start the timer once and let it free-run
    while(true) {

        while(t.read_us() < 10);        // busy-wait until 10 us have elapsed
        t.reset();
        aout.write_u16(chirpData[i]);   // output the next sample
        i++;

        if(i > 96000) {                 // wrap back to the start of the table
            i = 0;
        }
    }
}

That being said, there may be a better strategy to ensure the samples always arrive precisely on schedule.

6 years ago.

I'd start off by saying that all of Graham's points are valid.

The CPU manual indicates that the maximum DAC speed is 500 ksamples/s, which implies around 2 us per conversion, so your rate of 96 k should be possible, but that leaves you at most about 8 us for all of your other code each time around the loop.

You really need to avoid using floats: you're on a 72 MHz Cortex-M3 with no FPU, and a very crude rule of thumb is that a floating point operation on a CPU without an FPU costs around 100 cycles. That means over 1 us per floating point operation, which is a lot of your time budget used up on avoidable calculations.

If you want to keep roughly your current structure then I'd suggest a slight change to Graham's code: rather than resetting the timer each loop and waiting until it hits 10 us, let it keep running and track the time at which you want to output the next sample. This doesn't reduce the jitter in the output, but it does prevent timing errors from accumulating, so you at least maintain the correct average output rate rather than always running slightly slow.

This will, however, have issues after about 35 minutes when the 32-bit microsecond timer value rolls over. If that could be a problem, reset the timer (and pull NextTime back by the same amount) whenever i wraps around; a sketch of that follows the code below.

#include "mbed.h"
#include "chirp.h"
Serial pc(USBTX, USBRX); 
Timer t; 
AnalogOut aout(P0_12);
 
int main() {
    int i = 0;
    int NextTime = 0;
    t.reset();
    t.start();
    while(true) {
        if (t.read_us() >= NextTime) {  // time for the next sample?
            aout.write_u16(chirpData[i]);
            i++;
            NextTime += 10;             // schedule the next sample 10 us on
            if(i > 96000) {
                i = 0;
            }
        }
    }
}
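A sketch of that rollover handling, based on the code above: once per sweep (roughly every second here), pull NextTime back by the elapsed time and reset the timer, so neither value ever gets near the 2^31 us overflow and the schedule is preserved rather than restarted from zero:

#include "mbed.h"
#include "chirp.h"

Serial pc(USBTX, USBRX);
Timer t;
AnalogOut aout(P0_12);

int main() {
    int i = 0;
    int NextTime = 0;
    t.reset();
    t.start();
    while(true) {
        if (t.read_us() >= NextTime) {
            aout.write_u16(chirpData[i]);
            i++;
            NextTime += 10;
            if(i > 96000) {
                i = 0;
                // re-zero the time base once per sweep so the 32-bit
                // microsecond count never approaches overflow
                NextTime -= t.read_us();
                t.reset();
            }
        }
    }
}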

The other option is to use a Ticker and let the system timers handle scheduling the transmit. This has the advantage that it significantly simplifies the code.

#include "mbed.h"
#include "chirp.h"
Serial pc(USBTX, USBRX); 
Ticker TxTick; 
AnalogOut aout(P0_12);

void onTxTick() {
    static int i = 0;
    aout.write_u16(chirpData[i++]);   // output the next sample
    if (i > 96000)
        i = 0;                        // wrap back to the start of the table
}

int main() {
    TxTick.attach_us(&onTxTick, 10);  // call onTxTick every 10 us
    while(true) {
    }
}

If this still doesn't work reliably, the manual also indicates that it's possible to DMA data into the DAC and hook its output trigger directly to an internal timer, giving an accurate, jitter-free output with almost no CPU overhead. It would be far more work, but as an absolute last resort you could access the hardware directly, bypass the mbed libraries for the DAC, and set things up manually. Whether it's worth all that effort is another matter.