ADXL345 accelerometer output help

08 Sep 2010

I just got my ADXL345 accelerometer working using the cookbook code.

I notice that the output is in ADC counts (bits).  I need to convert this to g's.  Just looking for suggestions for the conversion.

I plan to take the ADC count reading at rest and set it equal to 1 g.  Then simply divide 1 g by that count to get the resolution.  The math is simple.  The syntax might be too.
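
For example, if the at-rest reading on one axis were around 250 counts, the resolution would be 1 g / 250 counts = 4 mg per count, and a reading r would then convert as g = r / 250.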

What do you all think?

08 Sep 2010

The datasheet is here.

The lines of the example here:

    //Full resolution, +/-16g, 4mg/LSB.
    accelerometer.setDataFormatControl(0x0B);

Say that you'll be using the +/-16g range, and you'll get 4 milli-g per least significant bit.
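
For reference, 0x0B breaks down like this (the bit names are from the datasheet's DATA_FORMAT register map):

    //0x0B = 0b00001011
    //bit 3 (FULL_RES) = 1 -> full resolution mode
    //bits 1:0 (Range) = 11 -> +/-16g range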

From page 3 of the datasheet:

±16 g range, full resolution, 13-bit output

Since we're reading a 13-bit unsigned number (the MSB isn't a sign bit), our value will be between 0 and 2^13 - 1 = 8191.

I assume that the 0g value will be half the range = 2^13 / 2 = 2^12 = 4096.

So to zero-offset our value we simply subtract 4096.

g_data = value - 4096

To get it to scale correctly we need to divide by 4096/16 = 256 (since we get a max of +16g):

g_data = (value - 4096) / 256

I hope that works - let me know if it doesn't match up with the data you're getting  (I haven't actually used an ADXL345 before)

08 Sep 2010

Hey guys,

I believe the X, Y, and Z registers already hold a 2's complement value (with DATAx0 holding the LSB), so make sure you cast to an appropriate variable type (e.g. int16_t). You don't have to offset anything. 256 counts/g at +/-16g in full-resolution mode is correct.
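
To illustrate, here is a rough sketch of that conversion, assuming getOutput() fills a three-element int buffer with the raw axis words as in the cookbook example (the buffer and variable names here are just examples):

//'accelerometer' is the ADXL345 object from the cookbook example.
int readings[3];
accelerometer.getOutput(readings);

//The registers are already 2's complement, so just cast - no offset needed.
int16_t x_raw = (int16_t) readings[0];
int16_t y_raw = (int16_t) readings[1];
int16_t z_raw = (int16_t) readings[2];

//In full resolution mode the scale factor is about 256 LSB/g (3.9mg/LSB).
float x_g = x_raw / 256.0;
float y_g = y_raw / 256.0;
float z_g = z_raw / 256.0;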

09 Sep 2010 . Edited: 09 Sep 2010

Igor is correct - the registers hold 2's complement, sign-extended values; as he says, simply cast the values you get to (int16_t) and you're good to go.

Here is the code I use to calibrate my ADXL345:

void calibrateAccelerometer(void) {
    
    //Take a number of readings and average them
    //to calculate any bias the accelerometer may have.
    for (int i = 0; i < 128; i++) {

        accelerometer.getOutput(readings);

        a_x += (int16_t) readings[0];
        a_y += (int16_t) readings[1];
        a_z += (int16_t) readings[2];

        //50Hz data rate.
        wait(0.02);

    }

    a_x /= 128;
    a_y /= 128;
    a_z /= 128;

    //At 4mg/LSB, 250 LSBs is 1g.
    a_xBias = a_x;
    a_yBias = a_y;
    a_zBias = (a_z - 250);

    a_x = 0;
    a_y = 0;
    a_z = 0;   

}

You can now subtract these biases from subsequent readings, which should give you approximately 0 on the x and y axes and approximately 1 g on the z axis (assuming you did the calibration on a flat surface and are taking readings in the same orientation).

The datasheet says 4 mg/LSB at the top and then 3.9 mg/LSB in the specifications... if you aren't doing a "proper" calibration [see something like this for more details], that 0.1 mg/LSB won't make much difference, but it would make the a_z bias (a_z - 256) instead of (a_z - 250), in case the numbers were slightly confusing.
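
To put that together, here is a rough sketch of how the biases might be applied to later readings and scaled to g; the variable names follow the calibration code above, and it uses the 4 mg/LSB (250 LSB/g) figure:

//Read the accelerometer and subtract the biases found during calibration.
accelerometer.getOutput(readings);

//At 4mg/LSB, 250 LSBs is 1g (use 256 instead if you go with 3.9mg/LSB, and adjust a_zBias to match).
float x_g = ((int16_t) readings[0] - a_xBias) / 250.0;
float y_g = ((int16_t) readings[1] - a_yBias) / 250.0;
float z_g = ((int16_t) readings[2] - a_zBias) / 250.0;

//On a flat surface, in the calibration orientation, this should come out to roughly 0, 0 and 1g.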

09 Sep 2010

Thanks for the replies.  This is helpful.

I have tried to scale the ADC values, which are defined as int, but the compiler complains.  When I try to cast the values to float there are even more errors, this time associated with the getOutput(readings) call.

Any ideas?

09 Sep 2010

Show the code and the exact text of error messages.

10 Sep 2010

I found the problem.  The asterisk was missing from the pointer when passing the buffer for the output data from the ADXL chip.  The program was obviously confused...  And so was this operator.