Dependencies: mbed FastIO FastPWM USBDevice
Plunger/tcd1103Sensor.h@106:e9e3b46132c1, 2020-02-03 (annotated)
- Committer: mjr
- Date: Mon Feb 03 21:27:55 2020 +0000
- Revision: 106:e9e3b46132c1
- Parent: 104:6e06e0f4b476
- Child: 109:310ac82cbbee
- Commit message: Check diagnostic LEDs against all configured pins (not just output ports)
Who changed what in which revision?
| User | Revision | Line number | New contents of line |
|---|---|---|---|
| mjr | 100:1ff35c07217c | 1 | // Toshiba TCD1103 linear image sensors |
| mjr | 100:1ff35c07217c | 2 | // |
| mjr | 100:1ff35c07217c | 3 | // This sensor is similar to the original TSL1410R in both its electronic |
| mjr | 100:1ff35c07217c | 4 | // interface and the theory of operation. The details of the electronics |
| mjr | 100:1ff35c07217c | 5 | // are different enough that we can't reuse the same code at the hardware |
| mjr | 100:1ff35c07217c | 6 | // interface level, but the principle of operation is similar: the sensor |
| mjr | 100:1ff35c07217c | 7 | // provides a serial interface to a file of pixels transferred as analog |
| mjr | 100:1ff35c07217c | 8 | // voltage levels representing the charge collected. |
| mjr | 100:1ff35c07217c | 9 | // |
| mjr | 100:1ff35c07217c | 10 | // As with the TSL1410R, we position the sensor so that the pixel row is |
| mjr | 104:6e06e0f4b476 | 11 | // aligned with the plunger axis, and we detect the plunger position by |
| mjr | 104:6e06e0f4b476 | 12 | // looking for a dark/light edge at the end of the plunger. However, |
| mjr | 104:6e06e0f4b476 | 13 | // the optics for this sensor are very different because of the sensor's |
| mjr | 104:6e06e0f4b476 | 14 | // size. The TSL1410R is by some magical coincidence the same size as |
| mjr | 104:6e06e0f4b476 | 15 | // the plunger travel range, so we set that sensor up so that the plunger |
| mjr | 104:6e06e0f4b476 | 16 | // is backlit with respect to the sensor, and simply casts a shadow on |
| mjr | 104:6e06e0f4b476 | 17 | // the sensor. The TCD1103, in contrast, has a pixel array that's only |
| mjr | 104:6e06e0f4b476 | 18 | // 8mm long, so we can't use the direct shadow approach. Instead, we |
| mjr | 104:6e06e0f4b476 | 19 | // have to use a lens to focus an image of the plunger on the sensor. |
| mjr | 104:6e06e0f4b476 | 20 | // With a focused image, we can front-light the plunger and take a picture |
| mjr | 104:6e06e0f4b476 | 21 | // of the plunger itself rather than of an occluded back-light. |
| mjr | 100:1ff35c07217c | 22 | // |
| mjr | 104:6e06e0f4b476 | 23 | // Even though we use "edge sensing", this class isn't based on the |
| mjr | 104:6e06e0f4b476 | 24 | // PlungerSensorEdgePos class. Our sensing algorithm is a little different, |
| mjr | 104:6e06e0f4b476 | 25 | // and much simpler, because we're working with a proper image of the |
| mjr | 104:6e06e0f4b476 | 26 | // plunger, rather than an image of its shadow. The shadow tends to be |
| mjr | 104:6e06e0f4b476 | 27 | // rather fuzzy, and the TSL14xx sensors were pretty noisy, so we had to |
| mjr | 104:6e06e0f4b476 | 28 | // work fairly hard to distinguish an edge in the image from a noise spike. |
| mjr | 104:6e06e0f4b476 | 29 | // This sensor has very low noise, and the focused image produces a sharp |
| mjr | 104:6e06e0f4b476 | 30 | // edge, so we can use a more straightforward algorithm that just looks |
| mjr | 104:6e06e0f4b476 | 31 | // for the first bright spot. |
| mjr | 104:6e06e0f4b476 | 32 | // |
| mjr | 104:6e06e0f4b476 | 33 | // The TCD1103 uses a negative image: brighter pixels are represented by |
| mjr | 104:6e06e0f4b476 | 34 | // lower numbers. The electronics of the sensor are such that the dynamic |
| mjr | 104:6e06e0f4b476 | 35 | // range for the pixel analog voltage signal (which is what our pixel |
| mjr | 104:6e06e0f4b476 | 36 | // elements represent) is only about 1V, or about 30% of the 3.3V range of |
| mjr | 104:6e06e0f4b476 | 37 | // the ADC. Dark pixels read at about 2V (about 167 after 8-bit ADC |
| mjr | 104:6e06e0f4b476 | 38 | // quantization), and saturated pixels read at 1V (78 on the ADC). So our |
| mjr | 104:6e06e0f4b476 | 39 | // effective dynamic range after quantization is about 100 steps. That |
| mjr | 104:6e06e0f4b476 | 40 | // would be pretty terrible if the goal were to take pictures for an art |
| mjr | 104:6e06e0f4b476 | 41 | // gallery, and there are things we could do in the electronic interface |
| mjr | 106:e9e3b46132c1 | 42 | // to improve it. In particular, we could use an op-amp to expand the |
| mjr | 104:6e06e0f4b476 | 43 | // voltage range on the ADC input and remove the DC offset, so that the |
| mjr | 106:e9e3b46132c1 | 44 | // signal going into the ADC covers the ADC's full 0V - 3.3V range. That |
| mjr | 106:e9e3b46132c1 | 45 | // technique is actually used in some other projects using this sensor |
| mjr | 106:e9e3b46132c1 | 46 | // where the goal is to yield pictures as the end result. But it's |
| mjr | 106:e9e3b46132c1 | 47 | // pretty complicated to set up and fine-tune to get the voltage range |
| mjr | 106:e9e3b46132c1 | 48 | // expansion just right, and we really don't need it; the edge detection |
| mjr | 106:e9e3b46132c1 | 49 | // works fine with what we get directly from the sensor. |
| mjr | 106:e9e3b46132c1 | 50 | |
| mjr | 100:1ff35c07217c | 51 | |
| mjr | 104:6e06e0f4b476 | 52 | |
| mjr | 104:6e06e0f4b476 | 53 | #include "plunger.h" |
| mjr | 100:1ff35c07217c | 54 | #include "TCD1103.h" |
| mjr | 100:1ff35c07217c | 55 | |
| mjr | 100:1ff35c07217c | 56 | template <bool invertedLogicGates> |
| mjr | 100:1ff35c07217c | 57 | class PlungerSensorImageInterfaceTCD1103: public PlungerSensorImageInterface |
| mjr | 100:1ff35c07217c | 58 | { |
| mjr | 100:1ff35c07217c | 59 | public: |
| mjr | 100:1ff35c07217c | 60 | PlungerSensorImageInterfaceTCD1103(PinName fm, PinName os, PinName icg, PinName sh) |
| mjr | 104:6e06e0f4b476 | 61 | : PlungerSensorImageInterface(1500), sensor(fm, os, icg, sh) |
| mjr | 100:1ff35c07217c | 62 | { |
| mjr | 100:1ff35c07217c | 63 | } |
| mjr | 100:1ff35c07217c | 64 | |
| mjr | 100:1ff35c07217c | 65 | // is the sensor ready? |
| mjr | 100:1ff35c07217c | 66 | virtual bool ready() { return sensor.ready(); } |
| mjr | 100:1ff35c07217c | 67 | |
| mjr | 101:755f44622abc | 68 | virtual void init() { } |
| mjr | 100:1ff35c07217c | 69 | |
| mjr | 100:1ff35c07217c | 70 | // get the average sensor scan time |
| mjr | 100:1ff35c07217c | 71 | virtual uint32_t getAvgScanTime() { return sensor.getAvgScanTime(); } |
| mjr | 100:1ff35c07217c | 72 | |
| mjr | 101:755f44622abc | 73 | virtual void readPix(uint8_t* &pix, uint32_t &t) |
| mjr | 100:1ff35c07217c | 74 | { |
| mjr | 100:1ff35c07217c | 75 | // get the image array from the last capture |
| mjr | 104:6e06e0f4b476 | 76 | sensor.getPix(pix, t); |
| mjr | 100:1ff35c07217c | 77 | } |
| mjr | 100:1ff35c07217c | 78 | |
| mjr | 101:755f44622abc | 79 | virtual void releasePix() { sensor.releasePix(); } |
| mjr | 101:755f44622abc | 80 | |
| mjr | 101:755f44622abc | 81 | virtual void setMinIntTime(uint32_t us) { sensor.setMinIntTime(us); } |
| mjr | 100:1ff35c07217c | 82 | |
| mjr | 100:1ff35c07217c | 83 | // the low-level interface to the TCD1103 sensor |
| mjr | 100:1ff35c07217c | 84 | TCD1103<invertedLogicGates> sensor; |
| mjr | 100:1ff35c07217c | 85 | }; |
| mjr | 100:1ff35c07217c | 86 | |
| mjr | 100:1ff35c07217c | 87 | template<bool invertedLogicGates> |
| mjr | 104:6e06e0f4b476 | 88 | class PlungerSensorTCD1103: public PlungerSensorImage<int> |
| mjr | 100:1ff35c07217c | 89 | { |
| mjr | 100:1ff35c07217c | 90 | public: |
| mjr | 100:1ff35c07217c | 91 | PlungerSensorTCD1103(PinName fm, PinName os, PinName icg, PinName sh) |
| mjr | 104:6e06e0f4b476 | 92 | : PlungerSensorImage(sensor, 1500, 1499, true), sensor(fm, os, icg, sh) |
| mjr | 100:1ff35c07217c | 93 | { |
| mjr | 100:1ff35c07217c | 94 | } |
| mjr | 100:1ff35c07217c | 95 | |
| mjr | 100:1ff35c07217c | 96 | protected: |
| mjr | 104:6e06e0f4b476 | 97 | // Process an image. This seeks the first dark-to-light edge in the image. |
| mjr | 104:6e06e0f4b476 | 98 | // We assume that the background (open space behind the plunger) has a |
| mjr | 104:6e06e0f4b476 | 99 | // dark (minimally reflective) backdrop, and that the tip of the plunger |
| mjr | 104:6e06e0f4b476 | 100 | // has a bright white strip right at the end. So the end of the plunger |
| mjr | 104:6e06e0f4b476 | 101 | // should be easily identifiable in the image as the first bright edge |
| mjr | 104:6e06e0f4b476 | 102 | // we see starting at the "far" end. |
| mjr | 104:6e06e0f4b476 | 103 | virtual bool process(const uint8_t *pix, int n, int &pos, int& /*processResult*/) |
| mjr | 104:6e06e0f4b476 | 104 | { |
| mjr | 104:6e06e0f4b476 | 105 | // Scan the pixel array to determine the actual dynamic range |
| mjr | 104:6e06e0f4b476 | 106 | // of this image. That will let us determine what constitutes |
| mjr | 104:6e06e0f4b476 | 107 | // "bright" when we're looking for the bright spot. |
| mjr | 104:6e06e0f4b476 | 108 | uint8_t pixMin = 255, pixMax = 0; |
| mjr | 104:6e06e0f4b476 | 109 | const uint8_t *p = pix; |
| mjr | 104:6e06e0f4b476 | 110 | for (int i = n; i != 0; --i) |
| mjr | 104:6e06e0f4b476 | 111 | { |
| mjr | 104:6e06e0f4b476 | 112 | uint8_t c = *p++; |
| mjr | 104:6e06e0f4b476 | 113 | if (c < pixMin) pixMin = c; |
| mjr | 104:6e06e0f4b476 | 114 | if (c > pixMax) pixMax = c; |
| mjr | 104:6e06e0f4b476 | 115 | } |
| mjr | 104:6e06e0f4b476 | 116 | |
| mjr | 104:6e06e0f4b476 | 117 | // Figure the threshold brightness for the bright spot as halfway |
| mjr | 104:6e06e0f4b476 | 118 | // between the min and max. |
| mjr | 104:6e06e0f4b476 | 119 | uint8_t threshold = (pixMin + pixMax)/2; |
| mjr | 104:6e06e0f4b476 | 120 | |
| mjr | 104:6e06e0f4b476 | 121 | // Scan for the first bright-enough pixel. Remember that we're |
| mjr | 104:6e06e0f4b476 | 122 | // working with a negative image, so "brighter" is "less than". |
| mjr | 104:6e06e0f4b476 | 123 | p = pix; |
| mjr | 104:6e06e0f4b476 | 124 | for (int i = n; i != 0; --i, ++p) |
| mjr | 104:6e06e0f4b476 | 125 | { |
| mjr | 104:6e06e0f4b476 | 126 | if (*p < threshold) |
| mjr | 104:6e06e0f4b476 | 127 | { |
| mjr | 104:6e06e0f4b476 | 128 | // got it - report this position |
| mjr | 104:6e06e0f4b476 | 129 | pos = p - pix; |
| mjr | 104:6e06e0f4b476 | 130 | return true; |
| mjr | 104:6e06e0f4b476 | 131 | } |
| mjr | 104:6e06e0f4b476 | 132 | } |
| mjr | 104:6e06e0f4b476 | 133 | |
| mjr | 104:6e06e0f4b476 | 134 | // no edge found - report failure |
| mjr | 104:6e06e0f4b476 | 135 | return false; |
| mjr | 104:6e06e0f4b476 | 136 | } |
| mjr | 104:6e06e0f4b476 | 137 | |
| mjr | 104:6e06e0f4b476 | 138 | // Use a fixed orientation for this sensor. The shadow-edge sensors |
| mjr | 104:6e06e0f4b476 | 139 | // try to infer the direction by checking which end of the image is |
| mjr | 104:6e06e0f4b476 | 140 | // brighter, which works well for the shadow sensors because the back |
| mjr | 104:6e06e0f4b476 | 141 | // end of the image will always be in shadow. But for this sensor, |
| mjr | 104:6e06e0f4b476 | 142 | // we're taking an image of the plunger (not its shadow), and the |
| mjr | 104:6e06e0f4b476 | 143 | // back end of the plunger is the part with the spring, which has a |
| mjr | 104:6e06e0f4b476 | 144 | // fuzzy and complex reflectivity pattern because of the spring. |
| mjr | 104:6e06e0f4b476 | 145 | // So for this sensor, it's better to insist that the user sets it |
| mjr | 104:6e06e0f4b476 | 146 | // up in a canonical orientation. That's a reasonable expectation |
| mjr | 104:6e06e0f4b476 | 147 | // for this sensor anyway, because the physical installation won't |
| mjr | 104:6e06e0f4b476 | 148 | // be as ad hoc as the TSL1410R setup, which only required that you |
| mjr | 104:6e06e0f4b476 | 149 | // mounted the sensor itself. In this case, you have to build a |
| mjr | 104:6e06e0f4b476 | 150 | // circuit board and mount a lens on it, so it's reasonable to |
| mjr | 104:6e06e0f4b476 | 151 | // expect that everyone will be using the mounting apparatus plans |
| mjr | 104:6e06e0f4b476 | 152 | // that we'll detail in the build guide. In any case, we'll just |
| mjr | 104:6e06e0f4b476 | 153 | // make it clear in the instructions that you have to mount the |
| mjr | 104:6e06e0f4b476 | 154 | // sensor in a certain orientation. |
| mjr | 104:6e06e0f4b476 | 155 | virtual int getOrientation() const { return 1; } |
| mjr | 104:6e06e0f4b476 | 156 | |
| mjr | 104:6e06e0f4b476 | 157 | // the hardware sensor interface |
| mjr | 100:1ff35c07217c | 158 | PlungerSensorImageInterfaceTCD1103<invertedLogicGates> sensor; |
| mjr | 100:1ff35c07217c | 159 | }; |
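A note on the dynamic-range figures quoted in the header comment (file lines 33-49 above): the arithmetic can be checked with a small standalone sketch. This is not part of the sensor class; it only assumes the 3.3V full-scale, 8-bit ADC quantization that the comment itself describes, and the specific voltage values below are illustrative.

```cpp
#include <cstdio>
#include <cmath>

// Map an analog voltage to an 8-bit ADC code, assuming a 3.3V full-scale
// reference as described in the header comment.
static int adcCode(float volts)
{
    return static_cast<int>(std::lround(volts / 3.3f * 255.0f));
}

int main()
{
    // The TCD1103 produces a negative image: dark pixels sit near the top
    // of its roughly 1V output swing, saturated pixels near the bottom.
    int dark = adcCode(2.16f);      // "about 2V" dark level      -> ~167
    int saturated = adcCode(1.01f); // "about 1V" saturated level -> ~78
    printf("dark=%d saturated=%d usable steps=%d\n",
           dark, saturated, dark - saturated);
    // Prints dark=167 saturated=78 usable steps=89 -- i.e. roughly the
    // "about 100 steps" of effective dynamic range cited in the comment.
    return 0;
}
```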
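For reference, a hedged sketch of how the `PlungerSensorTCD1103` class declared above might be instantiated in application code. The pin assignments and the template argument are placeholders chosen for illustration, not values taken from this file; the constructor arguments follow the fm/os/icg/sh order of the declaration, and `os` should be an analog-capable pin since it carries the sensor's pixel voltage output.

```cpp
#include "tcd1103Sensor.h"

// Hypothetical wiring on an mbed target with FRDM-KL25Z-style pin names.
// fm = master clock, os = analog pixel output, icg = integration clear gate,
// sh = shift gate. The 'true' template argument presumably indicates that
// the external logic-gate buffers invert the control signals.
static PlungerSensorTCD1103<true> tcd1103Plunger(PTC8, PTB0, PTC9, PTC10);
```

As the `getOrientation()` comment explains, the sensor must then be mounted in its single canonical orientation, since this class does not try to infer the direction from the image.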