An I/O controller for virtual pinball machines: accelerometer nudge sensing, analog plunger input, button input encoding, LedWiz-compatible output controls, and more.

Dependencies:   mbed FastIO FastPWM USBDevice

Fork of Pinscape_Controller by Mike R

edgeSensor.h

00001 // Edge position sensor - 2D optical
00002 //
00003 // This class implements our plunger sensor interface using edge
00004 // detection on a 2D optical sensor.  With this setup, a 2D optical
00005 // sensor is placed close to the plunger, parallel to the rod, with a 
00006 // light source opposite the plunger.  This makes the plunger cast a
00007 // shadow on the sensor.  We figure the plunger position by detecting
00008 // where the shadow is, by finding the edge between the bright and
00009 // dark regions in the image.
00010 //
00011 // This class is designed to work with any type of 2D optical sensor.
00012 // We have subclasses for the TSL1410R and TSL1412S sensors, but other
00013 // similar sensors could be supported as well by adding interfaces for
00014 // the physical electronics.  For the edge detection, we just need an 
00015 // array of pixel readings.
00016 
00017 #ifndef _EDGESENSOR_H_
00018 #define _EDGESENSOR_H_
00019 
00020 #include "plunger.h"
00021 
00022 // Scan method - select a method listed below.  Method 2 (find the point
00023 // with maximum brightness slope) seems to work the best so far.
00024 #define SCAN_METHOD 2
00025 //
00026 //
00027 //  0 = One-way scan.  This is the original algorithm from the v1 software, 
00028 //      with some slight improvements.  We start at the brighter end of the
00029 //      sensor and scan until we find a pixel darker than a threshold level 
00030 //      (halfway between the respective brightness levels at the bright and 
00031 //      dark ends of the sensor).  The original v1 algorithm simply stopped
00032 //      there.  This version is slightly improved: it scans for a few more 
00033 //      pixels to make sure that the majority of the adjacent pixels are 
00034 //      also in shadow, to help reject false edges from sensor noise or 
00035 //      optical shadows that make one pixel read darker than it should.
00036 //
00037 //  1 = Meet in the middle.  We start two scans concurrently, one from 
00038 //      the dark end of the sensor and one from the bright end.  For
00039 //      the scan from the dark end, we stop when we reach a pixel that's
00040 //      brighter than the average dark level by 2/3 of the gap between 
00041 //      the dark and bright levels.  For the scan from the bright end,
00042 //      we stop when we reach a pixel that's darker by 2/3 of the gap.
00043 //      Each time we stop, we look to see if the other scan has reached
00044 //      the same place.  If so, the two scans converged on a common
00045 //      point, which we take to be the edge between the dark and bright
00046 //      sections.  If the two scans haven't converged yet, we switch to
00047 //      the other scan and continue it.  We repeat this process until
00048 //      the two converge.  The benefit of this approach vs the older
00049 //      one-way scan is that it's much more tolerant of noise, and the
00050 //      degree of noise tolerance is dictated by how noisy the signal
00051 //      actually is.  The dynamic degree of tolerance is good because
00052 //      a fixed high noise tolerance tends to come at the cost of resolution.
00053 //
00054 //  2 = Maximum dL/ds (highest first derivative of luminance change per
00055 //      distance, or put another way, the steepest brightness slope).
00056 //      This scans the whole image and looks for the position with the 
00057 //      highest dL/ds value.  We average over a window of several pixels, 
00058 //      to smooth out pixel noise; this should avoid treating a single 
00059 //      spiky pixel as having a steep slope adjacent to it.  The advantage
00060 //      in this approach is that it looks for the strongest edge after
00061 //      considering all edges across the whole image, which should make 
00062 //      it less likely to be fooled by isolated noise that creates a 
00063 //      single false edge.  Algorithms 0 and 1 have basically fixed 
00064 //      thresholds for what constitutes an edge, but this approach is 
00065 //      more dynamic in that it evaluates each edge-like region and picks 
00066 //      the best one.  The width of the edge is still fixed, since that's 
00067 //      determined by the pixel window.  But that should be okay since we 
00068 //      only deal with one type of image.  It should be possible to adjust 
00069 //      the light source and sensor position to always yield an image with 
00070 //      a narrow enough edge region.
00071 //
00072 //      The max dL/ds method is the most compute-intensive method, because
00073 //      of the pixel window averaging.  An assembly language implementation
00074 //      seems to be needed to make it fast enough on the KL25Z.  This method
00075 //      has a fixed run time because it always does exactly one pass over
00076 //      the whole pixel array.
00077 //
00078 //  3 = Total bright pixel count.  This simply adds up the total number
00079 //      of pixels above a threshold brightness, without worrying about 
00080 //      whether they're contiguous with other pixels on the same side
00081 //      of the edge.  Since we know there's always exactly one edge,
00082 //      all of the dark pixels should in principle be on one side, and
00083 //      all of the light pixels should be on the other side.  There
00084 //      might be some noise that creates isolated pixels that don't
00085 //      match their neighbors, but these should average out.  The virtue
00086 //      of this approach (apart from its simplicity) is that it should
00087 //      be immune to false edges - local spikes due to noise - that
00088 //      might fool the algorithms that explicitly look for edges.  In
00089 //      practice, though, it seems to be even more sensitive to noise
00090 //      than the other algorithms, probably because it treats every pixel
00091 //      as independent and thus doesn't have any sort of inherent noise
00092 //      reduction from considering relationships among pixels.
00093 //
00094 
00095 // assembler routine to scan for an edge using "mode 2" (maximum slope)
00096 extern "C" int edgeScanMode2(const uint8_t *pix, int npix, const uint8_t **edgePtr, int dir);
00097 
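The max-dL/ds scanner itself ships only as the assembly routine declared above. As a rough illustration of the windowed-slope idea — not the shipped implementation; the window size, arithmetic, and the `edgeScanMode2Ref` name are assumptions — a C++ reference version might look like this:

```cpp
#include <cstdint>

// Reference sketch of the "mode 2" scan: find the index where brightness
// drops most steeply along the scan direction, averaging over a small
// window on each side of each pixel so that single-pixel noise doesn't
// look like a steep slope.  WINDOW is a hypothetical tuning constant.
static const int WINDOW = 8;

// Returns 1 and sets *edgePtr on success, 0 if no edge is found.  'dir'
// is +1 if the bright end is at pix[0], -1 if it's at pix[npix-1].
int edgeScanMode2Ref(const uint8_t *pix, int npix, const uint8_t **edgePtr, int dir)
{
    int bestSlope = 0, bestIdx = -1;
    for (int i = WINDOW ; i < npix - WINDOW ; ++i)
    {
        // sum the brightness over the windows before and after pixel i
        // (sums stand in for averages, since the window size is fixed)
        int before = 0, after = 0;
        for (int j = 1 ; j <= WINDOW ; ++j)
        {
            before += pix[i - j];
            after  += pix[i + j];
        }

        // brightness falls across the shadow edge as we move away from
        // the bright end, so pick the largest bright-to-dark drop along
        // the scan direction
        int slope = (dir > 0 ? before - after : after - before);
        if (slope > bestSlope)
        {
            bestSlope = slope;
            bestIdx = i;
        }
    }
    if (bestIdx < 0)
        return 0;
    *edgePtr = pix + bestIdx;
    return 1;
}
```

The inner window loop makes this O(npix × WINDOW), which illustrates why the comments above note that a straight C++ version is too slow on the KL25Z and the production routine is done in assembly.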
00098 // PlungerSensor interface implementation for edge detection setups.
00099 // This is a generic base class for image-based sensors where we detect
00100 // the plunger position by finding the edge of the shadow it casts on
00101 // the detector.
00102 //
00103 // Edge sensors use the image pixel span as the native position scale,
00104 // since a position reading is the pixel offset of the shadow edge.
00105 class PlungerSensorEdgePos: public PlungerSensorImage<int>
00106 {
00107 public:
00108     PlungerSensorEdgePos(PlungerSensorImageInterface &sensor, int npix)
00109         : PlungerSensorImage(sensor, npix, npix - 1),
00110           dir(0), midpt(), midptIdx(0)  // orientation unknown, empty history
00111     { }
00112     
00113     // Process an image - scan for the shadow edge to determine the plunger
00114     // position.
00115     //
00116     // If we detect the plunger position, we set 'pos' to the pixel location
00117     // of the edge and return true; otherwise we return false.  The 'pos'
00118     // value returned, if any, is adjusted for sensor orientation so that
00119     // it reflects the logical plunger position (i.e., distance retracted,
00120     // where 0 is always the fully forward position and 'n' is fully
00121     // retracted).
00122 
00123 #if SCAN_METHOD == 0
00124     // Scan method 0: one-way scan; original method used in v1 firmware.
00125     bool process(const uint8_t *pix, int n, int &pos, int& /*processResult*/)
00126     {        
00127         // Get the levels at each end
00128         int a = (int(pix[0]) + pix[1] + pix[2] + pix[3] + pix[4])/5;
00129         int b = (int(pix[n-1]) + pix[n-2] + pix[n-3] + pix[n-4] + pix[n-5])/5;
00130         
00131         // Figure the sensor orientation based on the relative brightness
00132         // levels at the opposite ends of the image.  We're going to scan
00133         // across the image from the bright side - 'bi' is the starting
00134         // index for that scan, and 'dir' is the pixel increment to apply
00135         // on each step: +1 when the bright end is at the start of the
00136         // array, -1 when it's at the end.
00137         int bi;
00138         if (a > b+10)
00139         {
00140             // left end is brighter - standard orientation
00141             dir = 1;
00142             bi = 4;
00143         }
00144         else if (b > a+10)
00145         {
00146            // right end is brighter - reverse orientation
00147             dir = -1;
00148             bi = n - 5;
00149         }
00150         else if (dir != 0)
00151         {
00152             // We don't have enough contrast to detect the orientation
00153             // from this image, so either the image is too overexposed
00154             // or underexposed to be useful, or the entire sensor is in
00155             // light or darkness.  We'll assume the latter: the plunger
00156             // is blocking the whole window or isn't in the frame at
00157             // all.  We'll also assume that the exposure level is
00158             // similar to that in recent frames where we *did* detect
00159             // the direction.  This means that if the new exposure level
00160             // (which is about the same over the whole array) is less
00161             // than the recent midpoint, we must be entirely blocked
00162             // by the plunger, so it's all the way forward; if the
00163             // brightness is above the recent midpoint, we must be
00164             // entirely exposed, so the plunger is all the way back.
00165 
00166             // figure the average of the recent midpoint brightnesses            
00167             int sum = 0;
00168             for (int i = 0 ; i < countof(midpt) ; sum += midpt[i++]) ;
00169             sum /= countof(midpt);
00170             
00171             // Figure the average of our two ends.  We have very
00172             // little contrast overall, so we already know that the
00173             // two ends are about the same, but we can't expect the
00174             // lighting to be perfectly uniform.  Averaging the ends
00175             // will smooth out variations due to light source placement,
00176             // sensor noise, etc.
00177             a = (a+b)/2;
00178             
00179             // Check if we seem to be fully exposed or fully covered.
00180             pos = a < sum ? 0 : n;
00181             
00182             // stop here with a successful reading
00183             return true;
00184         }
00185         else
00186         {
00187             // We can't detect the orientation from this image, and 
00188             // we don't know it from previous images, so we have nothing
00189             // to go on.  Give up and return failure.
00190             return false;
00191         }
00192             
00193         // Figure the crossover brightness level for detecting the edge.
00194         // The midpoint is the brightness level halfway between the bright
00195         // and dark regions we detected at the opposite ends of the sensor.
00196         // A pixel darker than the midpoint is a candidate edge; to help
00197         // reject noise, we won't accept a candidate until we've confirmed
00198         // that most of the pixels just past it are also darker than the
00199         // midpoint, since the pixels in the bright region should cluster
00200         // near the higher level and the pixels in the shadow should
00201         // cluster near the lower level.
00202         int mid = (a+b)/2;
00203 
00204         // Scan from the bright side, looking for a pixel that drops below the
00205         // midpoint brightness.  To reduce false positives from noise, check to
00206         // see if the majority of the next few pixels stay in shadow - if not,
00207         // consider the dark pixel to be some kind of transient noise, and
00208         // continue looking for a more solid edge.
00209         for (int i = 5 ; i < n-5 ; ++i, bi += dir)
00210         {
00211             // check to see if we found a dark pixel
00212             if (pix[bi] < mid)
00213             {
00214                 // make sure we have a sustained edge
00215                 int ok = 0;
00216                 int bi2 = bi + dir;
00217                 for (int j = 0 ; j < 5 ; ++j, bi2 += dir)
00218                 {
00219                     // count this pixel if it's darker than the midpoint
00220                     if (pix[bi2] < mid)
00221                         ++ok;
00222                 }
00223                 
00224                 // if we're clearly in the dark section, we have our edge
00225                 if (ok > 3)
00226                 {
00227                     // Success.  Since we found an edge in this scan, save the
00228                     // midpoint brightness level in our history list, to help
00229                     // with any future frames with insufficient contrast.
00230                     midpt[midptIdx++] = mid;
00231                     midptIdx %= countof(midpt);
00232                     
00233                     // return the detected position
00234                     pos = i;
00235                     return true;
00236                 }
00237             }
00238         }
00239         
00240         // no edge found
00241         return false;
00242     }
00243 #endif // SCAN_METHOD 0
00244     
00245 #if SCAN_METHOD == 1
00246     // Scan method 1: meet in the middle.
00247     bool process(const uint8_t *pix, int n, int &pos, int& /*processResult*/)
00248     {        
00249         // Get the levels at each end
00250         int a = (int(pix[0]) + pix[1] + pix[2] + pix[3] + pix[4])/5;
00251         int b = (int(pix[n-1]) + pix[n-2] + pix[n-3] + pix[n-4] + pix[n-5])/5;
00252         
00253         // Figure the sensor orientation based on the relative brightness
00254         // levels at the opposite ends of the image.  We're going to scan
00255         // across the image from each side - 'bi' is the starting index
00256         // scanning from the bright side, 'di' is the starting index on
00257         // the dark side.  'binc' and 'dinc' are the pixel increments
00258         // for the respective indices.
00259         int bi, di;
00260         int binc, dinc;
00261         if (a > b+10)
00262         {
00263             // left end is brighter - standard orientation
00264             dir = 1;
00265             bi = 4, di = n - 5;
00266             binc = 1, dinc = -1;
00267         }
00268         else if (b > a+10)
00269         {
00270             // right end is brighter - reverse orientation
00271             dir = -1;
00272             bi = n - 5, di = 4;
00273             binc = -1, dinc = 1;
00274         }
00275         else
00276         {
00277             // can't detect direction
00278             return false;
00279         }
00280             
00281         // Figure the crossover brightness levels for detecting the edge.
00282         // The midpoint is the brightness level halfway between the bright
00283         // and dark regions we detected at the opposite ends of the sensor.
00284         // To find the edge, we'll look for a brightness level slightly 
00285         // *past* the midpoint, to help reject noise - the bright region
00286         // pixels should all cluster close to the higher level, and the
00287         // shadow region should all cluster close to the lower level.
00288         // We'll define "close" as within 1/3 of the gap between the 
00289         // extremes.
00290         int mid = (a+b)/2;
00291         int delta6 = abs(a-b)/6;
00292         int crossoverHi = mid + delta6;
00293         int crossoverLo = mid - delta6;
00294 
00295         // Scan inward from each end, looking for edges.  Each time we
00296         // find an edge from one direction, we'll see if the scan from the
00297         // other direction agrees.  If it does, we have a winner.  If they
00298         // don't agree, we must have found some noise in one direction or the
00299         // other, so switch sides and continue the scan.  On each continued
00300         // scan, if the stopping point from the last scan *was* noise, we'll
00301         // start seeing the expected non-edge pixels again as we move on,
00302         // so we'll effectively factor out the noise.  If what stopped us
00303         // *wasn't* noise but was a legitimate edge, we'll see that we're
00304         // still in the region that stopped us in the first place and just
00305         // stop again immediately.  
00306         //
00307         // The two sides have to converge, because they march relentlessly
00308         // towards each other until they cross.  Even if we have a totally
00309         // random bunch of pixels, the two indices will eventually meet and
00310         // we'll declare that to be the edge position.  The processing time
00311         // is linear in the pixel count - it's equivalent to one pass over
00312         // the pixels.  The measured time for 1280 pixels is about 1.3ms,
00313         // which is about half the DMA transfer time.  Our goal is always
00314         // to complete the processing in less than the DMA transfer time,
00315         // since that's as fast as we can possibly go with the physical
00316         // sensor.  Since our processing time is overlapped with the DMA
00317         // transfer, the overall frame rate is limited by the *longer* of
00318         // the two times, not the sum of the two times.  So as long as the
00319         // processing takes less time than the DMA transfer, we're not 
00320         // contributing at all to the overall frame rate limit - it's like
00321         // we're not even here.
00322         for (;;)
00323         {
00324             // scan from the bright side
00325             for (bi += binc ; bi >= 5 && bi <= n-6 ; bi += binc)
00326             {
00327                 // if we found a dark pixel, consider it to be an edge
00328                 if (pix[bi] < crossoverLo)
00329                     break;
00330             }
00331             
00332             // if we reached an extreme, return failure
00333             if (bi < 5 || bi > n-6)
00334                 return false;
00335             
00336             // if the two directions crossed, we have a winner
00337             if (binc > 0 ? bi >= di : bi <= di)
00338             {
00339                 pos = (dir == 1 ? bi : n - bi);
00340                 return true;
00341             }
00342             
00343             // they haven't converged yet, so scan from the dark side
00344             for (di += dinc ; di >= 5 && di <= n-6 ; di += dinc)
00345             {
00346                 // if we found a bright pixel, consider it to be an edge
00347                 if (pix[di] > crossoverHi)
00348                     break;
00349             }
00350             
00351             // if we reached an extreme, return failure
00352             if (di < 5 || di > n-6)
00353                 return false;
00354             
00355             // if they crossed now, we have a winner
00356             if (binc > 0 ? bi >= di : bi <= di)
00357             {
00358                 pos = (dir == 1 ? di : n - di);
00359                 return true;
00360             }
00361         }
00362     }
00363 #endif // SCAN_METHOD 1
00364 
00365 #if SCAN_METHOD == 2
00366     // Scan method 2: scan for steepest brightness slope.
00367     virtual bool process(const uint8_t *pix, int n, int &pos, int& /*processResult*/)
00368     {        
00369         // Get the levels at each end by averaging across several pixels.
00370         // Compute just the sums: don't bother dividing by the count, since 
00371         // the sums are equivalent to the averages as long as we know 
00372         // everything is multiplied by the number of samples.
00373         int a = (int(pix[0]) + pix[1] + pix[2] + pix[3] + pix[4]);
00374         int b = (int(pix[n-1]) + pix[n-2] + pix[n-3] + pix[n-4] + pix[n-5]);
00375         
00376         // Figure the sensor orientation based on the relative brightness
00377         // levels at the opposite ends of the image.  'dir' is +1 if the
00378         // bright end is at the start of the array (standard orientation),
00379         // -1 if the bright end is at the end of the array (reverse
00380         // orientation).  Note that a and b are sums over 5 pixels rather
00381         // than averages, so the +50 margin here is +10 per pixel.
00382         if (a > b + 50)
00383         {
00384             // left end is brighter - standard orientation
00385             dir = 1;
00386         }
00387         else if (b > a + 50)
00388         {
00389             // right end is brighter - reverse orientation
00390             dir = -1;
00391         }
00392         else
00393         {
00394             // can't determine direction
00395             return false;
00396         }
00397 
00398         // scan for the steepest edge using the assembly language 
00399         // implementation (since the C++ version is too slow)
00400         const uint8_t *edgep = 0;
00401         if (edgeScanMode2(pix, n, &edgep, dir))
00402         {
00403             // edgep has the pixel array pointer; convert it to an offset
00404             pos = edgep - pix;
00405             
00406             // if the sensor orientation is reversed, figure the index from
00407             // the other end of the array
00408             if (dir < 0)
00409                 pos = n - pos;
00410                 
00411             // success
00412             return true;
00413         }
00414         else
00415         {
00416             // no edge found
00417             return false;
00418         }
00419 
00420     }
00421 #endif // SCAN_METHOD 2
00422 
00423 #if SCAN_METHOD == 3
00424     // Scan method 3: total count of pixels brighter than the threshold.
00425     bool process(const uint8_t *pix, int n, int &pos, int& /*processResult*/)
00426     {        
00427         // Get the levels at each end
00428         int a = (int(pix[0]) + pix[1] + pix[2] + pix[3] + pix[4])/5;
00429         int b = (int(pix[n-1]) + pix[n-2] + pix[n-3] + pix[n-4] + pix[n-5])/5;
00430         
00431         // Figure the sensor orientation based on the relative brightness
00432         // levels at the opposite ends of the image.  'dir' is +1 if the
00433         // bright end is at the start of the array (standard orientation),
00434         // -1 if the bright end is at the end of the array (reverse
00435         // orientation).  We only need the direction here, since this
00436         // method scans the whole array unconditionally.
00437         if (a > b+10)
00438         {
00439             // left end is brighter - standard orientation
00440             dir = 1;
00441         }
00442         else if (b > a+10)
00443         {
00444            // right end is brighter - reverse orientation
00445             dir = -1;
00446         }
00447         else
00448         {
00449             // We can't detect the orientation from this image
00450             return false;
00451         }
00452             
00453         // Figure the threshold brightness for classifying pixels.  We
00454         // use the midpoint between the bright and dark levels detected
00455         // at the opposite ends of the sensor: pixels brighter than the
00456         // midpoint count towards the bright region, and pixels darker
00457         // than it count towards the shadow.  Noise will push a few
00458         // isolated pixels to the wrong side of the line, but it should
00459         // push roughly as many pixels up as down, so the stray pixels
00460         // should approximately cancel out in the overall count, leaving
00461         // the count close to the true size of the bright region.
00462         int mid = (a+b)/2;
00463 
00464         // Count pixels brighter than the brightness midpoint.  We assume
00465         // that all of the bright pixels are contiguously within the bright
00466         // region, so we simply have to count them up.  Even if we have a
00467         // few noisy pixels in the dark region above the midpoint, these
00468         // should on average be canceled out by anomalous dark pixels in
00469         // the bright region.
00470         int bcnt = 0;
00471         for (int i = 0 ; i < n ; ++i)
00472         {
00473             if (pix[i] > mid)
00474                 ++bcnt;
00475         }
00476         
00477         // The position is simply the size of the bright region
00478         pos = bcnt;
00479         if (dir < 0)
00480             pos = n - pos;
00481         return true;
00482     }
00483 #endif // SCAN_METHOD 3
00484     
00485     
00486 protected:
00487     // Sensor orientation.  +1 means that the "tip" end - which is always
00488     // the brighter end in our images - is at the 0th pixel in the array.
00489     // -1 means that the tip is at the nth pixel in the array.  0 means
00490     // that we haven't figured it out yet.  We automatically infer this
00491     // from the relative light levels at each end of the array when we
00492     // successfully find a shadow edge.  The reason we save the information
00493     // is that we might occasionally get frames that are fully in shadow
00494     // or fully in light, and we can't infer the direction from such
00495     // frames.  Saving the information from past frames gives us a fallback 
00496     // when we can't infer it from the current frame.  Note that we update
00497     // this each time we can infer the direction, so the device will adapt
00498     // on the fly even if the user repositions the sensor while the software
00499     // is running.
00500     virtual int getOrientation() const { return dir; }
00501     int dir;
00502        
00503     // History of midpoint brightness levels for the last few successful
00504     // scans.  This is a circular buffer that we write on each scan where
00505     // we successfully detect a shadow edge.  (It's circular, so we
00506     // effectively discard the oldest element whenever we write a new one.)
00507     //
00508     // We use the history in cases where we have too little contrast to
00509     // detect an edge.  In these cases, we assume that the entire sensor
00510     // is either in shadow or light, which can happen if the plunger is at
00511     // one extreme or the other such that the edge of its shadow is out of 
00512     // the frame.  (Ideally, the sensor should be positioned so that the
00513     // shadow edge is always in the frame, but it's not always possible
00514     // to do this given the constrained space within a cabinet.)  The
00515     // history helps us decide which case we have - all shadow or all
00516     // light - by letting us compare our average pixel level in this
00517     // frame to the range in recent frames.  This assumes that the
00518     // exposure level is fairly consistent from frame to frame, which 
00519     // is usually true because the sensor and light source are both
00520     // fixed in place.
00521     // 
00522     // We always try first to infer the bright and dark levels from the 
00523     // image, since this lets us adapt automatically to different exposure 
00524     // levels.  The exposure level can vary by integration time and the 
00525     // intensity and positioning of the light source, and we want
00526     // to be as flexible as we can about both.
00527     uint8_t midpt[10];
00528     uint8_t midptIdx;
00529     
00530 public:
00531 };
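For reference, the circular-buffer pattern the class uses for its midpoint history can be isolated into a standalone sketch. The `MidptHistory` name and the explicit zero-filled starting state are illustrative assumptions, not part of the header:

```cpp
#include <cstdint>
#include <cstddef>

// Standalone sketch of the midpoint-history pattern: a small circular
// buffer of recent midpoint brightness levels, plus the average used as
// the fallback threshold for low-contrast frames.
struct MidptHistory
{
    uint8_t midpt[10];   // circular buffer of recent midpoints
    uint8_t midptIdx;    // next write position

    MidptHistory() : midptIdx(0)
    {
        for (size_t i = 0 ; i < sizeof(midpt) ; ++i)
            midpt[i] = 0;
    }

    // record the midpoint from a successful scan, overwriting the oldest
    void add(uint8_t mid)
    {
        midpt[midptIdx++] = mid;
        midptIdx %= sizeof(midpt);
    }

    // average of the stored midpoints; note that this includes the
    // zero-filled slots until ten successful scans have occurred
    int average() const
    {
        int sum = 0;
        for (size_t i = 0 ; i < sizeof(midpt) ; ++i)
            sum += midpt[i];
        return sum / (int)sizeof(midpt);
    }
};
```

Averaging the zero-filled slots biases the fallback threshold low until the buffer fills, which mirrors how the class behaves when it starts from zeroed storage.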
00532 
00533 
00534 #endif /* _EDGESENSOR_H_ */