An I/O controller for virtual pinball machines: accelerometer nudge sensing, analog plunger input, button input encoding, LedWiz compatible output controls, and more.

Dependencies:   mbed FastIO FastPWM USBDevice

Fork of Pinscape_Controller by Mike R

/media/uploads/mjr/pinscape_no_background_small_L7Miwr6.jpg

This is Version 2 of the Pinscape Controller, an I/O controller for virtual pinball machines. (You can find the old version 1 software here.) Pinscape is software for the KL25Z that turns the board into a full-featured I/O controller for virtual pinball, with support for accelerometer-based nudging, a real plunger, button inputs, and feedback device control.

In case you haven't heard of the concept before, a "virtual pinball machine" is basically a video pinball simulator that's built into a real pinball machine body. A TV monitor goes in place of the pinball playfield, and a second TV goes in the backbox to serve as the "backglass" display. A third smaller monitor can serve as the "DMD" (the Dot Matrix Display used for scoring on newer machines), or you can even install a real pinball plasma DMD. A computer is hidden inside the cabinet, running pinball emulation software that displays a life-sized playfield on the main TV. The cabinet has all of the usual buttons, too, so it not only looks like the real thing, but plays like it too. That's a picture of my own machine to the right. On the outside, it's built exactly like a real arcade pinball machine, with the same overall dimensions and all of the standard pinball cabinet hardware.

A few small companies build and sell complete, finished virtual pinball machines, but I think it's more fun as a DIY project. If you have some basic wood-working skills and know your way around PCs, you can build one from scratch. The computer part is just an ordinary Windows PC, and all of the pinball emulation can be built out of free, open-source software. In that spirit, the Pinscape Controller is an open-source software/hardware project that offers a no-compromises, all-in-one control center for all of the unique input/output needs of a virtual pinball cabinet. If you've been thinking about building one of these, but you're not sure how to connect a plunger, flipper buttons, lights, nudge sensor, and whatever else you can think of, this project might be just what you're looking for.

You can find much more information about DIY Pin Cab building in general in the Virtual Cabinet Forum on vpforums.org. Also visit my Pinscape Resources page for more about this project and other virtual pinball projects I'm working on.

Downloads

  • Pinscape Release Builds: This page has download links for all of the Pinscape software. To get started, install and run the Pinscape Config Tool on your Windows computer. It will lead you through the steps for installing the Pinscape firmware on the KL25Z.
  • Config Tool Source Code. The complete C# source code for the config tool. You don't need this to run the tool, but it's available if you want to customize anything or see how it works inside.

Documentation

The new Version 2 Build Guide is now complete! This new version aims to be a complete guide to building a virtual pinball machine, including not only the Pinscape elements but all of the basics, from sourcing parts to building all of the hardware.

You can also refer to the original Hardware Build Guide (PDF), but that's out of date now, since it refers to the old version 1 software, which was rather different (especially when it comes to configuration).

System Requirements

The new config tool requires a fairly up-to-date Microsoft .NET installation. If you use Windows Update to keep your system current, you should be fine. A modern version of Internet Explorer (IE) is required, even if you don't use it as your main browser, because the config tool uses some system components that Microsoft packages into the IE install set. I test with IE11, so that's known to work. IE8 doesn't work. IE9 and 10 are unknown at this point.

The Windows requirements are only for the config tool. The firmware doesn't care about anything on the Windows side, so if you can make do without the config tool, you can use almost any Windows setup.

Main Features

Plunger: The Pinscape Controller started out as a "mechanical plunger" controller: a device for attaching a real pinball plunger to the video game software so that you could launch the ball the natural way. This is still, of course, a central feature of the project. The software supports several types of sensors: a high-resolution optical sensor (which works by essentially taking pictures of the plunger as it moves); a slide potentiometer (which determines the position via the changing electrical resistance in the pot); a quadrature sensor (which counts bars printed on a special guide rail that it moves along); and an IR distance sensor (which determines the position by sending pulses of light at the plunger and measuring the round-trip travel time). The Build Guide explains how to set up each type of sensor.

Nudging: The KL25Z (the little microcontroller that the software runs on) has a built-in accelerometer. The Pinscape software uses it to sense when you nudge the cabinet, and feeds the acceleration data to the pinball software on the PC. This turns physical nudges into virtual English on the ball. The accelerometer is quite sensitive and accurate, so we can measure the difference between little bumps and hard shoves, and everything in between. The result is natural and immersive.
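One small but important piece of accelerometer processing is auto-centering, so a slightly tilted cabinet still reads as "no nudge" at rest. Here's a standalone C++ sketch of the idea (the structure and the 0.001 smoothing rate are illustrative assumptions, not the firmware's actual filter):

```cpp
#include <cassert>

// Illustrative auto-centering sketch: track a slow rolling average of the
// raw accelerometer reading as the "at rest" zero point, and report nudges
// as deviations from it.  Slow drifts (cabinet tilt, sensor bias) get
// absorbed into the zero point; quick jolts show up as nudge values.
struct NudgeAxis {
    float zero = 0.0f;           // estimated at-rest reading

    // process one raw sample; returns the centered nudge value
    float sample(float raw)
    {
        // drift the zero point very slowly toward the raw reading
        zero += (raw - zero) * 0.001f;
        return raw - zero;
    }
};
```

With a filter like this, holding the cabinet at a constant tilt eventually reads as zero, while a sudden shove produces a clear deviation.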

Buttons: You can wire real pinball buttons to the KL25Z, and the software will translate the buttons into PC input. You have the option to map each button to a keyboard key or joystick button. You can wire up your flipper buttons, Magna Save buttons, Start button, coin slots, operator buttons, and whatever else you need.

Feedback devices: You can also attach "feedback devices" to the KL25Z. Feedback devices are things that create tactile, sound, and lighting effects in sync with the game action. The most popular PC pinball emulators know how to address a wide variety of these devices, and know how to match them to on-screen action in each virtual table. You just need an I/O controller that translates commands from the PC into electrical signals that turn the devices on and off. The Pinscape Controller can do that for you.

Expansion Boards

There are two main ways to run the Pinscape Controller: standalone, or using the "expansion boards".

In the basic standalone setup, you just need the KL25Z, plus whatever buttons, sensors, and feedback devices you want to attach to it. This mode lets you take advantage of everything the software can do, but for some features, you'll have to build some ad hoc external circuitry to interface external devices with the KL25Z. The Build Guide has detailed plans for exactly what you need to build.

The other option is the Pinscape Expansion Boards. The expansion boards are a companion project, which is also totally free and open-source, that provides Printed Circuit Board (PCB) layouts that are designed specifically to work with the Pinscape software. The PCB designs are in the widely used EAGLE format, which many PCB manufacturers can turn directly into physical boards for you. The expansion boards organize all of the external connections more neatly than on the standalone KL25Z, and they add all of the interface circuitry needed for all of the advanced software functions. The big thing they bring to the table is lots of high-power outputs. The boards provide a modular system that lets you add boards to add more outputs. If you opt for the basic core setup, you'll have enough outputs for all of the toys in a really well-equipped cabinet. If your ambitions go beyond merely well-equipped and run to the ridiculously extravagant, just add an extra board or two. The modular design also means that you can add to the system over time.

Expansion Board project page

Update notes

If you have a Pinscape V1 setup already installed, you should be able to switch to the new version pretty seamlessly. There are just a couple of things to be aware of.

First, the "configuration" procedure is completely different in the new version. Way better and way easier, but it's not what you're used to from V1. In V1, you had to edit the project source code and compile your own custom version of the program. No more! With V2, you simply install the standard, pre-compiled .bin file, and select options using the Pinscape Config Tool on Windows.

Second, if you're using the TSL1410R optical sensor for your plunger, there's a chance you'll need to boost your light source's brightness a little bit. The "shutter speed" is faster in this version, which means that it doesn't spend as much time collecting light per frame as before. The software actually does "auto exposure" adaptation on every frame, so the increased shutter speed really shouldn't bother it, but it does require a certain minimum level of contrast, which requires a certain minimal level of lighting. Check the plunger viewer in the setup tool if you have any problems; if the image looks totally dark, try increasing the light level to see if that helps.
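To make "a certain minimum level of contrast" concrete, here's a standalone C++ sketch of the kind of end-to-end test the edge-detection code applies to each frame: average a few pixels at each end of the image and require a minimum brightness gap between the ends (the 5-pixel window and the margin of 10 mirror the checks in the edge sensor source shown further down this page).

```cpp
#include <cassert>
#include <cstdint>

// Returns true if the image has enough bright/dark contrast between its
// two ends to locate the shadow edge.  Averaging 5 pixels per end smooths
// out sensor noise; the minGap margin rejects frames where the whole
// sensor is uniformly lit or uniformly dark.
bool hasUsableContrast(const uint8_t *pix, int n, int minGap = 10)
{
    int a = 0, b = 0;
    for (int i = 0 ; i < 5 ; ++i)
    {
        a += pix[i];             // pixels at the left end
        b += pix[n - 1 - i];     // pixels at the right end
    }
    a /= 5;
    b /= 5;
    return (a > b + minGap) || (b > a + minGap);
}
```

If this test fails frame after frame, the viewer image will look uniformly dark (or washed out), which is the symptom to watch for in the plunger viewer.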

New Features

V2 has numerous new features. Here are some of the highlights...

Dynamic configuration: as explained above, configuration is now handled through the Config Tool on Windows. It's no longer necessary to edit the source code or compile your own modified binary.

Improved plunger sensing: the software now reads the TSL1410R optical sensor about 15x faster than it did before. This allows reading the sensor at full resolution (400dpi), about 400 times per second. The faster frame rate makes a big difference in how accurately we can read the plunger position during the fast motion of a release, which allows for more precise position sensing and faster response. The differences aren't dramatic, since the sensing was already pretty good even with the slower V1 scan rate, but you might notice a little better precision in tricky skill shots.

Keyboard keys: button inputs can now be mapped to keyboard keys. The joystick button option is still available as well, of course. Keyboard keys have the advantage of being closer to universal for PC pinball software: some pinball software can be set up to take joystick input, but nearly all PC pinball emulators can take keyboard input, and nearly all of them use the same key mappings.

Local shift button: one physical button can be designated as the local shift button. This works like a Shift button on a keyboard, but with cabinet buttons. It allows each physical button on the cabinet to have two PC keys assigned, one normal and one shifted. Hold down the local shift button, then press another key, and the other key's shifted key mapping is sent to the PC. The shift button can have a regular key mapping of its own as well, so it can do double duty. The shift feature lets you access more functions without cluttering your cabinet with extra buttons. It's especially nice for less frequently used functions like adjusting the volume or activating night mode.
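The shift logic can be pictured with a small state sketch like the following standalone C++ (the names and the exact double-duty rule here are illustrative assumptions, not the firmware's actual implementation):

```cpp
#include <cassert>
#include <string>

// Illustrative local-shift-button state.  While the shift button is held,
// other buttons report their shifted mappings.  To let the shift button do
// double duty, its own key is reported only on release, and only if it
// wasn't used to shift another button in the meantime.
struct ShiftState {
    bool shiftHeld = false;
    bool shiftUsed = false;   // another button was pressed while shifted
};

// Report the key for an ordinary button press, honoring the shift state.
std::string onButtonPress(ShiftState &s, const std::string &normalKey,
                          const std::string &shiftedKey)
{
    if (s.shiftHeld) { s.shiftUsed = true; return shiftedKey; }
    return normalKey;
}

void onShiftPress(ShiftState &s) { s.shiftHeld = true; s.shiftUsed = false; }

// On release, report the shift button's own mapping, or nothing ("")
// if the button was used as a shift during this hold.
std::string onShiftRelease(ShiftState &s, const std::string &ownKey)
{
    s.shiftHeld = false;
    return s.shiftUsed ? std::string() : ownKey;
}
```

The point of the release-time decision is that a tap of the shift button alone still sends its regular mapping, while a shift-plus-button chord sends only the shifted key.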

Night mode: the output controller has a new "night mode" option, which lets you turn off all of your noisy devices with a single button, switch, or PC command. You can designate individual ports as noisy or not. Night mode only disables the noisemakers, so you still get the benefit of your flashers, button lights, and other quiet devices. This lets you play late into the night without disturbing your housemates or neighbors.
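Per-port night-mode gating amounts to a simple rule: a port's commanded level passes through unless the port is flagged noisy and night mode is engaged. A minimal sketch (illustrative data model, not the firmware's actual structures):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical output-port record: the commanded level from the PC plus
// the configuration flag marking the port as a noisemaker (shaker motor,
// knocker, gear motor, ...).
struct OutPort {
    uint8_t level;   // commanded brightness/intensity, 0-255
    bool    noisy;   // configured as a noisy device
};

// The level actually driven to the device: noisy ports are forced off in
// night mode; quiet ports (flashers, button lamps) keep their levels.
uint8_t effectiveLevel(const OutPort &p, bool nightMode)
{
    return (nightMode && p.noisy) ? 0 : p.level;
}
```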

Gamma correction: you can designate individual output ports for gamma correction. This adjusts the intensity level of an output to make it match the way the human eye perceives brightness, so that fades and color mixes look more natural in lighting devices. You can apply this to individual ports, so that it only affects ports that actually have lights of some kind attached.
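Gamma correction maps the linear command level through a power curve. Here's a minimal standalone sketch, assuming the common gamma value of 2.2 (the firmware's exact curve may differ), using a precomputed lookup table so the per-frame cost is just an array index:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// 256-entry lookup table mapping a linear 0-255 command level from the PC
// to a perceptually adjusted PWM duty level, so that fades and color mixes
// look smooth to the eye rather than bunching up at the bright end.
static uint8_t gammaTable[256];

void initGammaTable()
{
    for (int i = 0 ; i < 256 ; ++i)
    {
        // normalize to 0..1, apply the power curve, rescale to 0..255
        double corrected = std::pow(i / 255.0, 2.2) * 255.0;
        gammaTable[i] = uint8_t(std::lround(corrected));
    }
}

// look up the gamma-corrected output level for a commanded level
uint8_t gammaCorrect(uint8_t level) { return gammaTable[level]; }
```

Because the correction is a per-port option, ports driving contactors or motors can skip the table entirely and use the raw level.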

IR Remote Control: the controller software can transmit and/or receive IR remote control commands if you attach appropriate parts (an IR LED to send, an IR sensor chip to receive). This can be used to turn on your TV(s) when the system powers on, if they don't turn on automatically, and for any other functions you can think of that require IR send/receive capabilities. You can assign IR commands to cabinet buttons, so that pressing a button on your cabinet sends a remote control command from the attached IR LED, and you can have the controller generate virtual key presses on your PC in response to received IR commands. If you have the IR sensor attached, the system can use it to learn commands from your existing remotes.

Yet more USB fixes: I've been gradually finding and fixing USB bugs in the mbed library for months now. This version has all of the fixes of the last couple of releases, of course, plus some new ones. It also has a new "last resort" feature, since there always seems to be "just one more" USB bug. The last resort is that you can tell the device to automatically reboot itself if it loses the USB connection and can't restore it within a given time limit.

More Downloads

  • Custom VP builds: I created modified versions of Visual Pinball 9.9 and Physmod5 that you might want to use in combination with this controller. The modified versions have special handling for plunger calibration specific to the Pinscape Controller, as well as some enhancements to the nudge physics. If you're not using the plunger, you might still want it for the nudge improvements. The modified version also works with any other input controller, so you can get the enhanced nudging effects even if you're using a different plunger/nudge kit. The big change in the modified versions is a "filter" for accelerometer input that's designed to make the response to cabinet nudges more realistic. It also makes the response more subdued than in the standard VP, so it's not to everyone's taste. The downloads include both the updated executables and the source code changes, in case you want to merge the changes into your own custom version(s).

    Note! These features are now standard in the official VP releases, so you don't need my custom builds if you're using 9.9.1 or later and/or VP 10. I don't think there's any reason to use my versions instead of the latest official ones, and in fact I'd encourage you to use the official releases since they're more up to date, but I'm leaving my builds available just in case. In the official versions, look for the checkbox "Enable Nudge Filter" in the Keys preferences dialog. My custom versions don't include that checkbox; they just enable the filter unconditionally.
  • Output circuit shopping list: This is a saved shopping cart at mouser.com with the parts needed to build one copy of the high-power output circuit for the LedWiz emulator feature, for use with the standalone KL25Z (that is, without the expansion boards). The quantities in the cart are for one output channel, so if you want N outputs, simply multiply the quantities by N, with one exception: you only need one ULN2803 transistor array chip for every eight output circuits. If you're using the expansion boards, you won't need any of this, since the boards provide their own high-power outputs.
  • Cary Owens' optical sensor housing: A 3D-printable design for a housing/mounting bracket for the optical plunger sensor, designed by Cary Owens. This makes it easy to mount the sensor.
  • Lemming77's potentiometer mounting bracket and shooter rod connector: Sketchup designs for 3D-printable parts for mounting a slide potentiometer as the plunger sensor. These were designed for a particular slide potentiometer that used to be available from an Aliexpress.com seller but is no longer listed. You can probably use this design as a starting point for other similar devices; just check the dimensions before committing the design to plastic.
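For the shopping-list arithmetic above: every part scales linearly with the number of output channels except the ULN2803, which drives eight channels per chip. A trivial helper (purely illustrative) makes the rounding explicit:

```cpp
#include <cassert>

// Number of ULN2803 chips needed for a given output channel count:
// one chip handles eight channels, so round up.
int uln2803Count(int nOutputs)
{
    return (nOutputs + 7) / 8;
}
```

For example, a 24-output build needs 24 of each per-channel part but only three ULN2803 chips.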

Copyright and License

The Pinscape firmware is copyright 2014, 2021 by Michael J Roberts. It's released under an MIT open-source license. See License.

Warning to VirtuaPin Kit Owners

This software isn't designed as a replacement for the VirtuaPin plunger kit's firmware. If you bought the VirtuaPin kit, I recommend that you don't install this software. The VirtuaPin kit uses the same KL25Z microcontroller that Pinscape uses, but the rest of its hardware is different and incompatible. In particular, the Pinscape firmware doesn't include support for the IR proximity sensor used in the VirtuaPin plunger kit, so you won't be able to use your plunger device with the Pinscape firmware. In addition, the VirtuaPin setup uses a different set of GPIO pins for the button inputs from the Pinscape defaults, so if you do install the Pinscape firmware, you'll have to go into the Config Tool and reassign all of the buttons to match the VirtuaPin wiring.

Committer:
mjr
Date:
Sat Apr 18 19:08:55 2020 +0000
Revision:
109:310ac82cbbee
Parent:
104:6e06e0f4b476
TCD1103 DMA setup time padding to fix sporadic missed first pixel in transfer; fix TV ON so that the TV ON IR commands don't have to be grouped in the IR command first slots

mjr 82:4f6209cb5c33 1 // Edge position sensor - 2D optical
mjr 82:4f6209cb5c33 2 //
mjr 82:4f6209cb5c33 3 // This class implements our plunger sensor interface using edge
mjr 82:4f6209cb5c33 4 // detection on a 2D optical sensor. With this setup, a 2D optical
mjr 82:4f6209cb5c33 5 // sensor is placed close to the plunger, parallel to the rod, with a
mjr 82:4f6209cb5c33 6 // light source opposite the plunger. This makes the plunger cast a
mjr 82:4f6209cb5c33 7 // shadow on the sensor. We figure the plunger position by detecting
mjr 82:4f6209cb5c33 8 // where the shadow is, by finding the edge between the bright and
mjr 82:4f6209cb5c33 9 // dark regions in the image.
mjr 82:4f6209cb5c33 10 //
mjr 82:4f6209cb5c33 11 // This class is designed to work with any type of 2D optical sensor.
mjr 82:4f6209cb5c33 12 // We have subclasses for the TSL1410R and TSL1412S sensors, but other
mjr 82:4f6209cb5c33 13 // similar sensors could be supported as well by adding interfaces for
mjr 82:4f6209cb5c33 14 // the physical electronics. For the edge detection, we just need an
mjr 82:4f6209cb5c33 15 // array of pixel readings.
mjr 82:4f6209cb5c33 16
mjr 82:4f6209cb5c33 17 #ifndef _EDGESENSOR_H_
mjr 82:4f6209cb5c33 18 #define _EDGESENSOR_H_
mjr 82:4f6209cb5c33 19
mjr 82:4f6209cb5c33 20 #include "plunger.h"
mjr 82:4f6209cb5c33 21
mjr 82:4f6209cb5c33 22 // Scan method - select a method listed below. Method 2 (find the point
mjr 82:4f6209cb5c33 23 // with maximum brightness slop) seems to work the best so far.
mjr 82:4f6209cb5c33 24 #define SCAN_METHOD 2
mjr 82:4f6209cb5c33 25 //
mjr 82:4f6209cb5c33 26 //
mjr 82:4f6209cb5c33 27 // 0 = One-way scan. This is the original algorithim from the v1 software,
mjr 82:4f6209cb5c33 28 // with some slight improvements. We start at the brighter end of the
mjr 82:4f6209cb5c33 29 // sensor and scan until we find a pixel darker than a threshold level
mjr 82:4f6209cb5c33 30 // (halfway between the respective brightness levels at the bright and
mjr 82:4f6209cb5c33 31 // dark ends of the sensor). The original v1 algorithm simply stopped
mjr 82:4f6209cb5c33 32 // there. This version is slightly improved: it scans for a few more
mjr 82:4f6209cb5c33 33 // pixels to make sure that the majority of the adjacent pixels are
mjr 82:4f6209cb5c33 34 // also in shadow, to help reject false edges from sensor noise or
mjr 82:4f6209cb5c33 35 // optical shadows that make one pixel read darker than it should.
mjr 82:4f6209cb5c33 36 //
mjr 82:4f6209cb5c33 37 // 1 = Meet in the middle. We start two scans concurrently, one from
mjr 82:4f6209cb5c33 38 // the dark end of the sensor and one from the bright end. For
mjr 82:4f6209cb5c33 39 // the scan from the dark end, we stop when we reach a pixel that's
mjr 82:4f6209cb5c33 40 // brighter than the average dark level by 2/3 of the gap between
mjr 82:4f6209cb5c33 41 // the dark and bright levels. For the scan from the bright end,
mjr 82:4f6209cb5c33 42 // we stop when we reach a pixel that's darker by 2/3 of the gap.
mjr 82:4f6209cb5c33 43 // Each time we stop, we look to see if the other scan has reached
mjr 82:4f6209cb5c33 44 // the same place. If so, the two scans converged on a common
mjr 82:4f6209cb5c33 45 // point, which we take to be the edge between the dark and bright
mjr 82:4f6209cb5c33 46 // sections. If the two scans haven't converged yet, we switch to
mjr 82:4f6209cb5c33 47 // the other scan and continue it. We repeat this process until
mjr 82:4f6209cb5c33 48 // the two converge. The benefit of this approach vs the older
mjr 82:4f6209cb5c33 49 // one-way scan is that it's much more tolerant of noise, and the
mjr 82:4f6209cb5c33 50 // degree of noise tolerance is dictated by how noisy the signal
mjr 82:4f6209cb5c33 51 // actually is. The dynamic degree of tolerance is good because
mjr 82:4f6209cb5c33 52 // higher noise tolerance tends to result in reduced resolution.
mjr 82:4f6209cb5c33 53 //
mjr 82:4f6209cb5c33 54 // 2 = Maximum dL/ds (highest first derivative of luminance change per
mjr 82:4f6209cb5c33 55 // distance, or put another way, the steepest brightness slope).
mjr 82:4f6209cb5c33 56 // This scans the whole image and looks for the position with the
mjr 82:4f6209cb5c33 57 // highest dL/ds value. We average over a window of several pixels,
mjr 82:4f6209cb5c33 58 // to smooth out pixel noise; this should avoid treating a single
mjr 82:4f6209cb5c33 59 // spiky pixel as having a steep slope adjacent to it. The advantage
mjr 82:4f6209cb5c33 60 // in this approach is that it looks for the strongest edge after
mjr 82:4f6209cb5c33 61 // considering all edges across the whole image, which should make
mjr 82:4f6209cb5c33 62 // it less likely to be fooled by isolated noise that creates a
mjr 82:4f6209cb5c33 63 // single false edge. Algorithms 1 and 2 have basically fixed
mjr 82:4f6209cb5c33 64 // thresholds for what constitutes an edge, but this approach is
mjr 82:4f6209cb5c33 65 // more dynamic in that it evaluates each edge-like region and picks
mjr 82:4f6209cb5c33 66 // the best one. The width of the edge is still fixed, since that's
mjr 82:4f6209cb5c33 67 // determined by the pixel window. But that should be okay since we
mjr 82:4f6209cb5c33 68 // only deal with one type of image. It should be possible to adjust
mjr 82:4f6209cb5c33 69 // the light source and sensor position to always yield an image with
mjr 82:4f6209cb5c33 70 // a narrow enough edge region.
mjr 82:4f6209cb5c33 71 //
mjr 82:4f6209cb5c33 72 // The max dL/ds method is the most compute-intensive method, because
mjr 82:4f6209cb5c33 73 // of the pixel window averaging. An assembly language implemementation
mjr 82:4f6209cb5c33 74 // seems to be needed to make it fast enough on the KL25Z. This method
mjr 82:4f6209cb5c33 75 // has a fixed run time because it always does exactly one pass over
mjr 82:4f6209cb5c33 76 // the whole pixel array.
mjr 82:4f6209cb5c33 77 //
mjr 82:4f6209cb5c33 78 // 3 = Total bright pixel count. This simply adds up the total number
mjr 82:4f6209cb5c33 79 // of pixels above a threshold brightness, without worrying about
mjr 82:4f6209cb5c33 80 // whether they're contiguous with other pixels on the same side
mjr 82:4f6209cb5c33 81 // of the edge. Since we know there's always exactly one edge,
mjr 82:4f6209cb5c33 82 // all of the dark pixels should in principle be on one side, and
mjr 82:4f6209cb5c33 83 // all of the light pixels should be on the other side. There
mjr 82:4f6209cb5c33 84 // might be some noise that creates isolated pixels that don't
mjr 82:4f6209cb5c33 85 // match their neighbors, but these should average out. The virtue
mjr 82:4f6209cb5c33 86 // of this approach (apart from its simplicity) is that it should
mjr 82:4f6209cb5c33 87 // be immune to false edges - local spikes due to noise - that
mjr 82:4f6209cb5c33 88 // might fool the algorithms that explicitly look for edges. In
mjr 82:4f6209cb5c33 89 // practice, though, it seems to be even more sensitive to noise
mjr 82:4f6209cb5c33 90 // than the other algorithms, probably because it treats every pixel
mjr 82:4f6209cb5c33 91 // as independent and thus doesn't have any sort of inherent noise
mjr 82:4f6209cb5c33 92 // reduction from considering relationships among pixels.
mjr 82:4f6209cb5c33 93 //
mjr 82:4f6209cb5c33 94
mjr 82:4f6209cb5c33 95 // assembler routine to scan for an edge using "mode 2" (maximum slope)
mjr 104:6e06e0f4b476 96 extern "C" int edgeScanMode2(const uint8_t *pix, int npix, const uint8_t **edgePtr, int dir);
mjr 82:4f6209cb5c33 97
mjr 82:4f6209cb5c33 98 // PlungerSensor interface implementation for edge detection setups.
mjr 82:4f6209cb5c33 99 // This is a generic base class for image-based sensors where we detect
mjr 82:4f6209cb5c33 100 // the plunger position by finding the edge of the shadow it casts on
mjr 82:4f6209cb5c33 101 // the detector.
mjr 87:8d35c74403af 102 //
mjr 87:8d35c74403af 103 // Edge sensors use the image pixel span as the native position scale,
mjr 87:8d35c74403af 104 // since a position reading is the pixel offset of the shadow edge.
mjr 87:8d35c74403af 105 class PlungerSensorEdgePos: public PlungerSensorImage<int>
mjr 82:4f6209cb5c33 106 {
mjr 82:4f6209cb5c33 107 public:
mjr 104:6e06e0f4b476 108 PlungerSensorEdgePos(PlungerSensorImageInterface &sensor, int npix)
mjr 104:6e06e0f4b476 109 : PlungerSensorImage(sensor, npix, npix - 1)
mjr 82:4f6209cb5c33 110 {
mjr 82:4f6209cb5c33 111 }
mjr 82:4f6209cb5c33 112
mjr 82:4f6209cb5c33 113 // Process an image - scan for the shadow edge to determine the plunger
mjr 82:4f6209cb5c33 114 // position.
mjr 82:4f6209cb5c33 115 //
mjr 82:4f6209cb5c33 116 // If we detect the plunger position, we set 'pos' to the pixel location
mjr 82:4f6209cb5c33 117 // of the edge and return true; otherwise we return false. The 'pos'
mjr 82:4f6209cb5c33 118 // value returned, if any, is adjusted for sensor orientation so that
mjr 82:4f6209cb5c33 119 // it reflects the logical plunger position (i.e., distance retracted,
mjr 82:4f6209cb5c33 120 // where 0 is always the fully forward position and 'n' is fully
mjr 82:4f6209cb5c33 121 // retracted).
mjr 82:4f6209cb5c33 122
mjr 82:4f6209cb5c33 123 #if SCAN_METHOD == 0
mjr 82:4f6209cb5c33 124 // Scan method 0: one-way scan; original method used in v1 firmware.
mjr 87:8d35c74403af 125 bool process(const uint8_t *pix, int n, int &pos, int& /*processResult*/)
mjr 82:4f6209cb5c33 126 {
mjr 82:4f6209cb5c33 127 // Get the levels at each end
mjr 82:4f6209cb5c33 128 int a = (int(pix[0]) + pix[1] + pix[2] + pix[3] + pix[4])/5;
mjr 82:4f6209cb5c33 129 int b = (int(pix[n-1]) + pix[n-2] + pix[n-3] + pix[n-4] + pix[n-5])/5;
mjr 82:4f6209cb5c33 130
mjr 82:4f6209cb5c33 131 // Figure the sensor orientation based on the relative brightness
mjr 82:4f6209cb5c33 132 // levels at the opposite ends of the image. We're going to scan
mjr 82:4f6209cb5c33 133 // across the image from each side - 'bi' is the starting index
mjr 82:4f6209cb5c33 134 // scanning from the bright side, 'di' is the starting index on
mjr 82:4f6209cb5c33 135 // the dark side. 'binc' and 'dinc' are the pixel increments
mjr 82:4f6209cb5c33 136 // for the respective indices.
mjr 82:4f6209cb5c33 137 int bi;
mjr 82:4f6209cb5c33 138 if (a > b+10)
mjr 82:4f6209cb5c33 139 {
mjr 82:4f6209cb5c33 140 // left end is brighter - standard orientation
mjr 82:4f6209cb5c33 141 dir = 1;
mjr 82:4f6209cb5c33 142 bi = 4;
mjr 82:4f6209cb5c33 143 }
mjr 82:4f6209cb5c33 144 else if (b > a+10)
mjr 82:4f6209cb5c33 145 {
mjr 82:4f6209cb5c33 146 // right end is brighter - reverse orientation
mjr 82:4f6209cb5c33 147 dir = -1;
mjr 82:4f6209cb5c33 148 bi = n - 5;
mjr 82:4f6209cb5c33 149 }
mjr 82:4f6209cb5c33 150 else if (dir != 0)
mjr 82:4f6209cb5c33 151 {
mjr 82:4f6209cb5c33 152 // We don't have enough contrast to detect the orientation
mjr 82:4f6209cb5c33 153 // from this image, so either the image is too overexposed
mjr 82:4f6209cb5c33 154 // or underexposed to be useful, or the entire sensor is in
mjr 82:4f6209cb5c33 155 // light or darkness. We'll assume the latter: the plunger
mjr 82:4f6209cb5c33 156 // is blocking the whole window or isn't in the frame at
mjr 82:4f6209cb5c33 157 // all. We'll also assume that the exposure level is
mjr 82:4f6209cb5c33 158 // similar to that in recent frames where we *did* detect
mjr 82:4f6209cb5c33 159 // the direction. This means that if the new exposure level
mjr 82:4f6209cb5c33 160 // (which is about the same over the whole array) is less
mjr 82:4f6209cb5c33 161 // than the recent midpoint, we must be entirely blocked
mjr 82:4f6209cb5c33 162 // by the plunger, so it's all the way forward; if the
mjr 82:4f6209cb5c33 163 // brightness is above the recent midpoint, we must be
mjr 82:4f6209cb5c33 164 // entirely exposed, so the plunger is all the way back.
mjr 82:4f6209cb5c33 165
mjr 82:4f6209cb5c33 166 // figure the average of the recent midpoint brightnesses
mjr 82:4f6209cb5c33 167 int sum = 0;
mjr 82:4f6209cb5c33 168 for (int i = 0 ; i < countof(midpt) ; sum += midpt[i++]) ;
mjr 82:4f6209cb5c33 169 sum /= countof(midpt);
mjr 82:4f6209cb5c33 170
mjr 82:4f6209cb5c33 171 // Figure the average of our two ends. We have very
mjr 82:4f6209cb5c33 172 // little contrast overall, so we already know that the
mjr 82:4f6209cb5c33 173 // two ends are about the same, but we can't expect the
mjr 82:4f6209cb5c33 174 // lighting to be perfectly uniform. Averaging the ends
mjr 82:4f6209cb5c33 175 // will smooth out variations due to light source placement,
mjr 82:4f6209cb5c33 176 // sensor noise, etc.
mjr 82:4f6209cb5c33 177 a = (a+b)/2;
mjr 82:4f6209cb5c33 178
mjr 104:6e06e0f4b476 179 // Check if we seem to be fully exposed or fully covered.
mjr 82:4f6209cb5c33 180 pos = a < sum ? 0 : n;
mjr 104:6e06e0f4b476 181
            // stop here with a successful reading
            return true;
        }
        else
        {
            // We can't detect the orientation from this image, and
            // we don't know it from previous images, so we have nothing
            // to go on.  Give up and return failure.
            return false;
        }

        // Figure the midpoint brightness level - halfway between the
        // bright and dark levels we detected at the opposite ends of
        // the sensor.  We'll detect the edge as the point where the
        // pixels drop below this midpoint.
        int mid = (a+b)/2;

        // Scan from the bright side, looking for a pixel that drops below
        // the midpoint brightness.  To reduce false positives from noise,
        // check to see if the majority of the next few pixels stay in
        // shadow - if not, consider the dark pixel to be some kind of
        // transient noise, and continue looking for a more solid edge.
        for (int i = 5 ; i < n-5 ; ++i, bi += dir)
        {
            // check to see if we found a dark pixel
            if (pix[bi] < mid)
            {
                // make sure we have a sustained edge
                int ok = 0;
                int bi2 = bi + dir;
                for (int j = 0 ; j < 5 ; ++j, bi2 += dir)
                {
                    // count this pixel if it's darker than the midpoint
                    if (pix[bi2] < mid)
                        ++ok;
                }

                // if we're clearly in the dark section, we have our edge
                if (ok > 3)
                {
                    // Success.  Since we found an edge in this scan, save
                    // the midpoint brightness level in our history list, to
                    // help with any future frames with insufficient contrast.
                    midpt[midptIdx++] = mid;
                    midptIdx %= countof(midpt);

                    // return the detected position
                    pos = i;
                    return true;
                }
            }
        }

        // no edge found
        return false;
    }
#endif // SCAN_METHOD 0

#if SCAN_METHOD == 1
    // Scan method 1: meet in the middle.
    bool process(const uint8_t *pix, int n, int &pos, int& /*processResult*/)
    {
        // Get the levels at each end
        int a = (int(pix[0]) + pix[1] + pix[2] + pix[3] + pix[4])/5;
        int b = (int(pix[n-1]) + pix[n-2] + pix[n-3] + pix[n-4] + pix[n-5])/5;

        // Figure the sensor orientation based on the relative brightness
        // levels at the opposite ends of the image.  We're going to scan
        // across the image from each side - 'bi' is the starting index
        // scanning from the bright side, 'di' is the starting index on
        // the dark side.  'binc' and 'dinc' are the pixel increments
        // for the respective indices.
        int bi, di;
        int binc, dinc;
        if (a > b+10)
        {
            // left end is brighter - standard orientation
            dir = 1;
            bi = 4, di = n - 5;
            binc = 1, dinc = -1;
        }
        else if (b > a+10)
        {
            // right end is brighter - reverse orientation
            dir = -1;
            bi = n - 5, di = 4;
            binc = -1, dinc = 1;
        }
        else
        {
            // can't detect direction
            return false;
        }

        // Figure the crossover brightness levels for detecting the edge.
        // The midpoint is the brightness level halfway between the bright
        // and dark regions we detected at the opposite ends of the sensor.
        // To find the edge, we'll look for a brightness level slightly
        // *past* the midpoint, to help reject noise - the bright region
        // pixels should all cluster close to the higher level, and the
        // shadow region should all cluster close to the lower level.
        // We'll define "close" as within 1/3 of the gap between the
        // extremes.
        int mid = (a+b)/2;
        int delta6 = abs(a-b)/6;
        int crossoverHi = mid + delta6;
        int crossoverLo = mid - delta6;

        // Scan inward from each end, looking for edges.  Each time we
        // find an edge from one direction, we'll see if the scan from the
        // other direction agrees.  If it does, we have a winner.  If they
        // don't agree, we must have found some noise in one direction or the
        // other, so switch sides and continue the scan.  On each continued
        // scan, if the stopping point from the last scan *was* noise, we'll
        // start seeing the expected non-edge pixels again as we move on,
        // so we'll effectively factor out the noise.  If what stopped us
        // *wasn't* noise but was a legitimate edge, we'll see that we're
        // still in the region that stopped us in the first place and just
        // stop again immediately.
        //
        // The two sides have to converge, because they march relentlessly
        // towards each other until they cross.  Even if we have a totally
        // random bunch of pixels, the two indices will eventually meet and
        // we'll declare that to be the edge position.  The processing time
        // is linear in the pixel count - it's equivalent to one pass over
        // the pixels.  The measured time for 1280 pixels is about 1.3ms,
        // which is about half the DMA transfer time.  Our goal is always
        // to complete the processing in less than the DMA transfer time,
        // since that's as fast as we can possibly go with the physical
        // sensor.  Since our processing time is overlapped with the DMA
        // transfer, the overall frame rate is limited by the *longer* of
        // the two times, not the sum of the two times.  So as long as the
        // processing takes less time than the DMA transfer, we're not
        // contributing at all to the overall frame rate limit - it's like
        // we're not even here.
        for (;;)
        {
            // scan from the bright side
            for (bi += binc ; bi >= 5 && bi <= n-6 ; bi += binc)
            {
                // if we found a dark pixel, consider it to be an edge
                if (pix[bi] < crossoverLo)
                    break;
            }

            // if we reached an extreme, return failure
            if (bi < 5 || bi > n-6)
                return false;

            // if the two directions crossed, we have a winner
            if (binc > 0 ? bi >= di : bi <= di)
            {
                pos = (dir == 1 ? bi : n - bi);
                return true;
            }

            // they haven't converged yet, so scan from the dark side
            for (di += dinc ; di >= 5 && di <= n-6 ; di += dinc)
            {
                // if we found a bright pixel, consider it to be an edge
                if (pix[di] > crossoverHi)
                    break;
            }

            // if we reached an extreme, return failure
            if (di < 5 || di > n-6)
                return false;

            // if they crossed now, we have a winner
            if (binc > 0 ? bi >= di : bi <= di)
            {
                pos = (dir == 1 ? di : n - di);
                return true;
            }
        }
    }
#endif // SCAN_METHOD 1

#if SCAN_METHOD == 2
    // Scan method 2: scan for steepest brightness slope.
    virtual bool process(const uint8_t *pix, int n, int &pos, int& /*processResult*/)
    {
        // Get the levels at each end by averaging across several pixels.
        // Compute just the sums: don't bother dividing by the count, since
        // the sums are equivalent to the averages as long as we know
        // everything is multiplied by the number of samples.
        int a = (int(pix[0]) + pix[1] + pix[2] + pix[3] + pix[4]);
        int b = (int(pix[n-1]) + pix[n-2] + pix[n-3] + pix[n-4] + pix[n-5]);

        // Figure the sensor orientation based on the relative brightness
        // levels at the opposite ends of the image.
        if (a > b + 50)
        {
            // left end is brighter - standard orientation
            dir = 1;
        }
        else if (b > a + 50)
        {
            // right end is brighter - reverse orientation
            dir = -1;
        }
        else
        {
            // can't determine direction
            return false;
        }

        // scan for the steepest edge using the assembly language
        // implementation (since the C++ version is too slow)
        const uint8_t *edgep = 0;
        if (edgeScanMode2(pix, n, &edgep, dir))
        {
            // edgep has the pixel array pointer; convert it to an offset
            pos = edgep - pix;

            // if the sensor orientation is reversed, figure the index from
            // the other end of the array
            if (dir < 0)
                pos = n - pos;

            // success
            return true;
        }
        else
        {
            // no edge found
            return false;
        }
    }
#endif // SCAN_METHOD 2

#if SCAN_METHOD == 3
    // Scan method 3: count the bright pixels.  (This is the original
    // method used in the v1 firmware.)
    bool process(const uint8_t *pix, int n, int &pos, int& /*processResult*/)
    {
        // Get the levels at each end
        int a = (int(pix[0]) + pix[1] + pix[2] + pix[3] + pix[4])/5;
        int b = (int(pix[n-1]) + pix[n-2] + pix[n-3] + pix[n-4] + pix[n-5])/5;

        // Figure the sensor orientation based on the relative brightness
        // levels at the opposite ends of the image.
        if (a > b+10)
        {
            // left end is brighter - standard orientation
            dir = 1;
        }
        else if (b > a+10)
        {
            // right end is brighter - reverse orientation
            dir = -1;
        }
        else
        {
            // We can't detect the orientation from this image
            return false;
        }

        // Figure the midpoint brightness level - halfway between the
        // bright and dark levels we detected at the opposite ends of
        // the sensor - for classifying each pixel as bright or dark.
        int mid = (a+b)/2;

        // Count pixels brighter than the brightness midpoint.  We assume
        // that all of the bright pixels are contiguously within the bright
        // region, so we simply have to count them up.  Even if we have a
        // few noisy pixels in the dark region above the midpoint, these
        // should on average be canceled out by anomalous dark pixels in
        // the bright region.
        int bcnt = 0;
        for (int i = 0 ; i < n ; ++i)
        {
            if (pix[i] > mid)
                ++bcnt;
        }

        // The position is simply the size of the bright region
        pos = bcnt;
        if (dir < 1)
            pos = n - pos;
        return true;
    }
#endif // SCAN_METHOD 3


protected:
    // Sensor orientation.  +1 means that the "tip" end - which is always
    // the brighter end in our images - is at the 0th pixel in the array.
    // -1 means that the tip is at the nth pixel in the array.  0 means
    // that we haven't figured it out yet.  We automatically infer this
    // from the relative light levels at each end of the array when we
    // successfully find a shadow edge.  The reason we save the information
    // is that we might occasionally get frames that are fully in shadow
    // or fully in light, and we can't infer the direction from such
    // frames.  Saving the information from past frames gives us a fallback
    // when we can't infer it from the current frame.  Note that we update
    // this each time we can infer the direction, so the device will adapt
    // on the fly even if the user repositions the sensor while the software
    // is running.
    virtual int getOrientation() const { return dir; }
    int dir;

    // History of midpoint brightness levels for the last few successful
    // scans.  This is a circular buffer that we write on each scan where
    // we successfully detect a shadow edge.  (It's circular, so we
    // effectively discard the oldest element whenever we write a new one.)
    //
    // We use the history in cases where we have too little contrast to
    // detect an edge.  In these cases, we assume that the entire sensor
    // is either in shadow or light, which can happen if the plunger is at
    // one extreme or the other, such that the edge of its shadow is out of
    // the frame.  (Ideally, the sensor should be positioned so that the
    // shadow edge is always in the frame, but it's not always possible
    // to do this given the constrained space within a cabinet.)  The
    // history helps us decide which case we have - all shadow or all
    // light - by letting us compare our average pixel level in this
    // frame to the range in recent frames.  This assumes that the
    // exposure level is fairly consistent from frame to frame, which
    // is usually true because the sensor and light source are both
    // fixed in place.
    //
    // We always try first to infer the bright and dark levels from the
    // image, since this lets us adapt automatically to different exposure
    // levels.  The exposure level can vary by integration time and the
    // intensity and positioning of the light source, and we want
    // to be as flexible as we can about both.
    uint8_t midpt[10];
    uint8_t midptIdx;

public:
};


#endif /* _EDGESENSOR_H_ */