opencv on mbed

Dependencies: mbed

Committer: joeverbout
Date: Thu Mar 31 21:16:38 2016 +0000
Revision: 0:ea44dc9ed014
OpenCV on mbed attempt

/*M///////////////////////////////////////////////////////////////////////////////////////
//
//  IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
//  By downloading, copying, installing or using the software you agree to this license.
//  If you do not agree to this license, do not download, install,
//  copy or use the software.
//
//
//                          License Agreement
//                For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Copyright (C) 2013, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
//   * Redistribution's of source code must retain the above copyright notice,
//     this list of conditions and the following disclaimer.
//
//   * Redistribution's in binary form must reproduce the above copyright notice,
//     this list of conditions and the following disclaimer in the documentation
//     and/or other materials provided with the distribution.
//
//   * The name of the copyright holders may not be used to endorse or promote products
//     derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/

#ifndef __OPENCV_OBJDETECT_HPP__
#define __OPENCV_OBJDETECT_HPP__

#include "opencv2/core.hpp"
/**
@defgroup objdetect Object Detection

Haar Feature-based Cascade Classifier for Object Detection
----------------------------------------------------------

The object detector described below was initially proposed by Paul Viola @cite Viola01 and
improved by Rainer Lienhart @cite Lienhart02 .

First, a classifier (namely a *cascade of boosted classifiers working with haar-like features*) is
trained with a few hundred sample views of a particular object (e.g., a face or a car), called
positive examples, that are scaled to the same size (say, 20x20), and with negative examples,
arbitrary images of the same size.

After a classifier is trained, it can be applied to a region of interest (of the same size as used
during the training) in an input image. The classifier outputs a "1" if the region is likely to show
the object (e.g., a face or a car), and "0" otherwise. To search for the object in the whole image,
one can move the search window across the image and check every location using the classifier. The
classifier is designed so that it can easily be "resized" to find objects of interest at different
sizes, which is more efficient than resizing the image itself. So, to find an object of unknown size
in the image, the scan procedure should be repeated several times at different scales.
The word "cascade" in the classifier name means that the resultant classifier consists of several
simpler classifiers (*stages*) that are applied subsequently to a region of interest until at some
stage the candidate is rejected or all the stages are passed. The word "boosted" means that the
classifiers at every stage of the cascade are complex themselves and are built out of basic
classifiers using one of four boosting techniques (weighted voting). Currently Discrete
Adaboost, Real Adaboost, Gentle Adaboost and Logitboost are supported. The basic classifiers are
decision-tree classifiers with at least 2 leaves. Haar-like features are the input to the basic
classifiers, and are calculated as described below. The current algorithm uses the following
Haar-like features:

![image](pics/haarfeatures.png)

The feature used in a particular classifier is specified by its shape (1a, 2b etc.), its position
within the region of interest, and its scale (this scale is not the same as the scale used at the
detection stage, though these two scales are multiplied). For example, in the case of the third line
feature (2c) the response is calculated as the difference between the sum of image pixels under the
rectangle covering the whole feature (including the two white stripes and the black stripe in the
middle) and the sum of the image pixels under the black stripe multiplied by 3, in order to
compensate for the difference in the size of the areas. The sums of pixel values over rectangular
regions are calculated rapidly using integral images (see below and the integral description).
To see the object detector at work, have a look at the facedetect demo:
<https://github.com/Itseez/opencv/tree/master/samples/cpp/dbt_face_detection.cpp>

The following reference is for the detection part only. There is a separate application called
opencv_traincascade that can train a cascade of boosted classifiers from a set of samples.

@note In the new C++ interface it is also possible to use LBP (local binary pattern) features in
addition to Haar-like features.

[Viola01] Paul Viola and Michael J. Jones. Rapid Object Detection using a Boosted Cascade of
Simple Features. IEEE CVPR, 2001. The paper is available online at
<http://research.microsoft.com/en-us/um/people/viola/Pubs/Detect/violaJones_CVPR2001.pdf>

@{
    @defgroup objdetect_c C API
@}
*/
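The integral-image trick mentioned above (constant-time rectangle sums, hence fast Haar-like
responses) can be sketched in a self-contained way. This is an illustrative, stdlib-only sketch of
the technique, not the OpenCV implementation; `Integral` and `feature2c` are names invented here:

```cpp
#include <cstddef>
#include <vector>

// Integral image: at(y, x) stores the sum of img over the rectangle [0,y) x [0,x),
// so any rectangular sum afterwards costs only four table lookups.
struct Integral {
    int w, h;
    std::vector<long long> ii;                       // (h+1) x (w+1) table
    explicit Integral(const std::vector<std::vector<int> >& img)
        : w((int)img[0].size()), h((int)img.size()),
          ii((std::size_t)(h + 1) * (w + 1), 0)
    {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                at(y + 1, x + 1) = img[y][x] + at(y, x + 1) + at(y + 1, x) - at(y, x);
    }
    long long& at(int y, int x) { return ii[(std::size_t)y * (w + 1) + x]; }
    long long sum(int y0, int x0, int y1, int x1)    // sum over [y0,y1) x [x0,x1)
    { return at(y1, x1) - at(y0, x1) - at(y1, x0) + at(y0, x0); }
};

// Response of a 2c-style feature over [y0,y1) x [x0,x1): the sum under the whole
// three-stripe rectangle minus 3x the sum under the middle stripe, so the two
// white stripes and the (triple-weighted) black stripe cover equal total area.
long long feature2c(Integral& I, int y0, int x0, int y1, int x1)
{
    int stripe = (x1 - x0) / 3;
    return I.sum(y0, x0, y1, x1) - 3 * I.sum(y0, x0 + stripe, y1, x0 + 2 * stripe);
}
```

On a uniform image the weighting cancels exactly and the response is zero; a darker middle stripe
drives the response positive, which is what the classifier thresholds on.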

typedef struct CvHaarClassifierCascade CvHaarClassifierCascade;

namespace cv
{

//! @addtogroup objdetect
//! @{

///////////////////////////// Object Detection ////////////////////////////

//! class for grouping object candidates detected by Cascade Classifier, HOG etc.
//! an instance of the class is to be passed to cv::partition (see cxoperations.hpp)
class CV_EXPORTS SimilarRects
{
public:
    SimilarRects(double _eps) : eps(_eps) {}
    inline bool operator()(const Rect& r1, const Rect& r2) const
    {
        double delta = eps*(std::min(r1.width, r2.width) + std::min(r1.height, r2.height))*0.5;
        return std::abs(r1.x - r2.x) <= delta &&
               std::abs(r1.y - r2.y) <= delta &&
               std::abs(r1.x + r1.width - r2.x - r2.width) <= delta &&
               std::abs(r1.y + r1.height - r2.y - r2.height) <= delta;
    }
    double eps;
};

/** @brief Groups the object candidate rectangles.

@param rectList Input/output vector of rectangles. On output, it contains the retained and grouped
rectangles. (The Python list is not modified in place.)
@param groupThreshold Minimum possible number of rectangles in a group minus 1. A cluster is
retained only if it contains more than groupThreshold rectangles.
@param eps Relative difference between sides of the rectangles to merge them into a group.

The function is a wrapper for the generic function partition . It clusters all the input rectangles
using the rectangle equivalence criteria that combines rectangles with similar sizes and similar
locations. The similarity is defined by eps. When eps=0 , no clustering is done at all. If
\f$\texttt{eps}\rightarrow +\infty\f$ , all the rectangles are put in one cluster. Then, the small
clusters containing groupThreshold or fewer rectangles are rejected. For each remaining cluster,
the average rectangle is computed and put into the output rectangle list.
*/
CV_EXPORTS void groupRectangles(std::vector<Rect>& rectList, int groupThreshold, double eps = 0.2);
/** @overload */
CV_EXPORTS_W void groupRectangles(CV_IN_OUT std::vector<Rect>& rectList, CV_OUT std::vector<int>& weights,
                                  int groupThreshold, double eps = 0.2);
/** @overload */
CV_EXPORTS void groupRectangles(std::vector<Rect>& rectList, int groupThreshold,
                                double eps, std::vector<int>* weights, std::vector<double>* levelWeights );
/** @overload */
CV_EXPORTS void groupRectangles(std::vector<Rect>& rectList, std::vector<int>& rejectLevels,
                                std::vector<double>& levelWeights, int groupThreshold, double eps = 0.2);
/** @overload */
CV_EXPORTS void groupRectangles_meanshift(std::vector<Rect>& rectList, std::vector<double>& foundWeights,
                                          std::vector<double>& foundScales,
                                          double detectThreshold = 0.0, Size winDetSize = Size(64, 128));

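To make the grouping semantics concrete, here is a hedged, self-contained sketch of the idea: the
`similar` predicate mirrors the SimilarRects criterion above, while the greedy labeling is only a
stand-in for cv::partition (an assumption for illustration, not the library's algorithm):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdlib>
#include <vector>

struct Rect { int x, y, width, height; };

// The SimilarRects criterion: positions and far edges must agree to within
// eps times the mean of the two rectangles' smaller width/height.
bool similar(const Rect& a, const Rect& b, double eps)
{
    double delta = eps * (std::min(a.width, b.width) + std::min(a.height, b.height)) * 0.5;
    return std::abs(a.x - b.x) <= delta &&
           std::abs(a.y - b.y) <= delta &&
           std::abs(a.x + a.width - b.x - b.width) <= delta &&
           std::abs(a.y + a.height - b.y - b.height) <= delta;
}

// Greedy stand-in for cv::partition: attach each rectangle to the cluster of the
// first earlier rectangle it is similar to, otherwise open a new cluster.
std::vector<int> cluster(const std::vector<Rect>& rects, double eps)
{
    std::vector<int> label(rects.size(), -1);
    int next = 0;
    for (std::size_t i = 0; i < rects.size(); ++i) {
        for (std::size_t j = 0; j < i && label[i] < 0; ++j)
            if (similar(rects[i], rects[j], eps))
                label[i] = label[j];
        if (label[i] < 0)
            label[i] = next++;
    }
    return label;
}
```

Three overlapping detections near (100,100) plus one isolated box form two clusters; with
groupThreshold = 1 the singleton cluster would then be rejected, and the surviving cluster would be
averaged into a single output rectangle.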
template<> CV_EXPORTS void DefaultDeleter<CvHaarClassifierCascade>::operator ()(CvHaarClassifierCascade* obj) const;

enum { CASCADE_DO_CANNY_PRUNING    = 1,
       CASCADE_SCALE_IMAGE         = 2,
       CASCADE_FIND_BIGGEST_OBJECT = 4,
       CASCADE_DO_ROUGH_SEARCH     = 8
     };
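The CASCADE_* values are distinct powers of two, so several of them can share one `int` and be
combined with bitwise OR (they are honored for old-format cascades passed via the `flags`
parameter). A minimal sketch with the same numeric values:

```cpp
// Same numeric values as the enum above; powers of two keep the bits disjoint.
enum { CASCADE_DO_CANNY_PRUNING    = 1,
       CASCADE_SCALE_IMAGE         = 2,
       CASCADE_FIND_BIGGEST_OBJECT = 4,
       CASCADE_DO_ROUGH_SEARCH     = 8 };

// Combine with bitwise OR, query with bitwise AND.
inline bool hasFlag(int flags, int f) { return (flags & f) != 0; }
```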

class CV_EXPORTS_W BaseCascadeClassifier : public Algorithm
{
public:
    virtual ~BaseCascadeClassifier();
    virtual bool empty() const = 0;
    virtual bool load( const String& filename ) = 0;
    virtual void detectMultiScale( InputArray image,
                                   CV_OUT std::vector<Rect>& objects,
                                   double scaleFactor,
                                   int minNeighbors, int flags,
                                   Size minSize, Size maxSize ) = 0;

    virtual void detectMultiScale( InputArray image,
                                   CV_OUT std::vector<Rect>& objects,
                                   CV_OUT std::vector<int>& numDetections,
                                   double scaleFactor,
                                   int minNeighbors, int flags,
                                   Size minSize, Size maxSize ) = 0;

    virtual void detectMultiScale( InputArray image,
                                   CV_OUT std::vector<Rect>& objects,
                                   CV_OUT std::vector<int>& rejectLevels,
                                   CV_OUT std::vector<double>& levelWeights,
                                   double scaleFactor,
                                   int minNeighbors, int flags,
                                   Size minSize, Size maxSize,
                                   bool outputRejectLevels ) = 0;

    virtual bool isOldFormatCascade() const = 0;
    virtual Size getOriginalWindowSize() const = 0;
    virtual int getFeatureType() const = 0;
    virtual void* getOldCascade() = 0;

    class CV_EXPORTS MaskGenerator
    {
    public:
        virtual ~MaskGenerator() {}
        virtual Mat generateMask(const Mat& src)=0;
        virtual void initializeMask(const Mat& /*src*/) { }
    };
    virtual void setMaskGenerator(const Ptr<MaskGenerator>& maskGenerator) = 0;
    virtual Ptr<MaskGenerator> getMaskGenerator() = 0;
};

/** @brief Cascade classifier class for object detection.
*/
class CV_EXPORTS_W CascadeClassifier
{
public:
    CV_WRAP CascadeClassifier();
    /** @brief Loads a classifier from a file.

    @param filename Name of the file from which the classifier is loaded.
    */
    CV_WRAP CascadeClassifier(const String& filename);
    ~CascadeClassifier();
    /** @brief Checks whether the classifier has been loaded.
    */
    CV_WRAP bool empty() const;
    /** @brief Loads a classifier from a file.

    @param filename Name of the file from which the classifier is loaded. The file may contain an old
    HAAR classifier trained by the haartraining application or a new cascade classifier trained by the
    traincascade application.
    */
    CV_WRAP bool load( const String& filename );
    /** @brief Reads a classifier from a FileStorage node.

    @note The file may only contain a new cascade classifier (trained by the traincascade application).
    */
    CV_WRAP bool read( const FileNode& node );

    /** @brief Detects objects of different sizes in the input image. The detected objects are returned
    as a list of rectangles.

    @param image Matrix of the type CV_8U containing an image where objects are detected.
    @param objects Vector of rectangles where each rectangle contains the detected object; the
    rectangles may be partially outside the original image.
    @param scaleFactor Parameter specifying how much the image size is reduced at each image scale.
    @param minNeighbors Parameter specifying how many neighbors each candidate rectangle should have
    to retain it.
    @param flags Parameter with the same meaning for an old cascade as in the function
    cvHaarDetectObjects. It is not used for a new cascade.
    @param minSize Minimum possible object size. Objects smaller than that are ignored.
    @param maxSize Maximum possible object size. Objects larger than that are ignored.

    The function is parallelized with the TBB library.

    @note
       -   (Python) A face detection example using cascade classifiers can be found at
           opencv_source_code/samples/python/facedetect.py
    */
    CV_WRAP void detectMultiScale( InputArray image,
                          CV_OUT std::vector<Rect>& objects,
                          double scaleFactor = 1.1,
                          int minNeighbors = 3, int flags = 0,
                          Size minSize = Size(),
                          Size maxSize = Size() );

    /** @overload
    @param image Matrix of the type CV_8U containing an image where objects are detected.
    @param objects Vector of rectangles where each rectangle contains the detected object; the
    rectangles may be partially outside the original image.
    @param numDetections Vector of detection numbers for the corresponding objects. An object's number
    of detections is the number of neighboring positively classified rectangles that were joined
    together to form the object.
    @param scaleFactor Parameter specifying how much the image size is reduced at each image scale.
    @param minNeighbors Parameter specifying how many neighbors each candidate rectangle should have
    to retain it.
    @param flags Parameter with the same meaning for an old cascade as in the function
    cvHaarDetectObjects. It is not used for a new cascade.
    @param minSize Minimum possible object size. Objects smaller than that are ignored.
    @param maxSize Maximum possible object size. Objects larger than that are ignored.
    */
    CV_WRAP_AS(detectMultiScale2) void detectMultiScale( InputArray image,
                          CV_OUT std::vector<Rect>& objects,
                          CV_OUT std::vector<int>& numDetections,
                          double scaleFactor=1.1,
                          int minNeighbors=3, int flags=0,
                          Size minSize=Size(),
                          Size maxSize=Size() );

    /** @overload
    This variant additionally returns `rejectLevels` and `levelWeights` when `outputRejectLevels`
    is `true`.
    */
    CV_WRAP_AS(detectMultiScale3) void detectMultiScale( InputArray image,
                                  CV_OUT std::vector<Rect>& objects,
                                  CV_OUT std::vector<int>& rejectLevels,
                                  CV_OUT std::vector<double>& levelWeights,
                                  double scaleFactor = 1.1,
                                  int minNeighbors = 3, int flags = 0,
                                  Size minSize = Size(),
                                  Size maxSize = Size(),
                                  bool outputRejectLevels = false );

    CV_WRAP bool isOldFormatCascade() const;
    CV_WRAP Size getOriginalWindowSize() const;
    CV_WRAP int getFeatureType() const;
    void* getOldCascade();

    CV_WRAP static bool convert(const String& oldcascade, const String& newcascade);

    void setMaskGenerator(const Ptr<BaseCascadeClassifier::MaskGenerator>& maskGenerator);
    Ptr<BaseCascadeClassifier::MaskGenerator> getMaskGenerator();

    Ptr<BaseCascadeClassifier> cc;
};

CV_EXPORTS Ptr<BaseCascadeClassifier::MaskGenerator> createFaceDetectionMaskGenerator();

//////////////// HOG (Histogram-of-Oriented-Gradients) Descriptor and Object Detector //////////////

//! struct for detection region of interest (ROI)
struct DetectionROI
{
    //! scale (size) of the bounding box
    double scale;
    //! set of requested locations to be evaluated
    std::vector<cv::Point> locations;
    //! vector that will contain confidence values for each location
    std::vector<double> confidences;
};

struct CV_EXPORTS_W HOGDescriptor
{
public:
    enum { L2Hys = 0 };
    enum { DEFAULT_NLEVELS = 64 };

    CV_WRAP HOGDescriptor() : winSize(64,128), blockSize(16,16), blockStride(8,8),
        cellSize(8,8), nbins(9), derivAperture(1), winSigma(-1),
        histogramNormType(HOGDescriptor::L2Hys), L2HysThreshold(0.2), gammaCorrection(true),
        free_coef(-1.f), nlevels(HOGDescriptor::DEFAULT_NLEVELS), signedGradient(false)
    {}

    CV_WRAP HOGDescriptor(Size _winSize, Size _blockSize, Size _blockStride,
                  Size _cellSize, int _nbins, int _derivAperture=1, double _winSigma=-1,
                  int _histogramNormType=HOGDescriptor::L2Hys,
                  double _L2HysThreshold=0.2, bool _gammaCorrection=false,
                  int _nlevels=HOGDescriptor::DEFAULT_NLEVELS, bool _signedGradient=false)
    : winSize(_winSize), blockSize(_blockSize), blockStride(_blockStride), cellSize(_cellSize),
    nbins(_nbins), derivAperture(_derivAperture), winSigma(_winSigma),
    histogramNormType(_histogramNormType), L2HysThreshold(_L2HysThreshold),
    gammaCorrection(_gammaCorrection), free_coef(-1.f), nlevels(_nlevels), signedGradient(_signedGradient)
    {}

    CV_WRAP HOGDescriptor(const String& filename)
    {
        load(filename);
    }

    HOGDescriptor(const HOGDescriptor& d)
    {
        d.copyTo(*this);
    }

    virtual ~HOGDescriptor() {}

    CV_WRAP size_t getDescriptorSize() const;
    CV_WRAP bool checkDetectorSize() const;
    CV_WRAP double getWinSigma() const;

    CV_WRAP virtual void setSVMDetector(InputArray _svmdetector);

    virtual bool read(FileNode& fn);
    virtual void write(FileStorage& fs, const String& objname) const;

    CV_WRAP virtual bool load(const String& filename, const String& objname = String());
    CV_WRAP virtual void save(const String& filename, const String& objname = String()) const;
    virtual void copyTo(HOGDescriptor& c) const;

    CV_WRAP virtual void compute(InputArray img,
                         CV_OUT std::vector<float>& descriptors,
                         Size winStride = Size(), Size padding = Size(),
                         const std::vector<Point>& locations = std::vector<Point>()) const;

    //! with found weights output
    CV_WRAP virtual void detect(const Mat& img, CV_OUT std::vector<Point>& foundLocations,
                        CV_OUT std::vector<double>& weights,
                        double hitThreshold = 0, Size winStride = Size(),
                        Size padding = Size(),
                        const std::vector<Point>& searchLocations = std::vector<Point>()) const;
    //! without found weights output
    virtual void detect(const Mat& img, CV_OUT std::vector<Point>& foundLocations,
                        double hitThreshold = 0, Size winStride = Size(),
                        Size padding = Size(),
                        const std::vector<Point>& searchLocations=std::vector<Point>()) const;

    //! with result weights output
    CV_WRAP virtual void detectMultiScale(InputArray img, CV_OUT std::vector<Rect>& foundLocations,
                                  CV_OUT std::vector<double>& foundWeights, double hitThreshold = 0,
                                  Size winStride = Size(), Size padding = Size(), double scale = 1.05,
                                  double finalThreshold = 2.0, bool useMeanshiftGrouping = false) const;
    //! without found weights output
    virtual void detectMultiScale(InputArray img, CV_OUT std::vector<Rect>& foundLocations,
                                  double hitThreshold = 0, Size winStride = Size(),
                                  Size padding = Size(), double scale = 1.05,
                                  double finalThreshold = 2.0, bool useMeanshiftGrouping = false) const;

    CV_WRAP virtual void computeGradient(const Mat& img, CV_OUT Mat& grad, CV_OUT Mat& angleOfs,
                                 Size paddingTL = Size(), Size paddingBR = Size()) const;

    CV_WRAP static std::vector<float> getDefaultPeopleDetector();
    CV_WRAP static std::vector<float> getDaimlerPeopleDetector();

    CV_PROP Size winSize;
    CV_PROP Size blockSize;
    CV_PROP Size blockStride;
    CV_PROP Size cellSize;
    CV_PROP int nbins;
    CV_PROP int derivAperture;
    CV_PROP double winSigma;
    CV_PROP int histogramNormType;
    CV_PROP double L2HysThreshold;
    CV_PROP bool gammaCorrection;
    CV_PROP std::vector<float> svmDetector;
    UMat oclSvmDetector;
    float free_coef;
    CV_PROP int nlevels;
    CV_PROP bool signedGradient;

    //! evaluate specified ROI and return confidence value for each location
    virtual void detectROI(const cv::Mat& img, const std::vector<cv::Point> &locations,
                           CV_OUT std::vector<cv::Point>& foundLocations, CV_OUT std::vector<double>& confidences,
                           double hitThreshold = 0, cv::Size winStride = Size(),
                           cv::Size padding = Size()) const;

    //! evaluate specified ROI and return confidence values for each location at multiple scales
    virtual void detectMultiScaleROI(const cv::Mat& img,
                                     CV_OUT std::vector<cv::Rect>& foundLocations,
                                     std::vector<DetectionROI>& locations,
                                     double hitThreshold = 0,
                                     int groupThreshold = 0) const;

    //! read/parse Dalal's alt model file
    void readALTModel(String modelfile);
    void groupRectangles(std::vector<cv::Rect>& rectList, std::vector<double>& weights, int groupThreshold, double eps) const;
};
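As a worked check of the default HOG geometry above (64x128 window, 16x16 blocks on an 8x8 stride,
8x8 cells, 9 bins), the length reported by getDescriptorSize() can be derived independently.
`hogDescriptorSize` below is an illustrative helper written for this sketch, not the library
routine:

```cpp
#include <cstddef>

// One nbins histogram per cell, for every cell of every block position that
// fits in the window when the block slides by the given stride.
std::size_t hogDescriptorSize(int winW, int winH, int blockW, int blockH,
                              int strideW, int strideH, int cellW, int cellH, int nbins)
{
    int blocksX = (winW - blockW) / strideW + 1;             // block positions across
    int blocksY = (winH - blockH) / strideH + 1;             // block positions down
    int cellsPerBlock = (blockW / cellW) * (blockH / cellH); // cells in one block
    return (std::size_t)blocksX * blocksY * cellsPerBlock * nbins;
}
```

For the defaults this gives 7 * 15 * 4 * 9 = 3780, the familiar length of the default
people-detector descriptor.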

//! @} objdetect

}

#include "opencv2/objdetect/detection_based_tracker.hpp"

#ifndef DISABLE_OPENCV_24_COMPATIBILITY
#include "opencv2/objdetect/objdetect_c.h"
#endif

#endif