Renesas / opencv-lib

Dependents:   RZ_A2M_Mbed_samples


optim.hpp

/*M///////////////////////////////////////////////////////////////////////////////////////
//
//  IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
//  By downloading, copying, installing or using the software you agree to this license.
//  If you do not agree to this license, do not download, install,
//  copy or use the software.
//
//
//                           License Agreement
//                For Open Source Computer Vision Library
//
// Copyright (C) 2013, OpenCV Foundation, all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
//   * Redistribution's of source code must retain the above copyright notice,
//     this list of conditions and the following disclaimer.
//
//   * Redistribution's in binary form must reproduce the above copyright notice,
//     this list of conditions and the following disclaimer in the documentation
//     and/or other materials provided with the distribution.
//
//   * The name of the copyright holders may not be used to endorse or promote products
//     derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the OpenCV Foundation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/

#ifndef OPENCV_OPTIM_HPP
#define OPENCV_OPTIM_HPP

#include "opencv2/core.hpp"

namespace cv
{

/** @addtogroup core_optim
The algorithms in this section minimize or maximize a function value within specified constraints or
without any constraints.
@{
*/

/** @brief Basic interface for all solvers
 */
class CV_EXPORTS MinProblemSolver : public Algorithm
{
public:
    /** @brief Represents the function being optimized
     */
    class CV_EXPORTS Function
    {
    public:
        virtual ~Function() {}
        virtual int getDims() const = 0;
        virtual double getGradientEps() const;
        virtual double calc(const double* x) const = 0;
        virtual void getGradient(const double* x, double* grad);
    };

    /** @brief Getter for the optimized function.

    The optimized function is represented by the Function interface; derived classes must
    implement the method calc(const double*) to evaluate the function.

    @return Smart-pointer to an object that implements the Function interface - it represents the
    function that is being optimized. It can be empty, if no function was given so far.
     */
    virtual Ptr<Function> getFunction() const = 0;

    /** @brief Setter for the optimized function.

    *It should be called at least once before the call to* minimize(), as the default value is not usable.

    @param f The new function to optimize.
     */
    virtual void setFunction(const Ptr<Function>& f) = 0;

    /** @brief Getter for the previously set terminal criteria for this algorithm.

    @return Deep copy of the terminal criteria used at the moment.
     */
    virtual TermCriteria getTermCriteria() const = 0;

    /** @brief Set terminal criteria for the solver.

    This method *does not need* to be called before the first call to minimize(), as the default
    value is sensible.

    The algorithm stops when the number of function evaluations exceeds termcrit.maxCount, when
    the function values at the vertices of the simplex are within a termcrit.epsilon range, or when
    the simplex becomes so small that it can be enclosed in a box with termcrit.epsilon sides,
    whichever comes first.
    @param termcrit Terminal criteria to be used, represented as a cv::TermCriteria structure.
     */
    virtual void setTermCriteria(const TermCriteria& termcrit) = 0;

    /** @brief Actually runs the algorithm and performs the minimization.

    The sole input parameter determines the centroid of the starting simplex (roughly, it tells
    where to start); all the others (terminal criteria, initial step, function to be minimized) are
    supposed to be set via the setters before the call to this method, or the default values (not
    always sensible) will be used.

    @param x The initial point, which will become the centroid of the initial simplex. After the
    algorithm terminates, it will be set to the point where the algorithm stops, the point of a
    possible minimum.
    @return The value of the function at the point found.
     */
    virtual double minimize(InputOutputArray x) = 0;
};

/** @brief This class is used to perform the non-linear non-constrained minimization of a function,

defined on an `n`-dimensional Euclidean space, using the **Nelder-Mead method**, also known as the
**downhill simplex method**. The basic idea of the method can be obtained from
<http://en.wikipedia.org/wiki/Nelder-Mead_method>.

It should be noted that this method, although deterministic, is rather a heuristic and therefore
may converge to a local minimum, not necessarily the global one. It is an iterative optimization
technique, which at each step uses information about the values of the function evaluated only at
`n+1` points, arranged as a *simplex* in `n`-dimensional space (hence the second name of the
method). At each step a new point is chosen at which to evaluate the function; the obtained value
is compared with the previous ones and, based on this information, the simplex changes its shape,
slowly moving towards the local minimum. Thus this method uses *only* function values to make
decisions, in contrast to, say, the Nonlinear Conjugate Gradient method (which is also implemented
in optim).

The algorithm stops when the number of function evaluations exceeds termcrit.maxCount, when the
function values at the vertices of the simplex are within a termcrit.epsilon range, or when the
simplex becomes so small that it can be enclosed in a box with termcrit.epsilon sides, whichever
comes first, for some user-defined positive integer termcrit.maxCount and positive real
termcrit.epsilon.

@note DownhillSolver is a derivative of the abstract interface
cv::MinProblemSolver, which in turn is derived from the Algorithm interface and is used to
encapsulate the functionality, common to all non-linear optimization algorithms in the optim
module.

@note The term criteria should meet the following condition:
@code
    termcrit.type == (TermCriteria::MAX_ITER + TermCriteria::EPS) && termcrit.epsilon > 0 && termcrit.maxCount > 0
@endcode
 */
class CV_EXPORTS DownhillSolver : public MinProblemSolver
{
public:
    /** @brief Returns the initial step that will be used in the downhill simplex algorithm.

    @param step Initial step that will be used in the algorithm. Note that although the
    corresponding setter accepts column-vectors as well as row-vectors, this method will return a
    row-vector.
    @see DownhillSolver::setInitStep
     */
    virtual void getInitStep(OutputArray step) const = 0;

    /** @brief Sets the initial step that will be used in the downhill simplex algorithm.

    The step, together with the initial point (given in DownhillSolver::minimize), are two
    `n`-dimensional vectors that are used to determine the shape of the initial simplex. Roughly
    said, the initial point determines the position of the simplex (it will become the simplex's
    centroid), while the step determines the spread (size in each dimension) of the simplex. To be
    more precise, if \f$s,x_0\in\mathbb{R}^n\f$ are the initial step and initial point respectively,
    the vertices of the simplex will be: \f$v_0:=x_0-\frac{1}{2} s\f$ and \f$v_i:=x_0+s_i\f$ for
    \f$i=1,2,\dots,n\f$, where \f$s_i\f$ denotes the projection of the initial step onto the *i*-th
    coordinate (the result of the projection is treated as the vector given by
    \f$s_i:=e_i\cdot\left<e_i\cdot s\right>\f$, where \f$e_i\f$ form the canonical basis).

    @param step Initial step that will be used in the algorithm. Roughly said, it determines the
    spread (size in each dimension) of the initial simplex.
     */
    virtual void setInitStep(InputArray step) = 0;

    /** @brief This function returns a reference to a ready-to-use DownhillSolver object.

    All the parameters are optional, so this procedure can be called even without parameters at
    all. In this case, the default values will be used. As the default values for the terminal
    criteria are the only sensible ones, MinProblemSolver::setFunction() and
    DownhillSolver::setInitStep() should be called upon the obtained object, if the respective
    parameters were not given to create(). Otherwise, the two ways (give the parameters to create()
    or leave them out and call MinProblemSolver::setFunction() and DownhillSolver::setInitStep())
    are absolutely equivalent (and will raise the same errors in the same way, should invalid input
    be detected).
    @param f Pointer to the function that will be minimized, similarly to the one you submit via
    MinProblemSolver::setFunction.
    @param initStep Initial step that will be used to construct the initial simplex, similarly to
    the one you submit via DownhillSolver::setInitStep.
    @param termcrit Terminal criteria for the algorithm, similarly to the ones you submit via
    MinProblemSolver::setTermCriteria.
     */
    static Ptr<DownhillSolver> create(const Ptr<MinProblemSolver::Function>& f=Ptr<MinProblemSolver::Function>(),
                                      InputArray initStep=Mat_<double>(1,1,0.0),
                                      TermCriteria termcrit=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS,5000,0.000001));
};

/** @brief This class is used to perform the non-linear non-constrained minimization of a function
with known gradient,

defined on an *n*-dimensional Euclidean space, using the **Nonlinear Conjugate Gradient method**.
The implementation was done based on the beautifully clear explanatory article [An Introduction to
the Conjugate Gradient Method Without the Agonizing
Pain](http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf) by Jonathan Richard
Shewchuk. The method can be seen as an adaptation of the standard Conjugate Gradient method (see,
for example, <http://en.wikipedia.org/wiki/Conjugate_gradient_method>) for numerically solving
systems of linear equations.

It should be noted that this method, although deterministic, is rather a heuristic method and
therefore may converge to a local minimum, not necessarily the global one. What is even more
disastrous, most of its behaviour is ruled by the gradient, so it essentially cannot distinguish
between local minima and maxima. Therefore, if it starts sufficiently near to a local maximum, it
may converge to it. Another obvious restriction is that it should be possible to compute the
gradient of the function at any point; thus it is preferable to have an analytic expression for
the gradient, and the computational burden should be borne by the user.

The latter responsibility is accomplished via the getGradient method of the
MinProblemSolver::Function interface (which represents the function being optimized). This method
takes a point in *n*-dimensional space (the first argument represents the array of coordinates of
that point) and computes its gradient (which should be stored in the second argument as an array).

@note The ConjGradSolver class does not add any new methods to the basic MinProblemSolver interface.

@note The term criteria should meet one of the following conditions:
@code
    termcrit.type == (TermCriteria::MAX_ITER + TermCriteria::EPS) && termcrit.epsilon > 0 && termcrit.maxCount > 0
    // or
    (termcrit.type == TermCriteria::MAX_ITER) && termcrit.maxCount > 0
@endcode
 */
class CV_EXPORTS ConjGradSolver : public MinProblemSolver
{
public:
    /** @brief This function returns a reference to a ready-to-use ConjGradSolver object.

    All the parameters are optional, so this procedure can be called even without parameters at
    all. In this case, the default values will be used. As the default values for the terminal
    criteria are the only sensible ones, MinProblemSolver::setFunction() should be called upon the
    obtained object, if the function was not given to create(). Otherwise, the two ways (submit it
    to create() or leave it out and call MinProblemSolver::setFunction()) are absolutely equivalent
    (and will raise the same errors in the same way, should invalid input be detected).
    @param f Pointer to the function that will be minimized, similarly to the one you submit via
    MinProblemSolver::setFunction.
    @param termcrit Terminal criteria for the algorithm, similarly to the ones you submit via
    MinProblemSolver::setTermCriteria.
    */
    static Ptr<ConjGradSolver> create(const Ptr<MinProblemSolver::Function>& f=Ptr<ConjGradSolver::Function>(),
                                      TermCriteria termcrit=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS,5000,0.000001));
};

//! return codes for the cv::solveLP() function
enum SolveLPResult
{
    SOLVELP_UNBOUNDED = -2, //!< problem is unbounded (target function can achieve arbitrarily high values)
    SOLVELP_UNFEASIBLE = -1, //!< problem is unfeasible (there are no points that satisfy all the constraints imposed)
    SOLVELP_SINGLE = 0, //!< there is only one maximum for the target function
    SOLVELP_MULTI = 1 //!< there are multiple maxima for the target function - an arbitrary one is returned
};

/** @brief Solve given (non-integer) linear programming problem using the Simplex Algorithm (Simplex Method).

What we mean here by "linear programming problem" (or LP problem, for short) can be formulated as:

\f[\mbox{Maximize } c\cdot x\\
 \mbox{Subject to:}\\
 Ax\leq b\\
 x\geq 0\f]

where \f$c\f$ is a fixed `1`-by-`n` row-vector, \f$A\f$ is a fixed `m`-by-`n` matrix, \f$b\f$ is a
fixed `m`-by-`1` column vector and \f$x\f$ is an arbitrary `n`-by-`1` column vector, which satisfies
the constraints.

The simplex algorithm is one of many algorithms designed to handle this sort of problem
efficiently. Although it is not optimal in the theoretical sense (there exist algorithms that can
solve any problem written as above in polynomial time, while the simplex method degenerates to
exponential time for some special cases), it is well-studied, easy to implement and has been shown
to work well for real-life purposes.

The particular implementation is taken almost verbatim from **Introduction to Algorithms, third
edition** by T. H. Cormen, C. E. Leiserson, R. L. Rivest and Clifford Stein. In particular,
Bland's rule <http://en.wikipedia.org/wiki/Bland%27s_rule> is used to prevent cycling.

@param Func This row-vector corresponds to \f$c\f$ in the LP problem formulation (see above). It
should contain 32- or 64-bit floating point numbers. As a convenience, a column-vector may also be
submitted; in the latter case it is understood to correspond to \f$c^T\f$.
@param Constr `m`-by-`n+1` matrix, whose rightmost column corresponds to \f$b\f$ in the formulation
above and the remaining columns to \f$A\f$. It should contain 32- or 64-bit floating point numbers.
@param z The solution will be returned here as a column-vector - it corresponds to \f$x\f$ in the
formulation above. It will contain 64-bit floating point numbers.
@return One of cv::SolveLPResult
 */
CV_EXPORTS_W int solveLP(const Mat& Func, const Mat& Constr, Mat& z);

//! @}

}// cv

#endif