Classes
- cv::optflow::GPCMatchingParams: Class encapsulating matching parameters.
- cv::optflow::GPCTrainingParams: Class encapsulating training parameters.
- cv::optflow::GPCTrainingSamples: Class encapsulating training samples.
- cv::optflow::PCAPrior: This class can be used for imposing a learned prior on the resulting optical flow. The solution will be regularized according to this prior. You need to generate an appropriate prior file with the "learn_prior.py" script beforehand.

Typedefs
- cv::optflow::GPCSamplesVector

Enumerations
- cv::optflow::GPCDescType: Descriptor types for the Global Patch Collider.

Functions
- cv::motempl::calcGlobalOrientation: Calculates a global motion orientation in a selected region.
- cv::motempl::calcMotionGradient: Calculates a gradient orientation of a motion history image.
- cv::optflow::calcOpticalFlowSF (two overloads): Calculate an optical flow using the "SimpleFlow" algorithm.
- cv::optflow::calcOpticalFlowSparseToDense: Fast dense optical flow based on PyrLK sparse matches interpolation.
- cv::optflow::createOptFlow_DeepFlow: DeepFlow optical flow algorithm implementation.
- cv::optflow::createOptFlow_DualTVL1, cv::optflow::createOptFlow_Farneback, cv::optflow::createOptFlow_SimpleFlow, cv::optflow::createOptFlow_SparseToDense
- cv::optflow::createOptFlow_PCAFlow: Creates an instance of PCAFlow.
- findCorrespondences: Find correspondences between two images.
- cv::motempl::segmentMotion: Splits a motion history image into a few parts corresponding to separate independent motions (for example, left hand, right hand).
- cv::motempl::updateMotionHistory: Updates the motion history image by a moving silhouette.
Detailed Description
Dense optical flow algorithms compute motion for each point; see cv::optflow::calcOpticalFlowSF, cv::optflow::calcOpticalFlowSparseToDense, and the createOptFlow_* factory functions documented below.
Motion templates are an alternative technique for detecting motion and computing its direction; the related functions live in the cv::motempl namespace. See samples/motempl.py.
Functions for reading and writing .flo files in the "Middlebury" format (see http://vision.middlebury.edu/flow/code/flow-code/README.txt):
- cv::optflow::readOpticalFlow
- cv::optflow::writeOpticalFlow
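As a quick illustration of the .flo I/O helpers above, here is a minimal Python sketch. It assumes a build where the functions are exposed as cv.optflow.writeOpticalFlow / cv.optflow.readOpticalFlow (in some OpenCV versions they live in the top-level cv namespace instead); the file name and flow values are placeholders.

```python
import numpy as np
import cv2 as cv

# Round-trip a flow field through the Middlebury .flo format.
# Assumption: the functions are exposed under cv.optflow in this build.
flow = np.zeros((240, 320, 2), np.float32)   # CV_32FC2 flow field
flow[..., 0] = 1.5                           # constant horizontal motion of 1.5 px

cv.optflow.writeOpticalFlow("example.flo", flow)
restored = cv.optflow.readOpticalFlow("example.flo")
assert restored.shape == flow.shape and restored.dtype == np.float32
```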
Typedef Documentation
§ GPCSamplesVector
Enumeration Type Documentation
§ GPCDescType
Descriptor types for the Global Patch Collider.
| Enumerator | Python | Description |
|---|---|---|
| GPC_DESCRIPTOR_DCT | cv.optflow.GPC_DESCRIPTOR_DCT | Better quality but slow. |
| GPC_DESCRIPTOR_WHT | cv.optflow.GPC_DESCRIPTOR_WHT | Worse quality but much faster. |
Function Documentation
§ calcGlobalOrientation()
double cv::motempl::calcGlobalOrientation (InputArray orientation, InputArray mask, InputArray mhi, double timestamp, double duration)
Python:
retval = cv.motempl.calcGlobalOrientation(orientation, mask, mhi, timestamp, duration)
Calculates a global motion orientation in a selected region.
Parameters
- orientation: Motion gradient orientation image calculated by the function calcMotionGradient.
- mask: Mask image. It may be a conjunction of a valid gradient mask, also calculated by calcMotionGradient, and the mask of a region whose direction needs to be calculated.
- mhi: Motion history image calculated by updateMotionHistory.
- timestamp: Timestamp passed to updateMotionHistory.
- duration: Maximum duration of a motion track in milliseconds, passed to updateMotionHistory.
The function calculates an average motion direction in the selected region and returns the angle between 0 degrees and 360 degrees. The average direction is computed from the weighted orientation histogram, where recent motion has a larger weight and motion that occurred further in the past has a smaller weight, as recorded in mhi.
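A minimal, self-contained sketch of the call with synthetic inputs; in a real application orientation and mask come from calcMotionGradient and mhi from updateMotionHistory (both documented below), and the sizes, timestamps, and 90-degree orientation used here are made-up values.

```python
import numpy as np
import cv2 as cv

# Synthetic inputs standing in for the outputs of calcMotionGradient / updateMotionHistory.
timestamp, MHI_DURATION = 1.0, 0.5
mhi = np.zeros((240, 320), np.float32)
mask = np.zeros((240, 320), np.uint8)
orientation = np.zeros((240, 320), np.float32)
mhi[100:140, 100:200] = timestamp          # region with recent motion
mask[100:140, 100:200] = 1                 # gradient considered valid there
orientation[100:140, 100:200] = 90.0       # per-pixel direction, degrees

angle = cv.motempl.calcGlobalOrientation(orientation, mask, mhi, timestamp, MHI_DURATION)
print("dominant motion direction: %.1f degrees" % angle)
```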
§ calcMotionGradient()
void cv::motempl::calcMotionGradient (InputArray mhi, OutputArray mask, OutputArray orientation, double delta1, double delta2, int apertureSize = 3)
Python:
mask, orientation = cv.motempl.calcMotionGradient(mhi, delta1, delta2[, mask[, orientation[, apertureSize]]])
Calculates a gradient orientation of a motion history image.
Parameters
- mhi: Motion history single-channel floating-point image.
- mask: Output mask image that has the type CV_8UC1 and the same size as mhi. Its non-zero elements mark pixels where the motion gradient data is correct.
- orientation: Output motion gradient orientation image that has the same type and the same size as mhi. Each pixel of the image is a motion orientation, from 0 to 360 degrees.
- delta1: Minimal (or maximal) allowed difference between mhi values within a pixel neighborhood.
- delta2: Maximal (or minimal) allowed difference between mhi values within a pixel neighborhood. That is, the function finds the minimum ( \(m(x,y)\) ) and maximum ( \(M(x,y)\) ) mhi values over a \(3 \times 3\) neighborhood of each pixel and marks the motion orientation at \((x, y)\) as valid only if
  \[\min ( \texttt{delta1} , \texttt{delta2} ) \le M(x,y)-m(x,y) \le \max ( \texttt{delta1} , \texttt{delta2} ).\]
- apertureSize: Aperture size of the Sobel operator.
The function calculates a gradient orientation at each pixel \((x, y)\) as:
\[\texttt{orientation} (x,y)= \arctan{\frac{d\texttt{mhi}/dy}{d\texttt{mhi}/dx}}\]
In fact, fastAtan2 and phase are used so that the computed angle is measured in degrees and covers the full range 0..360. Also, the mask is filled to indicate pixels where the computed angle is valid.
- Note
- (Python) An example on how to perform a motion template technique can be found at opencv_source_code/samples/python2/motempl.py
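A minimal sketch with a synthetic motion history image; in practice mhi is maintained by updateMotionHistory (documented below), and the delta values and aperture size here are illustrative rather than recommended defaults.

```python
import numpy as np
import cv2 as cv

# Fake motion history: an older motion region with a more recent patch on top.
timestamp = 1.0
mhi = np.zeros((240, 320), np.float32)
mhi[60:180, 60:260] = timestamp - 0.1      # slightly older motion
mhi[100:140, 120:200] = timestamp          # most recent motion

MAX_TIME_DELTA, MIN_TIME_DELTA = 0.25, 0.05
mask, orient = cv.motempl.calcMotionGradient(mhi, MAX_TIME_DELTA, MIN_TIME_DELTA,
                                             apertureSize=3)
print(mask.dtype, orient.dtype)            # uint8 validity mask, float32 orientation in degrees
```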
§ calcOpticalFlowSF() [1/2]
void cv::optflow::calcOpticalFlowSF (InputArray from, InputArray to, OutputArray flow, int layers, int averaging_block_size, int max_flow)
Python:
flow = cv.optflow.calcOpticalFlowSF(from, to, layers, averaging_block_size, max_flow[, flow])
flow = cv.optflow.calcOpticalFlowSF(from, to, layers, averaging_block_size, max_flow, sigma_dist, sigma_color, postprocess_window, sigma_dist_fix, sigma_color_fix, occ_thr, upscale_averaging_radius, upscale_sigma_dist, upscale_sigma_color, speed_up_thr[, flow])
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
§ calcOpticalFlowSF() [2/2]
void cv::optflow::calcOpticalFlowSF (InputArray from, InputArray to, OutputArray flow, int layers, int averaging_block_size, int max_flow, double sigma_dist, double sigma_color, int postprocess_window, double sigma_dist_fix, double sigma_color_fix, double occ_thr, int upscale_averaging_radius, double upscale_sigma_dist, double upscale_sigma_color, double speed_up_thr)
Python:
flow = cv.optflow.calcOpticalFlowSF(from, to, layers, averaging_block_size, max_flow[, flow])
flow = cv.optflow.calcOpticalFlowSF(from, to, layers, averaging_block_size, max_flow, sigma_dist, sigma_color, postprocess_window, sigma_dist_fix, sigma_color_fix, occ_thr, upscale_averaging_radius, upscale_sigma_dist, upscale_sigma_color, speed_up_thr[, flow])
Calculate an optical flow using the "SimpleFlow" algorithm.
Parameters
- from: First 8-bit 3-channel image.
- to: Second 8-bit 3-channel image of the same size as from.
- flow: Computed flow image that has the same size as from and type CV_32FC2.
- layers: Number of layers.
- averaging_block_size: Size of the block through which we sum up when calculating the cost function for a pixel.
- max_flow: Maximal flow that we search at each level.
- sigma_dist: Vector smooth spatial sigma parameter.
- sigma_color: Vector smooth color sigma parameter.
- postprocess_window: Window size for the postprocess cross bilateral filter.
- sigma_dist_fix: Spatial sigma for the postprocess cross bilateral filter.
- sigma_color_fix: Color sigma for the postprocess cross bilateral filter.
- occ_thr: Threshold for detecting occlusions.
- upscale_averaging_radius: Window size for the bilateral upscale operation.
- upscale_sigma_dist: Spatial sigma for the bilateral upscale operation.
- upscale_sigma_color: Color sigma for the bilateral upscale operation.
- speed_up_thr: Threshold to detect points with irregular flow, where flow should be recalculated after upscale.
See [190] and the project site: http://graphics.berkeley.edu/papers/Tao-SAN-2012-05/.
- Note
- An example using the simpleFlow algorithm can be found at samples/simpleflow_demo.cpp
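A minimal sketch of the short overload from Python; the file names are placeholders and the parameter values (3 layers, averaging block size 2, max flow 4) are illustrative choices rather than tuned defaults.

```python
import cv2 as cv

# SimpleFlow on two consecutive frames ("frame1.png"/"frame2.png" are placeholders).
img1 = cv.imread("frame1.png")     # must be 8-bit, 3-channel
img2 = cv.imread("frame2.png")
flow = cv.optflow.calcOpticalFlowSF(img1, img2, 3, 2, 4)   # CV_32FC2, same size as img1
print(flow.shape, flow.dtype)      # (H, W, 2) float32
```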
§ calcOpticalFlowSparseToDense()
void cv::optflow::calcOpticalFlowSparseToDense (InputArray from, InputArray to, OutputArray flow, int grid_step = 8, int k = 128, float sigma = 0.05f, bool use_post_proc = true, float fgs_lambda = 500.0f, float fgs_sigma = 1.5f)
Python:
flow = cv.optflow.calcOpticalFlowSparseToDense(from, to[, flow[, grid_step[, k[, sigma[, use_post_proc[, fgs_lambda[, fgs_sigma]]]]]]])
Fast dense optical flow based on PyrLK sparse matches interpolation.
Parameters
- from: First 8-bit 3-channel or 1-channel image.
- to: Second 8-bit 3-channel or 1-channel image of the same size as from.
- flow: Computed flow image that has the same size as from and CV_32FC2 type.
- grid_step: Stride used in sparse match computation. Lower values usually result in higher quality but slow down the algorithm.
- k: Number of nearest-neighbor matches considered when fitting a locally affine model. Lower values can make the algorithm noticeably faster at the cost of some quality degradation.
- sigma: Parameter defining how fast the weights decrease in the locally-weighted affine fitting. Higher values can help preserve fine details, lower values can help to get rid of noise in the output flow.
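A minimal sketch from Python; the file names are placeholders, and when only the two images are passed the documented defaults (grid_step=8, k=128, sigma=0.05, use_post_proc=true, fgs_lambda=500, fgs_sigma=1.5) apply.

```python
import cv2 as cv

# Sparse-to-dense flow between two frames ("frame1.png"/"frame2.png" are placeholders).
img1 = cv.imread("frame1.png")     # 8-bit, 1- or 3-channel
img2 = cv.imread("frame2.png")
flow = cv.optflow.calcOpticalFlowSparseToDense(img1, img2)

# A coarser sparse grid trades some quality for speed (flow output slot passed as None).
flow_fast = cv.optflow.calcOpticalFlowSparseToDense(img1, img2, None, 16, 128, 0.05)
```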
§ createOptFlow_DeepFlow()
Python:
retval = cv.optflow.createOptFlow_DeepFlow()
DeepFlow optical flow algorithm implementation.
The class implements the DeepFlow optical flow algorithm described in [216]. See also http://lear.inrialpes.fr/src/deepmatching/. Parameters (class fields) that may be modified after creating a class instance:
- float alpha: Smoothness assumption weight.
- float delta: Color constancy assumption weight.
- float gamma: Gradient constancy weight.
- float sigma: Gaussian smoothing parameter.
- int minSize: Minimal dimension of an image in the pyramid (next, smaller images in the pyramid are generated until one of the dimensions reaches this size).
- float downscaleFactor: Scaling factor in the image pyramid (must be < 1).
- int fixedPointIterations: How many iterations on each level of the pyramid.
- int sorIterations: Iterations of Successive Over-Relaxation (solver).
- float omega: Relaxation factor in SOR.
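A minimal usage sketch from Python; the fields listed above are C++ class members, while from Python the factory returns a dense optical flow object used through calc(). The file names are placeholders, and the grayscale conversion reflects the assumption that DeepFlow expects single-channel 8-bit input.

```python
import cv2 as cv

# Create the DeepFlow algorithm once and reuse it over a sequence.
deepflow = cv.optflow.createOptFlow_DeepFlow()
img1 = cv.cvtColor(cv.imread("frame1.png"), cv.COLOR_BGR2GRAY)
img2 = cv.cvtColor(cv.imread("frame2.png"), cv.COLOR_BGR2GRAY)
flow = deepflow.calc(img1, img2, None)     # CV_32FC2 flow from img1 to img2
```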
§ createOptFlow_DualTVL1()
Python:
retval = cv.optflow.createOptFlow_DualTVL1()
§ createOptFlow_Farneback()
Python:
retval = cv.optflow.createOptFlow_Farneback()
§ createOptFlow_PCAFlow()
Python:
retval = cv.optflow.createOptFlow_PCAFlow()
Creates an instance of PCAFlow.
§ createOptFlow_SimpleFlow()
Python:
retval = cv.optflow.createOptFlow_SimpleFlow()
§ createOptFlow_SparseToDense()
Python:
retval = cv.optflow.createOptFlow_SparseToDense()
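The createOptFlow_* factories above all return an algorithm object with a common calc(I0, I1, flow) interface, so they are easy to swap. The sketch below assumes SimpleFlow wants 8-bit 3-channel input while Farneback, DualTVL1, and PCAFlow accept grayscale; check each algorithm's documentation, and treat the file names as placeholders.

```python
import cv2 as cv

# Compare several dense optical flow algorithms on the same frame pair.
img1 = cv.imread("frame1.png")
img2 = cv.imread("frame2.png")
gray1 = cv.cvtColor(img1, cv.COLOR_BGR2GRAY)
gray2 = cv.cvtColor(img2, cv.COLOR_BGR2GRAY)

flow_farneback = cv.optflow.createOptFlow_Farneback().calc(gray1, gray2, None)
flow_tvl1      = cv.optflow.createOptFlow_DualTVL1().calc(gray1, gray2, None)
flow_pca       = cv.optflow.createOptFlow_PCAFlow().calc(gray1, gray2, None)
flow_simple    = cv.optflow.createOptFlow_SimpleFlow().calc(img1, img2, None)
flow_s2d       = cv.optflow.createOptFlow_SparseToDense().calc(img1, img2, None)
```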
§ findCorrespondences()
Find correspondences between two images.
Parameters
- [in] imgFrom: First image in a sequence.
- [in] imgTo: Second image in a sequence.
- [out] corr: Output vector with pairs of corresponding points.
- [in] params: Additional matching parameters for fine-tuning.
§ segmentMotion()
void cv::motempl::segmentMotion (InputArray mhi, OutputArray segmask, std::vector< Rect > & boundingRects, double timestamp, double segThresh)
Python:
segmask, boundingRects = cv.motempl.segmentMotion(mhi, timestamp, segThresh[, segmask])
Splits a motion history image into a few parts corresponding to separate independent motions (for example, left hand, right hand).
Parameters
- mhi: Motion history image.
- segmask: Image where the found mask should be stored, single-channel, 32-bit floating-point.
- boundingRects: Vector containing ROIs of motion connected components.
- timestamp: Current time in milliseconds or other units.
- segThresh: Segmentation threshold that is recommended to be equal to the interval between motion history "steps" or greater.
The function finds all of the motion segments and marks them in segmask with individual values (1,2,...). It also computes a vector with ROIs of motion connected components. After that the motion direction for every component can be calculated with calcGlobalOrientation using the extracted mask of the particular component.
§ updateMotionHistory()
void cv::motempl::updateMotionHistory (InputArray silhouette, InputOutputArray mhi, double timestamp, double duration)
Python:
mhi = cv.motempl.updateMotionHistory(silhouette, mhi, timestamp, duration)
Updates the motion history image by a moving silhouette.
Parameters
- silhouette: Silhouette mask that has non-zero pixels where the motion occurs.
- mhi: Motion history image that is updated by the function (single-channel, 32-bit floating-point).
- timestamp: Current time in milliseconds or other units.
- duration: Maximal duration of the motion track in the same units as timestamp.
The function updates the motion history image as follows:
\[\texttt{mhi}(x,y)= \begin{cases} \texttt{timestamp} & \text{if } \texttt{silhouette}(x,y) \ne 0 \\ 0 & \text{if } \texttt{silhouette}(x,y) = 0 \text{ and } \texttt{mhi}(x,y) < \texttt{timestamp} - \texttt{duration} \\ \texttt{mhi}(x,y) & \text{otherwise} \end{cases}\]
That is, MHI pixels where motion occurs are set to the current timestamp, while pixels where motion last happened long ago (more than duration before timestamp) are cleared.
The function, together with calcMotionGradient and calcGlobalOrientation , implements a motion templates technique described in [42] and [24] .
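A sketch of the whole motion templates pipeline in the spirit of samples/motempl.py; the video file name, difference threshold, timing constants, and minimum component size are illustrative assumptions.

```python
import time
import numpy as np
import cv2 as cv

# updateMotionHistory -> calcMotionGradient -> segmentMotion -> calcGlobalOrientation
MHI_DURATION = 0.5            # seconds of history kept in the MHI (illustrative)
MAX_TIME_DELTA = 0.25
MIN_TIME_DELTA = 0.05

cap = cv.VideoCapture("input.avi")        # placeholder source; a camera index also works
ok, prev = cap.read()
h, w = prev.shape[:2]
mhi = np.zeros((h, w), np.float32)        # motion history image, CV_32FC1

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Moving silhouette: thresholded absolute difference of consecutive frames.
    diff = cv.absdiff(frame, prev)
    gray = cv.cvtColor(diff, cv.COLOR_BGR2GRAY)
    _, silhouette = cv.threshold(gray, 32, 1, cv.THRESH_BINARY)
    timestamp = time.monotonic()          # current time in seconds

    mhi = cv.motempl.updateMotionHistory(silhouette, mhi, timestamp, MHI_DURATION)
    mask, orient = cv.motempl.calcMotionGradient(mhi, MAX_TIME_DELTA, MIN_TIME_DELTA,
                                                 apertureSize=5)
    segmask, rects = cv.motempl.segmentMotion(mhi, timestamp, MAX_TIME_DELTA)

    # Global direction of the whole frame, plus one direction per detected component.
    global_angle = cv.motempl.calcGlobalOrientation(orient, mask, mhi,
                                                    timestamp, MHI_DURATION)
    for x, y, rw, rh in rects:
        if rw * rh < 64 * 64:             # ignore tiny components (arbitrary threshold)
            continue
        roi = (slice(y, y + rh), slice(x, x + rw))
        angle = cv.motempl.calcGlobalOrientation(orient[roi], mask[roi], mhi[roi],
                                                 timestamp, MHI_DURATION)
    prev = frame
```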