CamShift
Finds an object center, size, and orientation using the CAMSHIFT algorithm.

- probImage: Back projection of the object histogram. See calcBackProject for details.
- window: Initial search window. The function updates this parameter with the new window position.
- criteria: Stop criteria for the underlying meanShift algorithm.

Returns a RotatedRect structure that includes the object position, size, and orientation.
The function implements the CAMSHIFT object tracking algorithm. It first finds an object center using meanShift, then adjusts the window size and finds the optimal rotation. The next position of the search window can be obtained with RotatedRect::boundingRect().
meanShift
Finds an object on a back projection image using iterative search.

- probImage: Back projection of the object histogram. See calcBackProject for details.
- window: Initial search window. Updated with the final window position.
- criteria: Stop criteria for the iterative search algorithm.
Unlike CamShift, the search window size and orientation do not change during the search. For better results, pre-filter the back projection to remove noise using techniques such as morphological operations or connected-component analysis.
computeECC
Computes the Enhanced Correlation Coefficient (ECC) value between two images.

- templateImage: Input template image; must have 1 or 3 channels and be of type CV_8U, CV_16U, CV_32F, or CV_64F.
- inputImage: Input image to be compared with the template; must have the same type and number of channels as templateImage.
- inputMask: Optional single-channel mask that specifies the valid region of interest.
findTransformECC
Finds the geometric transform (warp) between two images in terms of the ECC criterion.

- templateImage: Template image; 1 or 3 channels, CV_8U, CV_16U, CV_32F, or CV_64F type.
- inputImage: Input image to be warped; same type as templateImage.
- warpMatrix: Floating-point 2×3 or 3×3 mapping matrix (warp). Should be initialized with a rough alignment estimate.
- motionType: Type of motion model. Default: MOTION_AFFINE.
- criteria: Termination criteria of the ECC algorithm.
- inputMask: Optional mask indicating valid values of inputImage.
Motion Type Constants
- MOTION_TRANSLATION: Translational motion model. The warpMatrix is 2×3 with the first 2×2 part being the identity matrix.
- MOTION_EUCLIDEAN: Euclidean (rigid) motion model: translation plus rotation. The warpMatrix is 2×3.
- MOTION_AFFINE: Affine motion model (translation, rotation, scale, and shear). The warpMatrix is 2×3 with all six elements estimated.
- MOTION_HOMOGRAPHY: Homography (perspective) motion model. The warpMatrix is 3×3 with eight degrees of freedom.
The function implements an area-based alignment that builds on intensity similarities. If images undergo strong displacements or rotations, provide a rough initial transformation. Use the identity matrix if no prior information is available.
findTransformECCWithMask
Extended version of findTransformECC that supports validity masks for both the template and input images.
- Single-channel 8-bit mask for templateImage indicating valid pixels. Must have the same size as templateImage.
- Single-channel 8-bit mask for inputImage indicating valid pixels before warping. Must have the same size as inputImage.
- Size of the Gaussian blur filter used for smoothing images and masks before computing alignment. Default: 5.
estimateRigidTransform (Deprecated)
Computes an optimal affine transformation between two 2D point sets.

This function is deprecated. Use cv::estimateAffine2D or cv::estimateAffinePartial2D instead. If working with images, extract point correspondences with cv::calcOpticalFlowPyrLK first, then apply the estimation functions.

- src: First input 2D point set stored in std::vector or Mat, or an image stored in Mat.
- dst: Second input 2D point set of the same size and type as src, or another image.
- fullAffine: If true, finds an optimal affine transformation with no restrictions (6 DOF). If false, limits the transformation to translation, rotation, and uniform scaling (4 DOF).
