Classes
cv::linemod::ColorGradient: Modality that computes quantized gradient orientations from a color image.
cv::linemod::DepthNormal: Modality that computes quantized surface normals from a dense depth map.
cv::linemod::Detector: Object detector using the LINE template matching algorithm with any set of modalities.
cv::linemod::Feature: Discriminant feature described by its location and label.
cv::linemod::Match: Represents a successful template match.
cv::linemod::Modality: Interface for modalities that plug into the LINE template matching representation.
cv::linemod::QuantizedPyramid: Represents a modality operating over an image pyramid.
Functions
void cv::linemod::colormap (const Mat &quantized, Mat &dst): Debug function to colormap a quantized image for viewing.
void cv::rgbd::depthTo3d (InputArray depth, InputArray K, OutputArray points3d, InputArray mask=noArray())
Ptr<Detector> cv::linemod::getDefaultLINE (): Factory function for detector using LINE algorithm with color gradients.
Ptr<Detector> cv::linemod::getDefaultLINEMOD (): Factory function for detector using LINE-MOD algorithm with color gradients and depth normals.
void cv::rgbd::warpFrame (const Mat &image, const Mat &depth, const Mat &mask, const Mat &Rt, const Mat &cameraMatrix, const Mat &distCoeff, Mat &warpedImage, Mat *warpedDepth=0, Mat *warpedMask=0)
Detailed Description
Function Documentation
cv::linemod::QuantizedPyramid::Candidate::Candidate (int x, int y, int label, float score) [inline]
cv::linemod::Feature::Feature (int x, int y, int label) [inline]
cv::linemod::Match::Match (int x, int y, float similarity, int template_id) [inline]
void cv::linemod::colormap (const Mat &quantized, Mat &dst)
Debug function to colormap a quantized image for viewing.
void cv::rgbd::depthTo3d (InputArray depth, InputArray K, OutputArray points3d, InputArray mask = noArray())
Converts a depth image to an organized set of 3d points. The coordinate system is x pointing left, y down and z away from the camera.
Parameters
depth: the depth image (if given as CV_16U, it is assumed to be the depth in millimeters, as done with the Microsoft Kinect; if given as CV_32F or CV_64F, it is assumed to be in meters)
K: the calibration matrix
points3d: the resulting 3d points. They have the same depth as the input if it is CV_32F or CV_64F, and the depth of K if the input is CV_16U
mask: the mask of the points to consider (can be empty)
void cv::rgbd::depthTo3dSparse (InputArray depth, InputArray in_K, InputArray in_points, OutputArray points3d)
Parameters
depth: the depth image
in_K: the calibration matrix
in_points: the list of xy coordinates
points3d: the resulting 3d points
Ptr<Detector> cv::linemod::getDefaultLINE ()
Factory function for detector using LINE algorithm with color gradients.
Default parameter settings suitable for VGA images.
Ptr<Detector> cv::linemod::getDefaultLINEMOD ()
Factory function for detector using LINE-MOD algorithm with color gradients and depth normals.
Default parameter settings suitable for VGA images.
bool cv::rgbd::isValidDepth (const float &depth) [inline]
Checks if the value is a valid depth. For CV_16U or CV_16S, the convention is that a value equal to the type's limit is invalid. For a float or double, we just check whether it is a NaN.
Parameters
depth: the depth to check for validity
bool cv::rgbd::isValidDepth (const double &depth) [inline]
bool cv::rgbd::isValidDepth (const short int &depth) [inline]
bool cv::rgbd::isValidDepth (const unsigned short int &depth) [inline]
bool cv::rgbd::isValidDepth (const int &depth) [inline]
bool cv::rgbd::isValidDepth (const unsigned int &depth) [inline]
void cv::rgbd::rescaleDepth (InputArray in, int depth, OutputArray out)
If the input image is of type CV_16UC1 (like the Kinect one), the image is converted to floats and divided by 1000 to get a depth in meters, and values of 0 are converted to std::numeric_limits<float>::quiet_NaN(). Otherwise, the image is simply converted to floats.
Parameters
in: the depth image (if given as CV_16U, it is assumed to be the depth in millimeters, as done with the Microsoft Kinect; otherwise it is assumed to be in meters)
depth: the desired output depth (CV_32F or CV_64F)
out: the rescaled float depth image
void cv::rgbd::warpFrame (const Mat &image, const Mat &depth, const Mat &mask, const Mat &Rt, const Mat &cameraMatrix, const Mat &distCoeff, Mat &warpedImage, Mat *warpedDepth = 0, Mat *warpedMask = 0)
Warps the image: computes 3d points from the depth, transforms them using the given transformation, then projects the colored point cloud onto an image plane. This function can be used to visualize the results of the Odometry algorithm.
Parameters
image: the image (of CV_8UC1 or CV_8UC3 type)
depth: the depth (of the type used in depthTo3d)
mask: the mask of used pixels (of CV_8UC1); it can be empty
Rt: the transformation applied to the 3d points computed from the depth
cameraMatrix: the camera matrix
distCoeff: the distortion coefficients
warpedImage: the warped image
warpedDepth: the warped depth
warpedMask: the warped mask