toddlerbot.depth package¶
Submodules¶
toddlerbot.depth.depth_estimator_foundation_stereo module¶
TensorRT-accelerated stereo depth estimation using the Foundation Stereo model.
This module provides stereo depth estimation using a TensorRT engine for high-performance inference, with support for camera calibration and stereo rectification.
- class toddlerbot.depth.depth_estimator_foundation_stereo.DepthEstimatorFoundationStereo(calib_params_path, rec_params_path, engine_path, calib_width, calib_height, skip_rectify=False, debug=False)¶
Bases:
object
Depth estimation using the Foundation Stereo model.
- get_depth(*, remove_invisible: bool = False, debug_images: Tuple[ndarray, ndarray] | None = None, return_all: bool = False) DepthResult ¶
Estimate per-pixel depth from the stereo pair.
- Parameters:
remove_invisible – If True, mask points that project outside the right-camera image.
debug_images – Optional pre-captured (left, right) images. When None, frames are captured from the cameras.
return_all – If True, populate every field in DepthResult; otherwise only depth is guaranteed to be non-None.
- Returns:
An immutable DepthResult whose fields are either populated or None, depending on return_all.
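A minimal usage sketch, assuming calibration and rectification parameter files plus a serialized TensorRT engine already exist; the file names and the 1280x720 calibration resolution below are placeholders, not values prescribed by the package:

    from toddlerbot.depth.depth_estimator_foundation_stereo import (
        DepthEstimatorFoundationStereo,
    )

    # Placeholder paths and calibration resolution; substitute your own files.
    estimator = DepthEstimatorFoundationStereo(
        calib_params_path="calib_params.npz",
        rec_params_path="rec_params.npz",
        engine_path="foundation_stereo.engine",
        calib_width=1280,
        calib_height=720,
    )

    # Capture a stereo pair from the cameras and populate every DepthResult field.
    result = estimator.get_depth(return_all=True)
    print(result.depth.shape)        # per-pixel depth map
    print(result.disparity is None)  # False, since return_all=True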
- get_pcl(depth, resized_image, is_BGR=True, zmin=0.0, zmax=inf, denoise_cloud=False, denoise_nb_points=30, denoise_radius=0.03)¶
Convert a depth map to a point cloud.
- Parameters:
depth – Depth map.
resized_image – Color image resized to match the depth map resolution.
is_BGR – Whether the image is in BGR format.
zmin – Minimum depth (meters).
zmax – Maximum depth (meters).
denoise_cloud – Whether to denoise the point cloud.
denoise_nb_points – Number of points to use for radius outlier removal.
denoise_radius – Radius for radius outlier removal.
- Returns:
Point cloud (open3d.geometry.PointCloud).
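Building on the previous sketch, the depth map and a matching color image can be turned into an Open3D point cloud; the resize step and the 3.0 m cut-off are illustrative choices, not defaults of the package:

    import cv2

    depth = result.depth
    # Resize the rectified left image to the depth map resolution for coloring.
    resized = cv2.resize(result.rectified_left, (depth.shape[1], depth.shape[0]))

    cloud = estimator.get_pcl(
        depth,
        resized,
        is_BGR=True,
        zmax=3.0,            # ignore points farther than 3 m (illustrative)
        denoise_cloud=True,  # radius outlier removal with the default settings
    )
    print(len(cloud.points))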
- class toddlerbot.depth.depth_estimator_foundation_stereo.DepthResult(depth: ndarray, disparity: ndarray | None = None, rectified_left: ndarray | None = None, rectified_right: ndarray | None = None, original_left: ndarray | None = None, original_right: ndarray | None = None)¶
Bases:
object
Container for all artifacts produced by depth estimation.
- depth: ndarray¶
- disparity: ndarray | None¶
- original_left: ndarray | None¶
- original_right: ndarray | None¶
- rectified_left: ndarray | None¶
- rectified_right: ndarray | None¶
- toddlerbot.depth.depth_estimator_foundation_stereo.gpu_preproc(img: Image) Tensor ¶
Preprocess PIL Image for TensorRT inference with GPU normalization.
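As a rough picture of what this kind of preprocessing typically involves (not the package's exact implementation; the ImageNet mean/std values and the added batch dimension are assumptions):

    import numpy as np
    import torch
    from PIL import Image

    def gpu_preproc_sketch(img: Image.Image) -> torch.Tensor:
        # HWC uint8 -> CHW float on the GPU, then per-channel normalization.
        t = torch.from_numpy(np.array(img)).cuda().permute(2, 0, 1).float() / 255.0
        mean = torch.tensor([0.485, 0.456, 0.406], device=t.device).view(3, 1, 1)
        std = torch.tensor([0.229, 0.224, 0.225], device=t.device).view(3, 1, 1)
        return ((t - mean) / std).unsqueeze(0)  # add a batch dimension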
- toddlerbot.depth.depth_estimator_foundation_stereo.pad_to_multiple(t: Tensor, k: int = 32) Tuple[Tensor, int, int] ¶
Pad tensor dimensions to nearest multiple of k for model requirements.
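The padding step can be pictured with a minimal sketch (not the package's implementation; right/bottom padding and the order of the returned pad amounts are assumptions):

    import torch
    import torch.nn.functional as F

    def pad_to_multiple_sketch(t: torch.Tensor, k: int = 32):
        """Pad the last two dims of an (N, C, H, W) tensor up to multiples of k."""
        h, w = t.shape[-2:]
        pad_h = (k - h % k) % k
        pad_w = (k - w % k) % k
        # F.pad takes (left, right, top, bottom) for the last two dimensions.
        return F.pad(t, (0, pad_w, 0, pad_h)), pad_h, pad_w

    x = torch.zeros(1, 3, 700, 1250)
    padded, ph, pw = pad_to_multiple_sketch(x)
    print(padded.shape)  # torch.Size([1, 3, 704, 1280])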
toddlerbot.depth.depth_utils module¶
Utility functions for depth processing, point cloud conversion, and stereo rectification.
This module provides core utilities for depth map processing, including conversion to point clouds, stereo rectification, disparity visualization, and image padding.
- toddlerbot.depth.depth_utils.depth_to_xyzmap(depth: ndarray, K, uvs: ndarray = None, zmin=0.0)¶
Convert depth map to 3D coordinate map using camera intrinsics.
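The underlying pinhole back-projection can be summarized with a sketch; the intrinsics layout K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]] is assumed, and the package's handling of uvs and zmin may differ:

    import numpy as np

    def depth_to_xyzmap_sketch(depth: np.ndarray, K: np.ndarray, zmin: float = 0.0):
        """Back-project an (H, W) depth map to an (H, W, 3) map of camera-frame XYZ."""
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        fx, fy = K[0, 0], K[1, 1]
        cx, cy = K[0, 2], K[1, 2]
        z = np.where(depth > zmin, depth, 0.0)
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1).astype(np.float32)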
- toddlerbot.depth.depth_utils.get_rectification_maps(calib_params, rec_params, image_size)¶
Generate stereo rectification maps from calibration parameters.
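A typical implementation pattern, sketched with OpenCV; the intrinsics K1/K2, distortion D1/D2, and extrinsics R/T are assumed to come from the calibration and rectification parameter files, and this is a generic pattern rather than the package's exact code:

    import cv2

    def rectification_maps_sketch(K1, D1, K2, D2, R, T, image_size):
        """Build per-camera remap grids for stereo rectification."""
        R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
            K1, D1, K2, D2, image_size, R, T, flags=cv2.CALIB_ZERO_DISPARITY
        )
        map1 = cv2.initUndistortRectifyMap(K1, D1, R1, P1, image_size, cv2.CV_32FC1)
        map2 = cv2.initUndistortRectifyMap(K2, D2, R2, P2, image_size, cv2.CV_32FC1)
        return map1, map2, Q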
- toddlerbot.depth.depth_utils.pad_images_np(img0, img1, divis_by=32)¶
Pad stereo image pair to be divisible by specified value.
- toddlerbot.depth.depth_utils.to_open3d_Cloud(points, colors=None, normals=None)¶
Convert point arrays to Open3D PointCloud with optional colors and normals.
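A minimal sketch of the conversion, assuming points, colors (in [0, 1]), and normals are (N, 3) arrays:

    import numpy as np
    import open3d as o3d

    def to_open3d_cloud_sketch(points, colors=None, normals=None):
        cloud = o3d.geometry.PointCloud()
        cloud.points = o3d.utility.Vector3dVector(np.asarray(points, dtype=np.float64))
        if colors is not None:
            cloud.colors = o3d.utility.Vector3dVector(np.asarray(colors, dtype=np.float64))
        if normals is not None:
            cloud.normals = o3d.utility.Vector3dVector(np.asarray(normals, dtype=np.float64))
        return cloud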
- toddlerbot.depth.depth_utils.unpad_image_np(img, pad_shape, original_shape)¶
Remove padding from image to restore original dimensions.
- toddlerbot.depth.depth_utils.vis_disparity(disp, min_val=None, max_val=None, invalid_upper_thres=inf, invalid_bottom_thres=-inf, color_map=20, cmap=None, other_output={})¶
Visualize a disparity map as a color image.
- Parameters:
disp – Disparity map as a NumPy array of shape (H, W).
invalid_upper_thres – Values greater than this threshold are treated as invalid.
invalid_bottom_thres – Values less than this threshold are treated as invalid.
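A hedged usage sketch, continuing from the get_depth example above; the 0-192 disparity range is illustrative, and writing the result with OpenCV assumes the function returns an (H, W, 3) uint8 image:

    import cv2
    from toddlerbot.depth.depth_utils import vis_disparity

    vis = vis_disparity(result.disparity, min_val=0.0, max_val=192.0)
    cv2.imwrite("disparity_vis.png", vis)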
Module contents¶
Depth estimation and stereo vision for ToddlerBot.
This package provides depth estimation using stereo camera systems and foundation models for depth prediction, including:
- Foundation model-based stereo depth estimation
- Depth processing and filtering utilities
- Calibration parameter management
- Depth map visualization and analysis tools
- Integration with stereo camera hardware
The depth estimation system supports both traditional stereo vision methods and modern deep learning approaches for robust depth perception in various lighting conditions and environments.