Camera
Camera.set_poses(positions, orientations)
Update camera poses for all environments.
Parameters
positions (Sequence[numpy.ndarray]): Sequence of length num_envs. Each entry represents the target camera position (x, y, z) in local coordinates.
orientations (Sequence[numpy.ndarray]): Sequence of length num_envs. Each entry represents the target camera orientation as a quaternion (w, x, y, z).
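A minimal sketch of building valid inputs for set_poses(). The quat_wxyz_from_yaw helper is hypothetical (not part of this API) and assumes a rotation about the z-axis; the set_poses() call itself is shown commented out since it requires a constructed camera.

```python
import numpy as np

def quat_wxyz_from_yaw(yaw: float) -> np.ndarray:
    """Unit quaternion (w, x, y, z) for a rotation of `yaw` radians about the z-axis."""
    return np.array([np.cos(yaw / 2.0), 0.0, 0.0, np.sin(yaw / 2.0)], dtype=np.float64)

num_envs = 4
# One position and one orientation per environment, as set_poses() expects.
positions = [np.array([1.5, 0.0, 0.8]) for _ in range(num_envs)]
orientations = [quat_wxyz_from_yaw(i * np.pi / num_envs) for i in range(num_envs)]

# camera.set_poses(positions, orientations)  # `camera` obtained elsewhere, e.g. add_camera()
```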
Camera.set_depth_noise(noise_mean: float = 0.0, noise_std: float = 0.0)
Configure Gaussian noise applied to depth measurements.
Parameters
noise_mean (float):
Mean of the depth noise in world units.
noise_std (float):
Standard deviation of the depth noise in world units.
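A sketch of what this noise model does, assuming (this is not stated by the API) that noise is added only to valid pixels and invalid (zero) pixels remain zero. apply_depth_noise is a hypothetical stand-in for the engine's internal behavior, written in numpy.

```python
import numpy as np

def apply_depth_noise(depth: np.ndarray, noise_mean: float, noise_std: float,
                      rng: np.random.Generator) -> np.ndarray:
    """Add Gaussian noise (in world units) to valid pixels; zeros stay zero (assumed)."""
    noisy = depth + rng.normal(noise_mean, noise_std, size=depth.shape)
    return np.where(depth > 0.0, noisy, 0.0)

rng = np.random.default_rng(0)
depth = np.array([[0.0, 2.0], [3.0, 0.0]])  # zeros mark invalid pixels
noisy = apply_depth_noise(depth, noise_mean=0.0, noise_std=0.01, rng=rng)
```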
Camera.set_depth_mode(mode: str = "planar")
Set the depth representation mode used during rendering.
Parameters
mode (str):
Depth encoding mode:
"planar": z-distance along the camera’s forward axis (default).
"euclidean": Euclidean distance along the camera ray.
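The two modes are related by per-pixel ray geometry. The sketch below converts planar z-depth to Euclidean ray distance, assuming a pinhole camera with intrinsics fx, fy, cx, cy (the actual camera model is not specified here).

```python
import numpy as np

def planar_to_euclidean(depth_z: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
    """Convert planar (z-axis) depth to per-ray Euclidean distance, pinhole model."""
    h, w = depth_z.shape
    u = np.arange(w)[None, :]
    v = np.arange(h)[:, None]
    # Ray length per unit z-depth grows away from the principal point.
    scale = np.sqrt(1.0 + ((u - cx) / fx) ** 2 + ((v - cy) / fy) ** 2)
    return depth_z * scale

depth_z = np.full((4, 4), 2.0)  # flat wall at z = 2 in planar mode
d_euclid = planar_to_euclidean(depth_z, fx=4.0, fy=4.0, cx=1.5, cy=1.5)
```

Euclidean distances are always at least the planar depth, with equality only on the optical axis.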
Camera.get_depth_map() -> torch.Tensor
Return the cached depth map from the last render(device) call. Every valid pixel lies within [near_clip, far_clip]; invalid pixels are set to zero.
Returns
depth_map (torch.Tensor): A tensor of shape (num_envs, height, width) with dtype torch.float32, on the same device as used in the last render(device) call, where:
num_envs is the number of environments.
height, width are the camera’s height and width.
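Since invalid pixels are exactly zero, a boolean mask recovers the valid region. A small numpy sketch (the real API returns a torch tensor, but the indexing is identical):

```python
import numpy as np

# Stand-in for a get_depth_map() result of shape (num_envs=2, height=2, width=2).
depth = np.array([[[0.0, 1.2], [0.0, 3.4]],
                  [[2.0, 0.0], [0.0, 0.0]]], dtype=np.float32)

valid = depth > 0.0                                      # invalid pixels are exactly zero
counts = valid.reshape(valid.shape[0], -1).sum(axis=1)   # valid-pixel count per env
# Mean valid depth per environment (zeros contribute nothing to the sum).
mean_depth = depth.reshape(depth.shape[0], -1).sum(axis=1) / np.maximum(counts, 1)
```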
Camera.get_segmentation_map() -> torch.Tensor
Return the cached instance segmentation map from the last render(device) call.
Returns
segmentation (torch.Tensor): A tensor of shape (num_envs, height, width) with dtype torch.int32, on the same device as used in the last render(device) call, where each pixel stores an instance ID:
0: background.
>= 1: valid object instance IDs.
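A common pattern is extracting the set of instances visible in one environment and a per-instance mask. Sketched in numpy (the real API returns torch.int32):

```python
import numpy as np

# Stand-in for a get_segmentation_map() result of shape (num_envs=2, 2, 2).
seg = np.array([[[0, 1], [1, 2]],
                [[0, 0], [3, 3]]], dtype=np.int32)

ids = np.unique(seg[0])          # all IDs present in env 0, including background
instance_ids = ids[ids >= 1]     # drop background (ID 0)
mask_id1 = seg[0] == 1           # boolean mask of pixels belonging to instance 1
```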
Camera.get_point_cloud() -> torch.Tensor
Unproject the cached depth map into a 3D point cloud in local coordinates.
Returns
points (torch.Tensor): A tensor of shape (num_envs, max_points, D) with dtype torch.float32, where:
max_points is the maximum number of valid points (i.e., points with non-zero depth) across all environments. Environments with fewer valid points are padded with zeros.
D is the dimension of vertex positions (= 3; XYZ positions).
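The sketch below illustrates unprojection for a single environment, assuming a pinhole camera with hypothetical intrinsics fx, fy, cx, cy and planar depth; the engine's actual camera model and local-coordinate convention may differ.

```python
import numpy as np

def unproject_planar_depth(depth_z: np.ndarray, fx: float, fy: float,
                           cx: float, cy: float) -> np.ndarray:
    """Unproject a planar depth map into (N, 3) XYZ points in the camera frame,
    keeping only valid (non-zero-depth) pixels."""
    h, w = depth_z.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth_z
    y = (v - cy) / fy * depth_z
    pts = np.stack([x, y, depth_z], axis=-1).reshape(-1, 3)
    return pts[depth_z.reshape(-1) > 0.0]   # drop padded/invalid pixels

depth = np.zeros((2, 2))
depth[0, 0] = 2.0                           # a single valid pixel
pts = unproject_planar_depth(depth, fx=1.0, fy=1.0, cx=1.0, cy=1.0)
```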
Camera.name
Return the name provided at construction via add_camera().