OfflineSim
realsim_py.OfflineSim
OfflineSim has functions very similar to RealtimeSim, but runs in headless mode and can export simulation results as Alembic files for offline rendering.
Typical usage:
```python
import realsim_py as rs

sim = rs.OfflineSim()
sim.load_config("path/to/config.json")
sim.initialize()
sim.set_envs(4)

n_steps = 100
for _ in range(n_steps):
    sim.step()
```

Class
OfflineSim()
Create an empty offline simulator instance.
Call load_config(), initialize(), and set_envs() before running the simulation.
Simulation core
load_config(config_path: str)
Load the simulation configuration from a JSON file.
Parameters
config_path (str): Relative or absolute path to a JSON configuration file. We provide examples in path_to_realsim_py/simulation/config/.
initialize(suppress_output: bool = False)
Initialize the simulator after loading the configuration.
Parameters
suppress_output (bool, optional): Whether to suppress the built-in simulation console output. This output is mainly for internal debugging, so feel free to suppress it.
set_envs(n_envs: int = 1)
Set the number of environments simulated in parallel. This must be called to complete initialization, even when using only one environment.
Parameters
n_envs (int, optional): Number of environments. Must be a positive integer.
step()
Advance the simulation by a single time step, including the physics update and any active controllers.
State access
get_positions(device: str = "cpu") -> torch.Tensor
Return the current global vertex positions.
Parameters
device (str, optional): Target device for the output tensor. Must be "cpu" or "cuda".
Returns
positions (torch.Tensor): A tensor of shape (N, D) on the requested device with dtype torch.float64, where:
N is the total number of vertices across all environments.
D is the dimension of vertex positions (= 3; they are XYZ positions).
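A minimal sketch of reading positions back, assuming a simulator set up as in the typical usage above:

```python
positions = sim.get_positions(device="cpu")  # (N, 3), dtype torch.float64

# Quick sanity check: axis-aligned bounding box of all environments combined.
lo = positions.min(dim=0).values
hi = positions.max(dim=0).values
print("scene AABB:", lo.tolist(), hi.tolist())
```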
get_all_positions(device: str = "cpu", max_history: int = -1) -> torch.Tensor
Return a history of global vertex positions.
Parameters
device (str, optional): Target device for the output tensor. Must be "cpu" or "cuda".
max_history (int, optional):
If max_history > 0: at most this many most recent frames are returned.
If max_history == -1: all recorded frames are returned.
Returns
positions_history (torch.Tensor): A tensor of shape (T, N, D) on the requested device with dtype torch.float64, where:
T is the number of frames returned.
N is the total number of vertices across all environments.
D is the dimension of vertex positions (= 3; they are XYZ positions).
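As an illustration, the frame history can be used to estimate how much the scene is still moving (a sketch, assuming the simulator from the typical usage):

```python
history = sim.get_all_positions(device="cpu", max_history=10)  # (T, N, 3)

if history.shape[0] >= 2:
    # Mean vertex displacement between consecutive frames; near zero once settled.
    deltas = (history[1:] - history[:-1]).norm(dim=-1).mean(dim=-1)
    print("per-frame mean displacement:", deltas.tolist())
```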
get_local_positions(device: str = "cpu") -> torch.Tensor
Return vertex positions in local (per-environment) coordinates.
This subtracts the per-environment offset from the global positions, so that each vertex is expressed in its local environment frame.
Parameters
device (str, optional): Target device for the output tensor. Must be "cpu" or "cuda".
Returns
local_positions (torch.Tensor): A tensor of shape (N, D) on the requested device with dtype torch.float64, where:
N is the total number of vertices.
D is the dimension of vertex positions (= 3; they are XYZ positions).
get_offset(eid: int, oid: int) -> int
Return the starting global vertex index for a given object in a given environment.
Parameters
eid (int): Environment index.
oid (int): Object index within that environment.
Returns
offset (int).
get_obj_vertex_length(eid: int, oid: int) -> int
Return the number of vertices of a given object in a given environment.
Parameters
eid (int): Environment index.
oid (int): Object index within that environment.
Returns
num_vertices (int).
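Together, get_offset() and get_obj_vertex_length() let you slice one object's vertices out of the flat tensor returned by get_positions(). A sketch:

```python
eid, oid = 0, 0  # first object in the first environment

start = sim.get_offset(eid, oid)
length = sim.get_obj_vertex_length(eid, oid)

positions = sim.get_positions(device="cpu")      # (N, 3) across all environments
obj_positions = positions[start:start + length]  # (length, 3) for this object only
```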
Reset
reset_velocity()
Reset all vertex velocities to zero.
set_positions(positions)
Set the full global vertex positions in the solver.
Parameters
positions (numpy.ndarray): A 2D array of shape (N, D), where:
N is the total number of vertices.
D is the dimension of vertex positions (= 3; they are XYZ positions).
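A common pattern is to nudge the state and then restart from rest. A sketch (the Z-up convention below is an assumption about your scene, not a library guarantee):

```python
import numpy as np

positions = sim.get_positions(device="cpu").numpy()  # (N, 3)
positions[:, 2] += 0.05  # lift everything by 5 cm (assumes Z is the up axis)

sim.set_positions(positions)
sim.reset_velocity()  # drop any stale velocities before stepping again
```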
Controllers
Vertex controllers
add_positional_controller(indices, pos_dest, weights, tag: str = "")
add_positional_controller(indices, pos_dest, weights, tag: str = "")Add per-environment positional controllers on subsets of vertices.
Parameters
indices (Sequence[numpy.ndarray]): A sequence of length num_envs. Each element is an integer array of shape (K_e,) giving the target local vertex indices in environment e.
pos_dest (Sequence[numpy.ndarray]): A sequence of length num_envs. Each element is a float array of shape (K_e, 3) giving the target local vertex positions in environment e.
weights (Sequence[float]): A sequence of length num_envs. Each entry is the controller inverse stiffness to use for environment e. We recommend setting them all to zero for stability.
tag (str, optional): An optional label used to group these controllers.
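A sketch that pins one vertex per environment at its current location, steps, then removes only those controllers. The vertex choice is arbitrary, and it assumes object 0 starts each environment's vertex range:

```python
import numpy as np

n_envs = 4
local = sim.get_local_positions(device="cpu").numpy()  # (N, 3), flat across envs

indices, pos_dest, weights = [], [], []
for e in range(n_envs):
    env_start = sim.get_offset(e, 0)  # global index where environment e begins
    indices.append(np.array([0], dtype=np.int64))    # env-local vertex index 0
    pos_dest.append(local[env_start:env_start + 1])  # (1, 3) target, local coords
    weights.append(0.0)  # zero inverse stiffness, as recommended above

sim.add_positional_controller(indices, pos_dest, weights, tag="pins")
for _ in range(50):
    sim.step()
sim.clear_controller_set(tag="pins")  # remove only the "pins" controllers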
clear_controller_set(tag: str = "")
clear_controller_set(tag: str = "")Remove the controllers.
Parameters
tag (str, optional): An optional label used to group controllers.
If tag == "" (default): all controllers are cleared.
If tag is non-empty: only controllers associated with this tag are cleared.
End-effector controllers
add_ee(ee_name: str, current_ee_7d_pos, ee_dims, cloth_thickness: float = 0.002)
Create an end effector (EE) with its EE-aligned oriented bounding box (OBB) in each environment. This is a simplified way to simulate a gripper. The added EE is initially open.
Parameters
ee_name (str): Unique name for the end effector.
current_ee_7d_pos (Sequence[numpy.ndarray]): Sequence of length num_envs. Each entry is a 7D vector:
First 3: EE center position (x, y, z) in local environment coordinates.
Last 4: EE orientation as a quaternion (w, x, y, z).
ee_dims (numpy.ndarray): OBB extents (dx, dy, dz) along the EE box axes.
cloth_thickness (float, optional): Effective cloth thickness used to slightly enlarge the grasp region along all axes. Default is 0.002.
grasp_ee(ee_name: str, grasp_commands)
Open or close the specified end effector per environment. Closing captures the vertices inside the EE-aligned OBB; opening releases any grasped vertices for that environment.
Parameters
ee_name (str): Name of an EE previously created via add_ee().
grasp_commands (Sequence[bool]): Sequence of length num_envs. For each environment e:
If the command matches the current grasp state: nothing happens.
If the command is True and the gripper is currently open: close the gripper, capturing vertices inside the OBB and applying control to hold the grasped vertices at their current positions.
If the command is False and the gripper is currently closed: open the gripper, releasing any previously grasped vertices and removing the control applied by the EE.
move_ee(ee_name: str, target_ee_7d_pos, weights)
Move the specified end effector, applying control to any grasped vertices so they follow the target EE pose in each environment.
Parameters
ee_name (str): Name of an EE previously created via add_ee().
target_ee_7d_pos (Sequence[numpy.ndarray]): Sequence of length num_envs. Each entry is a 7D vector:
First 3: target EE center position (x, y, z) in local coordinates.
Last 4: target EE orientation as a quaternion (w, x, y, z).
weights (Sequence[float]): Sequence of length num_envs. Refer to add_positional_controller.
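A pick-and-lift sketch combining add_ee(), grasp_ee(), and move_ee(); the pose, box extents, and Z-up lifting direction are illustrative assumptions:

```python
import numpy as np

n_envs = 4
# Illustrative pose: EE at (0, 0, 0.1) in local coordinates, identity quaternion.
ee_pose = [np.array([0.0, 0.0, 0.1, 1.0, 0.0, 0.0, 0.0]) for _ in range(n_envs)]
sim.add_ee("gripper", ee_pose, np.array([0.02, 0.02, 0.02]))

# Close the gripper everywhere, capturing vertices inside each OBB.
sim.grasp_ee("gripper", [True] * n_envs)

# Lift by 10 cm over 100 steps; grasped vertices follow the EE.
for i in range(100):
    target = []
    for p in ee_pose:
        q = p.copy()
        q[2] += 0.1 * (i + 1) / 100
        target.append(q)
    sim.move_ee("gripper", target, [0.0] * n_envs)
    sim.step()

# Release and remove the end effector.
sim.grasp_ee("gripper", [False] * n_envs)
sim.clear_ee("gripper")
```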
get_ee(ee_name: str) -> list[list[GraspInfo]]
Return the list of grasped vertices for the specified end effector.
Returns
ee_grasps (list[list[GraspInfo]]): A nested list where:
The outer list has length num_envs.
Each inner list contains GraspInfo objects for the vertices captured by grasp_ee() in that environment.
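For instance, to verify that a grasp actually captured something in every environment (a sketch, assuming the "gripper" EE from the example above):

```python
grasps = sim.get_ee("gripper")
for e, env_grasps in enumerate(grasps):
    print(f"env {e}: {len(env_grasps)} grasped vertices")
```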
clear_ee(ee_name: str)
Remove the specified end effector and its applied control.
Parameters
ee_name (str): Name of an EE previously created via add_ee().
Cameras and sensing
add_camera(camera_name: str, width: int, height: int, fx: float, fy: float, cx: float, cy: float, position: numpy.ndarray, rotation: numpy.ndarray, clipping_range: numpy.ndarray)
Register a virtual depth camera for all environments.
Parameters
camera_name (str): Unique name for the camera.
width (int), height (int): Image resolution in pixels.
fx, fy (float): Focal lengths in pixels.
cx, cy (float): Principal point in pixels.
position (numpy.ndarray): Camera position (x, y, z) in local coordinates.
rotation (numpy.ndarray): Camera orientation as a quaternion (w, x, y, z).
clipping_range (numpy.ndarray): Camera clipping distances (near, far). Must satisfy 0 < near < far.
move_camera(camera_name: str, position: numpy.ndarray, rotation: numpy.ndarray)
Move an existing depth camera to the target pose.
Parameters
camera_name (str): Name of an existing camera registered via add_camera().
position (numpy.ndarray): Target camera position (x, y, z) in local coordinates.
rotation (numpy.ndarray): Target camera orientation as a quaternion (w, x, y, z).
get_depth_map(camera_name: str, device: str = "cpu", noise_std: float = 0.0, enable_obstacles: bool = False) -> torch.Tensor
Render a depth map from the specified camera. Every valid pixel lies inside the clipping range; invalid pixels are zero.
Parameters
camera_name (str): Name of an existing camera registered via add_camera().
device (str, optional): Target device for the output tensor. Must be "cpu" or "cuda".
noise_std (float, optional): Standard deviation of additive depth noise in world units.
enable_obstacles (bool, optional): If True, include obstacles in the depth map. If False, only objects are considered.
Returns
depth_map (torch.Tensor): A tensor of shape (num_envs, H, W) with dtype torch.float32, where:
H, W are the camera's height and width.
get_point_cloud(camera_name: str, depth_map: torch.Tensor, device: str = "cpu") -> torch.Tensor
Unproject a depth map into a 3D point cloud in local coordinates.
Parameters
camera_name (str): Name of an existing camera registered via add_camera().
depth_map (torch.Tensor): Depth map obtained from get_depth_map().
device (str, optional): Target device for the output tensor. Must be "cpu" or "cuda".
Returns
points (torch.Tensor): A tensor of shape (num_envs, max_points, D) with dtype torch.float32, where:
max_points is the maximum number of valid points (i.e., with non-zero depth) across all environments. Environments with fewer valid points are padded with zeros.
D is the dimension of the points (= 3; they are XYZ positions).
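An end-to-end sensing sketch: register a camera, render depth, and unproject to points. The intrinsics and pose below are illustrative values, not defaults:

```python
import numpy as np

# add_camera(name, width, height, fx, fy, cx, cy, position, rotation, clipping_range)
sim.add_camera(
    "top_cam", 640, 480, 600.0, 600.0, 320.0, 240.0,
    np.array([0.0, 0.0, 1.0]),       # position in local coordinates
    np.array([1.0, 0.0, 0.0, 0.0]),  # identity quaternion (w, x, y, z)
    np.array([0.05, 5.0]),           # (near, far)
)

sim.step()
depth = sim.get_depth_map("top_cam", device="cpu", noise_std=0.001)  # (num_envs, 480, 640)
points = sim.get_point_cloud("top_cam", depth, device="cpu")         # (num_envs, max_points, 3)

# Drop zero-padded rows for environment 0 (padding rows are all zeros).
pts0 = points[0]
valid = pts0[pts0.abs().sum(dim=-1) > 0]
```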
Export
export_abc(output_path: str = "")
export_abc(output_path: str = "")Export every simulated body (object/obstacle) in every environment as a separate Alembic file.
Parameters
output_path (str, optional): Output directory for the Alembic files.
If non-empty: used directly.
If empty: falls back to output_abc inside the configuration JSON file.
export_combined_abc(env_indices, output_path: str = "")
export_combined_abc(env_indices, output_path: str = "")Export all objects and obstacles in the selected environments as a single Alembic file.
Parameters
env_indices (Sequence[int]): List of environment indices to include in the combined export. Must be non-empty.
output_path (str, optional): Output directory for the combined Alembic file.
If non-empty: used directly.
If empty: falls back to output_abc inside the configuration JSON file.
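A short export sketch, assuming the simulator from the typical usage and a config whose output_abc field points at a writable directory:

```python
# Run a short rollout, then export the results.
for _ in range(240):
    sim.step()

# One Alembic file per body, written to output_abc from the config:
sim.export_abc()

# Or bundle environments 0 and 1 into a single file in a custom directory:
sim.export_combined_abc([0, 1], "out/combined/")
```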