RealtimeSim

realsim_py.RealtimeSim

RealtimeSim provides a real-time simulation interface with interactive visualization and access to simulation state, controllers, and virtual depth cameras.

Typical usage:

import realsim_py as rs

sim = rs.RealtimeSim()
sim.load_config("path/to/config.json")
sim.initialize()
sim.set_envs(4)

n_steps = 100
for _ in range(n_steps):
    sim.step()

Class

RealtimeSim()

Create an empty real-time simulator instance.

Call load_config(), initialize(), and set_envs() before running the simulation.


Simulation core

load_config(config_path: str)

Load the simulation configuration from a JSON file.

Parameters

  • config_path (str): Relative or absolute path to a JSON configuration file. We provide examples in path_to_realsim_py/simulation/config/.


initialize(suppress_output: bool = False)

Initialize the simulator after loading the configuration.

Parameters

  • suppress_output (bool, optional): Whether to suppress the simulator's built-in console output. This output is mainly for internal debugging, so it is usually safe to suppress.


set_envs(n_envs: int = 1)

Set the number of environments simulated in parallel. This must be called to complete initialization, even when running a single environment.

Parameters

  • n_envs (int, optional): Number of environments. Must be a positive integer.


step()

Advance the simulation by a single time step, including the physics update and the application of active controllers.


State access

get_positions(device: str = "cpu") -> torch.Tensor

Return the current global vertex positions.

Parameters

  • device (str, optional): Target device for the output tensor. Must be "cpu" or "cuda".

Returns

  • positions (torch.Tensor): A tensor of shape (N, D) on the requested device with dtype torch.float64, where:

    • N is the total number of vertices across all environments.

    • D is the dimension of vertex positions (= 3; they are XYZ positions).
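As a sketch of post-processing the returned positions (using a NumPy stand-in for `sim.get_positions("cpu").numpy()`; the vertex count here is an arbitrary example):

```python
import numpy as np

# Sketch: post-processing the output of get_positions(). `positions`
# stands in for sim.get_positions("cpu").numpy(), i.e. an (N, 3)
# float64 array of global vertex positions.
positions = np.random.default_rng(0).uniform(-1.0, 1.0, size=(500, 3))

centroid = positions.mean(axis=0)                       # (3,) mean vertex position
lo, hi = positions.min(axis=0), positions.max(axis=0)   # axis-aligned bounds
extent = hi - lo

print(centroid.shape, extent.shape)
```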


get_all_positions(device: str = "cpu", max_history: int = -1) -> torch.Tensor

Return a history of global vertex positions.

Parameters

  • device (str, optional): Target device for the output tensor. Must be "cpu" or "cuda".

  • max_history (int, optional):

    • If max_history > 0: at most this many most recent frames are returned.

    • If max_history == -1: all recorded frames are returned.

Returns

  • positions_history (torch.Tensor): A tensor of shape (T, N, D) on the requested device with dtype torch.float64, where:

      • T is the number of frames returned.

      • N is the total number of vertices across all environments.

      • D is the dimension of vertex positions (= 3; they are XYZ positions).

Note: To avoid running out of memory, the number of frames recorded by the simulation is capped at max_recorded_frames when it is specified as a positive integer in the configuration file.
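For example, the history can be used to measure per-frame motion (using a NumPy stand-in for the real output; the shapes and drift below are arbitrary for illustration):

```python
import numpy as np

# Sketch: measuring per-frame motion from a position history. `history`
# stands in for sim.get_all_positions("cpu", max_history=10).numpy(),
# i.e. a (T, N, 3) float64 array.
T, N = 10, 200
history = np.cumsum(np.full((T, N, 3), 0.01), axis=0)  # fake drifting cloth

# Mean vertex displacement between consecutive frames.
step_disp = np.linalg.norm(history[1:] - history[:-1], axis=-1)  # (T-1, N)
mean_motion = step_disp.mean(axis=1)                             # (T-1,)
print(mean_motion.shape)
```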


get_local_positions(device: str = "cpu") -> torch.Tensor

Return vertex positions in local (per-environment) coordinates.

This subtracts the per-environment offset from the global positions, so that each vertex is expressed in its local environment frame.

Parameters

  • device (str, optional): Target device for the output tensor. Must be "cpu" or "cuda".

Returns

  • local_positions (torch.Tensor): A tensor of shape (N, D) on the requested device with dtype torch.float64, where:

    • N is the total number of vertices.

    • D is the dimension of vertex positions (= 3; they are XYZ positions).


get_offset(eid: int, oid: int) -> int

Return the starting global vertex index for a given object in a given environment.

Parameters

  • eid (int): Environment index.

  • oid (int): Object index within that environment.

Returns

  • offset (int).


get_obj_vertex_length(eid: int, oid: int) -> int

Return the number of vertices of a given object in a given environment.

Parameters

  • eid (int): Environment index.

  • oid (int): Object index within that environment.

Returns

  • num_vertices (int).
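Together, get_offset() and get_obj_vertex_length() let you slice one object's vertices out of the flat global position array. A minimal sketch (the array, offset, and length below are stand-ins for real return values):

```python
import numpy as np

# Sketch: slicing one object's vertices out of the flat global positions.
# `positions`, `offset`, and `length` stand in for the values returned by
# get_positions(), get_offset(eid, oid), and get_obj_vertex_length(eid, oid).
positions = np.zeros((100, 3))                    # pretend global (N, 3) positions
offset, length = 40, 25                           # e.g. get_offset(0, 1), get_obj_vertex_length(0, 1)
obj_vertices = positions[offset:offset + length]  # (length, 3) view of that object
print(obj_vertices.shape)                         # (25, 3)
```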


Reset

reset_velocity()

Reset all vertex velocities to zero.


set_positions(positions)

Set the full global vertex positions in the solver.

Parameters

  • positions (numpy.ndarray): A 2D array of shape (N, D), where:

    • N is the total number of vertices.

    • D is the dimension of vertex positions (= 3; they are XYZ positions).
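A common use is to transform the current positions before writing them back. A minimal sketch (the array and the 0.1 m lift are arbitrary examples; the write-back call requires an initialized sim):

```python
import numpy as np

# Sketch: translating the whole scene before writing it back with
# set_positions(). `positions` stands in for the (N, 3) array obtained
# from get_positions().
positions = np.zeros((300, 3))
lifted = positions + np.array([0.0, 0.0, 0.1])   # lift everything by 0.1 m along z

# sim.set_positions(lifted)   # write back (requires an initialized sim)
print(lifted[:, 2].min(), lifted[:, 2].max())
```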


Controllers

Vertex controllers

add_positional_controller(indices, pos_dest, weights, tag: str = "")

Add per-environment positional controllers on subsets of vertices.

Parameters

  • indices (Sequence[numpy.ndarray]): A sequence of length num_envs. Each element is an integer array of shape (K_e,) giving the target local vertex indices in environment e.

  • pos_dest (Sequence[numpy.ndarray]): A sequence of length num_envs. Each element is a float array of shape (K_e, 3) giving the target local vertex positions in environment e.

  • weights (Sequence[float]): A sequence of length num_envs. Each entry is the inverse controller stiffness used for environment e. We recommend setting them all to zero for stability.

  • tag (str, optional): An optional label used to group these controllers.
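A minimal sketch of assembling the arguments, pinning the first three vertices of each environment (num_envs, indices, and target coordinates are arbitrary examples; the sim call is shown commented since it requires an initialized sim):

```python
import numpy as np

# Sketch: pinning the first three vertices of each environment at fixed
# targets via add_positional_controller().
num_envs = 2
indices = [np.array([0, 1, 2], dtype=np.int64) for _ in range(num_envs)]
pos_dest = [np.array([[0.0, 0.0, 0.5],
                      [0.1, 0.0, 0.5],
                      [0.2, 0.0, 0.5]]) for _ in range(num_envs)]
weights = [0.0] * num_envs   # zero inverse stiffness, as recommended

# sim.add_positional_controller(indices, pos_dest, weights, tag="pin")
# ...later: sim.clear_controller_set("pin")
print(len(indices), pos_dest[0].shape)
```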


clear_controller_set(tag: str = "")

Remove positional controllers.

Parameters

  • tag (str, optional): An optional label used to group controllers.

    • If tag == "" (default): all controllers are cleared.

    • If tag is non-empty: only controllers associated with this tag are cleared.


End-effector controllers

add_ee(ee_name: str, current_ee_7d_pos, ee_dims, contracted_dims, cloth_thickness: float = 0.002)

Create an end effector (EE) and visualize its EE-aligned oriented bounding box (OBB) in each environment. This is a simplified model of a parallel-jaw gripper. The added EE is initially open.

Parameters

  • ee_name (str): Unique name for the end effector.

  • current_ee_7d_pos (Sequence[numpy.ndarray]): Sequence of length num_envs. Each entry is a 7D vector:

    • First 3: EE center position (x, y, z) in local environment coordinates.

    • Last 4: EE orientation as a quaternion (w, x, y, z).

  • ee_dims (numpy.ndarray): OBB extents (dx, dy, dz) along the EE box axes.

  • contracted_dims (numpy.ndarray): Contracted OBB extents (cx, cy, cz) along the EE box axes. Each extent should be less than or equal to the corresponding entry of ee_dims. When vertices are grasped via grasp_ee(), they are contracted proportionally to simulate a parallel-jaw gripper closing.

  • cloth_thickness (float, optional): Effective cloth thickness used to slightly enlarge the grasp region along all axes. Default is 0.002.


grasp_ee(ee_name: str, grasp_commands, success_flags)

Open or close the specified end effector per environment. Closing captures the vertices inside the EE-aligned OBB; opening releases any grasped vertices for that environment.

Parameters

  • ee_name (str): Name of an EE previously created via add_ee().

  • grasp_commands (Sequence[bool]): Sequence of length num_envs. For each environment e:

    • If the command matches the current grasp state: nothing happens.

    • If command is True and gripper is currently open: close the gripper, capturing vertices inside the OBB and applying control to hold grasped vertices at current positions.

    • If command is False and gripper is currently closed: open the gripper, releasing any previously grasped vertices and removing control applied by the EE.

  • success_flags (Sequence[bool], optional): Sequence of length num_envs. Only takes effect when the corresponding grasp command is True. Defaults to all True. For each environment e:

    • If False while the grasp command is True: the grasp state becomes closed, but no vertices are captured (simulating a failed grasp), or any previously grasped vertices are released (simulating a failed hold).

    • If True, or if the grasp command is False: the grasp command is carried out normally.


move_ee(ee_name: str, target_ee_7d_pos, weights)

Move the specified end effector and apply control so that any grasped vertices follow the target EE pose in each environment.

Parameters

  • ee_name (str): Name of an EE previously created via add_ee().

  • target_ee_7d_pos (Sequence[numpy.ndarray]): Sequence of length num_envs. Each entry is a 7D vector:

    • First 3: target EE center position (x, y, z) in local coordinates.

    • Last 4: target EE orientation as a quaternion (w, x, y, z).

  • weights (Sequence[float]): Sequence of length num_envs. See add_positional_controller() for the meaning of each entry.
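A grasp-and-lift sequence combining add_ee(), grasp_ee(), and move_ee() can be sketched as follows. The EE name, poses, and dimensions are arbitrary examples, and the sim calls are shown commented since they require an initialized sim:

```python
import numpy as np

# Sketch of a grasp-and-lift sequence with the end-effector API.
num_envs = 2
identity_quat = np.array([1.0, 0.0, 0.0, 0.0])                 # (w, x, y, z)
start_pose = np.concatenate([[0.0, 0.0, 0.3], identity_quat])  # 7D pose
poses = [start_pose.copy() for _ in range(num_envs)]

# sim.add_ee("gripper", poses, np.array([0.04, 0.04, 0.02]),
#            np.array([0.04, 0.04, 0.004]))
# sim.grasp_ee("gripper", [True] * num_envs, [True] * num_envs)

lift = np.array([0.0, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0])           # raise 10 cm
targets = [p + lift for p in poses]
# sim.move_ee("gripper", targets, [0.0] * num_envs)
print(targets[0][:3])
```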


get_ee(ee_name: str) -> list[list[GraspInfo]]

Return the list of grasped vertices for the specified end effector.

Returns

  • ee_grasps (list[list[GraspInfo]]): A nested list where:

    • Outer list has length num_envs.

    • Each inner list contains GraspInfo objects for vertices captured by grasp_ee() in that environment.

Note: GraspInfo is a helper data structure describing a selected grasp point.

Attributes:

  • env_rel_index: local vertex index.

  • x, y, z: coordinates of the grasp point in the OBB frame.
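A common use of get_ee() is checking which environments actually grasped something. A minimal sketch (the nested list below is a stand-in shaped like the real return value):

```python
# Sketch: checking which environments actually grasped something,
# using a nested list shaped like the return value of get_ee("gripper").
ee_grasps = [[object(), object()], []]   # stand-in: env 0 grasped 2 vertices, env 1 none
counts = [len(grasps) for grasps in ee_grasps]
grasp_success = [c > 0 for c in counts]
print(counts, grasp_success)             # [2, 0] [True, False]
```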


clear_ee(ee_name: str)

Remove the specified end effector and its applied control.

Parameters

  • ee_name (str): Name of an EE previously created via add_ee().


Cameras

add_camera(camera_name, width, height, fx, fy, cx, cy, near_clip, far_clip, positions, orientations) -> Camera

Add a virtual depth camera for all environments in the simulation and return a Camera object.

Parameters

  • camera_name (str): Unique name for the camera. Must be distinct within the simulation.

  • width (int), height (int): Image resolution in pixels.

  • fx, fy (float): Focal lengths in pixels.

  • cx, cy (float): Principal point in pixels.

  • near_clip (float): Near clipping distance (meters). Must be strictly positive and less than far_clip.

  • far_clip (float): Far clipping distance (meters). Must be greater than near_clip.

  • positions (Sequence[numpy.ndarray]): Sequence of length num_envs. Each entry is the camera position (x, y, z) in local coordinates.

  • orientations (Sequence[numpy.ndarray]): Sequence of length num_envs. Each entry is the camera orientation as a quaternion (w, x, y, z).

Returns

  • camera (Camera):

    A Camera object that can be used to query depth maps, instance segmentation maps, and point clouds.

Note: The initial camera axes follow the local coordinate convention: +x points to the right, +y points upward, and +z points backward (for a forward-facing camera).
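A minimal sketch of assembling add_camera() arguments (the resolution, intrinsics, and pose are arbitrary example values; the sim calls are shown commented since they require an initialized sim):

```python
import math
import numpy as np

# Sketch: assembling add_camera() arguments for a 640x480 depth camera
# in every environment.
num_envs = 2
width, height = 640, 480
fx = fy = 600.0
cx, cy = width / 2.0, height / 2.0

positions = [np.array([0.0, 0.5, 1.0]) for _ in range(num_envs)]
orientations = [np.array([1.0, 0.0, 0.0, 0.0]) for _ in range(num_envs)]  # (w, x, y, z)

# cam = sim.add_camera("top_cam", width, height, fx, fy, cx, cy,
#                      0.01, 10.0, positions, orientations)
# sim.render("cuda")

# Horizontal field of view implied by fx and width:
hfov_deg = math.degrees(2.0 * math.atan(width / (2.0 * fx)))
print(round(hfov_deg, 1))
```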


get_camera(camera_name) -> Optional[Camera]

Retrieve an existing camera by name.

Parameters

  • camera_name (str):

    Name of an existing camera registered via add_camera().

Returns

  • camera (Camera or None):

    The corresponding Camera object if it exists, otherwise None.


render(device, enable_obstacles: bool = True)

Render all registered cameras for the current simulation state.

Parameters

  • device (str):

    Compute device used for rendering. Must be "cpu" or "cuda".

  • enable_obstacles (bool, optional): If True, obstacles and analytical obstacles are included in the rendering. If False, they are excluded.


Robots

These functions manage articulated robots for rendering only; robots do not participate in the physical simulation.

add_robot_from_urdf(robot_name, urdf_path, base_7d_pos, global_scale: float = 1.0, use_collision: bool = True)

Add an articulated robot from a URDF file.

Parameters

  • robot_name (str): Unique identifier for the robot instance.

  • urdf_path (str): Path to the URDF file.

  • base_7d_pos (Sequence[array-like]): Sequence of length num_envs. Each element is a 7D vector [x, y, z, qw, qx, qy, qz], where:

    • (x, y, z) is the robot base position in the local environment frame.

    • (qw, qx, qy, qz) is the base orientation as a unit quaternion.

  • global_scale (float, optional): Uniform scale factor applied to the robot geometry.

  • use_collision (bool, optional): If True, loads collision meshes (typically simpler, faster). If False, loads visual meshes.
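A minimal sketch of adding a render-only robot (the robot name, URDF path, and base pose are arbitrary example values; the sim call is shown commented since it requires an initialized sim):

```python
import numpy as np

# Sketch: adding a render-only robot in every environment.
num_envs = 2
base_7d_pos = [np.array([0.5, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])  # [x, y, z, qw, qx, qy, qz]
               for _ in range(num_envs)]

# sim.add_robot_from_urdf("arm", "assets/arm.urdf", base_7d_pos,
#                         global_scale=1.0, use_collision=True)
print(len(base_7d_pos), base_7d_pos[0].shape)
```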


get_robot_poses(robot_name) -> list[array-like]

Get the current base poses of the specified robot in all environments.

Parameters

  • robot_name (str): Name of a robot previously added via add_robot_from_urdf().

Returns

  • base_7d_pos (list[array-like]): A list of length num_envs. Each element is a 7D vector [x, y, z, qw, qx, qy, qz], where:

    • (x, y, z) is the base position in the local environment frame.

    • (qw, qx, qy, qz) is the base orientation quaternion.


get_robot_joint_names(robot_name) -> list[str]

Get the ordered list of joint names for the specified robot.

Parameters

  • robot_name (str): Name of a robot previously added via add_robot_from_urdf().

Returns

  • joint_names (list[str]): Ordered joint names for this robot. This ordering is used by:

    • get_robot_joints(robot_name)

    • set_robot_joints(robot_name, joint_states)


get_robot_joints(robot_name) -> list[list[float]]

Get the current joint positions of the specified robot in all environments.

Parameters

  • robot_name (str): Name of a robot previously added via add_robot_from_urdf().

Returns

  • joint_states (list[list[float]]): A nested list of size (num_envs, num_joints).

Note: The inner ordering of joints matches get_robot_joint_names(robot_name). The robot's scale factor is applied internally to prismatic joints (using the same magnitude convention as the URDF).


Get the list of link names for the specified robot.

Parameters

  • robot_name (str): Name of a robot previously added via add_robot_from_urdf().

Returns

  • link_names (list[str]): Link names for this robot.


set_robot_poses(robot_name, base_7d_pos)

Set the base poses of the specified robot in all environments.

Parameters

  • robot_name (str): Name of a robot previously added via add_robot_from_urdf().

  • base_7d_pos (Sequence[array-like]): Sequence of length num_envs. Each element is a 7D vector [x, y, z, qw, qx, qy, qz], where:

    • (x, y, z) is the base position in the local environment frame.

    • (qw, qx, qy, qz) is the base orientation as a unit quaternion.


set_robot_joints(robot_name, joint_states)

Set joint positions of the specified robot in all environments.

Parameters

  • robot_name (str): Name of a robot previously added via add_robot_from_urdf().

  • joint_states (Sequence[Sequence[float]]): A nested sequence of size (num_envs, num_joints). Refer to get_robot_joints().
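A minimal sketch of building joint_states from a name-to-value mapping, using the ordering returned by get_robot_joint_names() (the joint names and values below are arbitrary stand-ins; the sim call is shown commented since it requires an initialized sim):

```python
# Sketch: building joint_states in the ordering that
# get_robot_joint_names() defines.
joint_names = ["shoulder", "elbow", "wrist"]   # stand-in for get_robot_joint_names("arm")
desired = {"shoulder": 0.0, "elbow": 1.57, "wrist": -0.5}

num_envs = 2
joint_states = [[desired[name] for name in joint_names] for _ in range(num_envs)]
# sim.set_robot_joints("arm", joint_states)
print(joint_states)
```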


Set renderable links of the specified robot.

Parameters

  • robot_name (str): Name of a robot previously added via add_robot_from_urdf().

  • link_names (list[str]): Renderable robot link names. Only these links will be rendered in the depth map and point cloud.

  • show_complete_robot (bool, optional): If True, visualize the complete robot in the Polyscope interface. If False, visualize only the renderable links.
