RealtimeSim
realsim_py.RealtimeSim
RealtimeSim provides a real-time simulation interface with interactive visualization and access to simulation state, controllers, and virtual depth cameras.
Typical usage:

```python
import realsim_py as rs

sim = rs.RealtimeSim()
sim.load_config("path/to/config.json")
sim.initialize()
sim.set_envs(4)

n_steps = 100
for _ in range(n_steps):
    sim.step()
```

Class
RealtimeSim()
Create an empty real-time simulator instance.
Call load_config(), initialize() and set_envs() before running the simulation.
Simulation core
load_config(config_path: str)
Load the simulation configuration from a JSON file.
Parameters
config_path (str): Relative or absolute path to a JSON configuration file. We provide examples in
path_to_realsim_py/simulation/config/.
initialize(suppress_output: bool = False)
Initialize the simulator after loading the configuration.
Parameters
suppress_output (bool, optional): Whether to suppress the built-in simulation console output. This output is mainly intended for internal debugging, so it is safe to suppress.
set_envs(n_envs: int = 1)
Set the number of environments simulated in parallel. This must be called for full initialization, even when using a single environment.
Parameters
n_envs (int, optional): Number of environments. Must be a positive integer.
step()
Advance the simulation by a single time step, including the physics update and any active controllers.
State access
get_positions(device: str = "cpu") -> torch.Tensor
Return the current global vertex positions.
Parameters
device (str, optional): Target device for the output tensor. Must be "cpu" or "cuda".
Returns
positions (torch.Tensor): A tensor of shape (N, D) on the requested device with dtype torch.float64, where:
N is the total number of vertices across all environments.
D is the dimension of vertex positions (= 3; they are XYZ positions).
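A minimal sketch of inspecting the returned tensor. The `sim` call is shown commented out; a zero tensor of an illustrative size stands in for the simulator output, and all values are assumptions:

```python
import torch

# positions = sim.get_positions(device="cpu")  # real call on an initialized sim
positions = torch.zeros((1024, 3), dtype=torch.float64)  # stand-in for the sim output

N, D = positions.shape                 # N vertices across all envs, D == 3
assert positions.dtype == torch.float64
centroid = positions.mean(dim=0)       # (3,) mean vertex position
```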
get_all_positions(device: str = "cpu", max_history: int = -1) -> torch.Tensor
Return a history of global vertex positions.
Parameters
device (str, optional): Target device for the output tensor. Must be "cpu" or "cuda".
max_history (int, optional):
If max_history > 0: at most this many most recent frames are returned.
If max_history == -1: all recorded frames are returned.
Returns
positions_history (torch.Tensor): A tensor of shape (T, N, D) on the requested device with dtype torch.float64, where:
T is the number of frames returned.
N is the total number of vertices across all environments.
D is the dimension of vertex positions (= 3; they are XYZ positions).
To avoid running out of memory, the number of recorded frames is capped at max_recorded_frames if that option is specified as a positive integer in the configuration file.
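A sketch of working with the history tensor, for example to measure per-vertex motion over the last few frames. The `sim` call is commented out and a stand-in tensor with illustrative dimensions is used:

```python
import torch

# history = sim.get_all_positions(device="cpu", max_history=10)  # real call
history = torch.zeros((10, 1024, 3), dtype=torch.float64)  # stand-in for the sim output

T, N, D = history.shape
latest = history[-1]                      # most recent frame, shape (N, 3)
displacement = history[-1] - history[0]   # per-vertex motion across the window
```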
get_local_positions(device: str = "cpu") -> torch.Tensor
Return vertex positions in local (per-environment) coordinates.
This subtracts the per-environment offset from the global positions, so that each vertex is expressed in its local environment frame.
Parameters
device (str, optional): Target device for the output tensor. Must be "cpu" or "cuda".
Returns
local_positions (torch.Tensor): A tensor of shape (N, D) on the requested device with dtype torch.float64, where:
N is the total number of vertices.
D is the dimension of vertex positions (= 3; they are XYZ positions).
get_offset(eid: int, oid: int) -> int
Return the starting global vertex index for a given object in a given environment.
Parameters
eid (int): Environment index.
oid (int): Object index within that environment.
Returns
offset (int).
get_obj_vertex_length(eid: int, oid: int) -> int
Return the number of vertices of a given object in a given environment.
Parameters
eid (int): Environment index.
oid (int): Object index within that environment.
Returns
num_vertices (int).
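Together, get_offset() and get_obj_vertex_length() let you slice one object's vertices out of the global position tensor. A sketch, with the `sim` calls commented out and stand-in values (the eid/oid and sizes are illustrative):

```python
import torch

# positions = sim.get_positions()
# start = sim.get_offset(eid=0, oid=1)
# length = sim.get_obj_vertex_length(eid=0, oid=1)
positions = torch.zeros((1024, 3), dtype=torch.float64)  # stand-in for the sim output
start, length = 100, 256                                 # stand-in offset values

obj_positions = positions[start:start + length]  # (length, 3) vertices of object 1 in env 0
```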
Reset
reset_velocity()
Reset all vertex velocities to zero.
set_positions(positions)
Set the full global vertex positions in the solver.
Parameters
positions (numpy.ndarray): A 2D array of shape (N, D), where:
N is the total number of vertices.
D is the dimension of vertex positions (= 3; they are XYZ positions).
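Combining set_positions() with reset_velocity() gives a simple "soft reset" to a saved state. A sketch; `soft_reset` is a hypothetical helper, and the saved array here is a stand-in (in practice it would come from get_positions() after initialization):

```python
import numpy as np

def soft_reset(sim, saved_positions):
    """Restore saved global vertex positions and zero all velocities."""
    sim.set_positions(saved_positions)  # (N, 3) global positions
    sim.reset_velocity()

# saved = sim.get_positions().numpy()  # capture once after initialization
saved = np.zeros((1024, 3))            # stand-in for the saved state
```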
Controllers
Vertex controllers
add_positional_controller(indices, pos_dest, weights, tag: str = "")
Add per-environment positional controllers on subsets of vertices.
Parameters
indices (Sequence[numpy.ndarray]): A sequence of length num_envs. Each element is an integer array of shape (K_e,) giving the target local vertex indices in environment e.
pos_dest (Sequence[numpy.ndarray]): A sequence of length num_envs. Each element is a float array of shape (K_e, 3) giving the target local vertex positions in environment e.
weights (Sequence[float]): A sequence of length num_envs. Each entry is the controller inverse stiffness for environment e. We recommend setting them all to zero for stability.
tag (str, optional): An optional label used to group these controllers.
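A sketch of building the controller inputs, pinning the first three vertices of every environment at fixed targets. All index and position values are illustrative, and the `sim` calls are shown commented out:

```python
import numpy as np

num_envs = 4
# pin vertices 0-2 of every environment (indices are illustrative)
indices = [np.array([0, 1, 2]) for _ in range(num_envs)]
# (K_e, 3) local target positions per environment
pos_dest = [np.array([[0.0, 0.0, 1.0],
                      [0.1, 0.0, 1.0],
                      [0.2, 0.0, 1.0]]) for _ in range(num_envs)]
weights = [0.0] * num_envs  # zero inverse stiffness, as recommended

# sim.add_positional_controller(indices, pos_dest, weights, tag="pins")
# sim.clear_controller_set(tag="pins")  # later: remove only these controllers
```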
clear_controller_set(tag: str = "")
Remove controllers, optionally filtered by tag.
Parameters
tag (str, optional): An optional label used to group controllers.
If tag == "" (default): all controllers are cleared.
If tag is non-empty: only controllers associated with this tag are cleared.
End-effector controllers
add_ee(ee_name: str, current_ee_7d_pos, ee_dims, contracted_dims, cloth_thickness: float = 0.002)
Create an end effector (EE) and visualize its EE-aligned oriented bounding box (OBB) in each environment. This is a simplified model of a gripper. The added EE is initially open.
Parameters
ee_name (str): Unique name for the end effector.
current_ee_7d_pos (Sequence[numpy.ndarray]): Sequence of length num_envs. Each entry is a 7D vector:
First 3: EE center position (x, y, z) in local environment coordinates.
Last 4: EE orientation as a quaternion (w, x, y, z).
ee_dims (numpy.ndarray): OBB extents (dx, dy, dz) along the EE box axes.
contracted_dims (numpy.ndarray): Contracted OBB extents (cx, cy, cz) along the EE box axes. These should be smaller than or equal to ee_dims. When grasping vertices in grasp_ee(), the vertices are contracted proportionally to simulate a parallel-jaw gripper's grasp.
cloth_thickness (float, optional): Effective cloth thickness used to slightly enlarge the grasp region along all axes. Default is 0.002.
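A sketch of constructing the EE inputs. The pose, the EE name "gripper", and all dimension values are illustrative; the `sim` call is shown commented out:

```python
import numpy as np

num_envs = 4
# identity orientation (w, x, y, z); EE centered 0.5 m above each env origin
current_ee_7d_pos = [np.array([0.0, 0.0, 0.5, 1.0, 0.0, 0.0, 0.0])
                     for _ in range(num_envs)]
ee_dims = np.array([0.04, 0.02, 0.06])           # open OBB extents (dx, dy, dz)
contracted_dims = np.array([0.04, 0.004, 0.06])  # closed extents, <= ee_dims

# sim.add_ee("gripper", current_ee_7d_pos, ee_dims, contracted_dims,
#            cloth_thickness=0.002)
```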
grasp_ee(ee_name: str, grasp_commands, success_flags)
Open or close the specified end effector per environment. Closing captures the vertices inside the EE-aligned OBB; opening releases any grasped vertices for that environment.
Parameters
ee_name (str): Name of an EE previously created via add_ee().
grasp_commands (Sequence[bool]): Sequence of length num_envs. For each environment e:
If the command matches the current grasp state: nothing happens.
If the command is True and the gripper is currently open: close the gripper, capturing vertices inside the OBB and applying control to hold the grasped vertices at their current positions.
If the command is False and the gripper is currently closed: open the gripper, releasing any previously grasped vertices and removing the control applied by the EE.
success_flags (Sequence[bool], optional): Sequence of length num_envs. Takes effect only when the corresponding grasp command is True. Default is all True. For each environment e:
If False and the corresponding grasp command is True: the grasp state becomes closed, but no vertices are captured (simulating an unsuccessful grasp), or already-grasped vertices are released (simulating an unsuccessful hold).
If True, or the corresponding grasp command is False: the grasp command is carried out normally.
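A sketch of per-environment grasp commands, closing the gripper in two environments while simulating a failed grasp in one of them. The EE name and all flag values are illustrative, and the `sim` call is commented out:

```python
num_envs = 4
grasp_commands = [True, True, False, False]  # close in envs 0-1, open in envs 2-3
success_flags = [True, False, True, True]    # env 1 closes but captures no vertices

# sim.grasp_ee("gripper", grasp_commands, success_flags)
```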
move_ee(ee_name: str, target_ee_7d_pos, weights)
Move the specified end effector and apply control to any grasped vertices so they follow the target EE pose in each environment.
Parameters
ee_name (str): Name of an EE previously created via add_ee().
target_ee_7d_pos (Sequence[numpy.ndarray]): Sequence of length num_envs. Each entry is a 7D vector:
First 3: target EE center position (x, y, z) in local coordinates.
Last 4: target EE orientation as a quaternion (w, x, y, z).
weights (Sequence[float]): Sequence of length num_envs. Refer to add_positional_controller().
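A sketch of a simple lifting trajectory: raising the EE by 1 mm per step in every environment. The EE name, start pose, and step count are illustrative; the `sim` calls are shown commented out:

```python
import numpy as np

num_envs = 4
weights = [0.0] * num_envs  # see add_positional_controller
pose = np.array([0.0, 0.0, 0.5, 1.0, 0.0, 0.0, 0.0])  # (x, y, z, qw, qx, qy, qz)

for _ in range(100):
    pose[2] += 0.001  # raise the EE by 1 mm per step
    target_ee_7d_pos = [pose.copy() for _ in range(num_envs)]
    # sim.move_ee("gripper", target_ee_7d_pos, weights)
    # sim.step()
```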
get_ee(ee_name: str) -> list[list[GraspInfo]]
Return the list of grasped vertices for the specified end effector.
Returns
ee_grasps (list[list[GraspInfo]]): A nested list where:
The outer list has length num_envs.
Each inner list contains GraspInfo objects for vertices captured by grasp_ee() in that environment.
GraspInfo is a helper data structure describing a selected grasp point.
Attributes:
env_rel_index: local vertex index.
x, y, z: coordinates of the grasp point in the OBB frame.
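A sketch of iterating the nested result, e.g. to list the grasped local vertex indices per environment. The `sim` call is commented out; `GraspInfoStub` is a stand-in class mirroring GraspInfo's documented attributes, with illustrative values:

```python
# ee_grasps = sim.get_ee("gripper")  # real call

class GraspInfoStub:  # stand-in mirroring GraspInfo's attributes
    def __init__(self, env_rel_index, x, y, z):
        self.env_rel_index, self.x, self.y, self.z = env_rel_index, x, y, z

ee_grasps = [[GraspInfoStub(7, 0.0, 0.01, 0.0)], []]  # env 0: one vertex; env 1: none

grasped_per_env = [[g.env_rel_index for g in grasps] for grasps in ee_grasps]
```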
clear_ee(ee_name: str)
Remove the specified end effector and the control it applies.
Parameters
ee_name (str): Name of an EE previously created via
add_ee().
Cameras
add_camera(camera_name, width, height, fx, fy, cx, cy, near_clip, far_clip, positions, orientations) -> Camera
Add a virtual depth camera for all environments and return a Camera object.
Parameters
camera_name (str): Unique name for the camera. Must be distinct within the simulation.
width (int), height (int): Image resolution in pixels.
fx, fy (float): Focal lengths in pixels.
cx, cy (float): Principal point in pixels.
near_clip (float): Near clipping distance (meters). Must be strictly positive and less than far_clip.
far_clip (float): Far clipping distance (meters). Must be greater than near_clip.
positions (Sequence[numpy.ndarray]): Sequence of length num_envs. Each entry is the camera position (x, y, z) in local coordinates.
orientations (Sequence[numpy.ndarray]): Sequence of length num_envs. Each entry is the camera orientation as a quaternion (w, x, y, z).
Returns
camera (Camera): The created camera. The initial camera axes follow the local coordinate convention: +x points to the right, +y points upward, and +z points backward (for a forward-facing camera).
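A sketch of building the camera inputs, deriving the focal lengths from a hypothetical 60-degree vertical field of view. The camera name, pose, and clip values are illustrative, and the `sim` call is commented out:

```python
import numpy as np

num_envs, width, height = 4, 640, 480
# derive focal lengths from a hypothetical 60-degree vertical field of view
fy = height / (2.0 * np.tan(np.radians(60.0) / 2.0))
fx = fy                              # square pixels
cx, cy = width / 2.0, height / 2.0   # centered principal point

positions = [np.array([0.0, 0.0, 1.5]) for _ in range(num_envs)]          # 1.5 m up
orientations = [np.array([1.0, 0.0, 0.0, 0.0]) for _ in range(num_envs)]  # identity

# cam = sim.add_camera("top", width, height, fx, fy, cx, cy,
#                      near_clip=0.05, far_clip=5.0,
#                      positions=positions, orientations=orientations)
```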
get_camera(camera_name) -> Optional[Camera]
Retrieve an existing camera by name.
Parameters
camera_name (str): Name of an existing camera registered via add_camera().
Returns
camera (Optional[Camera]): The camera if found, otherwise None.
render(device, enable_obstacles: bool = True)
Render all registered cameras for the current simulation state.
Parameters
device (str): Compute device used for rendering. Must be "cpu" or "cuda".
enable_obstacles (bool, optional): If True, obstacles and analytical obstacles are included in the rendering. If False, they are excluded.
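A sketch of a combined step/render loop; `run` is a hypothetical helper, and the frame counts are illustrative:

```python
def run(sim, n_frames=100, render_every=5, device="cuda"):
    """Step the simulation, rendering all registered cameras every few frames."""
    for frame in range(n_frames):
        sim.step()
        if frame % render_every == 0:
            # render all registered cameras, including obstacles
            sim.render(device=device, enable_obstacles=True)
```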
Robots
These functions manage articulated robots for rendering only; robots do not participate in the physical simulation.
add_robot_from_urdf(robot_name, urdf_path, base_7d_pos, global_scale: float = 1.0, use_collision: bool = True)
Add an articulated robot from a URDF file.
Parameters
robot_name (str): Unique identifier for the robot instance.
urdf_path (str): Path to the URDF file.
base_7d_pos (Sequence[array-like]): Sequence of length num_envs. Each element is a 7D vector [x, y, z, qw, qx, qy, qz], where:
(x, y, z) is the robot base position in the local environment frame.
(qw, qx, qy, qz) is the base orientation as a unit quaternion.
global_scale (float, optional): Uniform scale factor applied to the robot geometry.
use_collision (bool, optional): If True, loads collision meshes (typically simpler and faster). If False, loads visual meshes.
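A sketch of building the base poses for one robot instance per environment. The robot name, URDF path, and offsets are illustrative; the `sim` call is commented out:

```python
import numpy as np

num_envs = 2
# identity orientation; bases offset 0.3 m apart along x per environment
base_7d_pos = [np.array([0.3 * e, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
               for e in range(num_envs)]

# sim.add_robot_from_urdf("arm", "path/to/robot.urdf", base_7d_pos,
#                         global_scale=1.0, use_collision=True)
```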
get_robot_poses(robot_name) -> list[array-like]
Get the current base poses of the specified robot in all environments.
Parameters
robot_name (str): Name of a robot previously added via
add_robot_from_urdf().
Returns
base_7d_pos (list[array-like]): A list of length num_envs. Each element is a 7D vector [x, y, z, qw, qx, qy, qz], where:
(x, y, z) is the base position in the local environment frame.
(qw, qx, qy, qz) is the base orientation quaternion.
get_robot_joint_names(robot_name) -> list[str]
Get the ordered list of joint names for the specified robot.
Parameters
robot_name (str): Name of a robot previously added via
add_robot_from_urdf().
Returns
joint_names (list[str]): Ordered joint names for this robot. This ordering is used by:
get_robot_joints(robot_name)
set_robot_joints(robot_name, joint_states)
get_robot_joints(robot_name) -> list[list[float]]
Get the current joint positions of the specified robot in all environments.
Parameters
robot_name (str): Name of a robot previously added via
add_robot_from_urdf().
Returns
joint_states (list[list[float]]): A nested list of size (num_envs, num_joints).
The inner ordering of joints matches get_robot_joint_names(robot_name). The robot's scale factor is applied internally to prismatic joints (joint values follow the same magnitude as defined in the URDF).
get_robot_link_names(robot_name) -> list[str]
Get the list of link names for the specified robot.
Parameters
robot_name (str): Name of a robot previously added via
add_robot_from_urdf().
Returns
link_names (list[str]): Link names for this robot. This is useful for set_robot_renderable_links().
set_robot_poses(robot_name, base_7d_pos)
Set the base poses of the specified robot in all environments.
Parameters
robot_name (str): Name of a robot previously added via add_robot_from_urdf().
base_7d_pos (Sequence[array-like]): Sequence of length num_envs. Each element is a 7D vector [x, y, z, qw, qx, qy, qz], where:
(x, y, z) is the base position in the local environment frame.
(qw, qx, qy, qz) is the base orientation as a unit quaternion.
set_robot_joints(robot_name, joint_states)
Set joint positions of the specified robot in all environments.
Parameters
robot_name (str): Name of a robot previously added via add_robot_from_urdf().
joint_states (Sequence[Sequence[float]]): A nested sequence of size (num_envs, num_joints). Refer to get_robot_joints().
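A sketch of updating a single named joint by building the full joint-state matrix in the ordering reported by get_robot_joint_names(). The `sim` calls are commented out; the robot name "arm", the joint names, and the values are illustrative stand-ins:

```python
# joint_names = sim.get_robot_joint_names("arm")  # authoritative ordering
# joint_states = sim.get_robot_joints("arm")      # current (num_envs, num_joints)
joint_names = ["shoulder", "elbow", "wrist"]       # stand-in ordering
joint_states = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]  # stand-in states, 2 envs

elbow = joint_names.index("elbow")
for env_states in joint_states:
    env_states[elbow] = 1.57  # bend the elbow in every environment

# sim.set_robot_joints("arm", joint_states)
```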
set_robot_renderable_links(robot_name, link_names, show_complete_robot: bool = True)
Set the renderable links of the specified robot.
Parameters
robot_name (str): Name of a robot previously added via add_robot_from_urdf().
link_names (list[str]): Renderable robot link names. Only these links are rendered in the depth map and point cloud.
show_complete_robot (bool, optional): If True, visualize the complete robot in the Polyscope interface. If False, visualize only the renderable links in Polyscope.