Isaac Sim 4.0 support, Kinematics API doc, Windows support

This commit is contained in:
Balakumar Sundaralingam
2024-07-20 14:51:43 -07:00
parent 2ae381f328
commit 3690d28c54
83 changed files with 2818 additions and 497 deletions

View File

@@ -10,16 +10,58 @@ its affiliates is strictly prohibited.
-->
# Changelog
## Latest Commit
## Version 0.7.4
### Changes in Default Behavior
- Cuda graph capture of optimization iterations resets solver before recording.
- ``join_path(a, b)`` now requires ``a`` to not have a trailing slash, to keep the resulting path compatible with Windows.
- Drop examples support for Isaac Sim < 4.0.0.
- ``asset_root_path`` can now be either an empty string or ``None``.
- Order of variables in ``SelfCollisionKinematicsConfig`` has changed. Unused variables were
moved to the bottom.
- Remove requirement of warmup for using ``offset_waypoint`` in ``PoseCost``.
### New Features
- Interpolated metrics calculation now recreates cuda graph if interpolation steps exceed existing buffer size.
- Add experimental ``CUDAGraph.reset`` usage, as ``cuda>=12.0`` no longer crashes when an existing captured CUDAGraph is freed and recaptured with new memory pointers. Try this experimental feature by
setting the environment variable ``export CUROBO_TORCH_CUDA_GRAPH_RESET=1``. This feature allows changing the problem type in ``motion_gen`` and ``ik_solver`` without requiring recreation of the class.
- Add partial support for Windows.
- Add Isaac Sim 4.0.0 docker support.
- Examples now work with Isaac Sim 4.0.0.
- Add XRDF support.
- Add ``curobo.types.file_path.ContentPath`` to store paths for files representing the robot and world. This
simplifies development on top of cuRobo with custom robots that live outside the cuRobo library.
- Add API to ``CudaRobotModel`` for attaching external objects to a robot link.
- Add MotionGenStatus.DT_EXCEPTION to report failures due to trajectory exceeding user specified
maximum trajectory dt.
- Add reading of end-effector mesh if available when rendering trajectory with ``UsdHelper``, also
supports goalset rendering.
- Kinematics module (`curobo.cuda_robot_model`) has complete API documentation.
### BugFixes & Misc.
- Add support for older warp versions (<1.0.0) as it's not possible to run older isaac sim with
newer warp versions.
- Minor documentation fixes to install instructions.
- Add override option to mpc dataclass.
- Fix bug in ``PoseCost.forward_pose()`` which caused ``torch_layers_example.py`` to fail.
- Add warp constants to make module hash depend on robot dof, for modules that generate runtime
warp kernels. This fixes issues using cuRobo in isaac sim.
- Add ``plan_config.timeout`` check to ``plan_single_js()``.
- Recreation of interpolation buffer now copies the joint names from raw trajectory.
- Fix bug in running captured cuda graph on deleted memory pointers
when getting metrics on interpolated trajectory.
- Change order of operations in cuda graph capture of particle opt to get correct results
during graph capture phase.
- Franka Panda now works in Isaac Sim 4.0.0. The fix was to add inertial parameters to all links in
the URDF.
- Create new instances of rollouts in wrap classes to ensure cuda graph rollouts are not
accidentally used in other pipelines.
- Add cuda graph check for ``get_metrics``.
- Remove aligned address assumption for float arrays inside kernel (local memory).
- Add check for existing warp kernel in a module before creating a new one to avoid corruption of
existing cuda graphs.
## Version 0.7.3

View File

@@ -109,7 +109,7 @@ if __name__ == "__main__":
parser.add_argument(
"--save_path",
type=str,
default=".",
default=None,
help="path to save file",
)
parser.add_argument(
@@ -132,9 +132,9 @@ if __name__ == "__main__":
)
args = parser.parse_args()
b_list = [1, 10, 100, 500, 2000][:]
b_list = [1, 10, 100, 2000][-1:]
robot_list = get_motion_gen_robot_list() + get_multi_arm_robot_list()[:2]
robot_list = get_motion_gen_robot_list()
world_file = "collision_test.yml"
print("running...")
@@ -187,7 +187,13 @@ if __name__ == "__main__":
data["Orientation-Error-Collision-Free-IK"].append(q_err_c)
data["Collision-Free-IK-time(ms)"].append(dt_cu_ik_cfree * 1000.0)
write_yaml(data, join_path(args.save_path, args.file_name + ".yml"))
if args.save_path is not None:
file_path = join_path(args.save_path, args.file_name)
else:
file_path = args.file_name
write_yaml(data, file_path + ".yml")
try:
# Third Party
@@ -195,7 +201,7 @@ if __name__ == "__main__":
df = pd.DataFrame(data)
print("Reported errors are 98th percentile")
df.to_csv(join_path(args.save_path, args.file_name + ".csv"))
df.to_csv(file_path + ".csv")
try:
# Third Party
from tabulate import tabulate

View File

@@ -124,15 +124,16 @@ RUN pip3 install trimesh \
numpy-quaternion \
networkx \
pyyaml \
rospkg \
rospkg \
rosdep \
empy
# Add cache date to avoid using cached layers older than this
ARG CACHE_DATE=2024-04-11
ARG CACHE_DATE=2024-07-19
# install warp:
#
#
RUN pip3 install warp-lang
# install curobo:
@@ -165,7 +166,7 @@ RUN cd /pkgs && git clone https://github.com/nvlabs/nvblox_torch.git && \
sh install.sh $(python3 -c 'import torch.utils; print(torch.utils.cmake_prefix_path)') && \
python3 -m pip install -e .
RUN python3 -m pip install "robometrics[evaluator] @ git+https://github.com/fishbotics/robometrics.git"
# upgrade typing extensions:

View File

@@ -35,4 +35,4 @@ fi
echo $input_arg
echo $USER_ID
docker build --build-arg USERNAME=$USER --build-arg USER_ID=${USER_ID} --build-arg IMAGE_TAG=$input_arg -f $user_dockerfile --tag curobo_docker:user_$input_arg .
docker build --build-arg USERNAME=$USER --build-arg USER_ID=${USER_ID} --build-arg IMAGE_TAG=$input_arg -f $user_dockerfile --tag curobo_docker:user_$input_arg .

View File

@@ -32,16 +32,11 @@ if [ -z "$input_arg" ]; then
fi
fi
if [ "$input_arg" == "isaac_sim_2022.2.1" ]; then
echo "Building Isaac Sim docker"
dockerfile="isaac_sim.dockerfile"
image_tag="isaac_sim_2022.2.1"
isaac_sim_version="2022.2.1"
elif [ "$input_arg" == "isaac_sim_2023.1.0" ]; then
if [ "$input_arg" == "isaac_sim_4.0.0" ]; then
echo "Building Isaac Sim headless docker"
dockerfile="isaac_sim.dockerfile"
image_tag="isaac_sim_2023.1.0"
isaac_sim_version="2023.1.0"
image_tag="isaac_sim_4.0.0"
isaac_sim_version="4.0.0"
elif [ "$input_arg" == "x86" ]; then
echo "Building for X86 Architecture"
dockerfile="x86.dockerfile"
@@ -63,11 +58,11 @@ fi
# "nvidia": {
# "path": "/usr/bin/nvidia-container-runtime",
# "runtimeArgs": []
# }
# }
# },
# "default-runtime": "nvidia" # ADD this line (the above lines will already exist in your json file)
# }
#
#
echo "${dockerfile}"
docker build --build-arg ISAAC_SIM_VERSION=${isaac_sim_version} -t curobo_docker:${image_tag} -f ${dockerfile} .
docker build --build-arg ISAAC_SIM_VERSION=${isaac_sim_version} -t curobo_docker:${image_tag} -f ${dockerfile} .

View File

@@ -11,7 +11,7 @@
ARG DEBIAN_FRONTEND=noninteractive
ARG BASE_DIST=ubuntu20.04
ARG CUDA_VERSION=11.4.2
ARG ISAAC_SIM_VERSION=2022.2.1
ARG ISAAC_SIM_VERSION=4.0.0
FROM nvcr.io/nvidia/isaac-sim:${ISAAC_SIM_VERSION} AS isaac-sim
@@ -20,7 +20,7 @@ FROM nvcr.io/nvidia/cudagl:${CUDA_VERSION}-devel-${BASE_DIST}
# this does not work for 2022.2.1
#$FROM nvcr.io/nvidia/cuda:${CUDA_VERSION}-cudnn8-devel-${BASE_DIST}
#$FROM nvcr.io/nvidia/cuda:${CUDA_VERSION}-cudnn8-devel-${BASE_DIST}
LABEL maintainer "User Name"
@@ -102,7 +102,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
RUN wget -q --show-progress \
--progress=bar:force:noscroll \
https://sdk.lunarg.com/sdk/download/${VULKAN_SDK_VERSION}/linux/vulkansdk-linux-x86_64-${VULKAN_SDK_VERSION}.tar.gz \
-O /tmp/vulkansdk-linux-x86_64-${VULKAN_SDK_VERSION}.tar.gz \
-O /tmp/vulkansdk-linux-x86_64-${VULKAN_SDK_VERSION}.tar.gz \
&& echo "Installing Vulkan SDK ${VULKAN_SDK_VERSION}" \
&& mkdir -p /opt/vulkan \
&& tar -xf /tmp/vulkansdk-linux-x86_64-${VULKAN_SDK_VERSION}.tar.gz -C /opt/vulkan \
@@ -118,7 +118,7 @@ RUN wget -q --show-progress \
&& rm /tmp/vulkansdk-linux-x86_64-${VULKAN_SDK_VERSION}.tar.gz && rm -rf /opt/vulkan
# Setup the required capabilities for the container runtime
# Setup the required capabilities for the container runtime
ENV NVIDIA_VISIBLE_DEVICES=all NVIDIA_DRIVER_CAPABILITIES=all
# Open ports for live streaming
@@ -157,16 +157,15 @@ ENV TORCH_CUDA_ARCH_LIST="7.0+PTX"
# create an alias for omniverse python
ENV omni_python='/isaac-sim/python.sh'
RUN echo "alias omni_python='/isaac-sim/python.sh'" >> /.bashrc
RUN echo "alias omni_python='/isaac-sim/python.sh'" >> ~/.bashrc
# Add cache date to avoid using cached layers older than this
ARG CACHE_DATE=2024-04-11
RUN $omni_python -m pip install "robometrics[evaluator] @ git+https://github.com/fishbotics/robometrics.git"
# if you want to use a different version of curobo, create folder as docker/pkgs and put your
# version of curobo there. Then uncomment below line and comment the next line that clones from
# version of curobo there. Then uncomment below line and comment the next line that clones from
# github
# COPY pkgs /pkgs

View File

@@ -56,6 +56,7 @@ elif [ "$input_arg" == "aarch64" ]; then
elif [[ "$input_arg" == *isaac_sim* ]] ; then
docker run --name container_$input_arg --entrypoint bash -it --gpus all -e "ACCEPT_EULA=Y" --rm --network=host \
--privileged \
-e "PRIVACY_CONSENT=Y" \

View File

@@ -15,7 +15,7 @@ FROM curobo_docker:${IMAGE_TAG}
# Set variables
ARG USERNAME
ARG USER_ID
ARG CACHE_DATE=2024-04-25
ARG CACHE_DATE=2024-07-19
# Set environment variables

View File

@@ -10,11 +10,13 @@
##
# Check architecture and load:
ARG IMAGE_TAG
ARG IMAGE_TAG
FROM curobo_docker:${IMAGE_TAG}
# Set variables
ARG USERNAME
ARG USER_ID
ARG CACHE_DATE=2024-07-19
# Set environment variables
@@ -44,7 +46,7 @@ RUN chown -R $USERNAME:users /isaac-sim/exts/omni.isaac.synthetic_recorder/
RUN chown -R $USERNAME:users /isaac-sim/kit/exts/omni.gpu_foundation
RUN mkdir -p /home/$USERNAME/.cache && cp -r /root/.cache/* /home/$USERNAME/.cache && chown -R $USERNAME:users /home/$USERNAME/.cache
RUN mkdir -p /isaac-sim/kit/data/documents/Kit && mkdir -p /isaac-sim/kit/data/documents/Kit/apps/Isaac-Sim/scripts/ &&chown -R $USERNAME:users /isaac-sim/kit/data/documents/Kit /isaac-sim/kit/data/documents/Kit/apps/Isaac-Sim/scripts/
RUN mkdir -p /home/$USERNAME/.local
RUN mkdir -p /home/$USERNAME/.local
RUN echo "alias omni_python='/isaac-sim/python.sh'" >> /home/$USERNAME/.bashrc

View File

@@ -78,7 +78,7 @@ ENV TORCH_CUDA_ARCH_LIST "7.0+PTX"
ENV LD_LIBRARY_PATH="/usr/local/lib:${LD_LIBRARY_PATH}"
# Add cache date to avoid using cached layers older than this
ARG CACHE_DATE=2024-04-25
ARG CACHE_DATE=2024-07-19
RUN pip install "robometrics[evaluator] @ git+https://github.com/fishbotics/robometrics.git"

View File

@@ -9,6 +9,12 @@
# its affiliates is strictly prohibited.
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch

View File

@@ -10,6 +10,12 @@
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch

View File

@@ -9,6 +9,12 @@
# its affiliates is strictly prohibited.
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch

View File

@@ -9,6 +9,12 @@
# its affiliates is strictly prohibited.
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch
@@ -156,7 +162,6 @@ if __name__ == "__main__":
tensor_args,
collision_checker_type=CollisionCheckerType.MESH,
collision_cache={"obb": n_obstacle_cuboids, "mesh": n_obstacle_mesh},
velocity_scale=0.75,
interpolation_dt=0.02,
ee_link_name="right_gripper",
)
@@ -180,7 +185,11 @@ if __name__ == "__main__":
cmd_plan = None
articulation_controller = robot.get_articulation_controller()
plan_config = MotionGenPlanConfig(
enable_graph=False, enable_graph_attempt=4, max_attempts=2, enable_finetune_trajopt=True
enable_graph=False,
enable_graph_attempt=4,
max_attempts=2,
enable_finetune_trajopt=True,
time_dilation_factor=0.5,
)
plan_idx = 0

View File

@@ -31,7 +31,7 @@ try:
from omni.isaac.urdf import _urdf # isaacsim 2022.2
except ImportError:
# Third Party
from omni.importer.urdf import _urdf # isaac sim 2023.1
from omni.importer.urdf import _urdf # isaac sim 2023.1 or above
ISAAC_SIM_23 = True
# Standard Library
@@ -68,22 +68,23 @@ def add_robot_to_scene(
robot_name: str = "robot",
position: np.array = np.array([0, 0, 0]),
):
urdf_interface = _urdf.acquire_urdf_interface()
urdf_interface = _urdf.acquire_urdf_interface()
# Set the settings in the import config
import_config = _urdf.ImportConfig()
import_config.merge_fixed_joints = False
import_config.convex_decomp = False
import_config.import_inertia_tensor = True
import_config.fix_base = True
import_config.make_default_prim = False
import_config.make_default_prim = True
import_config.self_collision = False
import_config.create_physics_scene = True
import_config.import_inertia_tensor = False
import_config.default_drive_strength = 20000
import_config.default_position_drive_damping = 500
import_config.default_drive_strength = 1047.19751
import_config.default_position_drive_damping = 52.35988
import_config.default_drive_type = _urdf.UrdfJointTargetType.JOINT_DRIVE_POSITION
import_config.distance_scale = 1
import_config.density = 0.0
asset_path = get_assets_path()
if (
"external_asset_path" in robot_config["kinematics"]
@@ -115,18 +116,7 @@ def add_robot_to_scene(
linkp = stage.GetPrimAtPath(robot_path)
set_prim_transform(linkp, [position[0], position[1], position[2], 1, 0, 0, 0])
if False and ISAAC_SIM_23: # this doesn't work in isaac sim 2023.1.1
robot_p.set_solver_velocity_iteration_count(0)
robot_p.set_solver_position_iteration_count(44)
my_world._physics_context.set_solver_type("PGS")
if ISAAC_SIM_23: # fix to load robot correctly in isaac sim 2023.1.1
linkp = stage.GetPrimAtPath(robot_path + "/" + base_link_name)
mass = UsdPhysics.MassAPI(linkp)
mass.GetMassAttr().Set(0)
robot = my_world.scene.add(robot_p)
# robot_path = robot.prim_path
return robot, robot_path

View File

@@ -9,6 +9,12 @@
# its affiliates is strictly prohibited.
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch

View File

@@ -9,6 +9,12 @@
# its affiliates is strictly prohibited.
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch

View File

@@ -10,6 +10,12 @@
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch
@@ -238,6 +244,7 @@ def main():
max_attempts=max_attempts,
enable_finetune_trajopt=True,
parallel_finetune=True,
time_dilation_factor=0.5,
)
usd_help.load_stage(my_world.stage)
@@ -407,7 +414,7 @@ def main():
cmd_idx = 0
else:
carb.log_warn("Plan did not converge to a solution. No action is being taken.")
carb.log_warn("Plan did not converge to a solution: " + str(result.status))
target_pose = cube_position
target_orientation = cube_orientation
past_pose = cube_position

View File

@@ -10,6 +10,12 @@
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch

View File

@@ -11,10 +11,11 @@
#
# script running (ubuntu):
#
############################################################
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party

View File

@@ -11,10 +11,11 @@
#
# script running (ubuntu):
#
############################################################
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
@@ -241,7 +242,7 @@ def main():
step_index = my_world.current_time_step_index
if step_index == 0:
if step_index <= 2:
my_world.reset()
idx_list = [robot.get_dof_index(x) for x in j_names]
robot.set_joint_positions(default_config, idx_list)

View File

@@ -9,6 +9,12 @@
# its affiliates is strictly prohibited.
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch
@@ -138,14 +144,11 @@ def main():
tensor_args,
collision_checker_type=CollisionCheckerType.MESH,
use_cuda_graph=True,
num_trajopt_seeds=12,
num_graph_seeds=12,
interpolation_dt=0.03,
collision_cache={"obb": n_obstacle_cuboids, "mesh": n_obstacle_mesh},
collision_activation_distance=0.025,
acceleration_scale=1.0,
fixed_iters_trajopt=True,
trajopt_tsteps=40,
maximum_trajectory_dt=0.5,
)
motion_gen = MotionGen(motion_gen_config)
print("warming up...")
@@ -157,6 +160,7 @@ def main():
enable_graph=False,
enable_graph_attempt=4,
max_attempts=10,
time_dilation_factor=0.5,
)
usd_help.load_stage(my_world.stage)
@@ -329,7 +333,7 @@ def main():
cmd_idx = 0
else:
carb.log_warn("Plan did not converge to a solution. No action is being taken.")
carb.log_warn("Plan did not converge to a solution: " + str(result.status))
target_pose = cube_position
past_pose = cube_position
if cmd_plan is not None:

View File

@@ -9,6 +9,13 @@
# its affiliates is strictly prohibited.
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch
@@ -38,11 +45,24 @@ from curobo.util_file import get_world_configs_path, join_path, load_yaml
from curobo.wrap.model.robot_world import RobotWorld, RobotWorldConfig
simulation_app.update()
# Standard Library
import argparse
# Third Party
from omni.isaac.core import World
from omni.isaac.core.materials import OmniPBR
from omni.isaac.core.objects import cuboid, sphere
parser = argparse.ArgumentParser()
parser.add_argument(
"--show-window",
action="store_true",
help="When True, shows camera image in a CV window",
default=False,
)
args = parser.parse_args()
def draw_points(voxels):
# Third Party
@@ -216,20 +236,21 @@ if __name__ == "__main__":
bounding = Cuboid("t", dims=[1, 1, 1], pose=[0, 0, 0, 1, 0, 0, 0])
voxels = model.world_model.get_voxels_in_bounding_box(bounding, 0.025)
# print(data_camera.depth_image)
depth_image = data["raw_depth"]
color_image = data["raw_rgb"]
depth_colormap = cv2.applyColorMap(
cv2.convertScaleAbs(depth_image, alpha=100), cv2.COLORMAP_VIRIDIS
)
images = np.hstack((color_image, depth_colormap))
if args.show_window:
depth_image = data["raw_depth"]
color_image = data["raw_rgb"]
depth_colormap = cv2.applyColorMap(
cv2.convertScaleAbs(depth_image, alpha=100), cv2.COLORMAP_VIRIDIS
)
images = np.hstack((color_image, depth_colormap))
cv2.namedWindow("Align Example", cv2.WINDOW_NORMAL)
cv2.imshow("Align Example", images)
key = cv2.waitKey(1)
# Press esc or 'q' to close the image window
if key & 0xFF == ord("q") or key == 27:
cv2.destroyAllWindows()
break
cv2.namedWindow("Align Example", cv2.WINDOW_NORMAL)
cv2.imshow("Align Example", images)
key = cv2.waitKey(1)
# Press esc or 'q' to close the image window
if key & 0xFF == ord("q") or key == 27:
cv2.destroyAllWindows()
break
draw_points(voxels)
d, d_vec = model.get_collision_vector(x_sph)

View File

@@ -9,6 +9,12 @@
# its affiliates is strictly prohibited.
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch
@@ -64,9 +70,17 @@ parser = argparse.ArgumentParser()
parser.add_argument("--robot", type=str, default="franka.yml", help="robot configuration to load")
parser.add_argument(
"--waypoints", action="store_true", help="When True, sets robot in static mode", default=False
)
parser.add_argument(
"--show-window",
action="store_true",
help="When True, shows camera image in a CV window",
default=False,
)
parser.add_argument(
"--use-debug-draw",
action="store_true",
@@ -330,7 +344,7 @@ if __name__ == "__main__":
if cmd_step_idx == 0:
draw_rollout_points(mpc.get_visual_rollouts(), clear=not args.use_debug_draw)
if step_index < 2:
if step_index <= 2:
my_world.reset()
idx_list = [robot.get_dof_index(x) for x in j_names]
robot.set_joint_positions(default_config, idx_list)
@@ -374,7 +388,7 @@ if __name__ == "__main__":
if not args.use_debug_draw:
voxel_viewer.clear()
if True:
if args.show_window:
depth_image = data["raw_depth"]
color_image = data["raw_rgb"]
depth_colormap = cv2.applyColorMap(
@@ -384,7 +398,6 @@ if __name__ == "__main__":
depth_colormap = cv2.flip(depth_colormap, 1)
images = np.hstack((color_image, depth_colormap))
cv2.namedWindow("NVBLOX Example", cv2.WINDOW_NORMAL)
cv2.imshow("NVBLOX Example", images)
key = cv2.waitKey(1)

View File

@@ -9,6 +9,12 @@
# its affiliates is strictly prohibited.
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch
@@ -60,6 +66,12 @@ from curobo.util.usd_helper import UsdHelper
parser = argparse.ArgumentParser()
parser.add_argument(
"--show-window",
action="store_true",
help="When True, shows camera image in a CV window",
default=False,
)
parser.add_argument("--robot", type=str, default="franka.yml", help="robot configuration to load")
parser.add_argument(
@@ -316,7 +328,7 @@ if __name__ == "__main__":
voxel_viewer.clear()
# draw_points(voxels)
if True:
if args.show_window:
depth_image = data["raw_depth"]
color_image = data["raw_rgb"]
depth_colormap = cv2.applyColorMap(

View File

@@ -9,6 +9,14 @@
# its affiliates is strictly prohibited.
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch
@@ -154,7 +162,6 @@ class CuroboController(BaseController):
collision_cache={"obb": n_obstacle_cuboids, "mesh": n_obstacle_mesh},
store_ik_debug=self._save_log,
store_trajopt_debug=self._save_log,
velocity_scale=0.75,
)
self.motion_gen = MotionGen(motion_gen_config)
print("warming up...")
@@ -173,6 +180,7 @@ class CuroboController(BaseController):
partial_ik_opt=False,
parallel_finetune=True,
pose_cost_metric=pose_metric,
time_dilation_factor=0.75,
)
self.usd_help.load_stage(self.my_world.stage)
self.cmd_plan = None
@@ -455,7 +463,7 @@ print(articulation_controller.get_gains())
print(articulation_controller.get_max_efforts())
robot = my_franka
print("**********************")
if True:
if False:
robot.enable_gravity()
articulation_controller.set_gains(
kps=np.array(

View File

@@ -9,6 +9,13 @@
# its affiliates is strictly prohibited.
#
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch

View File

@@ -12,6 +12,13 @@
# This script downloads robot usd assets from isaac sim for using in CuRobo.
try:
# Third Party
import isaacsim
except ImportError:
pass
# Third Party
import torch

View File

@@ -55,12 +55,6 @@ def demo_full_config_mpc():
mpc_config = MpcSolverConfig.load_from_robot_config(
robot_cfg,
world_file,
use_cuda_graph=True,
use_cuda_graph_metrics=True,
use_cuda_graph_full_step=False,
use_lbfgs=False,
use_es=False,
use_mppi=True,
store_rollouts=True,
step_dt=0.03,
)

View File

@@ -17,6 +17,9 @@
# References:
# * https://setuptools.pypa.io/en/latest/setuptools.html#setup-cfg-only-projects
# Standard Library
import sys
# Third Party
import setuptools
from torch.utils.cpp_extension import BuildExtension, CUDAExtension
@@ -31,6 +34,10 @@ extra_cuda_args = {
"--prec-sqrt=false",
]
}
if sys.platform == "win32":
extra_cuda_args["nvcc"].append("--allow-unsupported-compiler")
# create a list of modules to be compiled:
ext_modules = [
CUDAExtension(

View File

@@ -19,6 +19,8 @@
<mesh filename="meshes/collision/link0.obj"/>
</geometry>
</collision>
</link>
<link name="panda_link1">
<visual>
@@ -31,6 +33,11 @@
<mesh filename="meshes/collision/link1.obj"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.00033 -0.02204 -0.04762"/>
<mass value="2.797"/>
<inertia ixx="0.0156" ixy="0.0" ixz="0.0" iyy="0.01439" iyz="-0.0024" izz="0.005"/>
</inertial>
</link>
<joint name="panda_joint1" type="revolute">
<safety_controller k_position="100.0" k_velocity="40.0" soft_lower_limit="-2.8973" soft_upper_limit="2.8973"/>
@@ -52,6 +59,11 @@
<mesh filename="meshes/collision/link2.obj"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.00038 -0.09211 0.01908"/>
<mass value="2.542"/>
<inertia ixx="0.0166" ixy="0.0" ixz="0.0" iyy="0.0046" iyz="0.0035" izz="0.015"/>
</inertial>
</link>
<joint name="panda_joint2" type="revolute">
<safety_controller k_position="100.0" k_velocity="40.0" soft_lower_limit="-1.7628" soft_upper_limit="1.7628"/>
@@ -73,6 +85,11 @@
<mesh filename="meshes/collision/link3.obj"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.05152 0.01696 -0.02971"/>
<mass value="2.2513"/>
<inertia ixx="0.006" ixy="0.009" ixz="0.003" iyy="0.0086" iyz="0.009" izz="0.0065"/>
</inertial>
</link>
<joint name="panda_joint3" type="revolute">
<safety_controller k_position="100.0" k_velocity="40.0" soft_lower_limit="-2.8973" soft_upper_limit="2.8973"/>
@@ -94,13 +111,18 @@
<mesh filename="meshes/collision/link4.obj"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-0.05113 0.05825 0.01698"/>
<mass value="2.2037"/>
<inertia ixx="0.0156" ixy="0.0" ixz="0.0" iyy="0.01439" iyz="-0.0024" izz="0.005"/>
</inertial>
</link>
<joint name="panda_joint4" type="revolute">
<safety_controller k_position="100.0" k_velocity="40.0" soft_lower_limit="-3.0718" soft_upper_limit="-0.0698"/>
<origin rpy="1.57079632679 0 0" xyz="0.0825 0 0"/>
<parent link="panda_link3"/>
<child link="panda_link4"/>
<axis xyz="0 0 1"/>
<axis xyz="0 0 1"/>
<dynamics damping="10.0"/>
<limit effort="87" lower="-3.0718" upper="-0.0698" velocity="2.1750"/>
<!-- something is weird with this joint limit config
@@ -118,6 +140,11 @@
<mesh filename="meshes/collision/link5.obj"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="-0.00005 0.03730 -0.09280"/>
<mass value="2.2855"/>
<inertia ixx="0.02297014781" ixy="0.0" ixz="0.0" iyy="0.02095060919" iyz="0.00382345782" izz="0.00430606551"/>
</inertial>
</link>
<joint name="panda_joint5" type="revolute">
<safety_controller k_position="100.0" k_velocity="40.0" soft_lower_limit="-2.8973" soft_upper_limit="2.8973"/>
@@ -139,6 +166,11 @@
<mesh filename="meshes/collision/link6.obj"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.06572 -0.00371 0.00153"/>
<mass value="1.353"/>
<inertia ixx="0.00087964522" ixy="-0.00021487814" ixz="-0.00011911662" iyy="0.00277796968" iyz="0.00001274322" izz="0.00286701969"/>
</inertial>
</link>
<joint name="panda_joint6" type="revolute">
<safety_controller k_position="100.0" k_velocity="40.0" soft_lower_limit="-0.0175" soft_upper_limit="3.7525"/>
@@ -162,6 +194,11 @@
<mesh filename="meshes/collision/link7.obj"/>
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.00089 -0.00044 0.05491"/>
<mass value="0.35973"/>
<inertia ixx="0.00019541063" ixy="0.00000165231" ixz="0.00000148826" iyy="0.00019210361" iyz="-0.00000131132" izz="0.00017936256"/>
</inertial>
</link>
<joint name="panda_joint7" type="revolute">
<safety_controller k_position="100.0" k_velocity="40.0" soft_lower_limit="-2.8973" soft_upper_limit="2.8973"/>
@@ -172,7 +209,7 @@
<dynamics damping="10.0"/>
<limit effort="12" lower="-2.8973" upper="2.8973" velocity="2.6100"/>
</joint>
<link name="panda_link8"/>
<joint name="panda_joint8" type="fixed">
<origin rpy="0 0 0" xyz="0 0 0.107"/>
@@ -183,7 +220,7 @@
<joint name="panda_hand_joint" type="fixed">
<parent link="panda_link8"/>
<child link="panda_hand"/>
<origin rpy="0 0 -0.785398163397" xyz="0 0 0"/>
</joint>
<link name="panda_hand">
@@ -240,7 +277,7 @@
<dynamics damping="10.0"/>
<limit effort="20" lower="0.0" upper="0.04" velocity="0.2"/>
</joint>
<link name="ee_link"/>
<joint name="ee_fixed_joint" type="fixed">

View File

@@ -60,7 +60,15 @@ robot_cfg:
}
urdf_path: robot/kinova/kinova_gen3_7dof.urdf
asset_root_path: robot/kinova
mesh_link_names:
- base_link
- shoulder_link
- half_arm_1_link
- half_arm_2_link
- forearm_link
- spherical_wrist_1_link
- spherical_wrist_2_link
- bracelet_link
cspace:
joint_names: ['joint_1', 'joint_2', 'joint_3', 'joint_4', 'joint_5', 'joint_6', 'joint_7']
cspace_distance_weight:
@@ -71,7 +79,7 @@ robot_cfg:
- 1.0
- 1.0
- 1.0
null_space_weight:
- 1.0
- 1.0

View File

@@ -0,0 +1,100 @@
format: xrdf
format_version: 1.0
modifiers:
- set_base_frame: "base_link"
default_joint_positions:
shoulder_pan_joint: 0.0
shoulder_lift_joint: -2.2
elbow_joint: 1.9
wrist_1_joint: -1.383
wrist_2_joint: -1.57
wrist_3_joint: 0.0
cspace:
joint_names:
- "shoulder_pan_joint"
- "shoulder_lift_joint"
- "elbow_joint"
- "wrist_1_joint"
- "wrist_2_joint"
- "wrist_3_joint"
acceleration_limits: [12.0, 12.0, 12.0, 12.0, 12.0, 12.0]
jerk_limits: [500.0, 500.0, 500.0, 500.0, 500.0, 500.0]
tool_frames: ["tool0"]
collision:
geometry: "ur10e_collision_spheres"
buffer_distance:
shoulder_link: 0.01
upper_arm_link: 0.01
forearm_link: 0.01
wrist_1_link: 0.01
wrist_2_link: 0.01
wrist_3_link: 0.01
tool0: 0.01
self_collision:
geometry: "ur10e_collision_spheres"
buffer_distance:
shoulder_link: 0.07
tool0: 0.05
ignore:
upper_arm_link: ["forearm_link", "shoulder_link"]
forearm_link: ["wrist_1_link"]
wrist_1_link: ["wrist_2_link","wrist_3_link"]
wrist_2_link: ["wrist_3_link", "tool0"]
wrist_3_link: ["tool0"]
geometry:
ur10e_collision_spheres:
spheres:
shoulder_link:
- center: [0, 0, 0]
radius: 0.05
upper_arm_link:
- center: [-0, -0, 0.18]
radius: 0.09
- center: [-0.102167, 0, 0.18]
radius: 0.05
- center: [-0.204333, 0, 0.18]
radius: 0.05
- center: [-0.3065, 0, 0.18]
radius: 0.05
- center: [-0.408667, 0, 0.18]
radius: 0.05
- center: [-0.510833, 0, 0.18]
radius: 0.05
- center: [-0.613, 0,0.18]
radius: 0.07
forearm_link:
- center: [-0, 0, 0.03]
radius: 0.05
- center: [-0.0951667, 0, 0.03]
radius: 0.05
- center: [-0.190333, 0, 0.03]
radius: 0.05
- center: [-0.2855, 0, 0.03]
radius: 0.05
- center: [-0.380667, 0,0.03]
radius: 0.05
- center: [-0.475833, 0,0.03]
radius: 0.05
- center: [-0.571, -1.19904e-17, 0.03]
radius: 0.05
wrist_1_link:
- center: [0, 0, 0]
radius: 0.05
wrist_2_link:
- center: [0, 0, 0]
radius: 0.05
wrist_3_link:
- center: [0, 0, 0]
radius: 0.05
- center: [0, 0, 0.06]
radius: 0.07
tool0:
- center: [0, 0, 0.12]
radius: -0.01

View File

@@ -10,34 +10,75 @@
#
"""
This module contains code for cuda accelerated kinematics.
This module contains GPU accelerated kinematics leveraging CUDA. Kinematics computations enable
mapping from a robot's joint configuration to the pose of the robot's links in Cartesian space
(with reference to the robot's base link). In cuRobo, a robot's geometry is approximated with spheres
whose positions are also computed as part of kinematics. This mapping is differentiable,
enabling its use in optimization problems and as part of neural networks.
.. figure:: ../images/robot_representation.png
:width: 400px
:align: center
Robot representation in cuRobo is shown for the Franka Panda robot.
Kinematics in CuRobo currently supports single-axis actuated joints, where a joint can be actuated
as prismatic or revolute. Continuous joints are approximated as revolute joints with limits
at [-6, +6] radians. Mimic joints are not supported, so convert mimic joints to independent joints.
CuRobo loads a robot's kinematic tree from :class:`types.KinematicsTensorConfig`. This config is
generated using :class:`cuda_robot_generator.CudaRobotGenerator`. A parser base class
:class:`kinematics_parser.KinematicsParer` is provided to help with parsing kinematics from
standard formats. Kinematics parsing from URDF is implemented in
:class:`urdf_kinematics_parser.UrdfKinematicsParser`. An experimental USD kinematics parser is
provided in :class:`usd_kinematics_parser.UsdKinematicsParser`, which is missing an additional
transform between the joint origin and link origin, so this might not work for all robots.
CuRobo loads a robot's kinematic tree from :class:`~types.KinematicsTensorConfig`. This config is
generated using :class:`~cuda_robot_generator.CudaRobotGenerator`. A parser base class
:class:`~kinematics_parser.KinematicsParser` is provided to help with parsing kinematics from
standard formats. Kinematics parsing from URDF is implemented in
:class:`~urdf_kinematics_parser.UrdfKinematicsParser`. An experimental USD kinematics parser is
provided in :class:`~usd_kinematics_parser.UsdKinematicsParser`, which is missing an additional
transform between the joint origin and link origin, so this might not work for all robots. An
example workflow for setting up a robot from URDF is shown below:
In addition to parsing data from a kinematics file (urdf, usd), CuRobo also needs a sphere
representation of the robot that approximates the volume of the robot's links with spheres.
Several other parameters are also needed to represent kinematics in CuRobo. A tutorial on setting up a
robot is provided in :ref:`tut_robot_configuration`.
.. graphviz::
Once a robot configuration file is setup, you can pass this to
:class:`cuda_robot_model.CudaRobotModelConfig` to generate an instance of kinematics configuraiton.
:class:`cuda_robot_model.CudaRobotModel` takes this configuration and provides access to kinematics
digraph {
rankdir=LR;
bgcolor="#808080";
edge [color = "#FFFFFF"; fontsize=10];
node [shape="box", style="rounded, filled", fontsize=12, color="#76b900", fontcolor="#FFFFFF"];
"CudaRobotGenerator" [color="#FFFFFF", fontcolor="#000000"]
"UrdfKinematicsParser" [fillcolor="#FFFFFF", fontcolor="#000000", style="box, filled", color="#000000"]
"CudaRobotGenerator" [color="#FFFFFF", fontcolor="#000000"]
"URDF" [fillcolor="#FFFFFF", fontcolor="#000000", style="box, filled, dashed", color="#000000"]
"XRDF" [fillcolor="#FFFFFF", fontcolor="#000000", style="box, filled, dashed", color="#000000"]
"cuRobo YML" [fillcolor="#FFFFFF", fontcolor="#000000", style="box, filled", color="#000000"]
"CudaRobotGeneratorConfig" -> "CudaRobotGenerator";
"CudaRobotGenerator" -> "UrdfKinematicsParser" [dir="both"];
"CudaRobotGenerator" -> "CudaRobotModelConfig";
"URDF" -> "cuRobo YML";
"XRDF" -> "cuRobo YML" [style="dashed",label="Optional", fontcolor="#FFFFFF"];
"cuRobo YML" -> "CudaRobotGeneratorConfig";
}
In addition to parsing data from a kinematics file (urdf, usd), CuRobo also needs a sphere
representation of the robot that approximates the volume of the robot's links with spheres.
Several other parameters are also needed to represent kinematics in CuRobo. A tutorial on setting up a
robot is provided in :ref:`tut_robot_configuration`. cuRobo also supports using
`XRDF <https://nvidia-isaac-ros.github.io/concepts/manipulation/xrdf.html>`_ for representing
the additional parameters of the robot that are not available in URDF.
Once a robot configuration file is setup, you can pass this to
:class:`~cuda_robot_model.CudaRobotModelConfig` to generate an instance of the kinematics configuration.
:class:`~cuda_robot_model.CudaRobotModel` takes this configuration and provides access to kinematics
computations.
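An example of this workflow is sketched below. This is a minimal sketch assuming the packaged
``franka.yml`` configuration; helper names such as ``RobotConfig.from_basic``, ``get_dof``, and
``get_state`` are taken from public quickstart examples and may differ across versions:

.. code-block:: python

    # Third Party
    import torch

    # CuRobo
    from curobo.cuda_robot_model.cuda_robot_model import CudaRobotModel
    from curobo.types.base import TensorDeviceType
    from curobo.types.robot import RobotConfig  # assumed location of RobotConfig
    from curobo.util_file import get_robot_configs_path, join_path, load_yaml

    tensor_args = TensorDeviceType()

    # load the packaged robot configuration file and read its kinematics section:
    config = load_yaml(join_path(get_robot_configs_path(), "franka.yml"))["robot_cfg"]
    kin_cfg = config["kinematics"]

    # build a kinematics configuration from the URDF, base link, and end-effector link:
    robot_cfg = RobotConfig.from_basic(
        kin_cfg["urdf_path"], kin_cfg["base_link"], kin_cfg["ee_link"], tensor_args
    )
    kin_model = CudaRobotModel(robot_cfg.kinematics)

    # forward kinematics for a batch of joint configurations:
    q = torch.rand((10, kin_model.get_dof()), device=tensor_args.device, dtype=tensor_args.dtype)
    state = kin_model.get_state(q)  # link poses and robot sphere positions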
.. note::
:class:`cuda_robot_model.CudaRobotModel` creates memory tensors that are used by CUDA kernels
while :class:`cuda_robot_model.CudaRobotModelConfig` contains only the robot kinematics
configuration. To reduce memory overhead, you can pass one instance of
:class:`cuda_robot_model.CudaRobotModelConfig` to many instances of
:class:`cuda_robot_model.CudaRobotModel`.
:class:`~cuda_robot_model.CudaRobotModel` creates memory tensors that are used by CUDA kernels
while :class:`~cuda_robot_model.CudaRobotModelConfig` contains only the robot kinematics
configuration. To reduce memory overhead, you can pass one instance of
:class:`~cuda_robot_model.CudaRobotModelConfig` to many instances of
:class:`~cuda_robot_model.CudaRobotModel`.
"""

View File

@@ -8,16 +8,23 @@
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
#
"""
Generates a Tensor representation of kinematics for use in
:class:`~curobo.cuda_robot_model.CudaRobotModel`. This module reads the robot from a
:class:`~curobo.cuda_robot_model.kinematics_parser.KinematicsParser` and
generates the necessary tensors for kinematics computation.
"""
from __future__ import annotations
# Standard Library
import copy
import os
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Union
from typing import Any, Dict, List, Optional, Tuple, Union
# Third Party
import importlib_resources
import torch
import torch.autograd.profiler as profiler
@@ -45,13 +52,13 @@ try:
except ImportError:
log_info(
"USDParser failed to import, install curobo with pip install .[usd] "
+ "or pip install usd-core, NOTE: Do not install this if using with ISAAC SIM."
+ "or pip install usd-core, NOTE: Do not install this if using with Isaac Sim."
)
@dataclass
class CudaRobotGeneratorConfig:
"""Create Cuda Robot Model Configuration."""
"""Robot representation generator configuration, loads from a dictionary."""
#: Name of base link for kinematic tree.
base_link: str
@@ -69,7 +76,7 @@ class CudaRobotGeneratorConfig:
collision_link_names: Optional[List[str]] = None
#: Collision spheres that fill the volume occupied by the links of the robot.
#: Collision spheres can be generated for robot using https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/advanced_tutorials/tutorial_motion_generation_robot_description_editor.html#collision-spheres
#: Collision spheres can be generated for robot using `Isaac Sim Robot Description Editor <https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/advanced_tutorials/tutorial_motion_generation_robot_description_editor.html#collision-spheres>`_.
collision_spheres: Union[None, str, Dict[str, Any]] = None
#: Radius buffer to add to collision spheres as padding.
@@ -79,13 +86,17 @@ class CudaRobotGeneratorConfig:
compute_jacobian: bool = False
#: Padding to add for self collision between links. Some robots use a large padding
#: for self collision avoidance (https://github.com/ros-planning/panda_moveit_config/pull/35#issuecomment-671333863)
#: for self collision avoidance (e.g., `MoveIt Panda Issue <https://github.com/ros-planning/panda_moveit_config/pull/35#issuecomment-671333863>`_).
self_collision_buffer: Optional[Dict[str, float]] = None
#: Self-collision ignore
#: Dictionary with each key as a link name and value as a list of link names to ignore self
#: collision. E.g., {"link1": ["link2", "link3"], "link2": ["link3", "link4"]} will
#: ignore self collision between link1 and link2, link1 and link3, link2 and link3, link2 and
#: link4. The mapping is bidirectional so it's sufficient to mention the mapping in one
#: direction (i.e., not necessary to mention "link1" in ignore list for "link2").
self_collision_ignore: Optional[Dict[str, List[str]]] = None
#: debug config
#: Debugging information to pass to kinematics module.
debug: Optional[Dict[str, Any]] = None
#: Enabling this flag writes out the cumulative transformation matrix to global memory. This
@@ -129,33 +140,46 @@ class CudaRobotGeneratorConfig:
#: joint angle given from this dictionary.
lock_joints: Optional[Dict[str, float]] = None
#: Additional links to add to parsed kinematics tree. This is useful for adding fixed links
#: that are not present in the URDF or USD.
extra_links: Optional[Dict[str, LinkParams]] = None
#: this is deprecated
#: Deprecated way to add a fixed link.
add_object_link: bool = False
#: Deprecated flag to load assets from external module. Now, pass absolute path to
#: asset_root_path or use :class:`~curobo.types.file_path.ContentPath`.
use_external_assets: bool = False
#: Deprecated path to load assets from external module. Use
#: :class:`~curobo.types.file_path.ContentPath` instead.
external_asset_path: Optional[str] = None
#: Deprecated path to load robot configs from external module. Use
#: :class:`~curobo.types.file_path.ContentPath` instead.
external_robot_configs_path: Optional[str] = None
#: Create n collision spheres for links with name
extra_collision_spheres: Optional[Dict[str, int]] = None
#: cspace config
#: Configuration space parameters for the robot (e.g., acceleration and jerk limits).
cspace: Union[None, CSpaceConfig, Dict[str, List[Any]]] = None
#: Enable loading meshes from kinematics parser.
load_meshes: bool = False
def __post_init__(self):
"""Post initialization adds absolute paths, converts dictionaries to objects."""
# add root path:
# Check if an external asset path is provided:
asset_path = get_assets_path()
robot_path = get_robot_configs_path()
if self.external_asset_path is not None:
log_warn("Deprecated: external_asset_path is deprecated, use ContentPath")
asset_path = self.external_asset_path
if self.external_robot_configs_path is not None:
log_warn("Deprecated: external_robot_configs_path is deprecated, use ContentPath")
robot_path = self.external_robot_configs_path
if self.urdf_path is not None:
@@ -214,30 +238,45 @@ class CudaRobotGeneratorConfig:
class CudaRobotGenerator(CudaRobotGeneratorConfig):
"""Robot Kinematics Representation Generator.
The word "Chain" is used interchangeably with "Tree" in this class.
"""
def __init__(self, config: CudaRobotGeneratorConfig) -> None:
"""Initialize the robot generator.
Args:
config: Parameters to initialize the robot generator.
"""
super().__init__(**vars(config))
self.cpu_tensor_args = self.tensor_args.cpu()
self._self_collision_data = None
self.non_fixed_joint_names = []
self._n_dofs = 1
self._kinematics_config = None
self.initialize_tensors()
@property
def kinematics_config(self):
def kinematics_config(self) -> KinematicsTensorConfig:
"""Kinematics representation as Tensors."""
return self._kinematics_config
@property
def self_collision_config(self):
def self_collision_config(self) -> SelfCollisionKinematicsConfig:
"""Self collision configuration for robot."""
return self._self_collision_data
@property
def kinematics_parser(self):
"""Kinematics parser used to generate robot parameters."""
return self._kinematics_parser
@profiler.record_function("robot_generator/initialize_tensors")
def initialize_tensors(self):
"""Initialize tensors for kinematics representatiobn."""
self._joint_limits = None
self._self_collision_data = None
self.lock_jointstate = None
@@ -335,10 +374,15 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
lock_jointstate=self.lock_jointstate,
mimic_joints=self._mimic_joint_data,
)
if self.asset_root_path != "":
if self.asset_root_path is not None and self.asset_root_path != "":
self._kinematics_parser.add_absolute_path_to_link_meshes(self.asset_root_path)
def add_link(self, link_params: LinkParams):
"""Add an extra link to the robot kinematics tree.
Args:
link_params: Parameters of the link to add.
"""
self.extra_links[link_params.link_name] = link_params
def add_fixed_link(
@@ -348,6 +392,14 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
joint_name: Optional[str] = None,
transform: Optional[Pose] = None,
):
"""Add a fixed link to the robot kinematics tree.
Args:
link_name: Name of the link to add.
parent_link_name: Parent link to add the fixed link to.
joint_name: Name of the fixed joint to create.
transform: Offset transform of the fixed link from the joint.
"""
if transform is None:
transform = (
Pose.from_list([0, 0, 0, 1, 0, 0, 0], self.tensor_args)
@@ -368,7 +420,22 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
self.add_link(link_params)
@profiler.record_function("robot_generator/build_chain")
def _build_chain(self, base_link, ee_link, other_links, link_names):
def _build_chain(
self,
base_link: str,
ee_link: str,
other_links: List[str],
) -> List[str]:
"""Build kinematic tree of the robot.
Args:
base_link: Name of base link for the chain.
ee_link: Name of end-effector link for the chain.
other_links: List of other links to add to the chain.
Returns:
List[str]: List of link names in the chain.
"""
self._n_dofs = 0
self._controlled_links = []
self._bodies = []
@@ -404,7 +471,13 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
self.non_fixed_joint_names = self.joint_names.copy()
return chain_link_names
def _get_mimic_joint_data(self):
def _get_mimic_joint_data(self) -> Dict[str, List[int]]:
"""Get joints that are mimicked from actuated joints joints.
Returns:
Dict[str, List[int]]: Dictionary containing name of actuated joint and list of mimic
joint indices.
"""
# get joint types:
mimic_joint_data = {}
for i in range(1, len(self._bodies)):
@@ -417,7 +490,16 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
return mimic_joint_data
@profiler.record_function("robot_generator/build_kinematics_tensors")
def _build_kinematics_tensors(self, base_link, ee_link, link_names, chain_link_names):
def _build_kinematics_tensors(self, base_link, link_names, chain_link_names):
"""Create kinematic tensors for robot given kinematic tree.
Args:
base_link: Name of base link for the tree.
link_names: Names of links to compute kinematics for. This is used to determine link
indices to store poses during forward kinematics.
chain_link_names: List of link names in the kinematic tree. Used to traverse the
kinematic tree.
"""
self._active_joints = []
self._mimic_joint_data = {}
link_map = [0 for i in range(len(self._bodies))]
@@ -493,9 +575,19 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
self._all_joint_names = all_joint_names
@profiler.record_function("robot_generator/build_kinematics")
def _build_kinematics(self, base_link, ee_link, other_links, link_names):
chain_link_names = self._build_chain(base_link, ee_link, other_links, link_names)
self._build_kinematics_tensors(base_link, ee_link, link_names, chain_link_names)
def _build_kinematics(
self, base_link: str, ee_link: str, other_links: List[str], link_names: List[str]
):
"""Build kinematics tensors given base link, end-effector link and other links.
Args:
base_link: Name of base link for the kinematic tree.
ee_link: Name of end-effector link for the kinematic tree.
other_links: List of other links to add to the kinematic tree.
link_names: List of link names to store poses after kinematics computation.
"""
chain_link_names = self._build_chain(base_link, ee_link, other_links)
self._build_kinematics_tensors(base_link, link_names, chain_link_names)
if self.collision_spheres is not None and len(self.collision_link_names) > 0:
self._build_collision_model(
self.collision_spheres, self.collision_link_names, self.collision_sphere_buffer
@@ -504,13 +596,26 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
@profiler.record_function("robot_generator/build_kinematics_with_lock_joints")
def _build_kinematics_with_lock_joints(
self,
base_link,
ee_link,
other_links,
link_names,
base_link: str,
ee_link: str,
other_links: List[str],
link_names: List[str],
lock_joints: Dict[str, float],
):
chain_link_names = self._build_chain(base_link, ee_link, other_links, link_names)
"""Build kinematics with locked joints.
This function will first build the chain with no locked joints, find the transforms
when the locked joints are set to the given values, and then use these transforms as
fixed transforms for the locked joints.
Args:
base_link: Base link of the kinematic tree.
ee_link: End-effector link of the kinematic tree.
other_links: Other links to add to the kinematic tree.
link_names: List of link names to store poses after kinematics computation.
lock_joints: Joints to lock in the kinematic tree with value to lock at.
"""
chain_link_names = self._build_chain(base_link, ee_link, other_links)
# find links attached to lock joints:
lock_joint_names = list(lock_joints.keys())
@@ -531,7 +636,7 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
new_link_names = list(set(link_names + lock_links))
# rebuild kinematic tree with link names added to link pose computation:
self._build_kinematics_tensors(base_link, ee_link, new_link_names, chain_link_names)
self._build_kinematics_tensors(base_link, new_link_names, chain_link_names)
if self.collision_spheres is not None and len(self.collision_link_names) > 0:
self._build_collision_model(
self.collision_spheres, self.collision_link_names, self.collision_sphere_buffer
@@ -628,12 +733,12 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
collision_link_names: List[str],
collision_sphere_buffer: Union[float, Dict[str, float]] = 0.0,
):
"""
"""Build collision model for robot.
Args:
collision_spheres (_type_): _description_
collision_link_names (_type_): _description_
collision_sphere_buffer (float, optional): _description_. Defaults to 0.0.
collision_spheres: Spheres for each link of the robot.
collision_link_names: Name of links to load spheres for.
collision_sphere_buffer: Additional padding to add to collision spheres.
"""
# We create all tensors on cpu and then finally move them to gpu
@@ -680,7 +785,7 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
)
# build self collision distance tensor:
self.self_collision_distance = (
self_collision_distance = (
torch.zeros(
(self.total_spheres, self.total_spheres),
dtype=cpu_tensor_args.dtype,
@@ -691,7 +796,6 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
self.self_collision_offset = torch.zeros(
(self.total_spheres), dtype=cpu_tensor_args.dtype, device=cpu_tensor_args.device
)
with profiler.record_function("robot_generator/self_collision_distance"):
# iterate through each link:
for j_idx, j in enumerate(collision_link_names):
@@ -724,44 +828,66 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
sp1 = link1_spheres_idx[k1]
for k2 in range(len(rad2)):
sp2 = link2_spheres_idx[k2]
self.self_collision_distance[sp1, sp2] = rad1[k1] + rad2[k2] + c1 + c2
self_collision_distance[sp1, sp2] = rad1[k1] + rad2[k2] + c1 + c2
self_collision_distance = self_collision_distance.to(device=self.tensor_args.device)
with profiler.record_function("robot_generator/self_collision_min"):
d_mat = self.self_collision_distance
self.self_collision_distance = torch.minimum(d_mat, d_mat.transpose(0, 1))
d_mat = self_collision_distance
self_collision_distance = torch.minimum(d_mat, d_mat.transpose(0, 1))
if self.debug is not None and "self_collision_experimental" in self.debug:
use_experimental_kernel = self.debug["self_collision_experimental"]
self.self_collision_distance = self.self_collision_distance.to(
device=self.tensor_args.device
)
(
self._self_coll_thread_locations,
self._self_coll_idx,
valid_data,
checks_per_thread,
) = self._create_self_collision_thread_data(self.self_collision_distance)
self._self_coll_matrix = (self.self_collision_distance != -(torch.inf)).to(
dtype=torch.uint8
)
) = self._create_self_collision_thread_data(self_collision_distance)
use_experimental_kernel = True
if (
self.debug is not None
and "self_collision_experimental" in self.debug
and self.debug["self_collision_experimental"] is not None
):
use_experimental_kernel = self.debug["self_collision_experimental"]
if not valid_data:
use_experimental_kernel = False
log_warn("Self Collision checks are greater than 32 * 512, using slower kernel")
if use_experimental_kernel:
self_coll_matrix = torch.zeros((2), device=self.tensor_args.device, dtype=torch.uint8)
else:
self_coll_matrix = (self_collision_distance != -(torch.inf)).to(dtype=torch.uint8)
# self_coll_matrix = (self_collision_distance != -(torch.inf)).to(dtype=torch.uint8)
# convert all tensors to gpu:
self._link_sphere_idx_map = self._link_sphere_idx_map.to(device=self.tensor_args.device)
self._link_spheres_tensor = self._link_spheres_tensor.to(device=self.tensor_args.device)
self.self_collision_offset = self.self_collision_offset.to(device=self.tensor_args.device)
self._self_collision_data = SelfCollisionKinematicsConfig(
offset=self.self_collision_offset,
distance_threshold=self.self_collision_distance,
thread_location=self._self_coll_thread_locations,
thread_max=self._self_coll_idx,
collision_matrix=self._self_coll_matrix,
experimental_kernel=valid_data and use_experimental_kernel,
collision_matrix=self_coll_matrix,
experimental_kernel=use_experimental_kernel,
checks_per_thread=checks_per_thread,
)
@profiler.record_function("robot_generator/create_self_collision_thread_data")
def _create_self_collision_thread_data(self, collision_threshold):
def _create_self_collision_thread_data(
self, collision_threshold: torch.Tensor
) -> Tuple[torch.Tensor, int, bool, int]:
"""Create thread data for self collision checks.
Args:
collision_threshold: Collision distance between spheres of the robot. Used to
skip self collision checks when distance is -inf.
Returns:
Tuple[torch.Tensor, int, bool, int]: Thread location for self collision checks,
number of self collision checks, if thread calculation was successful,
and number of checks per thread.
"""
coll_cpu = collision_threshold.cpu()
max_checks_per_thread = 512
thread_loc = torch.zeros((2 * 32 * max_checks_per_thread), dtype=torch.int16) - 1
@@ -815,7 +941,10 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
elif val < 512:
max_checks_per_thread = 512
else:
raise ValueError("Self Collision not supported")
log_error(
"Self Collision not supported as checks are greater than 32 * 512, \
reduce number of spheres used to approximate the robot."
)
if max_checks_per_thread < 2:
max_checks_per_thread = 2
@@ -829,7 +958,13 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
)
@profiler.record_function("robot_generator/add_body_to_tree")
def _add_body_to_tree(self, link_name, base=False):
def _add_body_to_tree(self, link_name: str, base=False):
"""Add link to kinematic tree.
Args:
link_name: Name of the link to add.
base: Is this the base link of the kinematic tree?
"""
body_idx = len(self._bodies)
rigid_body_params = self._kinematics_parser.get_link_parameters(link_name, base=base)
@@ -848,7 +983,17 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
)
self._name_to_idx_map[rigid_body_params.link_name] = body_idx
def _get_joint_links(self, joint_names: List[str]):
def _get_joint_links(self, joint_names: List[str]) -> Dict[str, Dict[str, Union[str, int]]]:
"""Get data (parent link, child link, mimic, link_index) for joints given in the list.
Args:
joint_names: Names of joints to get data for.
Returns:
Dict[str, Dict[str, Union[str, int]]]: Dictionary containing joint name as key and
dictionary containing parent link, child link, and link index as
values. Also includes mimic joint data if present.
"""
j_data = {}
for j in joint_names:
@@ -878,6 +1023,22 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
def _get_link_poses(
self, q: torch.Tensor, link_names: List[str], kinematics_config: KinematicsTensorConfig
) -> Pose:
"""Get Pose of links at given joint angles using forward kinematics.
This is implemented here to avoid circular dependencies with
:class:`~curobo.cuda_robot_model.cuda_robot_model.CudaRobotModel` module. This is used
to calculate fixed transforms for locked joints in this class. This implementation
does not compute position of robot spheres.
Args:
q: Joint angles to compute forward kinematics for.
link_names: Name of links to return pose.
kinematics_config: Tensor Configuration for kinematics computation.
Returns:
Pose: Pose of links at given joint angles.
"""
q = q.view(1, -1)
link_pos_seq = torch.zeros(
(1, len(self.link_names), 3),
device=self.tensor_args.device,
@@ -889,7 +1050,7 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
dtype=self.tensor_args.dtype,
)
batch_robot_spheres = torch.zeros(
(1, self.total_spheres, 4),
(1, 0, 4),
device=self.tensor_args.device,
dtype=self.tensor_args.dtype,
)
@@ -903,23 +1064,24 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
device=self.tensor_args.device,
dtype=self.tensor_args.dtype,
)
link_pos_seq, link_quat_seq, _ = get_cuda_kinematics(
link_pos_seq,
link_quat_seq,
batch_robot_spheres,
batch_robot_spheres.contiguous(),
global_cumul_mat,
q,
kinematics_config.fixed_transforms,
kinematics_config.link_spheres,
kinematics_config.fixed_transforms.contiguous(),
kinematics_config.link_spheres.contiguous(),
kinematics_config.link_map, # tells which link is attached to which link i
kinematics_config.joint_map, # tells which joint is attached to a link i
kinematics_config.joint_map_type, # joint type
kinematics_config.store_link_map,
kinematics_config.link_sphere_idx_map, # sphere idx map
kinematics_config.link_sphere_idx_map.contiguous(), # sphere idx map
kinematics_config.link_chain_map,
kinematics_config.joint_offset_map,
grad_out_q,
self.use_global_cumul,
False,
)
position = torch.zeros(
(q.shape[0], len(link_names), 3),
@@ -936,14 +1098,21 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
i = self.link_names.index(l)
position[:, li, :] = link_pos_seq[:, i, :]
quaternion[:, li, :] = link_quat_seq[:, i, :]
return Pose(position=position, quaternion=quaternion)
return Pose(position=position.clone(), quaternion=quaternion.clone())
@property
def get_joint_limits(self):
def get_joint_limits(self) -> JointLimits:
"""Get joint limits for the robot."""
return self._joint_limits
@profiler.record_function("robot_generator/get_joint_limits")
def _get_joint_position_velocity_limits(self):
def _get_joint_position_velocity_limits(self) -> Dict[str, torch.Tensor]:
"""Compute joint position and velocity limits for the robot.
Returns:
Dict[str, torch.Tensor]: Dictionary containing position and velocity limits for the
robot. Each value is a tensor of shape (2, n_joints) with first row containing
minimum limits and second row containing maximum limits.
"""
joint_limits = {"position": [[], []], "velocity": [[], []]}
for idx in self._active_joints:
@@ -959,6 +1128,7 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
@profiler.record_function("robot_generator/update_joint_limits")
def _update_joint_limits(self):
"""Update limits from CSpaceConfig (acceleration, jerk limits and position clips)."""
joint_limits = self._get_joint_position_velocity_limits()
joint_limits["jerk"] = torch.cat(
[-1.0 * self.cspace.max_jerk.unsqueeze(0), self.cspace.max_jerk.unsqueeze(0)]
@@ -970,7 +1140,6 @@ class CudaRobotGenerator(CudaRobotGeneratorConfig):
]
)
# clip joint position:
# NOTE: change this to be per joint
joint_limits["position"][0] += self.cspace.position_limit_clip
joint_limits["position"][1] -= self.cspace.position_limit_clip
joint_limits["velocity"][0] *= self.cspace.velocity_scale

View File

@@ -8,6 +8,14 @@
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
#
"""
This module builds a kinematic representation of a robot on the GPU and provides
differentiable mapping from its joint configuration to the Cartesian pose of its links
(forward kinematics). This module also computes the position of the spheres of the robot as part
of the forward kinematics function.
"""
from __future__ import annotations
# Standard Library
@@ -24,28 +32,75 @@ from curobo.cuda_robot_model.cuda_robot_generator import (
CudaRobotGeneratorConfig,
)
from curobo.cuda_robot_model.kinematics_parser import KinematicsParser
from curobo.cuda_robot_model.types import KinematicsTensorConfig, SelfCollisionKinematicsConfig
from curobo.cuda_robot_model.types import (
CSpaceConfig,
JointLimits,
KinematicsTensorConfig,
SelfCollisionKinematicsConfig,
)
from curobo.cuda_robot_model.util import load_robot_yaml
from curobo.curobolib.kinematics import get_cuda_kinematics
from curobo.geom.types import Mesh, Sphere
from curobo.geom.sphere_fit import SphereFitType
from curobo.geom.types import Mesh, Obstacle, Sphere
from curobo.types.base import TensorDeviceType
from curobo.types.file_path import ContentPath
from curobo.types.math import Pose
from curobo.types.state import JointState
from curobo.util.logger import log_error
from curobo.util_file import get_robot_path, join_path, load_yaml
from curobo.util.logger import log_error, log_info, log_warn
from curobo.util_file import is_file_xrdf
@dataclass
class CudaRobotModelConfig:
"""
Configuration for robot kinematics on GPU.
Helper functions are provided to load this configuration from an URDF file or from
a cuRobo robot configuration file (:ref:`tut_robot_configuration`). To create from an XRDF, use
:ref:`curobo.util.xrdf_utils.convert_xrdf_to_curobo`.
"""
#: Device and floating point precision to use for kinematics.
tensor_args: TensorDeviceType
#: Names of links to compute poses with forward kinematics.
link_names: List[str]
#: Tensors representing kinematics of the robot. This can be created using
#: :class:`~curobo.cuda_robot_model.cuda_robot_generator.CudaRobotGenerator`.
kinematics_config: KinematicsTensorConfig
#: Collision pairs to ignore when computing self collision between spheres across all
#: robot links. This also contains distance threshold between spheres pairs and which thread
#: indices for calculating the distances. More details on computing these parameters is in
#: :func:`~curobo.cuda_robot_model.cuda_robot_generator.CudaRobotGenerator._build_collision_model`.
self_collision_config: Optional[SelfCollisionKinematicsConfig] = None
#: Parser to load kinematics from URDF or USD files. This is used to load kinematics
#: representation of the robot. This is created using
#: :class:`~curobo.cuda_robot_model.kinematics_parser.KinematicsParser`.
#: USD is an experimental feature and might not work for all robots.
kinematics_parser: Optional[KinematicsParser] = None
#: Output jacobian during forward kinematics. This is not implemented. The forward kinematics
#: function does use the Jacobian during the backward pass; what's not supported is returning
#: the Jacobian as an output of the forward call.
compute_jacobian: bool = False
use_global_cumul: bool = False
#: Store transformation matrix of every link during forward kinematics call in global memory.
#: This helps speed up backward pass as we don't need to recompute the transformation matrices.
#: However, this increases memory usage and also slightly slows down forward kinematics.
#: Enabling this is recommended for getting the best performance.
use_global_cumul: bool = True
#: Generator config used to create this robot kinematics model.
generator_config: Optional[CudaRobotGeneratorConfig] = None
def get_joint_limits(self):
def get_joint_limits(self) -> JointLimits:
"""Get limits of actuated joints of the robot.
Returns:
JointLimits: Joint limits of the robot's actuated joints.
"""
return self.kinematics_config.joint_limits
@staticmethod
@@ -99,29 +154,96 @@ class CudaRobotModelConfig:
return CudaRobotModelConfig.from_config(config)
@staticmethod
def from_robot_yaml_file(
file_path: str,
def from_content_path(
content_path: ContentPath,
ee_link: Optional[str] = None,
tensor_args: TensorDeviceType = TensorDeviceType(),
):
config_file = load_yaml(join_path(get_robot_path(), file_path))["robot_cfg"]["kinematics"]
) -> CudaRobotModelConfig:
"""Load robot from Contentpath containing paths to robot description files.
Args:
content_path: Path to robot configuration files.
ee_link: End-effector link name. If None, it is read from the file.
tensor_args: Device to load robot model, defaults to cuda:0.
Returns:
CudaRobotModelConfig: cuda robot model configuration.
"""
config_file = load_robot_yaml(content_path)
if "robot_cfg" in config_file:
config_file = config_file["robot_cfg"]
if "kinematics" in config_file:
config_file = config_file["kinematics"]
if ee_link is not None:
config_file["ee_link"] = ee_link
return CudaRobotModelConfig.from_config(
CudaRobotGeneratorConfig(**config_file, tensor_args=tensor_args)
)
@staticmethod
def from_robot_yaml_file(
file_path: Union[str, Dict],
ee_link: Optional[str] = None,
tensor_args: TensorDeviceType = TensorDeviceType(),
urdf_path: Optional[str] = None,
) -> CudaRobotModelConfig:
"""Load robot from a yaml file that is in cuRobo's format (:ref:`tut_robot_configuration`).
Args:
file_path: Path to robot configuration file (yml or xrdf).
ee_link: End-effector link name. If None, it is read from the file.
tensor_args: Device to load robot model, defaults to cuda:0.
urdf_path: Path to urdf file. This is required when loading a xrdf file.
Returns:
CudaRobotModelConfig: cuda robot model configuration.
"""
if isinstance(file_path, dict):
content_path = ContentPath(robot_urdf_file=urdf_path, robot_config_file=file_path)
else:
if is_file_xrdf(file_path):
content_path = ContentPath(robot_urdf_file=urdf_path, robot_xrdf_file=file_path)
else:
content_path = ContentPath(robot_urdf_file=urdf_path, robot_config_file=file_path)
return CudaRobotModelConfig.from_content_path(content_path, ee_link, tensor_args)
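A minimal usage sketch for the loaders above. The ``franka.yml`` file name is an assumption used for illustration; any robot configuration file in cuRobo's format works the same way.

```python
# Sketch: build a robot model from a cuRobo robot configuration file.
# "franka.yml" is an illustrative file name, not guaranteed by this diff.
from curobo.cuda_robot_model.cuda_robot_model import CudaRobotModel, CudaRobotModelConfig
from curobo.types.base import TensorDeviceType

tensor_args = TensorDeviceType()  # cuda:0, float32 by default
config = CudaRobotModelConfig.from_robot_yaml_file("franka.yml", tensor_args=tensor_args)
robot_model = CudaRobotModel(config)
print(robot_model.dof, robot_model.joint_names)
```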
@staticmethod
def from_data_dict(
data_dict: Dict[str, Any],
tensor_args: TensorDeviceType = TensorDeviceType(),
):
) -> CudaRobotModelConfig:
"""Load robot from a dictionary containing data for :class:`~curobo.cuda_robot_model.cuda_robot_generator.CudaRobotGeneratorConfig`.
:ref:`tut_robot_configuration` discusses the data required to load a robot.
Args:
data_dict: Input dictionary containing robot configuration.
tensor_args: Device to load robot model, defaults to cuda:0.
Returns:
CudaRobotModelConfig: cuda robot model configuration.
"""
if "robot_cfg" in data_dict:
data_dict = data_dict["robot_cfg"]
if "kinematics" in data_dict:
data_dict = data_dict["kinematics"]
return CudaRobotModelConfig.from_config(
CudaRobotGeneratorConfig(**data_dict, tensor_args=tensor_args)
)
@staticmethod
def from_config(config: CudaRobotGeneratorConfig):
def from_config(config: CudaRobotGeneratorConfig) -> CudaRobotModelConfig:
"""Create a robot model configuration from a generator configuration.
Args:
config: Input robot generator configuration.
Returns:
CudaRobotModelConfig: robot model configuration.
"""
# create a config generator and load all values
generator = CudaRobotGenerator(config)
return CudaRobotModelConfig(
@@ -136,17 +258,19 @@ class CudaRobotModelConfig:
)
@property
def cspace(self):
def cspace(self) -> CSpaceConfig:
"""Get cspace parameters of the robot."""
return self.kinematics_config.cspace
@property
def dof(self) -> int:
"""Get the number of actuated joints (degrees of freedom) of the robot"""
return self.kinematics_config.n_dof
@dataclass
class CudaRobotModelState:
"""Dataclass that stores kinematics information."""
"""Kinematic state of robot."""
#: End-effector position stored as x,y,z in meters [b, 3]. End-effector is defined by
#: :attr:`CudaRobotModel.ee_link`.
@@ -207,20 +331,38 @@ class CudaRobotModel(CudaRobotModelConfig):
"""
CUDA Accelerated Robot Model
Load basic kinematics from an URDF with :func:`~CudaRobotModelConfig.from_basic_urdf`.
Check :ref:`tut_robot_configuration` for details on how to also create a geometric
representation of the robot.
Currently, dof is computed only from links that are needed for kinematics. E.g., for robots
with many serial chains, add all links of the robot to get the correct dof. This is not an
issue if you are loading collision spheres, as those will cover the full geometry of the robot.
"""
def __init__(self, config: CudaRobotModelConfig):
"""Initialize kinematics instance with a robot model configuration.
Args:
config: Input robot model configuration.
"""
super().__init__(**vars(config))
self._batch_size = 0
self.update_batch_size(1, reset_buffers=True)
def update_batch_size(self, batch_size, force_update=False, reset_buffers=False):
def update_batch_size(
self, batch_size: int, force_update: bool = False, reset_buffers: bool = False
):
"""Update batch size of the robot model.
Args:
batch_size: Batch size to update the robot model.
force_update: Detach gradients of tensors. This is not supported.
reset_buffers: Recreate the tensors even if the batch size is same.
"""
if batch_size == 0:
log_error("batch size is zero")
if force_update and self._batch_size == batch_size and self.compute_jacobian:
log_error("Outputting jacobian is not supported")
self.lin_jac = self.lin_jac.detach() # .requires_grad_(True)
self.ang_jac = self.ang_jac.detach() # .requires_grad_(True)
elif self._batch_size != batch_size or reset_buffers:
@@ -252,6 +394,7 @@ class CudaRobotModel(CudaRobotModelConfig):
dtype=self.tensor_args.dtype,
)
if self.compute_jacobian:
log_error("Outputting jacobian is not supported")
self.lin_jac = torch.zeros(
[batch_size, 3, self.kinematics_config.n_dofs],
device=self.tensor_args.device,
@@ -264,10 +407,27 @@ class CudaRobotModel(CudaRobotModelConfig):
)
@profiler.record_function("cuda_robot_model/forward_kinematics")
def forward(self, q, link_name=None, calculate_jacobian=False):
# pos, rot = self.compute_forward_kinematics(q, qd, link_name)
def forward(
self, q, link_name=None, calculate_jacobian=False
) -> Tuple[Tensor, Tensor, None, None, Tensor, Tensor, Tensor]:
"""Compute forward kinematics of the robot.
Use :func:`~get_state` to get a structured output.
Args:
q: Joint configuration of the robot. Shape should be [batch_size, dof].
link_name: Name of link to return pose of. If None, returns end-effector pose.
calculate_jacobian: Calculate jacobian of the robot. Not supported.
Returns:
Tuple[Tensor, Tensor, None, None, Tensor, Tensor, Tensor]: End-effector position,
end-effector quaternion (wxyz), linear jacobian(None), angular jacobian(None),
link positions, link quaternion (wxyz), link spheres.
"""
if len(q.shape) > 2:
raise ValueError("q shape should be [batch_size, dof]")
log_error("q shape should be [batch_size, dof]")
if len(q.shape) == 1:
q = q.unsqueeze(0)
batch_size = q.shape[0]
self.update_batch_size(batch_size, force_update=q.requires_grad)
@@ -287,7 +447,7 @@ class CudaRobotModel(CudaRobotModelConfig):
# compute jacobians?
if calculate_jacobian:
raise NotImplementedError
log_error("Outputting jacobian is not supported")
return (
ee_pos,
ee_quat,
@@ -298,7 +458,19 @@ class CudaRobotModel(CudaRobotModelConfig):
link_spheres_tensor,
)
def get_state(self, q, link_name=None, calculate_jacobian=False) -> CudaRobotModelState:
def get_state(
self, q: torch.Tensor, link_name: str = None, calculate_jacobian: bool = False
) -> CudaRobotModelState:
"""Get kinematic state of the robot by computing forward kinematics.
Args:
q: Joint configuration of the robot. Shape should be [batch_size, dof].
link_name: Name of link to return pose of. If None, returns end-effector pose.
calculate_jacobian: Calculate jacobian of the robot. Not supported.
Returns:
CudaRobotModelState: Kinematic state of the robot.
"""
out = self.forward(q, link_name, calculate_jacobian)
state = CudaRobotModelState(
out[0],
@@ -312,12 +484,46 @@ class CudaRobotModel(CudaRobotModelConfig):
)
return state
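A short sketch of querying kinematics through ``get_state``. The 7-dof joint dimension is assumed (a Franka-style configuration), and the attribute names ``ee_position`` and ``ee_quaternion`` are assumed here since the field names are cut off in this hunk.

```python
# Sketch: batched, differentiable forward kinematics. `robot_model` is assumed to be a
# CudaRobotModel with 7 actuated joints (e.g., a Franka-style configuration).
import torch

q = torch.zeros(
    (4, 7), device=robot_model.tensor_args.device, dtype=robot_model.tensor_args.dtype
)  # [batch_size, dof]
q.requires_grad_(True)
state = robot_model.get_state(q)
print(state.ee_position.shape)      # [4, 3] end-effector positions (meters)
print(state.ee_quaternion.shape)    # [4, 4] end-effector quaternions (wxyz)
state.ee_position.sum().backward()  # gradients flow back to q
```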
def get_robot_link_meshes(self):
def compute_kinematics(
self, js: JointState, link_name: Optional[str] = None, calculate_jacobian: bool = False
) -> CudaRobotModelState:
"""Compute forward kinematics of the robot.
Args:
js: Joint state of robot.
link_name: Name of link to return pose of. If None, returns end-effector pose.
calculate_jacobian: Calculate jacobian of the robot. Not supported.
Returns:
CudaRobotModelState: Kinematic state of the robot.
"""
if js.joint_names is not None:
if js.joint_names != self.kinematics_config.joint_names:
log_error("Joint names do not match, reoder joints before forward kinematics")
return self.get_state(js.position, link_name, calculate_jacobian)
def get_robot_link_meshes(self) -> List[Mesh]:
"""Get meshes of all links of the robot.
Returns:
List[Mesh]: List of all link meshes.
"""
m_list = [self.get_link_mesh(l) for l in self.kinematics_config.mesh_link_names]
return m_list
def get_robot_as_mesh(self, q: torch.Tensor):
def get_robot_as_mesh(self, q: torch.Tensor) -> List[Mesh]:
"""Transform robot links to Cartesian poses using forward kinematics and return as meshes.
Args:
q: Joint configuration of the robot, shape should be [1, dof].
Returns:
List[Mesh]: List of all link meshes.
"""
# get all link meshes:
m_list = self.get_robot_link_meshes()
pose = self.get_link_poses(q, self.kinematics_config.mesh_link_names)
@@ -328,7 +534,16 @@ class CudaRobotModel(CudaRobotModelConfig):
return m_list
def get_robot_as_spheres(self, q: torch.Tensor, filter_valid: bool = True):
def get_robot_as_spheres(self, q: torch.Tensor, filter_valid: bool = True) -> List[Sphere]:
"""Get robot spheres using forward kinematics on given joint configuration q.
Args:
q: Joint configuration of the robot, shape should be [1, dof].
filter_valid: Filter out spheres with radius <= 0.
Returns:
List[Sphere]: List of all robot spheres.
"""
state = self.get_state(q)
# state has sphere position and radius
@@ -361,6 +576,18 @@ class CudaRobotModel(CudaRobotModelConfig):
return sph_traj
def get_link_poses(self, q: torch.Tensor, link_names: List[str]) -> Pose:
"""Get Pose of links at given joint configuration q using forward kinematics.
Note that only the links specified in :attr:`~link_names` are returned.
Args:
q: Joint configuration of the robot, shape should be [batch_size, dof].
link_names: Names of links to get pose of. This should be a subset of
:attr:`~link_names`.
Returns:
Pose: Poses of links at given joint configuration.
"""
state = self.get_state(q)
position = torch.zeros(
(q.shape[0], len(link_names), 3),
@@ -379,7 +606,15 @@ class CudaRobotModel(CudaRobotModelConfig):
quaternion[:, li, :] = state.links_quaternion[:, i, :]
return Pose(position=position, quaternion=quaternion)
def _cuda_forward(self, q):
def _cuda_forward(self, q: torch.Tensor) -> Tuple[Tensor, Tensor, Tensor]:
"""Compute forward kinematics on GPU. Use :func:`~get_state` or :func:`~forward` instead.
Args:
q: Joint configuration of the robot, shape should be [batch_size, dof].
Returns:
Tuple[Tensor, Tensor, Tensor]: Link positions, link quaternions, and robot spheres.
"""
link_pos, link_quat, robot_spheres = get_cuda_kinematics(
self._link_pos_seq,
self._link_quat_seq,
@@ -401,22 +636,36 @@ class CudaRobotModel(CudaRobotModelConfig):
return link_pos, link_quat, robot_spheres
@property
def all_articulated_joint_names(self):
def all_articulated_joint_names(self) -> List[str]:
"""Names of all articulated joints of the robot."""
return self.kinematics_config.non_fixed_joint_names
def get_self_collision_config(self) -> SelfCollisionKinematicsConfig:
"""Get self collision configuration parameters of the robot."""
return self.self_collision_config
def get_link_mesh(self, link_name: str) -> Mesh:
"""Get mesh of a link of the robot."""
mesh = self.kinematics_parser.get_link_mesh(link_name)
return mesh
def get_link_transform(self, link_name: str) -> Pose:
mat = self.kinematics_config.fixed_transforms[self.kinematics_config.link_name_to_idx_map[link_name]]
"""Get pose offset of a link from it's parent joint.
Args:
link_name: Name of link to get pose of.
Returns:
Pose: Pose of the link.
"""
mat = self.kinematics_config.fixed_transforms[
self.kinematics_config.link_name_to_idx_map[link_name]
]
pose = Pose(position=mat[:3, 3], rotation=mat[:3, :3])
return pose
def get_all_link_transforms(self) -> Pose:
"""Get offset pose of all links with respect to their parent joint."""
pose = Pose(
self.kinematics_config.fixed_transforms[:, :3, 3],
rotation=self.kinematics_config.fixed_transforms[:, :3, :3],
@@ -424,32 +673,59 @@ class CudaRobotModel(CudaRobotModelConfig):
return pose
def get_dof(self) -> int:
"""Get degrees of freedom of the robot."""
return self.kinematics_config.n_dof
@property
def dof(self) -> int:
"""Degrees of freedom of the robot."""
return self.kinematics_config.n_dof
@property
def joint_names(self) -> List[str]:
"""Names of actuated joints."""
return self.kinematics_config.joint_names
@property
def total_spheres(self) -> int:
"""Number of spheres used to approximate robot geometry."""
return self.kinematics_config.total_spheres
@property
def lock_jointstate(self):
def lock_jointstate(self) -> JointState:
"""State of joints that are locked in the kinematic representation."""
return self.kinematics_config.lock_jointstate
def get_full_js(self, js: JointState):
def get_full_js(self, js: JointState) -> JointState:
"""Get state of all joints, including locked joints.
This function will not provide state of mimic joints. If you need mimic joints, use
:func:`~get_mimic_js`.
Args:
js: State containing articulated joints.
Returns:
JointState: State of all joints.
"""
all_joint_names = self.all_articulated_joint_names
lock_joint_state = self.lock_jointstate
new_js = js.get_augmented_joint_state(all_joint_names, lock_joint_state)
return new_js
def get_mimic_js(self, js: JointState):
def get_mimic_js(self, js: JointState) -> JointState:
"""Get state of mimic joints from active joints.
Current implementation uses a for loop over joints to calculate the state. This can be
optimized by using a custom CUDA kernel or a matrix multiplication.
Args:
js: State containing articulated joints.
Returns:
JointState: State of active, locked, and mimic joints.
"""
if self.kinematics_config.mimic_joints is None:
return None
extra_joints = {"position": [], "joint_names": []}
@@ -467,21 +743,118 @@ class CudaRobotModel(CudaRobotModelConfig):
new_js = js.get_augmented_joint_state(js.joint_names + extra_js.joint_names, extra_js)
return new_js
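A hedged sketch of expanding an actuated-joint state to all joints. ``robot_model`` and the zero configuration are placeholders, and ``JointState.from_position`` is assumed to accept a position tensor and joint names.

```python
# Sketch: recover locked and mimic joint states from an actuated-joint state.
# `robot_model` is assumed to be a CudaRobotModel instance.
import torch
from curobo.types.state import JointState

js = JointState.from_position(
    position=torch.zeros((1, robot_model.dof), device=robot_model.tensor_args.device),
    joint_names=robot_model.joint_names,
)
full_js = robot_model.get_full_js(js)    # adds locked joints at their locked values
mimic_js = robot_model.get_mimic_js(js)  # None if the robot has no mimic joints
print(full_js.joint_names)
```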
def update_kinematics_config(self, new_kin_config: KinematicsTensorConfig):
"""Update kinematics representation of the robot.
A kinematics representation can be updated with new parameters. One example is updating the
state of locked joints when a robot grasps an object. Another is when using different planners
for different parts of the robot, e.g., updating the state of the robot base or of another arm.
Updates must keep the same tensor dimensions; otherwise this class requires reinitialization.
Args:
new_kin_config: New kinematics representation of the robot.
"""
self.kinematics_config.copy_(new_kin_config)
def attach_external_objects_to_robot(
self,
joint_state: JointState,
external_objects: List[Obstacle],
surface_sphere_radius: float = 0.001,
link_name: str = "attached_object",
sphere_fit_type: SphereFitType = SphereFitType.VOXEL_VOLUME_SAMPLE_SURFACE,
voxelize_method: str = "ray",
world_objects_pose_offset: Optional[Pose] = None,
) -> bool:
"""Attach external objects to a robot's link. See :ref:`attach_object_note` for details.
Args:
joint_state: Joint state of the robot.
external_objects: List of external objects to attach to the robot.
surface_sphere_radius: Radius (in meters) to use for points sampled on surface of the
object. A smaller radius will allow for generating motions very close to obstacles.
link_name: Name of the link (frame) to attach the objects to. The assumption is that
this link does not have any geometry and all spheres of this link represent
attached objects.
sphere_fit_type: Sphere fit algorithm to use. See :ref:`attach_object_note` for more
details. The default method :attr:`SphereFitType.VOXEL_VOLUME_SAMPLE_SURFACE`
voxelizes the volume of the objects and adds spheres representing the voxels, then
samples points on the surface of the object and adds spheres of radius
:attr:`surface_sphere_radius` at these points. This should be used for most cases.
voxelize_method: Method to use for voxelization, passed to
:py:func:`trimesh.voxel.creation.voxelize`.
world_objects_pose_offset: Offset to apply to the object poses before attaching to the
robot. This is useful when attaching an object that's in contact with the world.
The offset is applied in the world frame before attaching to the robot.
"""
log_info("Attach objects to robot")
if len(external_objects) == 0:
log_error("no object in external_objects")
kin_state = self.compute_kinematics(joint_state)
ee_pose = kin_state.ee_pose # w_T_ee
if world_objects_pose_offset is not None:
# add offset from ee:
ee_pose = world_objects_pose_offset.inverse().multiply(ee_pose)
# new ee_pose:
# w_T_ee = offset_T_w * w_T_ee
# ee_T_w
ee_pose = ee_pose.inverse() # ee_T_w to multiply all objects later
max_spheres = self.kinematics_config.get_number_of_spheres(link_name)
object_names = [x.name for x in external_objects]
n_spheres = int(max_spheres / len(object_names))
sphere_tensor = torch.zeros((max_spheres, 4))
sphere_tensor[:, 3] = -10.0
sph_list = []
if n_spheres == 0:
log_warn(
"No spheres found, max_spheres: "
+ str(max_spheres)
+ " n_objects: "
+ str(len(object_names))
)
return False
for i, x in enumerate(object_names):
obs = external_objects[i]
sph = obs.get_bounding_spheres(
n_spheres,
surface_sphere_radius,
pre_transform_pose=ee_pose,
tensor_args=self.tensor_args,
fit_type=sphere_fit_type,
voxelize_method=voxelize_method,
)
sph_list += [s.position + [s.radius] for s in sph]
log_info("MG: Computed spheres for attach objects to robot")
spheres = self.tensor_args.to_device(torch.as_tensor(sph_list))
if spheres.shape[0] > max_spheres:
spheres = spheres[:max_spheres]  # truncate to the link's sphere budget
sphere_tensor[: spheres.shape[0], :] = spheres.contiguous()
self.kinematics_config.attach_object(sphere_tensor=sphere_tensor, link_name=link_name)
return True
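A hedged usage sketch for the attach API above. It assumes ``robot_model`` is a ``CudaRobotModel``, ``js`` is the robot's current ``JointState``, and that ``curobo.geom.types.Cuboid`` takes a name, a pose as ``[x, y, z, qw, qx, qy, qz]``, and dims in meters; these constructor details are not confirmed by this diff.

```python
# Sketch: attach a small box to the robot's "attached_object" link.
from curobo.geom.types import Cuboid

box = Cuboid(
    name="picked_box",
    pose=[0.5, 0.0, 0.3, 1.0, 0.0, 0.0, 0.0],  # x, y, z, qw, qx, qy, qz (assumed layout)
    dims=[0.05, 0.05, 0.05],
)
success = robot_model.attach_external_objects_to_robot(
    joint_state=js,
    external_objects=[box],
    surface_sphere_radius=0.005,
)
print("attached:", success)
```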
@property
def ee_link(self):
def ee_link(self) -> str:
"""End-effector link of the robot. Changing requires reinitializing this class."""
return self.kinematics_config.ee_link
@property
def base_link(self):
def base_link(self) -> str:
"""Base link of the robot. Changing requires reinitializing this class."""
return self.kinematics_config.base_link
@property
def robot_spheres(self):
"""Spheres representing robot geometry."""
return self.kinematics_config.link_spheres
def update_kinematics_config(self, new_kin_config: KinematicsTensorConfig):
self.kinematics_config.copy_(new_kin_config)
@property
def retract_config(self):
def retract_config(self) -> torch.Tensor:
"""Retract configuration of the robot. Use :func:`~joint_names` to get joint names."""
return self.kinematics_config.cspace.retract_config

View File

@@ -8,10 +8,21 @@
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
#
"""
Base module for parsing kinematics from different representations.
cuRobo provides kinematics parsing from an URDF and a partial implementation for parsing from
a USD. To parse from other representations, a user can extend the :class:`~KinematicsParser`
class and implement only the abstract methods (a minimal subclass sketch is included at the end
of this file). Optionally, the user can also provide functions for reading meshes, which is
useful for debugging and visualization.
"""
from __future__ import annotations
# Standard Library
from abc import abstractmethod
from dataclasses import dataclass, field
from typing import Dict, List, Optional
from typing import Any, Dict, List, Optional
# Third Party
import numpy as np
@@ -25,6 +36,8 @@ from curobo.types.math import Pose
@dataclass
class LinkParams:
"""Parameters of a link in the kinematic tree."""
link_name: str
joint_name: str
joint_type: JointType
@@ -38,7 +51,15 @@ class LinkParams:
mimic_joint_name: Optional[str] = None
@staticmethod
def from_dict(dict_data):
def from_dict(dict_data: Dict[str, Any]) -> LinkParams:
"""Create a LinkParams object from a dictionary.
Args:
dict_data: Dictionary containing link parameters.
Returns:
LinkParams: Link parameters object.
"""
dict_data["joint_type"] = JointType[dict_data["joint_type"]]
dict_data["fixed_transform"] = (
Pose.from_list(dict_data["fixed_transform"], tensor_args=TensorDeviceType())
@@ -50,7 +71,21 @@ class LinkParams:
class KinematicsParser:
"""
Base class for parsing kinematics.
Implement abstractmethods to parse kinematics from any representation. Optionally, implement
methods for reading meshes for visualization and debugging.
"""
def __init__(self, extra_links: Optional[Dict[str, LinkParams]] = None) -> None:
"""Initialize the KinematicsParser.
Args:
extra_links: Additional links to be added to the kinematic tree.
"""
#: Parent link for each link in the kinematic tree.
self._parent_map = {}
self.extra_links = extra_links
self.build_link_parent()
@@ -59,10 +94,45 @@ class KinematicsParser:
for i in self.extra_links:
self._parent_map[i] = {"parent": self.extra_links[i].parent_link_name}
@abstractmethod
def build_link_parent(self):
"""Build a map of parent links to each link in the kinematic tree.
NOTE: Use this function to fill self._parent_map.
Use this function to fill ``_parent_map``. Check
:meth:`curobo.cuda_robot_model.urdf_kinematics_parser.UrdfKinematicsParser.build_link_parent`
for an example implementation.
"""
pass
@abstractmethod
def get_link_parameters(self, link_name: str, base: bool = False) -> LinkParams:
"""Get parameters of a link in the kinematic tree.
Args:
link_name: Name of the link.
base: Is this the base link of the robot?
Returns:
LinkParams: Parameters of the link.
"""
pass
def add_absolute_path_to_link_meshes(self, mesh_dir: str = ""):
"""Add absolute path to link meshes.
Args:
mesh_dir: Absolute path to the directory containing link meshes.
"""
pass
def get_link_mesh(self, link_name: str) -> Mesh:
"""Get mesh of a link.
Args:
link_name: Name of the link.
Returns:
Mesh: Mesh of the link.
"""
pass
@@ -85,18 +155,31 @@ class KinematicsParser:
chain_links.reverse()
return chain_links
def get_controlled_joint_names(self) -> List[str]:
"""Get names of all controlled joints in the robot.
Returns:
Names of all controlled joints in the robot.
"""
j_list = []
for k in self._parent_map.keys():
joint_name = self._parent_map[k]["joint_name"]
joint = self._robot.joint_map[joint_name]
if joint.type != "fixed" and joint.mimic is None:
j_list.append(joint_name)
return j_list
def _get_from_extra_links(self, link_name: str) -> LinkParams:
"""Get link parameters for extra links.
Args:
link_name: Name of the link.
Returns:
LinkParams: Link parameters if found, else None.
"""
if self.extra_links is None:
return None
if link_name in self.extra_links.keys():
return self.extra_links[link_name]
return None
def get_link_parameters(self, link_name: str, base: bool = False) -> LinkParams:
pass
def add_absolute_path_to_link_meshes(self, mesh_dir: str = ""):
pass
def get_link_mesh(self, link_name: str) -> Mesh:
pass
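As noted in the module docstring, other robot representations can be supported by subclassing ``KinematicsParser``. The sketch below is a hypothetical parser over a hard-coded one-joint chain; the parent-map layout mirrors the URDF parser, and filling every ``LinkParams`` field is robot-specific and intentionally left as a stub.

```python
# Sketch: minimal custom kinematics parser. The parent map layout
# ({child_link: {"parent": ..., "joint_name": ...}}) follows the URDF parser's
# build_link_parent; get_link_parameters is stubbed because the LinkParams fields
# depend on the robot description being parsed.
from curobo.cuda_robot_model.kinematics_parser import KinematicsParser, LinkParams


class HardcodedKinematicsParser(KinematicsParser):
    def build_link_parent(self):
        # one joint connecting base_link -> link_1
        self._parent_map = {"link_1": {"parent": "base_link", "joint_name": "joint_1"}}

    def get_link_parameters(self, link_name: str, base: bool = False) -> LinkParams:
        # Fill LinkParams (joint type, fixed transform, limits, ...) from your representation.
        raise NotImplementedError("populate LinkParams for your robot representation")
```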

View File

@@ -8,6 +8,8 @@
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
#
"""Common structures used for Kinematics are defined in this module."""
from __future__ import annotations
# Standard Library
@@ -28,33 +30,87 @@ from curobo.util.tensor_util import clone_if_not_none, copy_if_not_none
class JointType(Enum):
"""Type of Joint. Arbitrary axis of change is not supported."""
#: Fixed joint.
FIXED = -1
#: Prismatic joint along x-axis.
X_PRISM = 0
#: Prismatic joint along y-axis.
Y_PRISM = 1
#: Prismatic joint along z-axis.
Z_PRISM = 2
#: Revolute joint along x-axis.
X_ROT = 3
#: Revolute joint along y-axis.
Y_ROT = 4
#: Revolute joint along z-axis.
Z_ROT = 5
#: Prismatic joint along negative x-axis.
X_PRISM_NEG = 6
#: Prismatic joint along negative y-axis.
Y_PRISM_NEG = 7
#: Prismatic joint along negative z-axis.
Z_PRISM_NEG = 8
#: Revolute joint along negative x-axis.
X_ROT_NEG = 9
#: Revolute joint along negative y-axis.
Y_ROT_NEG = 10
#: Revolute joint along negative z-axis.
Z_ROT_NEG = 11
@dataclass
class JointLimits:
"""Joint limits for a robot."""
#: Names of the joints. All tensors are indexed by joint names.
joint_names: List[str]
#: Position limits for each joint. Shape [2, n_joints] with the first row containing minimum
#: limits and the second row containing maximum limits.
position: torch.Tensor
#: Velocity limits for each joint. Same layout as position limits.
velocity: torch.Tensor
#: Acceleration limits for each joint. Same layout as position limits.
acceleration: torch.Tensor
#: Jerk limits for each joint. Same layout as position limits.
jerk: torch.Tensor
#: Effort limits for each joint. This is not used.
effort: Optional[torch.Tensor] = None
#: Device and floating point precision for tensors.
tensor_args: TensorDeviceType = TensorDeviceType()
@staticmethod
def from_data_dict(data: Dict, tensor_args: TensorDeviceType = TensorDeviceType()):
def from_data_dict(
data: Dict, tensor_args: TensorDeviceType = TensorDeviceType()
) -> JointLimits:
"""Create JointLimits from a dictionary.
Args:
data: Dictionary containing joint limits. E.g., {"position": [0, 1], ...}.
tensor_args: Device and floating point precision for tensors.
Returns:
JointLimits: Joint limits instance.
"""
p = tensor_args.to_device(data["position"])
v = tensor_args.to_device(data["velocity"])
a = tensor_args.to_device(data["acceleration"])
@@ -66,6 +122,7 @@ class JointLimits:
return JointLimits(data["joint_names"], p, v, a, j, e)
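A small sketch of the dictionary layout accepted by ``from_data_dict``, for a hypothetical 2-dof robot. Each limit entry is ``[2, n_joints]`` with row 0 holding minimum values and row 1 maximum values; the optional ``effort`` key is omitted since its handling is not visible in this hunk.

```python
# Sketch: construct JointLimits for a hypothetical 2-dof robot.
from curobo.cuda_robot_model.types import JointLimits
from curobo.types.base import TensorDeviceType

data = {
    "joint_names": ["joint_1", "joint_2"],
    "position": [[-2.0, -1.5], [2.0, 1.5]],  # row 0: min, row 1: max
    "velocity": [[-1.0, -1.0], [1.0, 1.0]],
    "acceleration": [[-10.0, -10.0], [10.0, 10.0]],
    "jerk": [[-500.0, -500.0], [500.0, 500.0]],
}
limits = JointLimits.from_data_dict(data, tensor_args=TensorDeviceType())
print(limits.position.shape)  # torch.Size([2, 2])
```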
def clone(self) -> JointLimits:
"""Clone joint limits."""
return JointLimits(
self.joint_names.copy(),
self.position.clone(),
@@ -76,7 +133,15 @@ class JointLimits:
self.tensor_args,
)
def copy_(self, new_jl: JointLimits):
def copy_(self, new_jl: JointLimits) -> JointLimits:
"""Copy joint limits from another instance. This maintains reference and copies the data.
Args:
new_jl: JointLimits instance to copy from.
Returns:
JointLimits: Data copied joint limits.
"""
self.joint_names = new_jl.joint_names.copy()
self.position.copy_(new_jl.position)
self.velocity.copy_(new_jl.velocity)
@@ -87,19 +152,53 @@ class JointLimits:
@dataclass
class CSpaceConfig:
"""Configuration space parameters of the robot."""
#: Names of the joints.
joint_names: List[str]
#: Retract configuration for the robot. This is the configuration used to bias graph search
#: and also regularize inverse kinematics. This configuration is also used to initialize
#: the robot during warmup phase of an optimizer. Set this to a collision-free configuration
#: for good performance. When this configuration is in collision, it's not used to bias
#: graph search.
retract_config: Optional[T_DOF] = None
#: Weight for each joint in configuration space. Used to measure distance between nodes in
#: graph search-based planning.
cspace_distance_weight: Optional[T_DOF] = None
#: Weight for each joint, used in regularization cost term for inverse kinematics.
null_space_weight: Optional[T_DOF] = None
#: Device and floating point precision for tensors.
tensor_args: TensorDeviceType = TensorDeviceType()
#: Maximum acceleration for each joint. Accepts a scalar or a list of values for each joint.
max_acceleration: Union[float, List[float]] = 10.0
#: Maximum jerk for each joint. Accepts a scalar or a list of values for each joint.
max_jerk: Union[float, List[float]] = 500.0
#: Velocity scale for each joint. Accepts a scalar or a list of values for each joint.
#: This is used to scale the velocity limits for each joint.
velocity_scale: Union[float, List[float]] = 1.0
#: Acceleration scale for each joint. Accepts a scalar or a list of values for each joint.
#: This is used to scale the acceleration limits for each joint.
acceleration_scale: Union[float, List[float]] = 1.0
#: Jerk scale for each joint. Accepts a scalar or a list of values for each joint.
#: This is used to scale the jerk limits for each joint.
jerk_scale: Union[float, List[float]] = 1.0
#: Position limit clip value. This is used to clip the position limits for each joint.
#: Accepts a scalar or a list of values for each joint. This is useful to truncate limits
#: to account for any safety margins imposed by real robot controllers.
position_limit_clip: Union[float, List[float]] = 0.0
def __post_init__(self):
"""Post initialization checks and data transfer to device tensors."""
if self.retract_config is not None:
self.retract_config = self.tensor_args.to_device(self.retract_config)
if self.cspace_distance_weight is not None:
@@ -139,6 +238,8 @@ class CSpaceConfig:
self.acceleration_scale = self.tensor_args.to_device(self.acceleration_scale)
if isinstance(self.jerk_scale, List):
self.jerk_scale = self.tensor_args.to_device(self.jerk_scale)
if isinstance(self.position_limit_clip, List):
self.position_limit_clip = self.tensor_args.to_device(self.position_limit_clip)
# check shapes:
if self.retract_config is not None:
dof = self.retract_config.shape
@@ -148,6 +249,12 @@ class CSpaceConfig:
log_error("null_space_weight shape does not match retract_config")
def inplace_reindex(self, joint_names: List[str]):
"""Change order of joints in configuration space tensors to match given order of names.
Args:
joint_names: New order of joint names.
"""
new_index = [self.joint_names.index(j) for j in joint_names]
if self.retract_config is not None:
self.retract_config = self.retract_config[new_index].clone()
@@ -163,7 +270,18 @@ class CSpaceConfig:
joint_names = [self.joint_names[n] for n in new_index]
self.joint_names = joint_names
def copy_(self, new_config: CSpaceConfig):
def copy_(self, new_config: CSpaceConfig) -> CSpaceConfig:
"""Copy parameters from another instance.
This maintains reference and copies the data. Assumes that the new instance has the same
number of joints as the current instance and the same tensor shapes.
Args:
new_config: New parameters to copy into current instance.
Returns:
CSpaceConfig: Same instance of cspace configuration which has updated parameters.
"""
self.joint_names = new_config.joint_names.copy()
self.retract_config = copy_if_not_none(new_config.retract_config, self.retract_config)
self.null_space_weight = copy_if_not_none(
@@ -183,6 +301,8 @@ class CSpaceConfig:
return self
def clone(self) -> CSpaceConfig:
"""Clone configuration space parameters."""
return CSpaceConfig(
joint_names=self.joint_names.copy(),
retract_config=clone_if_not_none(self.retract_config),
@@ -194,9 +314,22 @@ class CSpaceConfig:
velocity_scale=self.velocity_scale.clone(),
acceleration_scale=self.acceleration_scale.clone(),
jerk_scale=self.jerk_scale.clone(),
position_limit_clip=(
self.position_limit_clip.clone()
if isinstance(self.position_limit_clip, torch.Tensor)
else self.position_limit_clip
),
)
def scale_joint_limits(self, joint_limits: JointLimits):
def scale_joint_limits(self, joint_limits: JointLimits) -> JointLimits:
"""Scale joint limits by the given scale factors.
Args:
joint_limits: Joint limits to scale.
Returns:
JointLimits: Scaled joint limits.
"""
if self.velocity_scale is not None:
joint_limits.velocity = joint_limits.velocity * self.velocity_scale
if self.acceleration_scale is not None:
@@ -212,7 +345,20 @@ class CSpaceConfig:
joint_position_lower: torch.Tensor,
joint_names: List[str],
tensor_args: TensorDeviceType = TensorDeviceType(),
):
) -> CSpaceConfig:
"""Load CSpace configuration from joint limits.
Args:
joint_position_upper: Upper position limits for each joint.
joint_position_lower: Lower position limits for each joint.
joint_names: Names of the joints. This should match the order of joints in the upper
and lower limits.
tensor_args: Device and floating point precision for tensors.
Returns:
CSpaceConfig: CSpace configuration with retract configuration set to the middle of the
joint limits and all weights set to 1.
"""
retract_config = ((joint_position_upper + joint_position_lower) / 2).flatten()
n_dof = retract_config.shape[-1]
null_space_weight = torch.ones(n_dof, **(tensor_args.as_torch_dict()))
@@ -238,25 +384,26 @@ class KinematicsTensorConfig:
#: Static Homogenous Transform from parent link to child link for all links [n_links,4,4].
fixed_transforms: torch.Tensor
#: index of fixed_transform given link index [n_links].
#: Index of fixed_transform given link index [n_links].
link_map: torch.Tensor
#: joint index given link index [n_links].
#: Joint index given link index [n_links].
joint_map: torch.Tensor
#: type of joint given link index [n_links].
#: Type of joint given link index [n_links].
joint_map_type: torch.Tensor
#: Joint offset to store scalars for mimic joints and negative axis joints.
joint_offset_map: torch.Tensor
#: index of link to write out pose [n_store_links].
#: Index of link to write out pose [n_store_links].
store_link_map: torch.Tensor
#: Mapping between each link to every other link, this is used to check
#: if a link is part of a serial chain formed by another link [n_links, n_links].
link_chain_map: torch.Tensor
#: Name of links whose pose will be stored [n_store_links].
#: Name of links to compute pose during kinematics computation [n_store_links].
link_names: List[str]
#: Joint limits
@@ -274,31 +421,32 @@ class KinematicsTensorConfig:
#: Name of all actuated joints.
joint_names: Optional[List[str]] = None
#:
#: Name of joints to lock to a fixed value along with the locked value
lock_jointstate: Optional[JointState] = None
#:
#: Joints that mimic other joints. This will be populated by :class:`~CudaRobotGenerator`
#: when parsing the kinematics of the robot.
mimic_joints: Optional[dict] = None
#:
#: Sphere representation of the robot's geometry. This is used for collision detection.
link_spheres: Optional[torch.Tensor] = None
#:
#: Mapping of link index to sphere index. This is used to get spheres for a link.
link_sphere_idx_map: Optional[torch.Tensor] = None
#:
#: Mapping of link name to link index. This is used to get link index from link name.
link_name_to_idx_map: Optional[Dict[str, int]] = None
#: total number of spheres that represent the robot's geometry.
#: Total number of spheres that represent the robot's geometry.
total_spheres: int = 0
#: Additional debug parameters.
debug: Optional[Any] = None
#: index of end-effector in stored link poses.
#: Index of end-effector in stored link poses.
ee_idx: int = 0
#: Cspace configuration
#: Cspace parameters for the robot.
cspace: Optional[CSpaceConfig] = None
#: Name of base link. This is the root link from which all kinematic parameters were computed.
@@ -312,6 +460,7 @@ class KinematicsTensorConfig:
reference_link_spheres: Optional[torch.Tensor] = None
def __post_init__(self):
"""Post initialization checks and data transfer to device tensors."""
if self.cspace is None and self.joint_limits is not None:
self.load_cspace_cfg_from_kinematics()
if self.joint_limits is not None and self.cspace is not None:
@@ -320,6 +469,18 @@ class KinematicsTensorConfig:
self.reference_link_spheres = self.link_spheres.clone()
def copy_(self, new_config: KinematicsTensorConfig) -> KinematicsTensorConfig:
"""Copy parameters from another instance into current instance.
This maintains reference and copies the data. Assumes that the new instance has the same
number of joints as the current instance and the same tensor shapes.
Args:
new_config: New parameters to copy into current instance.
Returns:
KinematicsTensorConfig: Same instance of kinematics configuration which has updated
parameters.
"""
self.fixed_transforms.copy_(new_config.fixed_transforms)
self.link_map.copy_(new_config.link_map)
self.joint_map.copy_(new_config.joint_map)
@@ -350,10 +511,17 @@ class KinematicsTensorConfig:
self.link_names = new_config.link_names
self.mesh_link_names = new_config.mesh_link_names
self.total_spheres = new_config.total_spheres
if self.lock_jointstate is not None and new_config.lock_jointstate is not None:
self.lock_jointstate.copy_(new_config.lock_jointstate)
self.mimic_joints = new_config.mimic_joints
return self
def load_cspace_cfg_from_kinematics(self):
"""Load CSpace configuration from joint limits.
This sets the retract configuration to the middle of the joint limits and all weights to 1.
"""
retract_config = (
(self.joint_limits.position[1] + self.joint_limits.position[0]) / 2
).flatten()
@@ -371,22 +539,27 @@ class KinematicsTensorConfig:
)
def get_sphere_index_from_link_name(self, link_name: str) -> torch.Tensor:
"""Get indices of spheres for a link.
Args:
link_name: Name of the link.
Returns:
torch.Tensor: Indices of spheres for the link.
"""
link_idx = self.link_name_to_idx_map[link_name]
# get sphere index for this link:
link_spheres_idx = torch.nonzero(self.link_sphere_idx_map == link_idx).view(-1)
return link_spheres_idx
def update_link_spheres(
self, link_name: str, sphere_position_radius: torch.Tensor, start_sph_idx: int = 0
):
"""Update sphere parameters
NOTE: This currently does not update self collision distances.
"""Update sphere parameters of a specific link given by name.
Args:
link_name: _description_
sphere_position_radius: _description_
start_sph_idx: _description_. Defaults to 0.
link_name: Name of the link.
sphere_position_radius: Tensor of shape [n_spheres, 4] with columns [x, y, z, r].
start_sph_idx: If providing a subset of spheres, this is the starting index.
"""
# get sphere indices from link name:
link_sphere_index = self.get_sphere_index_from_link_name(link_name)[
@@ -398,14 +571,31 @@ class KinematicsTensorConfig:
def get_link_spheres(
self,
link_name: str,
):
) -> torch.Tensor:
"""Get spheres of a link.
Args:
link_name: Name of link.
Returns:
torch.Tensor: Spheres of the link with shape [n_spheres, 4] with columns [x, y, z, r].
"""
link_sphere_index = self.get_sphere_index_from_link_name(link_name)
return self.link_spheres[link_sphere_index, :]
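A short sketch of reading and updating a link's spheres, assuming ``kin_config`` is a ``KinematicsTensorConfig`` and that ``panda_hand`` is an illustrative link name with collision spheres.

```python
# Sketch: inflate all spheres of one link by 5 mm. "panda_hand" is an illustrative link name.
spheres = kin_config.get_link_spheres("panda_hand").clone()  # [n_spheres, 4]: x, y, z, r
spheres[:, 3] += 0.005
kin_config.update_link_spheres("panda_hand", spheres)
```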
def get_reference_link_spheres(
self,
link_name: str,
):
) -> torch.Tensor:
"""Get link spheres from the original robot configuration data before any modifications.
Args:
link_name: Name of link.
Returns:
torch.Tensor: Spheres of the link with shape [n_spheres, 4] with columns [x, y, z, r].
"""
link_sphere_index = self.get_sphere_index_from_link_name(link_name)
return self.reference_link_spheres[link_sphere_index, :]
@@ -415,47 +605,45 @@ class KinematicsTensorConfig:
sphere_tensor: Optional[torch.Tensor] = None,
link_name: str = "attached_object",
) -> bool:
"""_summary_
"""Attach object approximated by spheres to a link of the robot.
This function updates the sphere parameters of the link to represent the attached object.
Args:
sphere_radius: _description_. Defaults to None.
sphere_tensor: _description_. Defaults to None.
link_name: _description_. Defaults to "attached_object".
Raises:
ValueError: _description_
ValueError: _description_
sphere_radius: Radius to change for existing spheres. If changing position of spheres
as well, then set this to None.
sphere_tensor: New sphere tensor to replace existing spheres. Shape [n_spheres, 4] with
columns [x, y, z, r]. If changing only radius, set this to None and use
sphere_radius.
link_name: Name of the link to attach object to. Defaults to "attached_object".
Returns:
_description_
bool: True if successful.
"""
if link_name not in self.link_name_to_idx_map.keys():
raise ValueError(link_name + " not found in spheres")
log_error(link_name + " not found in spheres")
curr_spheres = self.get_link_spheres(link_name)
if sphere_radius is not None:
curr_spheres[:, 3] = sphere_radius
if sphere_tensor is not None:
if sphere_tensor.shape != curr_spheres.shape and sphere_tensor.shape[0] != 1:
raise ValueError("sphere_tensor shape does not match current spheres")
log_error("sphere_tensor shape does not match current spheres")
curr_spheres[:, :] = sphere_tensor
self.update_link_spheres(link_name, curr_spheres)
return True
def detach_object(self, link_name: str = "attached_object") -> bool:
"""Detach object from robot end-effector
"""Detach object spheres from a link by setting all spheres to zero with negative radius.
Args:
link_name: _description_. Defaults to "attached_object".
Raises:
ValueError: _description_
link_name: Name of the link to detach object from.
Returns:
_description_
bool: True if successful.
"""
if link_name not in self.link_name_to_idx_map.keys():
raise ValueError(link_name + " not found in spheres")
log_error(link_name + " not found in spheres")
curr_spheres = self.get_link_spheres(link_name)
curr_spheres[:, 3] = -100.0
curr_spheres[:, :3] = 0.0
@@ -472,15 +660,25 @@ class KinematicsTensorConfig:
return self.get_link_spheres(link_name).shape[0]
def disable_link_spheres(self, link_name: str):
"""Disable spheres of a link by setting all spheres to zero with negative radius.
Args:
link_name: Name of the link to disable spheres.
"""
if link_name not in self.link_name_to_idx_map.keys():
raise ValueError(link_name + " not found in spheres")
log_error(link_name + " not found in spheres")
curr_spheres = self.get_link_spheres(link_name)
curr_spheres[:, 3] = -100.0
self.update_link_spheres(link_name, curr_spheres)
def enable_link_spheres(self, link_name: str):
"""Enable spheres of a link by resetting to values from initial robot configuration data.
Args:
link_name: Name of the link to enable spheres.
"""
if link_name not in self.link_name_to_idx_map.keys():
raise ValueError(link_name + " not found in spheres")
log_error(link_name + " not found in spheres")
curr_spheres = self.get_reference_link_spheres(link_name)
self.update_link_spheres(link_name, curr_spheres)
@@ -489,10 +687,29 @@ class KinematicsTensorConfig:
class SelfCollisionKinematicsConfig:
"""Dataclass that stores self collision attributes to pass to cuda kernel."""
#: Offset radii for each sphere. This is used to inflate the spheres for self collision
#: detection.
offset: Optional[torch.Tensor] = None
distance_threshold: Optional[torch.Tensor] = None
#: Sphere index to use for a given thread.
thread_location: Optional[torch.Tensor] = None
#: Maximum number of threads to launch for computing self collision between spheres.
thread_max: Optional[int] = None
collision_matrix: Optional[torch.Tensor] = None
#: Distance threshold for self collision detection. This is currently not used.
distance_threshold: Optional[torch.Tensor] = None
#: Two kernel implementations are available. Set this to True to use the experimental kernel
#: which is faster. Set this to False to use the collision matrix based kernel which is slower.
experimental_kernel: bool = True
#: Collision matrix containing information about which pair of spheres need to be checked for
#: collision. This is only used when experimental_kernel is set to False.
collision_matrix: Optional[torch.Tensor] = None
#: Number of collision checks to perform per thread. Each thread loads a sphere and is allowed
#: to check up to checks_per_thread other spheres for collision. Note that all checks have to
#: be performed within 1024 threads as shared memory is used. So,
#: checks_per_thread * n_spheres <= 1024.
checks_per_thread: int = 32

View File

@@ -8,8 +8,13 @@
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
#
"""
Parses Kinematics from an `URDF <https://wiki.ros.org/urdf>`__ file that describes the
kinematic tree of a robot.
"""
# Standard Library
from typing import Dict, List, Optional
from typing import Dict, List, Optional, Tuple
# Third Party
import numpy as np
@@ -20,13 +25,14 @@ from lxml import etree
from curobo.cuda_robot_model.kinematics_parser import KinematicsParser, LinkParams
from curobo.cuda_robot_model.types import JointType
from curobo.geom.types import Mesh as CuroboMesh
from curobo.types.base import TensorDeviceType
from curobo.types.math import Pose
from curobo.util.logger import log_error, log_warn
from curobo.util_file import join_path
class UrdfKinematicsParser(KinematicsParser):
"""Parses Kinematics from an URDF file and provides access to the kinematic tree."""
def __init__(
self,
urdf_path,
@@ -35,6 +41,16 @@ class UrdfKinematicsParser(KinematicsParser):
extra_links: Optional[Dict[str, LinkParams]] = None,
build_scene_graph: bool = False,
) -> None:
"""Initialize instance with URDF file path.
Args:
urdf_path: Path to the URDF file.
load_meshes: Load meshes for links from the URDF file.
mesh_root: Absolute path to the directory where link meshes are stored.
extra_links: Extra links to add to the kinematic tree.
build_scene_graph: Build scene graph for the robot. Set this to True if you want to
determine the root link of the robot.
"""
# load robot from urdf:
self._robot = yourdfpy.URDF.load(
urdf_path,
@@ -46,6 +62,7 @@ class UrdfKinematicsParser(KinematicsParser):
super().__init__(extra_links)
def build_link_parent(self):
"""Build parent map for the robot."""
self._parent_map = {}
for jid, j in enumerate(self._robot.joint_map):
self._parent_map[self._robot.joint_map[j].child] = {
@@ -54,11 +71,31 @@ class UrdfKinematicsParser(KinematicsParser):
"joint_name": j,
}
def _get_joint_name(self, idx):
def _get_joint_name(self, idx) -> str:
"""Get the name of the joint at the given index.
Args:
idx: Index of the joint.
Returns:
str: Name of the joint.
"""
joint = self._robot.joint_names[idx]
return joint
def _get_joint_limits(self, joint):
def _get_joint_limits(self, joint: yourdfpy.Joint) -> Tuple[Dict[str, float], str]:
"""Get the limits of a joint.
This function converts continuous joints to revolute joints with limits [-6.28, 6.28].
Args:
joint: Instance of the joint.
Returns:
Tuple[Dict[str, float], str]: Limits of the joint and the type of the joint
(revolute or prismatic).
"""
joint_type = joint.type
if joint_type != "continuous":
joint_limits = {
@@ -79,6 +116,16 @@ class UrdfKinematicsParser(KinematicsParser):
return joint_limits, joint_type
def get_link_parameters(self, link_name: str, base=False) -> LinkParams:
"""Get parameters of a link in the kinematic tree.
Args:
link_name: Name of the link.
base: Is this the base link of the robot?
Returns:
LinkParams: Parameters of the link.
"""
link_params = self._get_from_extra_links(link_name)
if link_params is not None:
return link_params
@@ -122,7 +169,7 @@ class UrdfKinematicsParser(KinematicsParser):
mimic_joint_name = joint_name
active_joint_name = joint.mimic.joint
active_joint = self._robot.joint_map[active_joint_name]
active_joint_limits, active_joint_type = self._get_joint_limits(active_joint)
active_joint_limits, _ = self._get_joint_limits(active_joint)
# check to make sure mimic joint limits are not larger than active joint:
if (
active_joint_limits["lower"] * joint_offset[0] + joint_offset[1]
@@ -196,6 +243,11 @@ class UrdfKinematicsParser(KinematicsParser):
return link_params
def add_absolute_path_to_link_meshes(self, mesh_dir: str = ""):
"""Add absolute path to link meshes.
Args:
mesh_dir: Absolute path to the directory containing link meshes.
"""
# read all link meshes and update their mesh paths by prepending mesh_dir
links = self._robot.link_map
for k in links.keys():
@@ -211,11 +263,21 @@ class UrdfKinematicsParser(KinematicsParser):
if m is not None:
m.filename = join_path(mesh_dir, m.filename)
def get_urdf_string(self):
def get_urdf_string(self) -> str:
"""Get the contents of URDF as a string."""
txt = etree.tostring(self._robot.write_xml(), method="xml", encoding="unicode")
return txt
def get_link_mesh(self, link_name):
def get_link_mesh(self, link_name: str) -> CuroboMesh:
"""Get mesh of a link.
Args:
link_name: Name of the link.
Returns:
Mesh: Mesh of the link.
"""
link_data = self._robot.link_map[link_name]
if len(link_data.visuals) == 0:
@@ -237,14 +299,12 @@ class UrdfKinematicsParser(KinematicsParser):
)
@property
def root_link(self):
return self._robot.base_link
def root_link(self) -> str:
"""Returns the name of the base link of the robot.
def get_controlled_joint_names(self) -> List[str]:
j_list = []
for k in self._parent_map.keys():
joint_name = self._parent_map[k]["joint_name"]
joint = self._robot.joint_map[joint_name]
if joint.type != "fixed" and joint.mimic is None:
j_list.append(joint_name)
return j_list
Only available when the URDF is loaded with build_scene_graph=True.
Returns:
str: Name of the base link.
"""
return self._robot.base_link
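A hedged usage sketch for the URDF parser, with an illustrative file path and link name.

```python
# Sketch: parse a URDF and inspect its kinematic tree. The file and link names are
# illustrative; build_scene_graph=True is only needed to query root_link.
from curobo.cuda_robot_model.urdf_kinematics_parser import UrdfKinematicsParser

parser = UrdfKinematicsParser("franka_panda.urdf", load_meshes=False, build_scene_graph=True)
print(parser.root_link)                     # base link of the robot
print(parser.get_controlled_joint_names())  # actuated (non-fixed, non-mimic) joints
hand = parser.get_link_parameters("panda_hand")  # LinkParams for one link
```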

View File

@@ -8,6 +8,15 @@
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
#
"""
An experimental kinematics parser that reads the robot from a USD file or stage.
Basic loading of simple robots should work but more complex robots may not be supported. E.g.,
mimic joints cannot be parsed correctly. Use the URDF parser (:class:`~UrdfKinematicsParser`)
for more complex robots.
"""
# Standard Library
from typing import Dict, List, Optional, Tuple
@@ -33,8 +42,10 @@ except ImportError:
class UsdKinematicsParser(KinematicsParser):
"""An experimental kinematics parser from USD.
NOTE: A more complete solution will be available in a future release. Current implementation
does not account for link geometry transformations after a joints.
Current implementation does not account for link geometry transformations after a joint.
Also, it cannot read mimic joints.
"""
def __init__(
@@ -46,7 +57,17 @@ class UsdKinematicsParser(KinematicsParser):
tensor_args: TensorDeviceType = TensorDeviceType(),
extra_links: Optional[Dict[str, LinkParams]] = None,
) -> None:
# load usd file:
"""Initialize instance with USD file path.
Args:
usd_path: path to usd reference. This will be opened as a Usd Stage.
flip_joints: list of joint names whose axis should be flipped. This is required as the
current implementation does not read transformations from joint to link correctly.
flip_joint_limits: list of joint names to flip joint limits.
usd_robot_root: Root prim of the robot in the Usd Stage.
tensor_args: Device and floating point precision for tensors.
extra_links: Additional links to add to the robot kinematics structure.
"""
# create a usd stage
self._flip_joints = flip_joints
@@ -59,9 +80,11 @@ class UsdKinematicsParser(KinematicsParser):
@property
def robot_prim_root(self):
"""Root prim of the robot in the Usd Stage."""
return self._usd_robot_root
def build_link_parent(self):
"""Build a dictionary containing parent link for each link in the robot."""
self._parent_map = {}
all_joints = [
x
@@ -77,16 +100,16 @@ class UsdKinematicsParser(KinematicsParser):
def get_link_parameters(self, link_name: str, base: bool = False) -> LinkParams:
"""Get Link parameters from usd stage.
NOTE: USD kinematics "X" axis joints map to "Z" in URDF. Specifically,
USD kinematics "X" axis joints map to "Z" in URDF. Specifically,
uniform token physics:axis = "X" value only matches "Z" in URDF. This is because USD
files assume the Y axis as up while URDF files assume the Z axis as up.
Args:
link_name (str): Name of link.
base (bool, optional): flag to specify base link. Defaults to False.
base (bool, optional): Is this the base link of the robot?
Returns:
LinkParams: obtained link parameters.
LinkParams: Obtained link parameters.
"""
link_params = self._get_from_extra_links(link_name)
if link_params is not None:
@@ -157,7 +180,15 @@ class UsdKinematicsParser(KinematicsParser):
)
return link_params
def _get_joint_transform(self, prim: Usd.Prim):
def _get_joint_transform(self, prim: Usd.Prim) -> Pose:
"""Get pose of link from joint prim.
Args:
prim: joint prim in the usd stage.
Returns:
Pose: pose of the link from joint origin.
"""
j_prim = UsdPhysics.Joint(prim)
position = np.ravel(j_prim.GetLocalPos0Attr().Get())
quatf = j_prim.GetLocalRot0Attr().Get()
@@ -188,10 +219,16 @@ class UsdKinematicsParser(KinematicsParser):
def get_links_for_joint(prim: Usd.Prim) -> Tuple[Optional[Usd.Prim], Optional[Usd.Prim]]:
"""Get all link prims from the given joint prim.
Note:
This assumes that the `body0_rel_targets` and `body1_rel_targets` are configured such
that the parent link is specified in `body0_rel_targets` and the child links is specified
in `body1_rel_targets`.
This assumes that the `body0_rel_targets` and `body1_rel_targets` are configured such
that the parent link is specified in `body0_rel_targets` and the child links is specified
in `body1_rel_targets`.
Args:
prim: joint prim in the usd stage.
Returns:
Tuple[Optional[Usd.Prim], Optional[Usd.Prim]]: parent link prim and child link prim.
"""
stage = prim.GetStage()
joint_api = UsdPhysics.Joint(prim)

View File

@@ -0,0 +1,63 @@
#
# Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
#
""" This module contains a function to load robot representation from a yaml or xrdf file. """
# Standard Library
from typing import Optional, Union
# CuRobo
from curobo.types.file_path import ContentPath
from curobo.util.logger import log_error
from curobo.util.xrdf_utils import convert_xrdf_to_curobo
from curobo.util_file import join_path, load_yaml
def load_robot_yaml(content_path: ContentPath = ContentPath()) -> dict:
"""Load robot representation from a yaml or xrdf file.
Args:
content_path: Path to the robot configuration files.
Returns:
dict: Robot representation as a dictionary.
"""
if isinstance(content_path, str):
log_error("content_path should be of type ContentPath")
robot_data = load_yaml(content_path.get_robot_configuration_path())
if "format" in robot_data and robot_data["format"] == "xrdf":
robot_data = convert_xrdf_to_curobo(
content_path=content_path,
)
robot_data["robot_cfg"]["kinematics"][
"asset_root_path"
] = content_path.robot_asset_absolute_path
if "robot_cfg" not in robot_data:
robot_data["robot_cfg"] = robot_data
if "kinematics" not in robot_data["robot_cfg"]:
robot_data["robot_cfg"]["kinematics"] = robot_data
if content_path.robot_urdf_absolute_path is not None:
robot_data["robot_cfg"]["kinematics"]["urdf_path"] = content_path.robot_urdf_absolute_path
if content_path.robot_usd_absolute_path is not None:
robot_data["robot_cfg"]["kinematics"]["usd_path"] = content_path.robot_usd_absolute_path
if content_path.robot_asset_absolute_path is not None:
robot_data["robot_cfg"]["kinematics"][
"asset_root_path"
] = content_path.robot_asset_absolute_path
if isinstance(robot_data["robot_cfg"]["kinematics"]["collision_spheres"], str):
robot_data["robot_cfg"]["kinematics"]["collision_spheres"] = join_path(
content_path.robot_config_root_path,
robot_data["robot_cfg"]["kinematics"]["collision_spheres"],
)
return robot_data
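A minimal usage sketch for the loader above (file names are hypothetical and must exist under the configured root paths; load_robot_yaml is the function defined in this file):

# Sketch: load an XRDF-based robot description through ContentPath.
from curobo.types.file_path import ContentPath

content_path = ContentPath(
    robot_xrdf_file="my_robot.xrdf",  # hypothetical file under robot_xrdf_root_path
    robot_urdf_file="my_robot.urdf",  # hypothetical file under robot_urdf_root_path
)
robot_data = load_robot_yaml(content_path)
print(robot_data["robot_cfg"]["kinematics"]["urdf_path"])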

View File

@@ -67,6 +67,20 @@ namespace Curobo
q[0] = length * q[0];
}
__device__ __forceinline__ void normalize_quaternion(float4 &q)
{
// get length:
float inv_length = 1.0 / length(q);
if (q.w < 0.0)
{
inv_length = -1.0 * inv_length;
}
q = inv_length * q;
}
/**
* @brief get quaternion from transformation matrix
*
@@ -129,8 +143,9 @@ namespace Curobo
* @param t # rotation matrix 3x3
* @param q quaternion in wxyz format
*/
__device__ __forceinline__ void rot_to_quat(float *t, float *q)
__device__ __forceinline__ void rot_to_quat(float *t, float4 &q)
{
// q.x = w, q.y = x, q.z = y, q.w = z,
float n;
float n_sqrt;
@@ -140,19 +155,19 @@ namespace Curobo
{
n = 1 + t[0] - t[4] - t[8];
n_sqrt = 0.5 * rsqrtf(n);
q[1] = n * n_sqrt;
q[2] = (t[1] + t[3]) * n_sqrt;
q[3] = (t[6] + t[2]) * n_sqrt;
q[0] = -1 * (t[5] - t[7]) * n_sqrt; // * -1 ; // this is the wrong one?
q.x = n * n_sqrt;
q.y = (t[1] + t[3]) * n_sqrt;
q.z = (t[6] + t[2]) * n_sqrt;
q.w = -1 * (t[5] - t[7]) * n_sqrt; // * -1 ; // this is the wrong one?
}
else
{
n = 1 - t[0] + t[4] - t[8];
n_sqrt = 0.5 * rsqrtf(n);
q[1] = (t[1] + t[3]) * n_sqrt;
q[2] = n * n_sqrt;
q[3] = (t[5] + t[7]) * n_sqrt;
q[0] = -1 * (t[6] - t[2]) * n_sqrt;
q.x = (t[1] + t[3]) * n_sqrt;
q.y = n * n_sqrt;
q.z = (t[5] + t[7]) * n_sqrt;
q.w = -1 * (t[6] - t[2]) * n_sqrt;
}
}
else
@@ -161,19 +176,19 @@ namespace Curobo
{
n = 1 - t[0] - t[4] + t[8];
n_sqrt = 0.5 * rsqrtf(n);
q[1] = (t[6] + t[2]) * n_sqrt;
q[2] = (t[5] + t[7]) * n_sqrt;
q[3] = n * n_sqrt;
q[0] = -1 * (t[1] - t[3]) * n_sqrt;
q.x = (t[6] + t[2]) * n_sqrt;
q.y = (t[5] + t[7]) * n_sqrt;
q.z = n * n_sqrt;
q.w = -1 * (t[1] - t[3]) * n_sqrt;
}
else
{
n = 1 + t[0] + t[4] + t[8];
n_sqrt = 0.5 * rsqrtf(n);
q[1] = (t[5] - t[7]) * n_sqrt;
q[2] = (t[6] - t[2]) * n_sqrt;
q[3] = (t[1] - t[3]) * n_sqrt;
q[0] = -1 * n * n_sqrt;
q.x = (t[5] - t[7]) * n_sqrt;
q.y = (t[6] - t[2]) * n_sqrt;
q.z = (t[1] - t[3]) * n_sqrt;
q.w = -1 * n * n_sqrt;
}
}
normalize_quaternion(q);
@@ -244,6 +259,23 @@ namespace Curobo
C[3] = sphere_pos.w;
}
template<typename scalar_t>
__device__ __forceinline__ void
transform_sphere_float4(const float *transform_mat, const scalar_t *sphere, float4 &C)
{
float4 sphere_pos = *(float4 *)&sphere[0];
int st_idx = 0;
C.x = transform_mat[0] * sphere_pos.x + transform_mat[1] * sphere_pos.y +
transform_mat[2] * sphere_pos.z + transform_mat[3];
C.y = transform_mat[4] * sphere_pos.x + transform_mat[5] * sphere_pos.y +
transform_mat[6] * sphere_pos.z + transform_mat[7];
C.z = transform_mat[8] * sphere_pos.x + transform_mat[9] * sphere_pos.y +
transform_mat[10] * sphere_pos.z + transform_mat[11];
C.w = sphere_pos.w;
}
template<typename scalar_t>
__device__ __forceinline__ void fixed_joint_fn(const scalar_t *fixedTransform,
float *JM)
@@ -440,7 +472,7 @@ namespace Curobo
e_pos.z = cumul_mat[4 + 4 + 3];
// compute position gradient:
j_pos = *(float3 *)&l_pos[0] - e_pos; // - e_pos;
j_pos = make_float3(l_pos[0], l_pos[1], l_pos[2]) - e_pos; // - e_pos;
float3 scale_grad = axis_sign * loc_grad;
scale_cross_sum(vec, j_pos, scale_grad, grad_q); // cross product
}
@@ -688,7 +720,7 @@ namespace Curobo
for (int i = 0; i < M; i++)
{
cumul_mat[outAddrStart + (i * M) + col_idx] =
dot(*(float4 *)&cumul_mat[inAddrStart + (i * M)], *(float4 *)JM);
dot(*(float4 *)&cumul_mat[inAddrStart + (i * M)], make_float4(JM[0], JM[1], JM[2], JM[3]));
}
if (use_global_cumul)
@@ -723,13 +755,21 @@ namespace Curobo
// read cumul idx:
read_cumul_idx = linkSphereMap[sph_idx];
float spheres_mem[4];
float4 spheres_mem = make_float4(0.0, 0.0, 0.0, 0.0);
const int16_t sphAddrs = sph_idx * 4;
transform_sphere(&cumul_mat[matAddrBase + read_cumul_idx * 16],
&robot_spheres[sphAddrs], &spheres_mem[0]);
*(float4 *)&b_robot_spheres[batchAddrs + sphAddrs] =
*(float4 *)&spheres_mem[0];
transform_sphere_float4(&cumul_mat[matAddrBase + (read_cumul_idx * 16)],
&robot_spheres[sphAddrs], spheres_mem);
//b_robot_spheres[batchAddrs + sphAddrs] = spheres_mem[0];
//b_robot_spheres[batchAddrs + sphAddrs + 1] = spheres_mem[1];
//b_robot_spheres[batchAddrs + sphAddrs + 2] = spheres_mem[2];
//b_robot_spheres[batchAddrs + sphAddrs + 3] = spheres_mem[3];
//float4 test_sphere = *(float4 *)&spheres_mem[0];// make_float4(spheres_mem[0],spheres_mem[1],spheres_mem[2],spheres_mem[3]);
*(float4 *)&b_robot_spheres[batchAddrs + sphAddrs] = spheres_mem;
}
// write position and rotation, we convert rotation matrix to a quaternion and
@@ -857,7 +897,7 @@ namespace Curobo
// fetch one row of cumul_mat, multiply with a column, which is in JM
cumul_mat[outAddrStart + elem_idx] =
dot(*(float4 *)&cumul_mat[inAddrStart + ((elem_idx / 4) * M)],
*(float4 *)JM);
make_float4(JM[0], JM[1], JM[2], JM[3]));
}
}
@@ -906,7 +946,7 @@ namespace Curobo
// read cumul idx:
read_cumul_idx = linkSphereMap[sph_idx];
float spheres_mem[4];
float spheres_mem[4] = {0.0,0.0,0.0,0.0};
transform_sphere(&cumul_mat[matAddrBase + read_cumul_idx * 16],
&robotSpheres[sphAddrs], &spheres_mem[0]);
@@ -925,7 +965,7 @@ namespace Curobo
int j_type = jointMapType[j];
if (j_type == Z_ROT)
{
float result = 0.0;
@@ -1137,19 +1177,26 @@ namespace Curobo
{
return;
}
float q[4] = { 0.0 }; // initialize array
//float q[4] = { 0.0 }; // initialize array
float4 q = make_float4(0,0,0,0);
float rot[9];
// read rot
*(float3 *)&rot[0] = *(float3 *)&in_rot_mat[batch_idx * 9];
*(float3 *)&rot[3] = *(float3 *)&in_rot_mat[batch_idx * 9 + 3];
*(float3 *)&rot[6] = *(float3 *)&in_rot_mat[batch_idx * 9 + 6];
#pragma unroll 9
for (int k = 0; k<9; k++)
{
rot[k] = in_rot_mat[batch_idx * 9 + k];
}
rot_to_quat(&rot[0], &q[0]);
// *(float3 *)&rot[0] = *(float3 *)&in_rot_mat[batch_idx * 9];
// *(float3 *)&rot[3] = *(float3 *)&in_rot_mat[batch_idx * 9 + 3];
// *(float3 *)&rot[6] = *(float3 *)&in_rot_mat[batch_idx * 9 + 6];
rot_to_quat(&rot[0], q);
// write quaternion:
*(float4 *)&out_quat[batch_idx * 4] = *(float4 *)&q[0];
*(float4 *)&out_quat[batch_idx * 4] = q;
}
} // namespace Kinematics
} // namespace Curobo
@@ -1161,8 +1208,8 @@ std::vector<torch::Tensor>kin_fused_forward(
const torch::Tensor robot_spheres, const torch::Tensor link_map,
const torch::Tensor joint_map, const torch::Tensor joint_map_type,
const torch::Tensor store_link_map, const torch::Tensor link_sphere_map,
const torch::Tensor joint_offset_map,
const int batch_size,
const int n_joints, const int n_spheres,
const bool use_global_cumul = false)
{
@@ -1179,9 +1226,8 @@ std::vector<torch::Tensor>kin_fused_forward(
CHECK_INPUT(joint_map_type);
CHECK_INPUT(store_link_map);
CHECK_INPUT(link_sphere_map);
//CHECK_INPUT(link_chain_map);
CHECK_INPUT(joint_offset_map);
const int n_links = link_map.size(0);
const int store_n_links = link_pos.size(1);
assert(joint_map.dtype() == torch::kInt16);
@@ -1232,7 +1278,7 @@ std::vector<torch::Tensor>kin_fused_forward(
joint_map_type.data_ptr<int8_t>(),
joint_map.data_ptr<int16_t>(), link_map.data_ptr<int16_t>(),
store_link_map.data_ptr<int16_t>(),
link_sphere_map.data_ptr<int16_t>(),
joint_offset_map.data_ptr<float>(),
batch_size, n_spheres,
n_links, n_joints, store_n_links);
@@ -1253,7 +1299,7 @@ std::vector<torch::Tensor>kin_fused_forward(
joint_map_type.data_ptr<int8_t>(),
joint_map.data_ptr<int16_t>(), link_map.data_ptr<int16_t>(),
store_link_map.data_ptr<int16_t>(),
link_sphere_map.data_ptr<int16_t>(),
joint_offset_map.data_ptr<float>(),
batch_size, n_spheres,
n_links, n_joints, store_n_links);
@@ -1285,7 +1331,7 @@ std::vector<torch::Tensor>kin_fused_backward_16t(
using namespace Curobo::Kinematics;
CHECK_INPUT_GUARD(joint_vec);
CHECK_INPUT(grad_out);
CHECK_INPUT(grad_nlinks_pos);
CHECK_INPUT(grad_nlinks_quat);
CHECK_INPUT(global_cumul_mat);
CHECK_INPUT(fixed_transform);
@@ -1297,10 +1343,10 @@ std::vector<torch::Tensor>kin_fused_backward_16t(
CHECK_INPUT(link_sphere_map);
CHECK_INPUT(link_chain_map);
CHECK_INPUT(joint_offset_map);
const int n_links = link_map.size(0);
const int store_n_links = store_link_map.size(0);
// assert(n_links < 128);
assert(n_joints < 128); // for larger num. of joints, change kernel3's
// MAX_JOINTS template value.
@@ -1362,7 +1408,7 @@ std::vector<torch::Tensor>kin_fused_backward_16t(
joint_map.data_ptr<int16_t>(), link_map.data_ptr<int16_t>(),
store_link_map.data_ptr<int16_t>(),
link_sphere_map.data_ptr<int16_t>(),
link_chain_map.data_ptr<int16_t>(),
joint_offset_map.data_ptr<float>(),
batch_size, n_spheres,
n_links, n_joints, store_n_links);
@@ -1386,7 +1432,7 @@ std::vector<torch::Tensor>kin_fused_backward_16t(
joint_map.data_ptr<int16_t>(), link_map.data_ptr<int16_t>(),
store_link_map.data_ptr<int16_t>(),
link_sphere_map.data_ptr<int16_t>(),
link_chain_map.data_ptr<int16_t>(),
joint_offset_map.data_ptr<float>(),
batch_size, n_spheres,
n_links, n_joints, store_n_links);
@@ -1410,7 +1456,7 @@ std::vector<torch::Tensor>kin_fused_backward_16t(
joint_map.data_ptr<int16_t>(), link_map.data_ptr<int16_t>(),
store_link_map.data_ptr<int16_t>(),
link_sphere_map.data_ptr<int16_t>(),
link_chain_map.data_ptr<int16_t>(),
joint_offset_map.data_ptr<float>(),
batch_size, n_spheres,
n_links, n_joints, store_n_links);
@@ -1435,11 +1481,11 @@ std::vector<torch::Tensor>kin_fused_backward_16t(
fixed_transform.data_ptr<float>(),
robot_spheres.data_ptr<float>(),
joint_map_type.data_ptr<int8_t>(),
joint_map.data_ptr<int16_t>(),
link_map.data_ptr<int16_t>(),
store_link_map.data_ptr<int16_t>(),
link_sphere_map.data_ptr<int16_t>(),
link_chain_map.data_ptr<int16_t>(),
joint_offset_map.data_ptr<float>(),
batch_size, n_spheres,
n_links, n_joints, store_n_links);

View File

@@ -434,15 +434,25 @@ namespace Curobo
// is ? horizon, not
// in this mode
float d_vec_weight[6] = { 0.0 };
*(float3 *)&d_vec_weight[0] = *(float3 *)&vec_weight[0];
*(float3 *)&d_vec_weight[3] = *(float3 *)&vec_weight[3];
#pragma unroll 6
for (int k = 0; k < 6; k++)
{
d_vec_weight[k] = vec_weight[k];
}
//*(float3 *)&d_vec_weight[0] = *(float3 *)&vec_weight[0]; // TODO
//*(float3 *)&d_vec_weight[3] = *(float3 *)&vec_weight[3];
float3 offset_rotation = *(float3 *)&offset_waypoint[0];
float3 offset_position = *(float3 *)&offset_waypoint[3];
if ((h_idx < horizon - 1) && (h_idx != horizon - offset_tstep))
{
*(float3 *)&d_vec_weight[0] *= *(float3 *)&run_vec_weight[0];
*(float3 *)&d_vec_weight[3] *= *(float3 *)&run_vec_weight[3];
#pragma unroll 6
for (int k = 0; k < 6; k++)
{
d_vec_weight[k] *= run_vec_weight[k];
}
//*(float3 *)&d_vec_weight[0] *= *(float3 *)&run_vec_weight[0];
//*(float3 *)&d_vec_weight[3] *= *(float3 *)&run_vec_weight[3];
}
if (!write_distance)
@@ -450,8 +460,8 @@ namespace Curobo
position_weight *= run_weight[h_idx];
rotation_weight *= run_weight[h_idx];
float sum_weight = 0;
#pragma unroll 6
for (int i = 0; i < 6; i++)
{
sum_weight += d_vec_weight[i];
@@ -480,7 +490,9 @@ namespace Curobo
float best_distance_vec[6] = { 0.0 };
float d_vec_convergence[2];
*(float2 *)&d_vec_convergence[0] = *(float2 *)&vec_convergence[0];
//*(float2 *)&d_vec_convergence[0] = *(float2 *)&vec_convergence[0]; // TODO
d_vec_convergence[0] = vec_convergence[0];
d_vec_convergence[1] = vec_convergence[1];
int best_idx = -1;

View File

@@ -131,7 +131,7 @@ namespace Curobo
int i = ndpt * (warp_idx / nwpr); // starting row number for this warp
int j = (warp_idx % nwpr) * 32; // starting column number for this warp
dist_t max_d = { .d = 0.0, .i = 0, .j = 0 };
dist_t max_d = {0.0, 0.0, 0.0 };// .d, .i, .j
__shared__ dist_t max_darr[32];
// Optimization: About 1/3 of the warps will have no work.
@@ -354,7 +354,7 @@ namespace Curobo
// in registers (max_d).
// Each thread computes upto ndpt distances.
//////////////////////////////////////////////////////
dist_t max_d[NBPB] = {{ .d = 0.0, .i = 0, .j = 0 } };
dist_t max_d[NBPB] = {{ 0.0, 0.0, 0.0}};
int16_t indices[ndpt * 2];
for (uint8_t i = 0; i < ndpt * 2; i++)
@@ -710,9 +710,7 @@ std::vector<torch::Tensor>self_collision_distance(
threadsPerBlock = warpsPerBlock * 32;
blocksPerGrid = batch_size;
// printf("Blocks: %d, threads/block:%d, ndpt=%d, nwpr=%d\n", blocksPerGrid,
// threadsPerBlock, ndpt, nwpr);
assert(collision_matrix.size(0) == nspheres * nspheres);
int smemSize = nspheres * sizeof(float4);

View File

@@ -104,7 +104,7 @@ class SelfCollisionDistance(torch.autograd.Function):
robot_spheres, # .view(-1, 4),
sphere_offset,
weight,
coll_matrix,
coll_matrix.view(-1),
thread_locations,
max_thread,
b * h,

View File

@@ -74,20 +74,20 @@ class KinematicsFusedFunction(Function):
link_pos,
link_quat,
b_robot_spheres,
global_cumul_mat,
global_cumul_mat.view(-1),
joint_seq,
fixed_transform,
robot_spheres,
fixed_transform.view(-1),
robot_spheres.view(-1),
link_map,
joint_map,
joint_map_type,
joint_map_type.view(-1),
store_link_map,
link_sphere_map,
joint_offset_map, # offset_joint_map
joint_offset_map,
b_size,
n_joints,
n_spheres,
True,
use_global_cumul,
)
out_link_pos = r[0]
out_link_quat = r[1]

View File

@@ -493,6 +493,18 @@ class WorldCollision(WorldCollisionConfig):
)
return mesh
def get_obstacle_names(self, env_idx: int = 0) -> List[str]:
"""Get names of all obstacles in the given environment. Subclasses extend this list."""
return []
def check_obstacle_exists(self, name: str, env_idx: int = 0) -> bool:
"""Check if an obstacle with the given name exists in the given environment."""
obstacle_names = self.get_obstacle_names(env_idx)
if name in obstacle_names:
return True
return False
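A short usage sketch for the new existence check (the collision checker instance and obstacle name are assumed to already exist):

# Sketch: guard an obstacle update with check_obstacle_exists.
if world_collision.check_obstacle_exists("table", env_idx=0):
    world_collision.enable_obstacle("table", enable=False, env_idx=0)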
class WorldPrimitiveCollision(WorldCollision):
"""World Oriented Bounding Box representation object
@@ -527,6 +539,10 @@ class WorldPrimitiveCollision(WorldCollision):
world_config, env_idx, fix_cache_reference=fix_cache_reference
)
def get_obstacle_names(self, env_idx: int = 0):
base_obstacles = super().get_obstacle_names(env_idx)
return self._env_obbs_names[env_idx] + base_obstacles
def load_batch_collision_model(self, world_config_list: List[WorldConfig]):
"""Load a batch of collision environments from a list of world configs.

View File

@@ -511,3 +511,7 @@ class WorldBloxCollision(WorldVoxelCollision):
def decay_layer(self, layer_name: str):
index = self._blox_names.index(layer_name)
self._blox_mapper.decay_occupancy(mapper_id=index)
def get_obstacle_names(self, env_idx: int = 0):
base_obstacles = super().get_obstacle_names(env_idx)
return self._blox_names + base_obstacles

View File

@@ -327,6 +327,10 @@ class WorldMeshCollision(WorldPrimitiveCollision):
log_error("Obstacle not found in world model: " + name)
self.world_model.objects
def get_obstacle_names(self, env_idx: int = 0):
base_obstacles = super().get_obstacle_names(env_idx)
return self._env_mesh_names[env_idx] + base_obstacles
def enable_mesh(
self,
enable: bool = True,

View File

@@ -257,6 +257,10 @@ class WorldVoxelCollision(WorldMeshCollision):
else:
return super().enable_obstacle(name, enable, env_idx)
def get_obstacle_names(self, env_idx: int = 0):
base_obstacles = super().get_obstacle_names(env_idx)
return self._env_voxel_names[env_idx] + base_obstacles
def enable_voxel(
self,
enable: bool = True,

View File

@@ -8,6 +8,8 @@
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
#
"""Geometry types are defined in this module. See :ref:`world_collision` for more information."""
from __future__ import annotations
# Standard Library
@@ -31,6 +33,8 @@ from curobo.util_file import get_assets_path, join_path
@dataclass
class Material:
"""Material properties of an obstacle, useful for rendering."""
metallic: float = 0.0
roughness: float = 0.4
@@ -45,21 +49,23 @@ class Obstacle:
#: Pose of obstacle as a list with format [x y z qw qx qy qz]
pose: Optional[List[float]] = None
#: NOTE: implement scaling for all obstacle types.
#: Scale obstacle. This is only implemented for :class:`Mesh` and :class:`PointCloud`
#: obstacles.
scale: Optional[List[float]] = None
#: Color of obstacle to use in visualization.
color: Optional[List[float]] = None
#: texture to apply to obstacle in visualization.
#: Texture name for the obstacle.
texture_id: Optional[str] = None
#: Texture to apply to obstacle in visualization.
texture: Optional[str] = None
#: Material properties to apply in visualization.
material: Material = field(default_factory=Material)
#: Device and floating point precision to use for tensors.
tensor_args: TensorDeviceType = field(default_factory=TensorDeviceType)
def get_trimesh_mesh(self, process: bool = True, process_color: bool = True) -> trimesh.Trimesh:
@@ -387,7 +393,13 @@ class Mesh(Obstacle):
if isinstance(m, trimesh.Scene):
m = m.dump(concatenate=True)
if process_color and isinstance(m.visual, trimesh.visual.texture.TextureVisuals):
m.visual = m.visual.to_color()
try:
m.visual = m.visual.to_color()
except Exception as e:
log_warn("Could not convert texture to color: " + str(e))
if self.scale is not None:
m.vertices = np.ravel(self.scale) * m.vertices
# self.scale = None
else:
m = trimesh.Trimesh(
self.vertices,
@@ -396,9 +408,7 @@ class Mesh(Obstacle):
vertex_normals=self.vertex_normals,
face_colors=self.face_colors,
)
if self.scale is not None:
m.vertices = np.ravel(self.scale) * m.vertices
self.scale = None
return m
def update_material(self):

View File

@@ -295,6 +295,8 @@ class GraphPlanBase(GraphConfig):
@profiler.record_function("geometric_planner/mask_samples")
def _mask_samples(self, x_samples):
d = []
if self.safety_rollout_fn.cuda_graph_instance:
log_error("Cuda graph is using this rollout instance.")
if self.max_buffer < x_samples.shape[0]:
# c_samples = x_samples[:, 0:1] * 0.0
for i in range(math.ceil(x_samples.shape[0] / self.max_buffer)):

View File

@@ -530,6 +530,7 @@ class NewtonOptBase(Optimizer, NewtonOptConfig):
def _create_opt_iters_graph(self, q, grad_q, shift_steps):
# create a new stream:
self.reset()
self._cu_opt_q_in = q.detach().clone()
self._cu_opt_gq_in = grad_q.detach().clone()
s = torch.cuda.Stream(device=self.tensor_args.device)
@@ -541,12 +542,13 @@ class NewtonOptBase(Optimizer, NewtonOptConfig):
self._cu_opt_q_in, self._cu_opt_gq_in, shift_steps
)
torch.cuda.current_stream(device=self.tensor_args.device).wait_stream(s)
self.reset()
self.cu_opt_graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(self.cu_opt_graph, stream=s):
self._cu_opt_q, self._cu_opt_cost, self._cu_q, self._cu_gq = self._opt_iters(
self._cu_opt_q_in, self._cu_opt_gq_in, shift_steps
)
torch.cuda.current_stream(device=self.tensor_args.device).wait_stream(s)
@get_torch_jit_decorator()

View File

@@ -150,6 +150,7 @@ class Optimizer(OptimizerConfig):
self._rollout_list = None
self.debug = []
self.debug_cost = []
self.cu_opt_graph = None
def optimize(self, opt_tensor: torch.Tensor, shift_steps=0, n_iters=None) -> torch.Tensor:
"""Find a solution through optimization given the initial values for variables.
@@ -223,6 +224,8 @@ class Optimizer(OptimizerConfig):
else:
log_info("Cuda Graph was not enabled")
self._batch_goal = None
if self.cu_opt_graph is not None:
self.cu_opt_graph.reset()
self.rollout_fn.reset_cuda_graph()
def reset_shape(self):

View File

@@ -204,26 +204,26 @@ class ParticleOptBase(Optimizer, ParticleOptConfig):
n_iters = n_iters if n_iters is not None else self.n_iters
# create cuda graph:
if self.use_cuda_graph and self.cu_opt_init:
if self.use_cuda_graph:
if not self.cu_opt_init:
self._initialize_cuda_graph(init_act.clone(), shift_steps=shift_steps)
curr_action_seq = self._call_cuda_opt_iters(init_act)
else:
curr_action_seq = self._run_opt_iters(
init_act, n_iters=n_iters, shift_steps=shift_steps
)
if self.use_cuda_graph:
if not self.cu_opt_init:
self._initialize_cuda_graph(init_act, shift_steps=shift_steps)
self.num_steps += 1
if self.calculate_value:
trajectories = self.generate_rollouts(init_act)
value = self._calc_val(trajectories)
self.info["value"] = value
# print(self.act_seq)
return curr_action_seq
def _initialize_cuda_graph(self, init_act: T_HDOF_float, shift_steps=0):
log_info("ParticleOptBase: Creating Cuda Graph")
self.reset()
self._cu_act_in = init_act.detach().clone()
# create a new stream:
@@ -239,6 +239,8 @@ class ParticleOptBase(Optimizer, ParticleOptConfig):
with torch.cuda.graph(self.cu_opt_graph, stream=s):
self._cu_act_seq = self._run_opt_iters(self._cu_act_in, shift_steps=shift_steps)
torch.cuda.current_stream(device=self.tensor_args.device).wait_stream(s)
self.cu_opt_init = True
def _call_cuda_opt_iters(self, init_act: T_HDOF_float):

View File

@@ -421,7 +421,7 @@ class ArmBase(RolloutBase, ArmBaseConfig):
def get_metrics(self, state: Union[JointState, KinematicModelState]):
"""Compute metrics given state
#TODO: Currently does not compute velocity and acceleration costs.
Args:
state (Union[JointState, KinematicModelState]): Current state to compute metrics for.
@@ -429,6 +429,8 @@ class ArmBase(RolloutBase, ArmBaseConfig):
Metrics computed for the given state.
"""
if self.cuda_graph_instance:
log_error("Cuda graph is using this instance, please break the graph before using this")
if isinstance(state, JointState):
state = self._get_augmented_state(state)
out_metrics = self.constraint_fn(state)
@@ -462,6 +464,9 @@ class ArmBase(RolloutBase, ArmBaseConfig):
with torch.cuda.graph(self.cu_metrics_graph, stream=s):
self._cu_out_metrics = self.get_metrics(self._cu_metrics_state_in)
self._metrics_cuda_graph_init = True
self._cuda_graph_valid = True
if not self.cuda_graph_instance:
log_error("cuda graph is invalid")
if self._cu_metrics_state_in.position.shape != state.position.shape:
log_error("cuda graph changed")
self._cu_metrics_state_in.copy_(state)
@@ -541,6 +546,8 @@ class ArmBase(RolloutBase, ArmBaseConfig):
def rollout_constraint(
self, act_seq: torch.Tensor, use_batch_env: bool = True
) -> RolloutMetrics:
if self.cuda_graph_instance:
log_error("Cuda graph is using this instance, please break the graph before using this")
state = self.dynamics_model.forward(self.start_state, act_seq)
metrics = self.constraint_fn(state, use_batch_env=use_batch_env)
return metrics
@@ -566,6 +573,9 @@ class ArmBase(RolloutBase, ArmBaseConfig):
state, use_batch_env=use_batch_env
)
self._rollout_constraint_cuda_graph_init = True
self._cuda_graph_valid = True
if not self.cuda_graph_instance:
log_error("cuda graph is invalid")
self._cu_rollout_constraint_act_in.copy_(act_seq)
self.cu_rollout_constraint_graph.replay()
out_metrics = self._cu_rollout_constraint_out_metrics
@@ -602,12 +612,8 @@ class ArmBase(RolloutBase, ArmBaseConfig):
"""
with profiler.record_function("arm_base/update_params"):
self._goal_buffer.copy_(
goal, update_idx_buffers=self._goal_idx_update
) # TODO: convert this to a reference to avoid extra copy
# self._goal_buffer.copy_(goal, update_idx_buffers=True) # TODO: convert this to a reference to avoid extra copy
self._goal_buffer.copy_(goal, update_idx_buffers=self._goal_idx_update)
# TODO: move start state also inside Goal instance
if goal.current_state is not None:
if self.start_state is None:
self.start_state = goal.current_state.clone()

View File

@@ -178,6 +178,7 @@ class ArmReacher(ArmBase, ArmReacherConfig):
# self.cost_cfg.cspace_cfg.update_vec_weight(self.dynamics_model.cspace_distance_weight)
self.dist_cost = DistCost(self.cost_cfg.cspace_cfg)
if self.cost_cfg.pose_cfg is not None:
self.cost_cfg.pose_cfg.waypoint_horizon = self.horizon
self.goal_cost = PoseCost(self.cost_cfg.pose_cfg)
if self.cost_cfg.link_pose_cfg is None:
log_info(
@@ -265,7 +266,6 @@ class ArmReacher(ArmBase, ArmReacherConfig):
goal_cost = self.goal_cost.forward(
ee_pos_batch, ee_quat_batch, self._goal_buffer
)
# print(self._compute_g_dist, goal_cost.view(-1))
cost_list.append(goal_cost)
with profiler.record_function("cost/link_poses"):
if self._goal_buffer.links_goal_pose is not None and self.cost_cfg.pose_cfg is not None:
@@ -338,7 +338,6 @@ class ArmReacher(ArmBase, ArmReacherConfig):
out_metrics = ArmReacherMetrics()
if not isinstance(out_metrics, ArmReacherMetrics):
out_metrics = ArmReacherMetrics(**vars(out_metrics))
# print(self._goal_buffer.batch_retract_state_idx)
out_metrics = super(ArmReacher, self).convergence_fn(state, out_metrics)
# compute error with pose?
@@ -448,7 +447,7 @@ class ArmReacher(ArmBase, ArmReacherConfig):
self,
metric: PoseCostMetric,
):
pose_costs = self.get_pose_costs()
pose_costs = self.get_pose_costs(include_link_pose=metric.include_link_pose)
if metric.hold_partial_pose:
if metric.hold_vec_weight is None:
log_error("hold_vec_weight is required")

View File

@@ -21,9 +21,9 @@ import warp as wp
from curobo.cuda_robot_model.types import JointLimits
from curobo.types.robot import JointState
from curobo.types.tensor import T_DOF
from curobo.util.logger import log_error
from curobo.util.logger import log_error, log_warn
from curobo.util.torch_utils import get_cache_fn_decorator, get_torch_jit_decorator
from curobo.util.warp import init_warp, warp_support_kernel_key
from curobo.util.warp import init_warp, is_runtime_warp_kernel_enabled, warp_support_kernel_key
# Local Folder
from .cost_base import CostBase, CostConfig
@@ -106,6 +106,9 @@ class BoundCost(CostBase, BoundCostConfig):
)
self._out_gv_buffer = self._out_ga_buffer = self._out_gj_buffer = empty_buffer
if self.use_l2_kernel:
if not is_runtime_warp_kernel_enabled():
log_warn("Runtime warp kernel generation is disabled.")
self.use_l2_kernel = False
if not warp_support_kernel_key():
# define a compile-time constant so that warp hash is different for different dof
# this is required in older warp versions < 1.2.1 as warp hash didn't consider the
@@ -857,7 +860,7 @@ class WarpBoundPosL2Function(torch.autograd.Function):
return p_g, None, None, None, None, None, None, None, None, None, None, None
@wp.kernel
@wp.kernel(enable_backward=False)
def forward_bound_pos_warp(
pos: wp.array(dtype=wp.float32),
retract_config: wp.array(dtype=wp.float32),
@@ -945,7 +948,7 @@ def forward_bound_pos_warp(
out_grad_p[b_addrs] = g_p
@wp.kernel
@wp.kernel(enable_backward=False)
def forward_bound_warp(
pos: wp.array(dtype=wp.float32),
vel: wp.array(dtype=wp.float32),
@@ -1112,7 +1115,7 @@ def forward_bound_warp(
out_grad_j[b_addrs] = g_j
@wp.kernel
@wp.kernel(enable_backward=False)
def forward_bound_smooth_warp(
pos: wp.array(dtype=wp.float32),
vel: wp.array(dtype=wp.float32),
@@ -1559,7 +1562,11 @@ def make_bound_pos_smooth_kernel(dof_template: int):
module = wp.get_module(forward_bound_smooth_loop_warp.__module__)
key = "forward_bound_smooth_loop_warp_" + str(dof_template)
new_kernel = wp.Kernel(forward_bound_smooth_loop_warp, key=key, module=module)
if key in module.kernels:
new_kernel = module.kernels[key]
else:
new_kernel = wp.Kernel(forward_bound_smooth_loop_warp, key=key, module=module)
return new_kernel
@@ -1657,7 +1664,10 @@ def make_bound_pos_kernel(dof_template: int):
for i in range(dof_template):
out_grad_p[b_addrs + i] = g_p[i]
wp_module = wp.get_module(forward_bound_pos_loop_warp.__module__)
module = wp.get_module(forward_bound_pos_loop_warp.__module__)
key = "bound_pos_loop_warp_" + str(dof_template)
new_kernel = wp.Kernel(forward_bound_pos_loop_warp, key=key, module=wp_module)
if key in module.kernels:
new_kernel = module.kernels[key]
else:
new_kernel = wp.Kernel(forward_bound_pos_loop_warp, key=key, module=module)
return new_kernel

View File

@@ -18,9 +18,9 @@ import torch
import warp as wp
# CuRobo
from curobo.util.logger import log_error
from curobo.util.logger import log_error, log_warn
from curobo.util.torch_utils import get_cache_fn_decorator, get_torch_jit_decorator
from curobo.util.warp import init_warp, warp_support_kernel_key
from curobo.util.warp import init_warp, is_runtime_warp_kernel_enabled, warp_support_kernel_key
# Local Folder
from .cost_base import CostBase, CostConfig
@@ -226,7 +226,10 @@ def make_l2_kernel(dof_template: int):
module = wp.get_module(forward_l2_loop_warp.__module__)
key = "forward_l2_loop" + str(dof_template)
new_kernel = wp.Kernel(forward_l2_loop_warp, key=key, module=module)
if key in module.kernels:
new_kernel = module.kernels[key]
else:
new_kernel = wp.Kernel(forward_l2_loop_warp, key=key, module=module)
return new_kernel
@@ -343,6 +346,9 @@ class DistCost(CostBase, DistCostConfig):
self._init_post_config()
init_warp()
if self.use_l2_kernel:
if not is_runtime_warp_kernel_enabled():
log_warn("Runtime warp kernel generation is disabled.")
self.use_l2_kernel = False
if not warp_support_kernel_key():
# define a compile-time constant so that warp hash is different for different dof
# this is required in older warp versions < 1.2.1 as warp hash didn't consider the

View File

@@ -46,6 +46,7 @@ class PoseCostConfig(CostConfig):
use_projected_distance: bool = True
offset_waypoint: List[float] = None
offset_tstep_fraction: float = -1.0
waypoint_horizon: int = 0
def __post_init__(self):
if self.run_vec_weight is not None:
@@ -84,6 +85,7 @@ class PoseCostMetric:
offset_rotation: Optional[torch.Tensor] = None
offset_tstep_fraction: float = -1.0
remove_offset_waypoint: bool = False
include_link_pose: bool = False
def clone(self):
@@ -152,7 +154,6 @@ class PoseCost(CostBase, PoseCostConfig):
self.pos_weight = self.vec_weight[3:6]
self._vec_convergence = self.tensor_args.to_device(self.vec_convergence)
self._batch_size = 0
self._horizon = 0
def update_metric(self, metric: PoseCostMetric):
if metric.hold_partial_pose:
@@ -202,28 +203,34 @@ class PoseCost(CostBase, PoseCostConfig):
if offset_rotation is not None:
self.offset_waypoint[:3].copy_(offset_rotation)
self.offset_tstep_fraction[:] = offset_tstep_fraction
if self._horizon <= 0:
if self.waypoint_horizon <= 0:
log_error(
"Updating offset waypoint is only possible after initializing motion gen"
+ " run motion_gen.warmup() before adding offset_waypoint"
"Updating offset waypoint requires PoseCostConfig.waypoint_horizon to be set."
)
self.update_run_weight(run_tstep_fraction=offset_tstep_fraction)
self.update_run_weight(
run_tstep_fraction=offset_tstep_fraction, horizon=self.waypoint_horizon
)
def remove_offset_waypoint(self):
self.offset_tstep_fraction[:] = -1.0
self.update_run_weight()
self.update_run_weight(horizon=self.waypoint_horizon)
def update_run_weight(
self, run_tstep_fraction: float = 0.0, run_weight: Optional[float] = None
self,
run_tstep_fraction: float = 0.0,
run_weight: Optional[float] = None,
horizon: Optional[int] = None,
):
if self._horizon == 1:
if horizon is None:
horizon = self._horizon
if horizon <= 1:
return
if run_weight is None:
run_weight = self.run_weight
active_steps = math.floor(self._horizon * run_tstep_fraction)
self.initialize_run_weight_vec(self._horizon)
active_steps = math.floor(horizon * run_tstep_fraction)
self.initialize_run_weight_vec(horizon)
self._run_weight_vec[:, :active_steps] = 0
self._run_weight_vec[:, active_steps:-1] = run_weight
@@ -267,6 +274,7 @@ class PoseCost(CostBase, PoseCostConfig):
self._batch_size = batch_size
self._horizon = horizon
self.waypoint_horizon = horizon
def initialize_run_weight_vec(self, horizon: Optional[int] = None):
if horizon is None:

View File

@@ -176,7 +176,6 @@ class Goal(Sequence):
def repeat_seeds(self, num_seeds: int):
# across seeds, the data is the same, so could we just expand batch_idx
# TODO:
goal_pose = goal_state = current_state = links_goal_pose = retract_state = None
batch_enable_idx = batch_pose_idx = batch_world_idx = batch_current_state_idx = None
batch_retract_state_idx = batch_goal_state_idx = None
@@ -424,6 +423,7 @@ class RolloutBase:
def __init__(self, config: Optional[RolloutConfig] = None):
self.start_state = None
self.batch_size = 1
self._cuda_graph_valid = False
self._metrics_cuda_graph_init = False
self.cu_metrics_graph = None
self._rollout_constraint_cuda_graph_init = False
@@ -552,6 +552,7 @@ class RolloutBase:
pass
def reset_cuda_graph(self):
self._cuda_graph_valid = False
self._metrics_cuda_graph_init = False
if self.cu_metrics_graph is not None:
self.cu_metrics_graph.reset()
@@ -564,6 +565,10 @@ class RolloutBase:
def reset_shape(self):
pass
@property
def cuda_graph_instance(self) -> bool:
"""Check if this rollout instance is used by a captured CUDA graph."""
return self._cuda_graph_valid
@abstractmethod
def get_action_from_state(self, state: State):
pass
@@ -581,6 +586,9 @@ class RolloutBase:
def get_full_dof_from_solution(self, q_js: JointState) -> JointState:
return q_js
def break_cuda_graph(self):
self._cuda_graph_valid = False
@get_torch_jit_decorator()
def tensor_repeat_seeds(tensor, num_seeds: int):

View File

@@ -0,0 +1,173 @@
#
# Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
#
"""Contains a class for storing file paths."""
# Standard Library
from dataclasses import dataclass
from typing import Optional
# CuRobo
from curobo.util.logger import log_error, log_info
from curobo.util_file import (
get_assets_path,
get_robot_configs_path,
get_world_configs_path,
join_path,
)
@dataclass(frozen=True)
class ContentPath:
"""Dataclass to store root path of configuration and assets."""
#: Root path for robot configuration file, either xrdf or yml.
robot_config_root_path: str = get_robot_configs_path()
#: Root path for robot XRDF.
robot_xrdf_root_path: str = get_robot_configs_path()
#: Root path for robot URDF.
robot_urdf_root_path: str = get_assets_path()
#: Root path for robot USD.
robot_usd_root_path: str = get_assets_path()
#: Root path for robot meshes and textures.
robot_asset_root_path: str = get_assets_path()
#: Root path for world description files (yml).
world_config_root_path: str = get_world_configs_path()
#: Root path for world assets (meshes, nvblox maps).
world_asset_root_path: str = get_assets_path()
#: Absolute path to the robot configuration file. If this is provided, the
#: :var:`robot_config_root_path` will be ignored.
robot_config_absolute_path: Optional[str] = None
#: Absolute path to the robot XRDF file. If this is provided, the :var:`robot_xrdf_root_path`
#: will be ignored.
robot_xrdf_absolute_path: Optional[str] = None
#: Absolute path to the robot URDF file. If this is provided, the :var:`robot_urdf_root_path`
#: will be ignored.
robot_urdf_absolute_path: Optional[str] = None
#: Absolute path to the robot USD file. If this is provided, the :var:`robot_usd_root_path`
#: will be ignored.
robot_usd_absolute_path: Optional[str] = None
#: Absolute path to the robot assets. If this is provided, the :var:`robot_asset_root_path`
#: will be ignored.
robot_asset_absolute_path: Optional[str] = None
#: Absolute path to the world description file. If this is provided, the
#: :var:`world_config_root_path` will be ignored.
world_config_absolute_path: Optional[str] = None
#: Relative path to the robot configuration file. If this is provided, the
#: robot_config_absolute_path is initialized with the concatenation of
#: robot_config_root_path and robot_config_file.
robot_config_file: Optional[str] = None
#: Relative path to the robot XRDF file. If this is provided, the
#: robot_xrdf_absolute_path is initialized with the concatenation of
#: robot_xrdf_root_path and robot_xrdf_file.
robot_xrdf_file: Optional[str] = None
#: Relative path to the robot URDF file. If this is provided, the
#: robot_urdf_absolute_path is initialized with the concatenation of
#: robot_urdf_root_path and robot_urdf_file.
robot_urdf_file: Optional[str] = None
#: Relative path to the robot USD file. If this is provided, the
#: robot_usd_absolute_path is initialized with the concatenation of
#: robot_usd_root_path and robot_usd_file.
robot_usd_file: Optional[str] = None
#: Relative path to the robot assets.
robot_asset_subroot_path: Optional[str] = None
#: Relative path to the world description file. If this is provided, the
#: world_config_absolute_path is initialized with the concatenation of
#: world_config_root_path and world_config_file.
world_config_file: Optional[str] = None
def __post_init__(self):
if self.robot_config_file is not None:
if self.robot_config_absolute_path is not None:
log_error(
"robot_config_file and robot_config_absolute_path cannot be provided together."
)
object.__setattr__(
self,
"robot_config_absolute_path",
join_path(self.robot_config_root_path, self.robot_config_file),
)
if self.robot_xrdf_file is not None:
if self.robot_xrdf_absolute_path is not None:
log_error(
"robot_xrdf_file and robot_xrdf_absolute_path cannot be provided together."
)
object.__setattr__(
self,
"robot_xrdf_absolute_path",
join_path(self.robot_xrdf_root_path, self.robot_xrdf_file),
)
if self.robot_urdf_file is not None:
if self.robot_urdf_absolute_path is not None:
log_error(
"robot_urdf_file and robot_urdf_absolute_path cannot be provided together."
)
object.__setattr__(
self,
"robot_urdf_absolute_path",
join_path(self.robot_urdf_root_path, self.robot_urdf_file),
)
if self.robot_usd_file is not None:
if self.robot_usd_absolute_path is not None:
log_error("robot_usd_file and robot_usd_absolute_path cannot be provided together.")
object.__setattr__(
self,
"robot_usd_absolute_path",
join_path(self.robot_usd_root_path, self.robot_usd_file),
)
if self.robot_asset_subroot_path is not None:
if self.robot_asset_absolute_path is not None:
log_error(
"robot_asset_subroot_path and robot_asset_absolute_path cannot be provided together."
)
object.__setattr__(
self,
"robot_asset_absolute_path",
join_path(self.robot_asset_root_path, self.robot_asset_subroot_path),
)
if self.world_config_file is not None:
if self.world_config_absolute_path is not None:
log_error(
"world_config_file and world_config_absolute_path cannot be provided together."
)
object.__setattr__(
self,
"world_config_absolute_path",
join_path(self.world_config_root_path, self.world_config_file),
)
def get_robot_configuration_path(self):
"""Get the robot configuration path."""
if self.robot_config_absolute_path is None:
log_info("cuRobo configuration file not found, trying XRDF")
if self.robot_xrdf_absolute_path is None:
log_error("No Robot configuration file found")
else:
return self.robot_xrdf_absolute_path
return self.robot_config_absolute_path
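A small construction sketch (file names are hypothetical); relative *_file fields are joined with their *_root_path counterparts in __post_init__:

# Sketch: resolve a robot configuration from relative file names.
content_path = ContentPath(
    robot_config_file="my_robot.yml",  # joined with robot_config_root_path
    robot_urdf_file="my_robot/my_robot.urdf",  # joined with robot_urdf_root_path
)
print(content_path.get_robot_configuration_path())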

View File

@@ -41,6 +41,17 @@ def is_cuda_graph_available():
return True
def is_cuda_graph_reset_available() -> bool:
"""Check if experimental CUDA graph reset is enabled (CUROBO_TORCH_CUDA_GRAPH_RESET=1) and cuda >= 12.0."""
reset_cuda_graph = os.environ.get("CUROBO_TORCH_CUDA_GRAPH_RESET")
if reset_cuda_graph is not None:
if bool(int(reset_cuda_graph)):
if version.parse(torch.version.cuda) >= version.parse("12.0"):
return True
if not bool(int(reset_cuda_graph)):
return False
return False
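A quick opt-in sketch; the environment variable must be set before the check runs (variable name and import path are taken from this commit):

# Sketch: enable the experimental CUDA graph reset path.
import os

os.environ["CUROBO_TORCH_CUDA_GRAPH_RESET"] = "1"
from curobo.util.torch_utils import is_cuda_graph_reset_available

print(is_cuda_graph_reset_available())  # True only when cuda >= 12.0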
def is_torch_compile_available():
force_compile = os.environ.get("CUROBO_TORCH_COMPILE_FORCE")
if force_compile is not None and bool(int(force_compile)):

View File

@@ -137,6 +137,8 @@ def get_batch_interpolated_trajectory(
# compute dt across trajectory:
if len(raw_traj.shape) == 2:
raw_traj = raw_traj.unsqueeze(0)
if out_traj_state is not None and len(out_traj_state.shape) == 2:
out_traj_state = out_traj_state.unsqueeze(0)
b, horizon, dof = raw_traj.position.shape # horizon
# given the dt required to run trajectory at maximum velocity,
# we find the number of timesteps required:
@@ -173,14 +175,17 @@ def get_batch_interpolated_trajectory(
if out_traj_state is not None and out_traj_state.position.shape[1] < steps_max:
log_warn(
"Interpolation buffer shape is smaller than steps_max,"
"Interpolation buffer shape is smaller than steps_max: "
+ str(out_traj_state.position.shape)
+ " creating new buffer of shape "
+ str(steps_max)
)
out_traj_state = None
if out_traj_state is None:
out_traj_state = JointState.zeros([b, steps_max, dof], tensor_args)
out_traj_state = JointState.zeros(
[b, steps_max, dof], tensor_args, joint_names=raw_traj.joint_names
)
if kind in [InterpolateType.LINEAR, InterpolateType.CUBIC]:
# plot and save:

View File

@@ -544,6 +544,7 @@ class UsdHelper:
def get_prim_from_obstacle(
self, obstacle: Obstacle, base_frame: str = "/world/obstacles", timestep=None
):
if isinstance(obstacle, Cuboid):
return self.add_cuboid_to_stage(obstacle, base_frame, timestep=timestep)
elif isinstance(obstacle, Mesh):
@@ -813,6 +814,8 @@ class UsdHelper:
visualize_robot_spheres: bool = True,
robot_color: Optional[List[float]] = None,
flatten_usd: bool = False,
goal_pose: Optional[Pose] = None,
goal_color: Optional[List[float]] = None,
):
if kin_model is None:
config_file = load_yaml(join_path(get_robot_configs_path(), robot_model_file))
@@ -829,6 +832,25 @@ class UsdHelper:
if robot_color is not None:
robot_mesh_model.add_color(robot_color)
robot_mesh_model.add_material(Material(metallic=0.4))
if goal_pose is not None:
kin_model.link_names
if kin_model.ee_link in kin_model.kinematics_config.mesh_link_names:
index = kin_model.kinematics_config.mesh_link_names.index(kin_model.ee_link)
gripper_mesh = m[index]
if len(goal_pose.shape) == 1:
goal_pose = goal_pose.unsqueeze(0)
if len(goal_pose.shape) == 2:
goal_pose = goal_pose.unsqueeze(0)
for i in range(goal_pose.n_goalset):
g = goal_pose.get_index(0, i).to_list()
world_model.add_obstacle(
Mesh(
file_path=gripper_mesh.file_path,
pose=g,
name="goal_idx_" + str(i),
color=goal_color,
)
)
usd_helper = UsdHelper()
usd_helper.create_stage(
save_path,
@@ -905,6 +927,8 @@ class UsdHelper:
robot_asset_prim_path=None,
robot_color: Optional[List[float]] = None,
flatten_usd: bool = False,
goal_pose: Optional[Pose] = None,
goal_color: Optional[List[float]] = None,
):
usd_exists = False
# if usd file doesn't exist, fall back to urdf animation script
@@ -915,10 +939,13 @@ class UsdHelper:
robot_model_file = robot_model_file["robot_cfg"]
robot_model_file["kinematics"]["load_link_names_with_mesh"] = True
robot_model_file["kinematics"]["use_usd_kinematics"] = True
if "usd_path" in robot_model_file["kinematics"]:
usd_exists = file_exists(
join_path(get_assets_path(), robot_model_file["kinematics"]["usd_path"])
)
else:
usd_exists = False
else:
if kin_model.generator_config.usd_path is not None:
usd_exists = file_exists(kin_model.generator_config.usd_path)
@@ -946,6 +973,8 @@ class UsdHelper:
visualize_robot_spheres=visualize_robot_spheres,
robot_color=robot_color,
flatten_usd=flatten_usd,
goal_pose=goal_pose,
goal_color=goal_color,
)
if kin_model is None:
robot_cfg = CudaRobotModelConfig.from_data_dict(

View File

@@ -9,6 +9,8 @@
# its affiliates is strictly prohibited.
#
# Standard Library
import os
# Third Party
import warp as wp
@@ -60,3 +62,10 @@ def warp_support_kernel_key(wp_module=None):
)
return False
return True
def is_runtime_warp_kernel_enabled() -> bool:
"""Check if runtime warp kernel generation is enabled (CUROBO_WARP_RUNTIME_KERNEL_DISABLE unset or 0)."""
env_variable = os.environ.get("CUROBO_WARP_RUNTIME_KERNEL_DISABLE")
if env_variable is None:
return True
return bool(int(env_variable))
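A usage sketch for the new toggle; BoundCost and DistCost above disable their use_l2_kernel path when this returns False:

# Sketch: disable runtime warp kernel generation via environment variable.
import os

os.environ["CUROBO_WARP_RUNTIME_KERNEL_DISABLE"] = "1"
from curobo.util.warp import is_runtime_warp_kernel_enabled

assert not is_runtime_warp_kernel_enabled()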

View File

@@ -0,0 +1,170 @@
# Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
# Standard Library
from typing import Any, Dict, Optional
# CuRobo
from curobo.cuda_robot_model.urdf_kinematics_parser import UrdfKinematicsParser
from curobo.types.file_path import ContentPath
from curobo.util.logger import log_error, log_warn
from curobo.util_file import load_yaml
def return_value_if_exists(input_dict: Dict, key: str, suffix: str = "xrdf") -> Any:
if key not in input_dict:
log_error(key + " key not found in " + suffix)
return input_dict[key]
def convert_xrdf_to_curobo(
content_path: ContentPath = ContentPath(),
input_xrdf_dict: Optional[Dict] = None,
) -> Dict:
if content_path.robot_urdf_absolute_path is None:
log_error(
"content_path.robot_urdf_absolute_path or content_path.robot_urdf_file \
is required."
)
urdf_path = content_path.robot_urdf_absolute_path
if input_xrdf_dict is None:
input_xrdf_dict = load_yaml(content_path.robot_xrdf_absolute_path)
if isinstance(content_path, str):
log_error("content_path should be of type ContentPath")
if return_value_if_exists(input_xrdf_dict, "format") != "xrdf":
log_error("format is not xrdf")
raise ValueError("format is not xrdf")
if return_value_if_exists(input_xrdf_dict, "format_version") > 1.0:
log_warn("format_version is greater than 1.0")
# Also get base link as root of urdf
kinematics_parser = UrdfKinematicsParser(
urdf_path, mesh_root=content_path.robot_asset_absolute_path, build_scene_graph=True
)
joint_names = kinematics_parser.get_controlled_joint_names()
base_link = kinematics_parser.root_link
output_dict = {}
if "collision" in input_xrdf_dict:
coll_name = return_value_if_exists(input_xrdf_dict["collision"], "geometry")
if "spheres" not in input_xrdf_dict["geometry"][coll_name]:
log_error("spheres key not found in xrdf")
coll_spheres = return_value_if_exists(input_xrdf_dict["geometry"][coll_name], "spheres")
output_dict["collision_spheres"] = coll_spheres
buffer_distance = return_value_if_exists(input_xrdf_dict["collision"], "buffer_distance")
output_dict["collision_sphere_buffer"] = buffer_distance
output_dict["collision_link_names"] = list(coll_spheres.keys())
if "self_collision" in input_xrdf_dict:
if (
input_xrdf_dict["self_collision"]["geometry"]
!= input_xrdf_dict["collision"]["geometry"]
):
log_error("self_collision geometry does not match collision geometry")
self_collision_ignore = return_value_if_exists(
input_xrdf_dict["self_collision"],
"ignore",
)
self_collision_buffer = return_value_if_exists(
input_xrdf_dict["self_collision"],
"buffer_distance",
)
output_dict["self_collision_ignore"] = self_collision_ignore
output_dict["self_collision_buffer"] = self_collision_buffer
else:
log_error("self_collision key not found in xrdf")
else:
log_warn("collision key not found in xrdf, collision avoidance is disabled")
tool_frames = return_value_if_exists(input_xrdf_dict, "tool_frames")
output_dict["ee_link"] = tool_frames[0]
output_dict["link_names"] = None
if len(tool_frames) > 1:
output_dict["link_names"] = input_xrdf_dict["tool_frames"]
# cspace:
cspace_dict = return_value_if_exists(input_xrdf_dict, "cspace")
active_joints = return_value_if_exists(cspace_dict, "joint_names")
default_joint_positions = return_value_if_exists(input_xrdf_dict, "default_joint_positions")
active_config = []
locked_joints = {}
for j in joint_names:
if j in active_joints:
if j in default_joint_positions:
active_config.append(default_joint_positions[j])
else:
active_config.append(0.0)
else:
locked_joints[j] = 0.0
if j in default_joint_positions:
locked_joints[j] = default_joint_positions[j]
acceleration_limits = return_value_if_exists(cspace_dict, "acceleration_limits")
jerk_limits = return_value_if_exists(cspace_dict, "jerk_limits")
max_acc = max(acceleration_limits)
max_jerk = max(jerk_limits)
output_dict["lock_joints"] = locked_joints
all_joint_names = active_joints + list(locked_joints.keys())
output_cspace = {
"joint_names": all_joint_names,
"retract_config": active_config + list(locked_joints.values()),
"null_space_weight": [1.0 for _ in range(len(all_joint_names))],
"cspace_distance_weight": [1.0 for _ in range(len(all_joint_names))],
"max_acceleration": acceleration_limits
+ [max_acc for _ in range(len(all_joint_names) - len(active_joints))],
"max_jerk": jerk_limits
+ [max_jerk for _ in range(len(all_joint_names) - len(active_joints))],
}
output_dict["cspace"] = output_cspace
extra_links = {}
if "modifiers" in input_xrdf_dict:
for k in range(len(input_xrdf_dict["modifiers"])):
mod_list = list(input_xrdf_dict["modifiers"][k].keys())
if len(mod_list) > 1:
log_error("Each modifier should have only one key")
raise ValueError("Each modifier should have only one key")
mod_type = mod_list[0]
if mod_type == "set_base_frame":
base_link = input_xrdf_dict["modifiers"][k]["set_base_frame"]
elif mod_type == "add_frame":
frame_data = input_xrdf_dict["modifiers"][k]["add_frame"]
extra_links[frame_data["frame_name"]] = {
"parent_link_name": frame_data["parent_frame_name"],
"link_name": frame_data["frame_name"],
"joint_name": frame_data["joint_name"],
"joint_type": frame_data["joint_type"],
"fixed_transform": frame_data["fixed_transform"]["position"]
+ [frame_data["fixed_transform"]["orientation"]["w"]]
+ frame_data["fixed_transform"]["orientation"]["xyz"],
}
else:
log_warn('XRDF modifier "' + mod_type + '" not recognized')
output_dict["extra_links"] = extra_links
output_dict["base_link"] = base_link
output_dict["urdf_path"] = urdf_path
output_dict = {"robot_cfg": {"kinematics": output_dict}}
return output_dict

View File

@@ -12,12 +12,15 @@
import os
import shutil
import sys
from typing import Dict, List
from typing import Dict, List, Union
# Third Party
import yaml
from yaml import Loader
# CuRobo
from curobo.util.logger import log_warn
# get paths
def get_module_path():
@@ -55,8 +58,10 @@ def get_weights_path():
def join_path(path1, path2):
if path1[-1] == os.sep:
log_warn("path1 has trailing slash, removing it")
if isinstance(path2, str):
return os.path.join(path1, path2)
return os.path.join(os.sep, path1 + os.sep, path2)
else:
return path2
@@ -200,3 +205,17 @@ def merge_dict_a_into_b(a, b):
merge_dict_a_into_b(v, b[k])
else:
b[k] = v
def is_platform_windows():
return sys.platform == "win32"
def is_platform_linux():
return sys.platform == "linux"
def is_file_xrdf(file_path: str) -> bool:
if file_path.endswith(".xrdf") or file_path.endswith(".XRDF"):
return True
return False
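A usage sketch for the new helpers (paths are hypothetical):

# Sketch: platform checks and XRDF detection.
from curobo.util_file import is_file_xrdf, is_platform_windows, join_path

print(is_platform_windows())
print(is_file_xrdf("my_robot.xrdf"))  # True
print(join_path("/tmp/configs", "robot.yml"))  # warns if path1 has a trailing separator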

View File

@@ -222,6 +222,7 @@ def compute_smoothness(
abs_jerk = torch.abs(jerk) * scale_dt**3
# calculate max mean jerk:
# mean_jerk_val = torch.max(torch.mean(abs_jerk, dim=-1), dim=-1)[0]
# max_jerk_val = torch.max(torch.max(abs_jerk, dim=-1)[0], dim=-1)[0]
max_jerk_val = torch.max(abs_jerk, dim=-2)[0] # batch x dof
acc_label = torch.logical_and(max_acc_val <= max_acc, max_jerk_val <= max_jerk)
acc_label = torch.all(acc_label, dim=-1)

View File

@@ -51,7 +51,7 @@ from curobo.types.robot import JointState, RobotConfig
from curobo.types.tensor import T_BDOF, T_DOF, T_BValue_bool, T_BValue_float
from curobo.util.logger import log_error, log_warn
from curobo.util.sample_lib import HaltonGenerator
from curobo.util.torch_utils import get_torch_jit_decorator
from curobo.util.torch_utils import get_torch_jit_decorator, is_cuda_graph_reset_available
from curobo.util_file import (
get_robot_configs_path,
get_task_configs_path,
@@ -312,11 +312,34 @@ class IKSolverConfig:
world_coll_checker=world_coll_checker,
tensor_args=tensor_args,
)
safety_cfg = ArmReacherConfig.from_dict(
robot_cfg,
config_data["model"],
config_data["cost"],
base_config_data["constraint"],
base_config_data["convergence"],
base_config_data["world_collision_checker_cfg"],
world_model,
world_coll_checker=world_coll_checker,
tensor_args=tensor_args,
)
aux_cfg = ArmReacherConfig.from_dict(
robot_cfg,
config_data["model"],
config_data["cost"],
base_config_data["constraint"],
base_config_data["convergence"],
base_config_data["world_collision_checker_cfg"],
world_model,
world_coll_checker=world_coll_checker,
tensor_args=tensor_args,
)
arm_rollout_mppi = ArmReacher(cfg)
arm_rollout_grad = ArmReacher(grad_cfg)
arm_rollout_safety = ArmReacher(grad_cfg)
aux_rollout = ArmReacher(grad_cfg)
arm_rollout_safety = ArmReacher(safety_cfg)
aux_rollout = ArmReacher(aux_cfg)
config_dict = ParallelMPPIConfig.create_data_dict(
config_data["mppi"], arm_rollout_mppi, tensor_args
@@ -356,7 +379,7 @@ class IKSolverConfig:
safety_rollout=arm_rollout_safety,
optimizers=opts,
compute_metrics=True,
use_cuda_graph_metrics=grad_config_data["lbfgs"]["use_cuda_graph"],
use_cuda_graph_metrics=use_cuda_graph,
sync_cuda_time=sync_cuda_time,
)
ik = WrapBase(cfg)
@@ -583,8 +606,14 @@ class IKSolver(IKSolverConfig):
if update_reference:
self.reset_shape()
if self.use_cuda_graph and self._col is not None:
log_error("changing goal type, breaking previous cuda graph.")
self.reset_cuda_graph()
if is_cuda_graph_reset_available():
log_warn("changing goal type, breaking previous cuda graph.")
self.reset_cuda_graph()
else:
log_error(
"changing goal type, cuda graph reset not available, "
+ "consider updating to cuda >= 12.0"
)
self.solver.update_nproblems(self._solve_state.get_ik_batch_size())
self._goal_buffer.current_state = self.init_state.repeat_seeds(goal_pose.batch)

View File

@@ -1051,6 +1051,10 @@ class MotionGenStatus(Enum):
#: Finetune Trajectory optimization failed to find a solution.
FINETUNE_TRAJOPT_FAIL = "Finetune TrajOpt Fail"
#: Optimized dt is greater than the maximum allowed dt. Set maximum_trajectory_dt to a higher
#: value.
DT_EXCEPTION = "dt exceeded maximum allowed trajectory dt"
#: Invalid query was given. The start state is either out of joint limits, in collision with
#: world, or in self-collision. In the future, this will also check for reachability of goal
#: pose/ joint target in joint limits.
@@ -1068,6 +1072,8 @@ class MotionGenStatus(Enum):
#: Invalid start state was given. The start state is out of joint limits.
INVALID_START_STATE_JOINT_LIMITS = "Start state is out of joint limits"
#: Invalid partial pose target.
INVALID_PARTIAL_POSE_COST_METRIC = "Invalid partial pose metric"
#: Motion generation query was successful.
SUCCESS = "Success"
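A handling sketch for the new status (the MotionGen instance, states, and plan_single call are assumed from the existing API):

# Sketch: detect trajectories that exceeded the user-specified maximum dt.
result = motion_gen.plan_single(start_state, goal_pose, plan_config)
if result.status == MotionGenStatus.DT_EXCEPTION:
    print("Optimized dt exceeded maximum_trajectory_dt; consider increasing it.")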
@@ -1286,7 +1292,7 @@ class MotionGenResult:
interpolate_trajectory: bool = True,
interpolation_dt: Optional[float] = None,
interpolation_kind: InterpolateType = InterpolateType.LINEAR_CUDA,
create_interpolation_buffer: bool = False,
create_interpolation_buffer: bool = True,
):
"""Retime the optimized trajectory by a dilation factor.
@@ -1401,7 +1407,7 @@ class MotionGen(MotionGenConfig):
self._rollout_list = None
self._solver_rollout_list = None
self._pose_solver_rollout_list = None
self._pose_rollout_list = None
self._kin_list = None
self.update_batch_size(seeds=self.trajopt_seeds)
@@ -2202,7 +2208,7 @@ class MotionGen(MotionGenConfig):
log_warn("Partial position between start and goal is not equal.")
return False
rollouts = self.get_all_pose_solver_rollout_instances()
rollouts = self.get_all_pose_rollout_instances()
[
rollout.update_pose_cost_metric(metric)
for rollout in rollouts
@@ -2246,6 +2252,16 @@ class MotionGen(MotionGenConfig):
)
return self._pose_solver_rollout_list
def get_all_pose_rollout_instances(self) -> List[RolloutBase]:
"""Get all rollout instances used across components in motion generation."""
if self._pose_rollout_list is None:
self._pose_rollout_list = (
self.ik_solver.get_all_rollout_instances()
+ self.trajopt_solver.get_all_rollout_instances()
+ self.finetune_trajopt_solver.get_all_rollout_instances()
)
return self._pose_rollout_list
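This helper lets ``MotionGen.update_pose_cost_metric`` push a metric to the IK, trajopt, and finetune rollouts in one pass; a hedged sketch of the public call path follows. The ``PoseCostMetric`` field names and the weight ordering are assumptions for illustration, not confirmed by this diff.

```python
# Hedged sketch: propagate a partial-pose metric to all pose rollouts through MotionGen.
# reach_partial_pose / reach_vec_weight and the [rotation xyz, position xyz] weight
# layout are assumed names, used here only for illustration.
from curobo.wrap.reacher.motion_gen import PoseCostMetric

metric = PoseCostMetric(
    reach_partial_pose=True,
    reach_vec_weight=motion_gen.tensor_args.to_device([0.0, 0.0, 0.0, 1.0, 1.0, 1.0]),
)
motion_gen.update_pose_cost_metric(metric)  # fans out to IK, trajopt, and finetune rollouts
```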
def get_all_kinematics_instances(self) -> List[CudaRobotModel]:
"""Get all kinematics instances used across components in motion generation.
@@ -2271,7 +2287,7 @@ class MotionGen(MotionGenConfig):
voxelize_method: str = "ray",
world_objects_pose_offset: Optional[Pose] = None,
remove_obstacles_from_world_config: bool = False,
) -> None:
) -> bool:
"""Attach an object or objects from world to a robot's link.
This method assumes that the objects exist in the world configuration. If attaching
@@ -2318,7 +2334,13 @@ class MotionGen(MotionGenConfig):
sphere_tensor[:, 3] = -10.0
sph_list = []
if n_spheres == 0:
log_error("MG: No spheres found")
log_warn(
"MG: No spheres found, max_spheres: "
+ str(max_spheres)
+ " n_objects: "
+ str(len(object_names))
)
return False
for i, x in enumerate(object_names):
obs = self.world_model.get_obstacle(x)
if obs is None:
@@ -2351,6 +2373,8 @@ class MotionGen(MotionGenConfig):
self.attach_spheres_to_robot(sphere_tensor=sphere_tensor, link_name=link_name)
return True
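Since the method now returns ``bool`` instead of raising when no spheres fit, callers can react to a failed attachment. A minimal hedged sketch: ``"obstacle_box"`` is a placeholder name that must already exist in the world config, and ``motion_gen``/``start_state`` are assumed to be set up as in the sketch above.

```python
# Hedged sketch: attach_objects_to_robot now reports failure instead of calling log_error.
attached = motion_gen.attach_objects_to_robot(
    joint_state=start_state,
    object_names=["obstacle_box"],  # hypothetical obstacle already present in the world model
)
if not attached:
    # Not enough collision spheres on the attachment link; re-plan without the object.
    pass
```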
def attach_external_objects_to_robot(
self,
joint_state: JointState,
@@ -2360,7 +2384,7 @@ class MotionGen(MotionGenConfig):
sphere_fit_type: SphereFitType = SphereFitType.VOXEL_VOLUME_SAMPLE_SURFACE,
voxelize_method: str = "ray",
world_objects_pose_offset: Optional[Pose] = None,
) -> None:
) -> bool:
"""Attach external objects (not in world model) to a robot's link.
Args:
@@ -2401,7 +2425,13 @@ class MotionGen(MotionGenConfig):
sphere_tensor[:, 3] = -10.0
sph_list = []
if n_spheres == 0:
log_error("MG: No spheres found")
log_warn(
"MG: No spheres found, max_spheres: "
+ str(max_spheres)
+ " n_objects: "
+ str(len(object_names))
)
return False
for i, x in enumerate(object_names):
obs = external_objects[i]
sph = obs.get_bounding_spheres(
@@ -2423,6 +2453,7 @@ class MotionGen(MotionGenConfig):
sphere_tensor[: spheres.shape[0], :] = spheres.contiguous()
self.attach_spheres_to_robot(sphere_tensor=sphere_tensor, link_name=link_name)
return True
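The external-object variant gets the same ``bool`` return; a hedged sketch mirroring the new kinematics test later in this commit (object name, dimensions, and pose are placeholders).

```python
# Hedged sketch: attach an obstacle that is not in the world model, then check the result.
from curobo.geom.types import Cuboid

grasped_box = Cuboid(name="grasped_box", dims=[0.05, 0.05, 0.05], pose=[0.4, 0.0, 0.3, 1, 0, 0, 0])
ok = motion_gen.attach_external_objects_to_robot(
    joint_state=start_state,
    external_objects=[grasped_box],
)
if not ok:
    pass  # no spheres generated for the object; handle the failure explicitly
```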
def add_camera_frame(self, camera_observation: CameraObservation, obstacle_name: str):
"""Add camera frame to the world collision checker.
@@ -2654,7 +2685,8 @@ class MotionGen(MotionGenConfig):
the status of the start state.
"""
joint_position = start_state.position
if self.rollout_fn.cuda_graph_instance:
log_error("Cannot check start state as this rollout_fn is used by a CUDA graph.")
if len(joint_position.shape) == 1:
joint_position = joint_position.unsqueeze(0)
if len(joint_position.shape) > 2:
@@ -2941,7 +2973,7 @@ class MotionGen(MotionGenConfig):
result = MotionGenResult(
success=torch.as_tensor([False], device=self.tensor_args.device),
valid_query=valid_query,
status="Invalid Hold Partial Pose",
status=MotionGenStatus.INVALID_PARTIAL_POSE_COST_METRIC,
)
return result
self.update_batch_size(seeds=solve_state.num_trajopt_seeds, batch=solve_state.batch_size)
@@ -3487,7 +3519,14 @@ class MotionGen(MotionGenConfig):
result.success = traj_result.success
if plan_config.enable_finetune_trajopt and torch.count_nonzero(result.success) == 0:
result.status = MotionGenStatus.FINETUNE_TRAJOPT_FAIL
if (
traj_result.debug_info is not None
and "dt_exception" in traj_result.debug_info
and traj_result.debug_info["dt_exception"]
):
result.status = MotionGenStatus.DT_EXCEPTION
result.interpolated_plan = traj_result.interpolated_solution.trim_trajectory(
0, traj_result.path_buffer_last_tstep[0]
@@ -3681,6 +3720,7 @@ class MotionGen(MotionGenConfig):
finetune_time = 0
newton_iters = None
for k in range(plan_config.finetune_attempts):
scaled_dt = torch.clamp(
opt_dt
* plan_config.finetune_js_dt_scale
@@ -3703,7 +3743,6 @@ class MotionGen(MotionGenConfig):
finetune_time += traj_result.solve_time
if torch.count_nonzero(traj_result.success) > 0 or not self.optimize_dt:
break
seed_traj = traj_result.optimized_seeds.detach().clone()
newton_iters = 4
@@ -3714,6 +3753,13 @@ class MotionGen(MotionGenConfig):
result.debug_info["finetune_trajopt_result"] = traj_result
if torch.count_nonzero(traj_result.success) == 0:
result.status = MotionGenStatus.FINETUNE_TRAJOPT_FAIL
if (
traj_result.debug_info is not None
and "dt_exception" in traj_result.debug_info
and traj_result.debug_info["dt_exception"]
):
result.status = MotionGenStatus.DT_EXCEPTION
elif plan_config.enable_finetune_trajopt:
traj_result.success = traj_result.success[0:1]
result.solve_time += traj_result.solve_time + result.finetune_time


@@ -75,6 +75,9 @@ class MpcSolverConfig:
#: MPC Solver.
solver: WrapMpc
#: Rollout function for auxiliary rollouts.
rollout_fn: ArmReacher
#: World Collision Checker.
world_coll_checker: Optional[WorldCollision] = None
@@ -91,13 +94,13 @@ class MpcSolverConfig:
base_cfg: Optional[dict] = None,
tensor_args: TensorDeviceType = TensorDeviceType(),
compute_metrics: bool = True,
use_cuda_graph: Optional[bool] = None,
use_cuda_graph: bool = True,
particle_opt_iters: Optional[int] = None,
self_collision_check: bool = True,
collision_checker_type: Optional[CollisionCheckerType] = CollisionCheckerType.MESH,
use_es: Optional[bool] = None,
es_learning_rate: Optional[float] = 0.01,
use_cuda_graph_metrics: bool = False,
use_cuda_graph_metrics: bool = True,
store_rollouts: bool = True,
use_cuda_graph_full_step: bool = False,
sync_cuda_time: bool = True,
@@ -226,9 +229,32 @@ class MpcSolverConfig:
world_coll_checker=world_coll_checker,
tensor_args=tensor_args,
)
safety_cfg = ArmReacherConfig.from_dict(
robot_cfg,
config_data["model"],
config_data["cost"],
base_cfg["constraint"],
base_cfg["convergence"],
base_cfg["world_collision_checker_cfg"],
world_model,
world_coll_checker=world_coll_checker,
tensor_args=tensor_args,
)
aux_cfg = ArmReacherConfig.from_dict(
robot_cfg,
config_data["model"],
config_data["cost"],
base_cfg["constraint"],
base_cfg["convergence"],
base_cfg["world_collision_checker_cfg"],
world_model,
world_coll_checker=world_coll_checker,
tensor_args=tensor_args,
)
arm_rollout_mppi = ArmReacher(cfg)
arm_rollout_safety = ArmReacher(cfg)
arm_rollout_safety = ArmReacher(safety_cfg)
arm_rollout_aux = ArmReacher(aux_cfg)
config_data["mppi"]["store_rollouts"] = store_rollouts
if use_cuda_graph is not None:
config_data["mppi"]["use_cuda_graph"] = use_cuda_graph
@@ -285,6 +311,7 @@ class MpcSolverConfig:
tensor_args=tensor_args,
use_cuda_graph_full_step=use_cuda_graph_full_step,
world_coll_checker=world_coll_checker,
rollout_fn=arm_rollout_aux,
)
@@ -529,6 +556,7 @@ class MpcSolver(MpcSolverConfig):
is disabled.
"""
self.solver.safety_rollout.enable_cspace_cost(enable)
self.rollout_fn.enable_cspace_cost(enable)
for opt in self.solver.optimizers:
opt.rollout_fn.enable_cspace_cost(enable)
@@ -539,6 +567,7 @@ class MpcSolver(MpcSolverConfig):
enable: Enable or disable reaching pose cost. When False, pose cost is disabled.
"""
self.solver.safety_rollout.enable_pose_cost(enable)
self.rollout_fn.enable_pose_cost(enable)
for opt in self.solver.optimizers:
opt.rollout_fn.enable_pose_cost(enable)
@@ -588,18 +617,13 @@ class MpcSolver(MpcSolverConfig):
@property
def kinematics(self) -> CudaRobotModel:
"""Get kinematics instance of the robot."""
return self.solver.safety_rollout.dynamics_model.robot_model
return self.rollout_fn.dynamics_model.robot_model
@property
def world_collision(self) -> WorldCollision:
"""Get the world collision checker."""
return self.world_coll_checker
@property
def rollout_fn(self) -> ArmReacher:
"""Get the rollout function."""
return self.solver.safety_rollout
def _step_once(
self,
current_state: JointState,


@@ -54,7 +54,7 @@ from curobo.types.robot import JointState, RobotConfig
from curobo.types.tensor import T_BDOF, T_DOF, T_BValue_bool, T_BValue_float
from curobo.util.helpers import list_idx_if_not_none
from curobo.util.logger import log_error, log_info, log_warn
from curobo.util.torch_utils import get_torch_jit_decorator
from curobo.util.torch_utils import get_torch_jit_decorator, is_cuda_graph_reset_available
from curobo.util.trajectory import (
InterpolateType,
calculate_dt_no_clamp,
@@ -415,9 +415,10 @@ class TrajOptSolverConfig:
tensor_args=tensor_args,
)
safety_robot_model = robot_cfg.kinematics
safety_robot_cfg = RobotConfig(**vars(robot_cfg))
safety_robot_cfg.kinematics = safety_robot_model
# safety_robot_model = robot_cfg.kinematics
# safety_robot_cfg = RobotConfig(**vars(robot_cfg))
# safety_robot_cfg.kinematics = safety_robot_model
safety_robot_cfg = robot_cfg
safety_cfg = ArmReacherConfig.from_dict(
safety_robot_cfg,
config_data["model"],
@@ -429,6 +430,30 @@ class TrajOptSolverConfig:
world_coll_checker=world_coll_checker,
tensor_args=tensor_args,
)
aux_cfg = ArmReacherConfig.from_dict(
safety_robot_cfg,
config_data["model"],
config_data["cost"],
base_config_data["constraint"],
base_config_data["convergence"],
base_config_data["world_collision_checker_cfg"],
world_model,
world_coll_checker=world_coll_checker,
tensor_args=tensor_args,
)
interpolate_cfg = ArmReacherConfig.from_dict(
safety_robot_cfg,
config_data["model"],
config_data["cost"],
base_config_data["constraint"],
base_config_data["convergence"],
base_config_data["world_collision_checker_cfg"],
world_model,
world_coll_checker=world_coll_checker,
tensor_args=tensor_args,
)
arm_rollout_mppi = None
with profiler.record_function("trajopt_config/create_rollouts"):
if use_particle_opt:
@@ -437,8 +462,8 @@ class TrajOptSolverConfig:
arm_rollout_safety = ArmReacher(safety_cfg)
if aux_rollout is None:
aux_rollout = ArmReacher(safety_cfg)
interpolate_rollout = ArmReacher(safety_cfg)
aux_rollout = ArmReacher(aux_cfg)
interpolate_rollout = ArmReacher(interpolate_cfg)
if trajopt_dt is not None:
if arm_rollout_mppi is not None:
arm_rollout_mppi.update_traj_dt(dt=trajopt_dt)
@@ -482,8 +507,8 @@ class TrajOptSolverConfig:
cfg = WrapConfig(
safety_rollout=arm_rollout_safety,
optimizers=opt_list,
compute_metrics=True, # not evaluate_interpolated_trajectory,
use_cuda_graph_metrics=grad_config_data["lbfgs"]["use_cuda_graph"],
compute_metrics=True,
use_cuda_graph_metrics=use_cuda_graph,
sync_cuda_time=sync_cuda_time,
)
trajopt = WrapBase(cfg)
@@ -730,8 +755,11 @@ class TrajOptSolver(TrajOptSolverConfig):
if update_reference:
if self.use_cuda_graph and self._col is not None:
log_error("changing goal type, breaking previous cuda graph.")
self.reset_cuda_graph()
if is_cuda_graph_reset_available():
log_warn("changing goal type, breaking previous cuda graph")
self.reset_cuda_graph()
else:
log_error("changing goal type not supported in cuda graph mode")
self.solver.update_nproblems(self._solve_state.get_batch_size())
self._col = torch.arange(
@@ -1273,34 +1301,39 @@ class TrajOptSolver(TrajOptSolverConfig):
seed_traj: JointState,
num_seeds: int,
batch_mode: bool = False,
):
) -> TrajOptResult:
"""Get result from the optimization problem.
Args:
result: _description_
return_all_solutions: _description_
goal: _description_
seed_traj: _description_
num_seeds: _description_
batch_mode: _description_
Raises:
log_error: _description_
ValueError: _description_
result: Result of the optimization problem.
return_all_solutions: Return solutions for all seeds.
goal: Goal object containing convergence parameters.
seed_traj: Seed trajectory used for optimization.
num_seeds: Number of seeds used for optimization.
batch_mode: Batch mode for problems.
Returns:
_description_
TrajOptResult: Result of the trajectory optimization.
"""
st_time = time.time()
if self.trim_steps is not None:
result.action = result.action.trim_trajectory(self.trim_steps[0], self.trim_steps[1])
interpolated_trajs, last_tstep, opt_dt = self.get_interpolated_trajectory(result.action)
interpolated_trajs, last_tstep, opt_dt, buffer_change = self.get_interpolated_trajectory(
result.action
)
if self.sync_cuda_time:
torch.cuda.synchronize(device=self.tensor_args.device)
interpolation_time = time.time() - st_time
if self.evaluate_interpolated_trajectory:
with profiler.record_function("trajopt/evaluate_interpolated"):
if self.use_cuda_graph_metrics:
if self.use_cuda_graph_metrics and buffer_change:
if is_cuda_graph_reset_available():
self.interpolate_rollout.reset_cuda_graph()
buffer_change = False
else:
self.interpolate_rollout.break_cuda_graph()
if self.use_cuda_graph_metrics and not buffer_change:
metrics = self.interpolate_rollout.get_metrics_cuda_graph(interpolated_trajs)
else:
metrics = self.interpolate_rollout.get_metrics(interpolated_trajs)
@@ -1309,9 +1342,10 @@ class TrajOptSolver(TrajOptSolverConfig):
result.metrics.rotation_error = metrics.rotation_error
result.metrics.cspace_error = metrics.cspace_error
result.metrics.goalset_index = metrics.goalset_index
st_time = time.time()
if result.metrics.cspace_error is None and result.metrics.position_error is None:
raise log_error("convergence check requires either goal_pose or goal_state")
log_error("convergence check requires either goal_pose or goal_state")
success = jit_feasible_success(
result.metrics.feasible,
@@ -1334,7 +1368,7 @@ class TrajOptSolver(TrajOptSolverConfig):
elif result.metrics.cspace_error is not None:
converge = result.metrics.cspace_error[..., -1] <= self.cspace_threshold
else:
raise ValueError("convergence check requires either goal_pose or goal_state")
log_error("convergence check requires either goal_pose or goal_state")
success = torch.logical_and(feasible, converge)
if return_all_solutions:
@@ -1428,13 +1462,22 @@ class TrajOptSolver(TrajOptSolverConfig):
opt_dt_v = opt_dt.view(1, 1)
opt_solution = best_act_seq.scale_by_dt(self.solver_dt_tensor, opt_dt_v)
select_time = time.time() - st_time
debug_info = None
debug_info = {}
if self.store_debug_in_result:
debug_info = {
"solver": result.debug,
"interpolation_time": interpolation_time,
"select_time": select_time,
}
if not torch.count_nonzero(success) > 0:
max_dt = torch.max(opt_dt).item()
if max_dt >= self.traj_evaluator_config.max_dt:
log_info(
"Optimized dt is above maximum_trajectory_dt, consider \
increasing max_trajectory_dt"
)
debug_info["dt_exception"] = True
traj_result = TrajOptResult(
success=success,
goal=goal,
@@ -1706,7 +1749,7 @@ class TrajOptSolver(TrajOptSolverConfig):
(b, self.interpolation_steps, dof), self.tensor_args
)
self._interpolated_traj_buffer.joint_names = self.rollout_fn.joint_names
interpolation_buffer_reallocated = False
state, last_tstep, opt_dt = get_batch_interpolated_trajectory(
traj_state,
self.solver_dt_tensor,
@@ -1721,7 +1764,13 @@ class TrajOptSolver(TrajOptSolverConfig):
max_dt=self.traj_evaluator_config.max_dt,
optimize_dt=self.optimize_dt,
)
return state, last_tstep, opt_dt
if state.shape != self._interpolated_traj_buffer.shape:
interpolation_buffer_reallocated = True
if is_cuda_graph_reset_available():
log_info("Interpolated trajectory buffer was recreated, reinitializing cuda graph")
self._interpolated_traj_buffer = state.clone()
return state, last_tstep, opt_dt, interpolation_buffer_reallocated
def calculate_trajectory_dt(
self,


@@ -23,7 +23,7 @@ from curobo.opt.opt_base import Optimizer
from curobo.opt.particle.particle_opt_base import ParticleOptBase
from curobo.rollout.rollout_base import Goal, RolloutBase, RolloutMetrics
from curobo.types.robot import State
from curobo.util.logger import log_info
from curobo.util.logger import log_info, log_warn
@dataclass
@@ -34,6 +34,13 @@ class WrapConfig:
use_cuda_graph_metrics: bool
sync_cuda_time: bool
def __post_init__(self):
if self.use_cuda_graph_metrics:
log_info(
"Using cuda graph for metrics is experimental. If you encounter CUDA errors,\
please disable it."
)
@dataclass
class WrapResult:
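The warning above marks CUDA-graph metrics as experimental; a hedged sketch of opting out at construction time, using the MPC loader arguments exercised by the new ``tests/cuda_graph_test.py``:

```python
# Hedged sketch: disable the experimental cuda-graph metrics path if it triggers CUDA errors.
from curobo.wrap.reacher.mpc import MpcSolver, MpcSolverConfig

mpc_config = MpcSolverConfig.load_from_robot_config(
    "ur5e.yml",
    "collision_test.yml",
    use_cuda_graph=True,
    use_cuda_graph_metrics=False,  # opt out of the experimental metrics capture
    use_cuda_graph_full_step=False,
)
mpc = MpcSolver(mpc_config)
```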
@@ -62,7 +69,6 @@ class WrapBase(WrapConfig):
def get_metrics(self, state: State, use_cuda_graph: bool = False) -> RolloutMetrics:
if use_cuda_graph:
log_info("Using cuda graph")
return self.safety_rollout.get_metrics_cuda_graph(state)
return self.safety_rollout.get_metrics(state)
@@ -135,7 +141,7 @@ class WrapBase(WrapConfig):
metrics = None
filtered_state = self.safety_rollout.filter_robot_state(goal.current_state)
goal.current_state = filtered_state
goal.current_state.copy_(filtered_state)
self.update_params(goal)
if seed is None:
seed = self.get_init_act()


@@ -46,9 +46,7 @@ class WrapMpc(WrapBase):
metrics = None
start_time = time.time()
# print("i:",goal.current_state.position)
filtered_state = self.safety_rollout.filter_robot_state(goal.current_state)
# print("f:", filtered_state.position)
goal.current_state.copy_(filtered_state)
self.update_params(goal)


@@ -14,3 +14,4 @@ import os
os.environ["CUROBO_TORCH_COMPILE_DISABLE"] = str(1)
os.environ["CUROBO_USE_LRU_CACHE"] = str(1)
os.environ["CUROBO_TORCH_CUDA_GRAPH_RESET"] = str(0)

tests/cuda_graph_test.py (new file)

@@ -0,0 +1,75 @@
#
# Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
#
# CuRobo
from curobo.types.base import TensorDeviceType
from curobo.types.math import Pose
from curobo.types.robot import JointState, RobotConfig
from curobo.util_file import get_robot_configs_path, join_path, load_yaml
from curobo.wrap.reacher.motion_gen import MotionGen, MotionGenConfig
from curobo.wrap.reacher.mpc import MpcSolver, MpcSolverConfig
def test_motion_gen_mpc():
tensor_args = TensorDeviceType()
world_file = "collision_test.yml"
robot_file = "ur5e.yml"
motion_gen_config = MotionGenConfig.load_from_robot_config(
robot_file,
world_file,
tensor_args,
use_cuda_graph=True,
)
motion_gen = MotionGen(motion_gen_config)
robot_cfg = load_yaml(join_path(get_robot_configs_path(), robot_file))["robot_cfg"]
robot_cfg = RobotConfig.from_dict(robot_cfg, tensor_args)
retract_cfg = motion_gen.get_retract_config()
state = motion_gen.rollout_fn.compute_kinematics(
JointState.from_position(retract_cfg.view(1, -1))
)
motion_gen.warmup(warmup_js_trajopt=False)
retract_pose = Pose(state.ee_pos_seq.squeeze(), quaternion=state.ee_quat_seq.squeeze())
start_state = JointState.from_position(retract_cfg.view(1, -1) + 0.5)
tensor_args = TensorDeviceType()
robot_cfg = load_yaml(join_path(get_robot_configs_path(), robot_file))["robot_cfg"]
robot_cfg = RobotConfig.from_dict(robot_cfg, tensor_args)
world_file = "collision_test.yml"
if True:
new_motion_gen_config = MotionGenConfig.load_from_robot_config(
robot_file,
world_file,
tensor_args,
use_cuda_graph=True,
)
new_motion_gen = MotionGen(new_motion_gen_config)
new_motion_gen.warmup()
if True:
mpc_config = MpcSolverConfig.load_from_robot_config(
robot_file,
world_file,
use_cuda_graph=True,
use_cuda_graph_metrics=False,
use_cuda_graph_full_step=False,
)
mpc = MpcSolver(mpc_config)
retract_cfg = robot_cfg.cspace.retract_config.view(1, -1)
result = motion_gen.plan(start_state, retract_pose, enable_graph=False)
assert result.success
result = new_motion_gen.plan(start_state, retract_pose, enable_graph=False)
assert result.success


@@ -18,14 +18,18 @@ from curobo.cuda_robot_model.cuda_robot_generator import CudaRobotGeneratorConfi
from curobo.cuda_robot_model.cuda_robot_model import CudaRobotModel, CudaRobotModelConfig
from curobo.cuda_robot_model.types import CSpaceConfig
from curobo.geom.transform import matrix_to_quaternion, quaternion_to_matrix
from curobo.geom.types import Cuboid
from curobo.types.base import TensorDeviceType
from curobo.types.math import quat_multiply
from curobo.types.state import JointState
from curobo.util_file import get_robot_configs_path, join_path, load_yaml
@pytest.fixture(scope="module")
def cfg():
cfg = CudaRobotModelConfig.from_robot_yaml_file("franka.yml", "panda_hand")
robot_data = load_yaml(join_path(get_robot_configs_path(), "franka.yml"))
robot_data["robot_cfg"]["kinematics"]["extra_collision_spheres"] = {"attached_object": 100}
cfg = CudaRobotModelConfig.from_robot_yaml_file(robot_data, "panda_hand")
return cfg
@@ -121,11 +125,11 @@ def test_franka_attach_object_kinematics(cfg):
attached_spheres = state.link_spheres_tensor[:, sph_idx, :]
# check if all values are equal to position and quaternion
assert torch.norm(attached_spheres[:, :, 0] - 0.0829) < 0.1
assert torch.norm(attached_spheres[:, :, 0] - attached_spheres[0, 0, 0]) < 0.1
assert torch.norm(attached_spheres[:, :, 1]) < 0.0001
# attach an object:
new_object = torch.zeros((4, 4), **(tensor_args.as_torch_dict()))
new_object = torch.zeros((100, 4), **(tensor_args.as_torch_dict()))
new_object[:2, 3] = 0.01
new_object[:2, 1] = 0.1
# print(attached_spheres[:, sph_idx,:].shape)
@@ -145,6 +149,42 @@ def test_franka_attach_object_kinematics(cfg):
robot_model.kinematics_config.detach_object()
def test_franka_attach_external_object_kinematics(cfg):
tensor_args = TensorDeviceType()
robot_model = CudaRobotModel(cfg)
q_test = torch.as_tensor(
[0.0, -1.2, 0.0, -2.0, 0.0, 1.0, 0.0], **(tensor_args.as_torch_dict())
).view(1, -1)
sph_idx = robot_model.kinematics_config.get_sphere_index_from_link_name("attached_object")
state = robot_model.get_state(q_test.clone())
attached_spheres = state.link_spheres_tensor[:, sph_idx, :]
# check if all values are equal to position and quaternion
sphere = attached_spheres[0, 0, 0].item()
assert torch.norm(attached_spheres[:, :, 0] - sphere) < 0.01
external_object = Cuboid(
name="external_object", dims=[0.1, 0.1, 0.1], pose=[0, 0, 0, 1, 0, 0, 0]
)
robot_model.attach_external_objects_to_robot(
JointState.from_position(q_test.clone()), [external_object]
)
state = robot_model.get_state(q_test.clone())
attached_spheres = state.link_spheres_tensor[:, sph_idx, :]
assert torch.norm(attached_spheres[:, :, 0] - sphere) > 0.1
robot_model.kinematics_config.detach_object()
state = robot_model.get_state(q_test.clone())
attached_spheres = state.link_spheres_tensor[:, sph_idx, :]
assert torch.norm(attached_spheres[:, :, 0] - sphere) < 0.01
assert torch.norm(attached_spheres[:, :, 1]) < 0.0001
robot_model.kinematics_config.detach_object()
def test_locked_joints_kinematics():
tensor_args = TensorDeviceType()


@@ -21,11 +21,12 @@ from curobo.wrap.reacher.motion_gen import (
MotionGen,
MotionGenConfig,
MotionGenPlanConfig,
MotionGenStatus,
PoseCostMetric,
)
@pytest.fixture(scope="module")
@pytest.fixture(scope="function")
def motion_gen(request):
tensor_args = TensorDeviceType()
world_file = "collision_table.yml"
@@ -108,7 +109,7 @@ def test_reach_only_position(motion_gen):
)
result = motion_gen.plan_single(start_state, goal_pose, m_config.clone())
assert result.status != MotionGenStatus.INVALID_PARTIAL_POSE_COST_METRIC
assert torch.count_nonzero(result.success) == 1
reached_state = result.optimized_plan[-1]


@@ -18,6 +18,7 @@ from curobo.geom.types import WorldConfig
from curobo.types.base import TensorDeviceType
from curobo.types.math import Pose
from curobo.types.robot import JointState, RobotConfig
from curobo.util.torch_utils import is_cuda_graph_reset_available
from curobo.util.trajectory import InterpolateType
from curobo.util_file import get_robot_configs_path, get_world_configs_path, join_path, load_yaml
from curobo.wrap.reacher.motion_gen import (
@@ -43,6 +44,21 @@ def motion_gen():
return motion_gen_instance
@pytest.fixture(scope="module")
def motion_gen_cg():
tensor_args = TensorDeviceType()
world_file = "collision_table.yml"
robot_file = "franka.yml"
motion_gen_config = MotionGenConfig.load_from_robot_config(
robot_file,
world_file,
tensor_args,
use_cuda_graph=True,
)
motion_gen_instance = MotionGen(motion_gen_config)
return motion_gen_instance
@pytest.fixture(scope="module")
def motion_gen_batch_env():
tensor_args = TensorDeviceType()
@@ -63,13 +79,53 @@ def motion_gen_batch_env():
return motion_gen_instance
@pytest.fixture(scope="module")
def motion_gen_batch_env_cg():
tensor_args = TensorDeviceType()
world_files = ["collision_table.yml", "collision_test.yml"]
world_cfg = [
WorldConfig.from_dict(load_yaml(join_path(get_world_configs_path(), world_file)))
for world_file in world_files
]
robot_file = "franka.yml"
motion_gen_config = MotionGenConfig.load_from_robot_config(
robot_file,
world_cfg,
tensor_args,
use_cuda_graph=True,
)
motion_gen_instance = MotionGen(motion_gen_config)
return motion_gen_instance
@pytest.mark.parametrize(
"motion_gen_str,interpolation",
[
("motion_gen", InterpolateType.LINEAR),
("motion_gen", InterpolateType.CUBIC),
# ("motion_gen", InterpolateType.KUNZ_STILMAN_OPTIMAL),
("motion_gen", InterpolateType.LINEAR_CUDA),
pytest.param(
"motion_gen_cg",
InterpolateType.LINEAR,
marks=pytest.mark.skipif(
not is_cuda_graph_reset_available(), reason="CUDAGraph.reset() not available"
),
),
pytest.param(
"motion_gen_cg",
InterpolateType.CUBIC,
marks=pytest.mark.skipif(
not is_cuda_graph_reset_available(), reason="CUDAGraph.reset() not available"
),
),
pytest.param(
"motion_gen_cg",
InterpolateType.LINEAR_CUDA,
marks=pytest.mark.skipif(
not is_cuda_graph_reset_available(), reason="CUDAGraph.reset() not available"
),
),
],
)
def test_motion_gen_single(motion_gen_str, interpolation, request):
@@ -95,7 +151,21 @@ def test_motion_gen_single(motion_gen_str, interpolation, request):
assert torch.norm(goal_pose.position - reached_state.ee_pos_seq) < 0.005
def test_motion_gen_goalset(motion_gen):
@pytest.mark.parametrize(
"motion_gen_str",
[
("motion_gen"),
pytest.param(
"motion_gen_cg",
marks=pytest.mark.skipif(
not is_cuda_graph_reset_available(), reason="CUDAGraph.reset() not available"
),
),
],
)
def test_motion_gen_goalset(motion_gen_str, request):
motion_gen = request.getfixturevalue(motion_gen_str)
motion_gen.reset()
retract_cfg = motion_gen.get_retract_config()
@@ -203,8 +273,28 @@ def test_motion_gen_batch(motion_gen):
[
("motion_gen", InterpolateType.LINEAR),
("motion_gen", InterpolateType.CUBIC),
# ("motion_gen", InterpolateType.KUNZ_STILMAN_OPTIMAL),
("motion_gen", InterpolateType.LINEAR_CUDA),
pytest.param(
"motion_gen_cg",
InterpolateType.LINEAR,
marks=pytest.mark.skipif(
not is_cuda_graph_reset_available(), reason="CUDAGraph.reset() not available"
),
),
pytest.param(
"motion_gen_cg",
InterpolateType.CUBIC,
marks=pytest.mark.skipif(
not is_cuda_graph_reset_available(), reason="CUDAGraph.reset() not available"
),
),
pytest.param(
"motion_gen_cg",
InterpolateType.LINEAR_CUDA,
marks=pytest.mark.skipif(
not is_cuda_graph_reset_available(), reason="CUDAGraph.reset() not available"
),
),
],
)
def test_motion_gen_batch_graph(motion_gen_str: str, interpolation: InterpolateType, request):


@@ -14,12 +14,9 @@ import pytest
import torch
# CuRobo
from curobo.geom.types import WorldConfig
from curobo.types.base import TensorDeviceType
from curobo.types.math import Pose
from curobo.types.robot import JointState, RobotConfig
from curobo.util.trajectory import InterpolateType
from curobo.util_file import get_robot_configs_path, get_world_configs_path, join_path, load_yaml
from curobo.types.robot import JointState
from curobo.wrap.reacher.motion_gen import MotionGen, MotionGenConfig, MotionGenPlanConfig
@@ -58,6 +55,24 @@ def motion_gen_ur5e():
return motion_gen_instance
@pytest.fixture(scope="module")
def motion_gen_ur5e_small_interpolation():
tensor_args = TensorDeviceType()
world_file = "collision_table.yml"
robot_file = "ur5e.yml"
motion_gen_config = MotionGenConfig.load_from_robot_config(
robot_file,
world_file,
tensor_args,
interpolation_steps=10,
interpolation_dt=0.05,
)
motion_gen_instance = MotionGen(motion_gen_config)
motion_gen_instance.warmup(warmup_js_trajopt=False, enable_graph=False)
return motion_gen_instance
@pytest.mark.parametrize(
"motion_gen",
[
@@ -164,6 +179,7 @@ def test_pose_sequence_speed_ur5e_scale(velocity_scale, acceleration_scale):
("motion_gen_ur5e", 0.15),
("motion_gen_ur5e", 0.1),
("motion_gen_ur5e", 0.001),
("motion_gen_ur5e_small_interpolation", 0.01),
],
)
def test_pose_sequence_speed_ur5e_time_dilation(motion_gen_str, time_dilation_factor, request):
@@ -201,6 +217,7 @@ def test_pose_sequence_speed_ur5e_time_dilation(motion_gen_str, time_dilation_fa
)
if result.success.item():
plan = result.get_interpolated_plan()
augmented_js = motion_gen.get_full_js(plan)
trajectory = trajectory.stack(plan.clone())
motion_time += result.motion_time
else:


@@ -35,7 +35,7 @@ def test_self_collision_experimental(batch_size, horizon):
tensor_args = TensorDeviceType()
robot_cfg = load_yaml(join_path(get_robot_configs_path(), robot_file))["robot_cfg"]
robot_cfg["kinematics"]["debug"] = {"self_collision_experimental": True}
robot_cfg["kinematics"]["debug"] = {"self_collision_experimental": False}
robot_cfg = RobotConfig.from_dict(robot_cfg, tensor_args)
kinematics = CudaRobotModel(robot_cfg.kinematics)
self_collision_data = kinematics.get_self_collision_config()
@@ -44,6 +44,7 @@ def test_self_collision_experimental(batch_size, horizon):
tensor_args=tensor_args
)
cost_fn = SelfCollisionCost(self_collision_config)
cost_fn.self_collision_kin_config.experimental_kernel = True
b = batch_size
h = horizon
@@ -74,6 +75,8 @@ def test_self_collision_franka():
tensor_args = TensorDeviceType()
robot_cfg = load_yaml(join_path(get_robot_configs_path(), "franka.yml"))["robot_cfg"]
robot_cfg["kinematics"]["debug"] = {"self_collision_experimental": False}
robot_cfg = RobotConfig.from_dict(robot_cfg, tensor_args)
robot_cfg.kinematics.self_collision_config.experimental_kernel = True
kinematics = CudaRobotModel(robot_cfg.kinematics)
@@ -83,6 +86,7 @@ def test_self_collision_franka():
tensor_args=tensor_args
)
cost_fn = SelfCollisionCost(self_collision_config)
cost_fn.self_collision_kin_config.experimental_kernel = True
b = 10
h = 1

tests/xrdf_test.py (new file)

@@ -0,0 +1,33 @@
#
# Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.
#
# CuRobo
from curobo.cuda_robot_model.cuda_robot_model import CudaRobotModel, CudaRobotModelConfig
from curobo.cuda_robot_model.util import load_robot_yaml
from curobo.types.file_path import ContentPath
def test_xrdf_kinematics():
robot_file = "ur10e.xrdf"
urdf_file = "robot/ur_description/ur10e.urdf"
content_path = ContentPath(robot_xrdf_file=robot_file, robot_urdf_file=urdf_file)
robot_data = load_robot_yaml(content_path)
robot_data["robot_cfg"]["kinematics"]["ee_link"] = "wrist_3_link"
cfg = CudaRobotModelConfig.from_data_dict(robot_data["robot_cfg"]["kinematics"])
robot = CudaRobotModel(cfg)
q_test = robot.tensor_args.to_device([0.0, 0.0, 0.0, 0.0, 0.0, 0.0]).view(1, -1)
kin_pose = robot.get_state(q_test)
expected_position = robot.tensor_args.to_device([1.1842, 0.2907, 0.0609]).view(1, -1)
error = kin_pose.ee_pose.position - expected_position
assert error.norm() < 0.01