Motion Planning API

[Image: motion_planning_api_tutorial_robot_move_arm_1st.png — the robot moving its arm to the first pose goal in RViz]

In MoveIt, motion planners are loaded using a plugin infrastructure. This allows MoveIt to load motion planners at runtime. In this example, we will run the C++ code required to do so.

Getting Started

If you haven't already done so, make sure you've completed the steps in Getting Started.

Running the Demo

Open two shells. In the first shell, start RViz and wait for everything to finish loading:

ros2 launch moveit2_tutorials move_group.launch.py

In the second shell, run the launch file:

ros2 launch moveit2_tutorials motion_planning_api_tutorial.launch.py

Note: This tutorial uses the RvizVisualToolsGui panel to step through the demo. To add this panel to RViz, follow the instructions in the Visualization Tutorial.

After a short moment, the RViz window should appear and look similar to the one at the top of this page. To progress through each demo step, either press the Next button in the RvizVisualToolsGui panel at the bottom of the screen, or select the Key Tool in the Tools panel at the top of the screen and then press N on your keyboard while RViz is focused.

Expected Output

In RViz, we should eventually be able to see four trajectories being replayed:

  1. The robot moves its arm to the first pose goal,

  2. The robot moves its arm to the joint goal,

  3. The robot moves its arm back to the original pose goal,

  4. The robot moves its arm to a new pose goal while keeping the end-effector level.

The Entire Code

The entire code can be seen here in the moveit_tutorials GitHub project.

Start
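
The snippets in this tutorial refer to a node handle called motion_planning_api_tutorial_node and a LOGGER for console output. The exact setup lives in the full source linked above; as a minimal sketch, they could be created like this (the spin thread is only there so ROS callbacks, such as the RvizVisualToolsGui remote control, keep being processed while the demo runs):

/* Inside main(int argc, char** argv) */
rclcpp::init(argc, argv);
rclcpp::NodeOptions node_options;
/* Automatically declare parameters passed in via the launch file (e.g. ompl.planning_plugins) */
node_options.automatically_declare_parameters_from_overrides(true);
auto motion_planning_api_tutorial_node =
    rclcpp::Node::make_shared("motion_planning_api_tutorial", node_options);
const rclcpp::Logger LOGGER = motion_planning_api_tutorial_node->get_logger();

/* Spin the node in a background thread so callbacks are serviced while the demo steps through */
rclcpp::executors::SingleThreadedExecutor executor;
executor.add_node(motion_planning_api_tutorial_node);
std::thread spin_thread([&executor]() { executor.spin(); });
spin_thread.detach();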

Setting up to start using a planner is pretty easy. Planners are set up as plugins in MoveIt and you can use the ROS pluginlib interface to load any planner that you want to use. Before we can load the planner, we need two objects, a RobotModel and a PlanningScene. We will start by instantiating a RobotModelLoader object, which will look up the robot description on the ROS parameter server and construct a RobotModel for us to use.

const std::string PLANNING_GROUP = "panda_arm";
robot_model_loader::RobotModelLoader robot_model_loader(motion_planning_api_tutorial_node, "robot_description");
const moveit::core::RobotModelPtr& robot_model = robot_model_loader.getModel();
/* Create a RobotState and JointModelGroup to keep track of the current robot pose and planning group*/
moveit::core::RobotStatePtr robot_state(new moveit::core::RobotState(robot_model));
const moveit::core::JointModelGroup* joint_model_group = robot_state->getJointModelGroup(PLANNING_GROUP);

Using the RobotModel, we can construct a PlanningScene that maintains the state of the world (including the robot).

planning_scene::PlanningScenePtr planning_scene(new planning_scene::PlanningScene(robot_model));

Configure a valid robot state

planning_scene->getCurrentStateNonConst().setToDefaultValues(joint_model_group, "ready");

We will now construct a loader to load a planner, by name. Note that we are using the ROS pluginlib library here.

std::unique_ptr<pluginlib::ClassLoader<planning_interface::PlannerManager>> planner_plugin_loader;
planning_interface::PlannerManagerPtr planner_instance;
std::vector<std::string> planner_plugin_names;

We will get the names of the planning plugins we want to load from the ROS parameter server, and then load the first one, making sure to catch all exceptions.

if (!motion_planning_api_tutorial_node->get_parameter("ompl.planning_plugins", planner_plugin_names))
  RCLCPP_FATAL(LOGGER, "Could not find planner plugin names");
try
{
  planner_plugin_loader.reset(new pluginlib::ClassLoader<planning_interface::PlannerManager>(
      "moveit_core", "planning_interface::PlannerManager"));
}
catch (pluginlib::PluginlibException& ex)
{
  RCLCPP_FATAL(LOGGER, "Exception while creating planning plugin loader %s", ex.what());
}

if (planner_plugin_names.empty())
{
  RCLCPP_ERROR(LOGGER,
               "No planner plugins defined. Please make sure that the planning_plugins parameter is not empty.");
  return -1;
}

const auto& planner_name = planner_plugin_names.at(0);
try
{
  planner_instance.reset(planner_plugin_loader->createUnmanagedInstance(planner_name));
  if (!planner_instance->initialize(robot_model, motion_planning_api_tutorial_node,
                                    motion_planning_api_tutorial_node->get_namespace()))
    RCLCPP_FATAL(LOGGER, "Could not initialize planner instance");
  RCLCPP_INFO(LOGGER, "Using planning interface '%s'", planner_instance->getDescription().c_str());
}
catch (pluginlib::PluginlibException& ex)
{
  const std::vector<std::string>& classes = planner_plugin_loader->getDeclaredClasses();
  std::stringstream ss;
  for (const auto& cls : classes)
    ss << cls << " ";
  RCLCPP_ERROR(LOGGER, "Exception while loading planner '%s': %s\nAvailable plugins: %s", planner_name.c_str(),
               ex.what(), ss.str().c_str());
}

moveit::planning_interface::MoveGroupInterface move_group(motion_planning_api_tutorial_node, PLANNING_GROUP);

Visualization

The MoveItVisualTools package provides many capabilities for visualizing objects, robots, and trajectories in RViz, as well as debugging tools such as step-by-step introspection of a script.

namespace rvt = rviz_visual_tools;
moveit_visual_tools::MoveItVisualTools visual_tools(motion_planning_api_tutorial_node, "panda_link0",
                                                    "move_group_tutorial", move_group.getRobotModel());
visual_tools.enableBatchPublishing();
visual_tools.deleteAllMarkers();  // clear all old markers
visual_tools.trigger();

/* Remote control is an introspection tool that allows users to step through a high level script
   via buttons and keyboard shortcuts in RViz */
visual_tools.loadRemoteControl();

/* RViz provides many types of markers, in this demo we will use text, cylinders, and spheres*/
Eigen::Isometry3d text_pose = Eigen::Isometry3d::Identity();
text_pose.translation().z() = 1.75;
visual_tools.publishText(text_pose, "Motion Planning API Demo", rvt::WHITE, rvt::XLARGE);

/* Batch publishing is used to reduce the number of messages being sent to RViz for large visualizations */
visual_tools.trigger();

/* We can also use visual_tools to wait for user input */
visual_tools.prompt("Press 'next' in the RvizVisualToolsGui window to start the demo");

Pose Goal

We will now create a motion plan request for the arm of the Panda specifying the desired pose of the end-effector as input.

visual_tools.trigger();
planning_interface::MotionPlanRequest req;
planning_interface::MotionPlanResponse res;
geometry_msgs::msg::PoseStamped pose;
pose.header.frame_id = "panda_link0";
pose.pose.position.x = 0.3;
pose.pose.position.y = 0.4;
pose.pose.position.z = 0.75;
pose.pose.orientation.w = 1.0;

A tolerance of 0.01 m is specified in position and 0.01 radians in orientation

std::vector<double> tolerance_pose(3, 0.01);
std::vector<double> tolerance_angle(3, 0.01);

We will create the request as a constraint using a helper function available from the kinematic_constraints package.

moveit_msgs::msg::Constraints pose_goal =
    kinematic_constraints::constructGoalConstraints("panda_link8", pose, tolerance_pose, tolerance_angle);

req.group_name = PLANNING_GROUP;
req.goal_constraints.push_back(pose_goal);

Define workspace bounds

req.workspace_parameters.min_corner.x = req.workspace_parameters.min_corner.y =
    req.workspace_parameters.min_corner.z = -5.0;
req.workspace_parameters.max_corner.x = req.workspace_parameters.max_corner.y =
    req.workspace_parameters.max_corner.z = 5.0;

We now construct a planning context that encapsulates the scene, the request, and the response. We call the planner using this planning context.

planning_interface::PlanningContextPtr context =
    planner_instance->getPlanningContext(planning_scene, req, res.error_code);

if (!context)
{
  RCLCPP_ERROR(LOGGER, "Failed to create planning context");
  return -1;
}
context->solve(res);
if (res.error_code.val != res.error_code.SUCCESS)
{
  RCLCPP_ERROR(LOGGER, "Could not compute plan successfully");
  return -1;
}

Visualize the result

std::shared_ptr<rclcpp::Publisher<moveit_msgs::msg::DisplayTrajectory>> display_publisher =
    motion_planning_api_tutorial_node->create_publisher<moveit_msgs::msg::DisplayTrajectory>("/display_planned_path",
                                                                                             1);
moveit_msgs::msg::DisplayTrajectory display_trajectory;

/* Visualize the trajectory */
moveit_msgs::msg::MotionPlanResponse response;
res.getMessage(response);

display_trajectory.trajectory_start = response.trajectory_start;
display_trajectory.trajectory.push_back(response.trajectory);
visual_tools.publishTrajectoryLine(display_trajectory.trajectory.back(), joint_model_group);
visual_tools.trigger();
display_publisher->publish(display_trajectory);

/* Set the state in the planning scene to the final state of the last plan */
robot_state->setJointGroupPositions(joint_model_group, response.trajectory.joint_trajectory.points.back().positions);
planning_scene->setCurrentState(*robot_state.get());

Display the goal state

visual_tools.publishAxisLabeled(pose.pose, "goal_1");
visual_tools.publishText(text_pose, "Pose Goal (1)", rvt::WHITE, rvt::XLARGE);
visual_tools.trigger();

/* We can also use visual_tools to wait for user input */
visual_tools.prompt("Press 'next' in the RvizVisualToolsGui window to continue the demo");

Joint Space Goals

Now, set up a joint space goal

moveit::core::RobotState goal_state(robot_model);
std::vector<double> joint_values = { -1.0, 0.7, 0.7, -1.5, -0.7, 2.0, 0.0 };
goal_state.setJointGroupPositions(joint_model_group, joint_values);
moveit_msgs::msg::Constraints joint_goal =
    kinematic_constraints::constructGoalConstraints(goal_state, joint_model_group);
req.goal_constraints.clear();
req.goal_constraints.push_back(joint_goal);

Call the planner and visualize the trajectory

/* Re-construct the planning context */
context = planner_instance->getPlanningContext(planning_scene, req, res.error_code);
/* Call the Planner */
context->solve(res);
/* Check that the planning was successful */
if (res.error_code.val != res.error_code.SUCCESS)
{
  RCLCPP_ERROR(LOGGER, "Could not compute plan successfully");
  return -1;
}
/* Visualize the trajectory */
res.getMessage(response);
display_trajectory.trajectory.push_back(response.trajectory);

/* Now you should see two planned trajectories in series*/
visual_tools.publishTrajectoryLine(display_trajectory.trajectory.back(), joint_model_group);
visual_tools.trigger();
display_publisher->publish(display_trajectory);

/* We will add more goals. But first, set the state in the planning
   scene to the final state of the last plan */
robot_state->setJointGroupPositions(joint_model_group, response.trajectory.joint_trajectory.points.back().positions);
planning_scene->setCurrentState(*robot_state.get());

Display the goal state

visual_tools.publishAxisLabeled(pose.pose, "goal_2");
visual_tools.publishText(text_pose, "Joint Space Goal (2)", rvt::WHITE, rvt::XLARGE);
visual_tools.trigger();

/* Wait for user input */
visual_tools.prompt("Press 'next' in the RvizVisualToolsGui window to continue the demo");

/* Now, we go back to the first goal to prepare for orientation constrained planning */
req.goal_constraints.clear();
req.goal_constraints.push_back(pose_goal);
context = planner_instance->getPlanningContext(planning_scene, req, res.error_code);
context->solve(res);
res.getMessage(response);

display_trajectory.trajectory.push_back(response.trajectory);
visual_tools.publishTrajectoryLine(display_trajectory.trajectory.back(), joint_model_group);
visual_tools.trigger();
display_publisher->publish(display_trajectory);

/* Set the state in the planning scene to the final state of the last plan */
robot_state->setJointGroupPositions(joint_model_group, response.trajectory.joint_trajectory.points.back().positions);
planning_scene->setCurrentState(*robot_state.get());

Display the goal state

visual_tools.trigger();

/* Wait for user input */
visual_tools.prompt("Press 'next' in the RvizVisualToolsGui window to continue the demo");

Adding Path Constraints

Let’s add a new pose goal again. This time we will also add a path constraint to the motion.

/* Let's create a new pose goal */

pose.pose.position.x = 0.32;
pose.pose.position.y = -0.25;
pose.pose.position.z = 0.65;
pose.pose.orientation.w = 1.0;
moveit_msgs::msg::Constraints pose_goal_2 =
    kinematic_constraints::constructGoalConstraints("panda_link8", pose, tolerance_pose, tolerance_angle);

/* Now, let's try to move to this new pose goal*/
req.goal_constraints.clear();
req.goal_constraints.push_back(pose_goal_2);

/* But, let's impose a path constraint on the motion.
   Here, we are asking for the end-effector to stay level*/
geometry_msgs::msg::QuaternionStamped quaternion;
quaternion.header.frame_id = "panda_link0";
req.path_constraints = kinematic_constraints::constructGoalConstraints("panda_link8", quaternion);

Imposing path constraints requires the planner to reason in the space of possible positions of the end-effector (the workspace of the robot). Because of this, we need to specify a bound for the allowed planning volume as well. Note: a default bound is automatically filled by the WorkspaceBounds request adapter (part of the OMPL pipeline, but that is not being used in this example). We use a bound that definitely includes the reachable space for the arm. This is fine because sampling is not done in this volume when planning for the arm; the bounds are only used to determine whether the sampled configurations are valid.

req.workspace_parameters.min_corner.x = req.workspace_parameters.min_corner.y =
    req.workspace_parameters.min_corner.z = -2.0;
req.workspace_parameters.max_corner.x = req.workspace_parameters.max_corner.y =
    req.workspace_parameters.max_corner.z = 2.0;

Call the planner and visualize all the plans created so far.

context = planner_instance->getPlanningContext(planning_scene, req, res.error_code);
context->solve(res);
res.getMessage(response);
display_trajectory.trajectory.push_back(response.trajectory);
visual_tools.publishTrajectoryLine(display_trajectory.trajectory.back(), joint_model_group);
visual_tools.trigger();
display_publisher->publish(display_trajectory);

/* Set the state in the planning scene to the final state of the last plan */
robot_state->setJointGroupPositions(joint_model_group, response.trajectory.joint_trajectory.points.back().positions);
planning_scene->setCurrentState(*robot_state.get());

Display the goal state

visual_tools.publishAxisLabeled(pose.pose, "goal_3");
visual_tools.publishText(text_pose, "Orientation Constrained Motion Plan (3)", rvt::WHITE, rvt::XLARGE);
visual_tools.trigger();

The Launch File

The entire launch file is here on GitHub. All the code in this tutorial can be compiled and run from the moveit_tutorials package.