Working with perception using MoveIt! and Gazebo
So far, we have worked only with the arm in MoveIt!. In this section, we will see how to interface 3D vision sensor data to MoveIt!. The sensor can either be simulated using Gazebo, or you can directly interface an RGB-D sensor, such as the Kinect or Xtion Pro, using the openni_launch package. Here, we will work with the Gazebo simulation.
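MoveIt! consumes depth data through its occupancy map monitor, which builds an octomap of the environment using an updater plugin. The following is a minimal sketch of the sensor configuration YAML that this plugin reads, loaded by the MoveIt! configuration package's sensor manager launch file. The topic name /rgbd_camera/depth/points is an assumption here; it must match whatever topic your camera plugin actually publishes:

    sensors:
      - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
        point_cloud_topic: /rgbd_camera/depth/points  # assumed topic; match your camera plugin
        max_range: 5.0             # ignore points farther than 5 m
        point_subsample: 1         # use every point; increase to thin dense clouds
        padding_offset: 0.1        # padding (in m) around the robot for self-filtering
        padding_scale: 1.0
        filtered_cloud_topic: filtered_cloud  # point cloud with the robot body removed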
We will add sensors to MoveIt! for vision-assisted pick and place. We will create a grasp table and a grasp object in Gazebo for the pick-and-place operation, using two custom models called grasp_table and grasp_object. The sample models are provided along with the chapter code and should be copied to the ~/.gazebo/models folder so that Gazebo can access them.
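For example, assuming the chapter code provides the two models as folders named grasp_table and grasp_object, you can copy them from the directory that contains those folders:

$ cp -r grasp_table grasp_object ~/.gazebo/models/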
The following command will launch the robot arm and the Asus Xtion Pro simulation in Gazebo:
$ roslaunch seven_dof_arm_gazebo seven_dof_arm_bringup_grasping.launch
This command will open Gazebo with the arm joint controllers loaded and the Gazebo plugin for the 3D vision sensor running.
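To verify that the simulated sensor is actually streaming data, you can list the active point cloud topics and check the publishing rate. The topic name below is an assumption and depends on the plugin configuration in your robot description:

$ rostopic list | grep points
$ rostopic hz /rgbd_camera/depth/points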