To understand the navigation stack, think of it as a set of algorithms that use the sensors of the robot and its odometry so that you can control the robot using a standard message. It can move your robot from one position to another without problems (for example, without crashing, getting stuck in some location, or getting lost).
You might assume that this stack can be used with any robot out of the box. This is almost true, but it is still necessary to tune some configuration files and write some nodes to use the stack.
The robot must satisfy some requirements before it can use the navigation stack:
The following diagram shows you how the navigation stack is organized. You can see three groups of boxes with colors (gray and white) and dotted lines. The plain white boxes indicate the stacks that are provided by ROS; they contain all the nodes needed to make your robot truly autonomous:
In the following sections, we will see how to create the parts marked in gray in the diagram. These parts depend on the platform used, which means that you need to write code to adapt your platform so that it can be used in ROS and by the navigation stack.
The navigation stack needs to know the position of the sensors, wheels, and joints.
To do that, we use the TF (Transform Frames) software library, which manages a transform tree. You could compute these relations with your own mathematics, but if you have a lot of frames to calculate, it quickly becomes complicated and messy.
Thanks to TF, we can add more sensors and parts to the robot, and TF will handle all the relations for us.
If we put the laser 10 cm forward and 20 cm above with regard to the origin of the coordinates of base_link, we need to add a new frame to the transformation tree with these offsets.
Once this frame is created and inserted in the tree, we can easily know the position of the laser with regard to the base_link frame or the wheels; the only thing we need to do is call the TF library and ask for the transformation.
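If you want to see how such a query looks in code, the following is a minimal sketch, assuming that some node is already broadcasting the transform between base_link and base_laser (the node name tf_query_example is just an illustrative choice):

#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "tf_query_example");
  ros::NodeHandle n;

  tf::TransformListener listener;

  try{
    // Wait until the base_link -> base_laser transform is available
    listener.waitForTransform("base_link", "base_laser",
                              ros::Time(0), ros::Duration(5.0));

    // Ask for the latest available transform
    tf::StampedTransform transform;
    listener.lookupTransform("base_link", "base_laser",
                             ros::Time(0), transform);

    ROS_INFO("base_laser is at (%.2f, %.2f, %.2f) with regard to base_link",
             transform.getOrigin().x(),
             transform.getOrigin().y(),
             transform.getOrigin().z());
  }
  catch(tf::TransformException& ex){
    ROS_ERROR("Could not get the transform: %s", ex.what());
  }
  return 0;
}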
Let's test it with some simple code. Create a new file in chapter7_tutorials/src with the name tf_broadcaster.cpp, and put the following code inside it:
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_publisher");
  ros::NodeHandle n;

  ros::Rate r(100);

  tf::TransformBroadcaster broadcaster;

  while(n.ok()){
    // Publish the base_laser frame: 10 cm forward and 20 cm above
    // base_link, with no rotation (identity quaternion)
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(tf::Quaternion(0, 0, 0, 1), tf::Vector3(0.1, 0.0, 0.2)),
        ros::Time::now(), "base_link", "base_laser"));
    r.sleep();
  }
}
Remember to add the following line in your CMakeLists.txt file to create the new executable:
rosbuild_add_executable(tf_broadcaster src/tf_broadcaster.cpp)
We will also create another node that uses this transform; it will give us the position of a point seen by the sensor with regard to the center of base_link (our robot).
Create a new file in chapter7_tutorials/src with the name tf_listener.cpp and input the following code:
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf/transform_listener.h>

void transformPoint(const tf::TransformListener& listener){
  // We'll create a point in the base_laser frame that we'd like
  // to transform to the base_link frame
  geometry_msgs::PointStamped laser_point;
  laser_point.header.frame_id = "base_laser";

  // We'll just use the most recent transform available for our simple example
  laser_point.header.stamp = ros::Time();

  // Just an arbitrary point in space
  laser_point.point.x = 1.0;
  laser_point.point.y = 2.0;
  laser_point.point.z = 0.0;

  try{
    geometry_msgs::PointStamped base_point;
    listener.transformPoint("base_link", laser_point, base_point);

    ROS_INFO("base_laser: (%.2f, %.2f, %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f",
        laser_point.point.x, laser_point.point.y, laser_point.point.z,
        base_point.point.x, base_point.point.y, base_point.point.z,
        base_point.header.stamp.toSec());
  }
  catch(tf::TransformException& ex){
    ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what());
  }
}

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_listener");
  ros::NodeHandle n;

  tf::TransformListener listener(ros::Duration(10));

  // We'll transform a point once every second
  ros::Timer timer = n.createTimer(ros::Duration(1.0),
      boost::bind(&transformPoint, boost::ref(listener)));

  ros::spin();
}
Remember to add the corresponding line in the CMakeLists.txt file to create the executable.
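Following the same pattern as for the broadcaster, it should look like this:

rosbuild_add_executable(tf_listener src/tf_listener.cpp)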
Compile the package and run both nodes (each in a separate terminal) using the following commands:
$ rosmake chapter7_tutorials
$ rosrun chapter7_tutorials tf_broadcaster
$ rosrun chapter7_tutorials tf_listener
Then you will see the following messages:
[ INFO] [1368521854.336910465]: base_laser: (1.00, 2.00, 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521854.33
[ INFO] [1368521855.336347545]: base_laser: (1.00, 2.00, 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521855.33
This means that the point published by the node, with the position (1.00, 2.00, 0.00) relative to base_laser, has the position (1.10, 2.00, 0.20) relative to base_link: since the rotation here is the identity, tf simply adds the (0.1, 0.0, 0.2) offset of base_laser to the point's coordinates.
As you can see, the tf library performs all the mathematics for you to obtain the coordinates of a point or the position of a joint relative to another frame.
A transform tree defines offsets in terms of both translation and rotation between different coordinate frames.
Let us see an example to help you understand this. We are going to add another laser, say, on the back of the robot (base_link):
The system has to know the position of the new laser to detect collisions, such as those between the wheels and walls. With the TF tree, this is very simple to do and maintain, and it also scales well. Thanks to tf, we can keep adding sensors and parts, and the tf library will handle all the relations for us. All the sensors and joints must be correctly configured on tf so that the navigation stack can move the robot without problems and knows exactly where each of its components is.
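For illustration, a broadcaster for this second frame could look like the following sketch. The frame name base_laser_back and the mounting offsets (10 cm behind and 20 cm above base_link) are assumptions made for the example; unlike the front laser, this one also needs a 180-degree yaw rotation because it points backwards:

#include <cmath>
#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_publisher_back");
  ros::NodeHandle n;
  ros::Rate r(100);

  tf::TransformBroadcaster broadcaster;

  // Hypothetical mounting: rotated 180 degrees in yaw so the
  // laser faces backwards
  tf::Quaternion q;
  q.setRPY(0, 0, M_PI);

  while(n.ok()){
    // Hypothetical offsets: 10 cm behind and 20 cm above base_link
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(q, tf::Vector3(-0.1, 0.0, 0.2)),
        ros::Time::now(), "base_link", "base_laser_back"));
    r.sleep();
  }
}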
Before starting to write the code to configure each component, keep in mind that you already have the geometry of the robot specified in the URDF file, so it is not necessary to configure the robot again. Perhaps you did not realize it, but you have already been using the robot_state_publisher package to publish the transform tree of your robot; therefore, the robot is already configured to be used with the navigation stack.
If you want to see the transformation tree of your robot, use the following command:
$ roslaunch chapter7_tutorials gazebo_map_robot.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"
$ rosrun tf view_frames
The resultant frame is depicted as follows:
Now, if you run tf_broadcaster and run the rosrun tf view_frames command again, you will see the frame that you created in the code:
$ rosrun chapter7_tutorials tf_broadcaster
$ rosrun tf view_frames
The resultant frame is depicted as follows:
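Apart from view_frames, you can also inspect a single transform from the command line with the tf_echo tool, which prints the translation and rotation between two frames. For example, to check the transform that tf_broadcaster publishes:

$ rosrun tf tf_echo base_link base_laser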