
Navigation Stack – Robot Setups

11 min read • 09 Oct 2013

  • Introduction to the navigation stack and its powerful capabilities, clearly one of the greatest pieces of software that comes with ROS.
  • TF is explained in order to show how to transform data from the frame of one physical element to another; for example, the data received from a sensor or the command for the desired position of an actuator.
  • We will see how to create a laser driver or simulate it.
  • We will learn how the odometry is computed and published, and how Gazebo provides it.
  • A base controller will be presented, including a detailed description of how to create one for your robot.
  • We will see how to execute SLAM with ROS. That is, we will show you how you can build a map from the environment with your robot as it moves through it.
  • Finally, you will be able to localize your robot in the map using the localization algorithms of the navigation stack.

The navigation stack in ROS

In order to understand the navigation stack, think of it as a set of algorithms that use the robot's sensors and odometry, and that you command using standard messages. It can move your robot to another position without problems (for example, without crashing, getting stuck, or getting lost).

You might assume that this stack can be used with any robot straightaway. This is almost true, but it is still necessary to tune some configuration files and write some nodes to use the stack.

The robot must satisfy some requirements before it uses the navigation stack:

  • The navigation stack can handle only differential drive and holonomic-wheeled robots, and the shape of the robot must be either a square or a rectangle. However, it can also do certain things with biped robots, such as robot localization, as long as the robot does not move sideways.
  • It requires that the robot publishes information about the relationships between the positions of all its joints and sensors.
  • The robot must be controllable by sending it messages with linear and angular velocities (see the sketch after this list).
  • A planar laser must be mounted on the robot to create the map and to localize. Alternatively, you can generate something equivalent from several lasers or sonars, or project their values onto the ground plane if they are mounted in another place on the robot.
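
The velocity interface in the third requirement is simply a topic of geometry_msgs/Twist messages. The following is a minimal sketch of the robot's side of that interface; the node name base_controller_stub is made up for this example, and cmd_vel is the conventional topic name used by the navigation stack:

#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

// print the commanded forward speed (m/s) and turning rate (rad/s);
// a real base controller would convert these into motor commands
void cmdVelCallback(const geometry_msgs::Twist& msg){
  ROS_INFO("linear: %.2f m/s, angular: %.2f rad/s",
           msg.linear.x, msg.angular.z);
}

int main(int argc, char** argv){
  ros::init(argc, argv, "base_controller_stub");
  ros::NodeHandle n;
  ros::Subscriber sub = n.subscribe("cmd_vel", 10, cmdVelCallback);
  ros::spin();
}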

The following diagram shows you how the navigation stack is organized. You can see three groups of boxes with colors (gray and white) and dotted lines. The plain white boxes indicate the stacks that are provided by ROS, and they have all the nodes needed to make your robot really autonomous:

[Figure: organization of the navigation stack]

In the following sections, we will see how to create the parts marked in gray in the diagram. These parts depend on the platform used; this means that it is necessary to write code to adapt the platform for use in ROS and by the navigation stack.

Creating transforms

The navigation stack needs to know the position of the sensors, wheels, and joints.

To do that, we use the TF (Transform Frames) software library, which manages a transform tree. You could do all of this with mathematics yourself, but with a lot of frames to calculate, it becomes complicated and messy.

Thanks to TF, we can add more sensors and parts to the robot, and the TF will handle all the relations for us.

If we put the laser 10 cm forward and 20 cm above the origin of the coordinates of base_link, we would need to add a new frame to the transformation tree with these offsets.

Once the frame is inserted and created, we can easily know the position of the laser with regard to base_link or the wheels; the only thing we need to do is call the TF library and ask for the transformation.
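
For instance, here is a minimal sketch of such a query; it assumes that the base_link to base_laser transform is already being broadcast (which is exactly what we will set up in the next section):

#include <ros/ros.h>
#include <tf/transform_listener.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "tf_query_example");
  ros::NodeHandle n;
  tf::TransformListener listener;
  tf::StampedTransform transform;
  try{
    // wait until the transform is available, then fetch the latest one
    listener.waitForTransform("base_link", "base_laser",
                              ros::Time(0), ros::Duration(3.0));
    listener.lookupTransform("base_link", "base_laser",
                             ros::Time(0), transform);
    ROS_INFO("laser is at (%.2f, %.2f, %.2f) with regard to base_link",
             transform.getOrigin().x(), transform.getOrigin().y(),
             transform.getOrigin().z());
  }
  catch(tf::TransformException& ex){
    ROS_ERROR("%s", ex.what());
  }
  return 0;
}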

Creating a broadcaster

Let's test it with a simple code. Create a new file in chapter7_tutorials/src with the name tf_broadcaster.cpp, and put the following code inside it:

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_publisher");
  ros::NodeHandle n;

  ros::Rate r(100);

  tf::TransformBroadcaster broadcaster;

  while(n.ok()){
    // publish the base_link -> base_laser transform: no rotation,
    // and an offset of 10 cm forward and 20 cm up
    broadcaster.sendTransform(
      tf::StampedTransform(
        tf::Transform(tf::Quaternion(0, 0, 0, 1),
                      tf::Vector3(0.1, 0.0, 0.2)),
        ros::Time::now(), "base_link", "base_laser"));
    r.sleep();
  }
}

Remember to add the following line in your CMakeLists.txt file to create the new executable:

rosbuild_add_executable(tf_broadcaster src/tf_broadcaster.cpp)

We will also create another node that uses the transform; it will give us the position of a point sensed by the laser with regard to the center of base_link (our robot).

Creating a listener

Create a new file in chapter7_tutorials/src with the name tf_listener.cpp and input the following code:

#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf/transform_listener.h>

void transformPoint(const tf::TransformListener& listener){
  //we'll create a point in the base_laser frame that we'd like to
  //transform to the base_link frame
  geometry_msgs::PointStamped laser_point;
  laser_point.header.frame_id = "base_laser";

  //we'll just use the most recent transform available for our simple example
  laser_point.header.stamp = ros::Time();

  //just an arbitrary point in space
  laser_point.point.x = 1.0;
  laser_point.point.y = 2.0;
  laser_point.point.z = 0.0;

  try{
    geometry_msgs::PointStamped base_point;
    listener.transformPoint("base_link", laser_point, base_point);

    ROS_INFO("base_laser: (%.2f, %.2f, %.2f) -----> base_link: (%.2f, %.2f, %.2f) at time %.2f",
             laser_point.point.x, laser_point.point.y, laser_point.point.z,
             base_point.point.x, base_point.point.y, base_point.point.z,
             base_point.header.stamp.toSec());
  }
  catch(tf::TransformException& ex){
    ROS_ERROR("Received an exception trying to transform a point from \"base_laser\" to \"base_link\": %s", ex.what());
  }
}

int main(int argc, char** argv){
  ros::init(argc, argv, "robot_tf_listener");
  ros::NodeHandle n;

  tf::TransformListener listener(ros::Duration(10));

  //we'll transform a point once every second
  ros::Timer timer = n.createTimer(ros::Duration(1.0),
                                   boost::bind(&transformPoint, boost::ref(listener)));

  ros::spin();
}

Remember to add the following line in the CMakeLists.txt file to create the executable:

rosbuild_add_executable(tf_listener src/tf_listener.cpp)

Compile the package and run both the nodes using the following commands:

$ rosmake chapter7_tutorials
$ rosrun chapter7_tutorials tf_broadcaster
$ rosrun chapter7_tutorials tf_listener

Then you will see the following message:

[ INFO] [1368521854.336910465]: base_laser: (1.00, 2.00, 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521854.33
[ INFO] [1368521855.336347545]: base_laser: (1.00, 2.00, 0.00) -----> base_link: (1.10, 2.00, 0.20) at time 1368521855.33

This means that the point you published in the node, with the position (1.00, 2.00, 0.00) relative to base_laser, has the position (1.10, 2.00, 0.20) relative to base_link: since the transform carries no rotation, tf simply adds the (0.1, 0.0, 0.2) offset of the laser frame to the point.

As you can see, the tf library performs all the mathematics for you to get the coordinates of a point or the position of a joint relative to another point.


A transform tree defines offsets in terms of both translation and rotation between different coordinate frames.
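
To make this concrete, the following minimal sketch composes a rotation and a translation by hand; the 180-degree yaw and the 10 cm offset are made-up values for a laser mounted backwards on the rear of the robot:

#include <cstdio>
#include <cmath>
#include <tf/transform_datatypes.h>

int main(){
  // made-up offsets: a laser 10 cm behind base_link, rotated 180 degrees
  // (yaw) so that it points backwards
  tf::Transform laser_in_base(tf::createQuaternionFromYaw(M_PI),
                              tf::Vector3(-0.1, 0.0, 0.0));

  // a point 1 m in front of that laser is 1.1 m behind the robot:
  // the rotation is applied first, then the translation
  tf::Vector3 p = laser_in_base * tf::Vector3(1.0, 0.0, 0.0);
  printf("(%.2f, %.2f, %.2f)\n", p.x(), p.y(), p.z()); // (-1.10, 0.00, 0.00)
}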

Let us see an example to help you understand this. We are going to add another laser, say, on the back of the robot (base_link):

[Figure: a second laser added on the back of the robot]

The system needs to know the position of the new laser to detect collisions, such as those between the wheels and walls. With the TF tree, this is very simple to do and maintain, and it scales well. Thanks to tf, we can add more sensors and parts, and the library will handle all the relations for us. All the sensors and joints must be correctly configured in tf for the navigation stack to move the robot without problems and to know exactly where each of its components is.

Before starting to write the code to configure each component, keep in mind that you already have the geometry of the robot specified in the URDF file, so it is not necessary to configure the robot again. Perhaps you did not realize it, but you have been using the robot_state_publisher package to publish the transform tree of your robot since we first used it earlier in the book; therefore, the robot is already configured to be used with the navigation stack.

Watching the transformation tree

If you want to see the transformation tree of your robot, use the following command:

$ roslaunch chapter7_tutorials gazebo_map_robot.launch model:="`rospack find chapter7_tutorials`/urdf/robot1_base_04.xacro"
$ rosrun tf view_frames

The view_frames tool listens to the tf broadcasts for a few seconds and generates a frames.pdf file with the tree, which is depicted as follows:

[Figure: the transform tree generated by view_frames]

And now, if you run tf_broadcaster and then run the rosrun tf view_frames command again, you will see the frame that you have created in code:

$ rosrun chapter7_tutorials tf_broadcaster
$ rosrun tf view_frames

The resultant frame is depicted as follows:

[Figure: the transform tree with the new base_laser frame]

Publishing sensor information

Your robot can have many sensors to see the world, and you can program many nodes to take this data and do something with it, but the navigation stack is prepared to use only planar laser data. Therefore, your sensor must publish the data with one of these message types: sensor_msgs/LaserScan or sensor_msgs/PointCloud.

We are going to use the laser located in front of the robot to navigate in Gazebo. Remember that this laser is simulated in Gazebo, and it publishes data on the base_scan/scan topic.

In our case, we do not need to configure anything for our laser to use it with the navigation stack. This is because we have tf configured in the .urdf file, and the laser publishes data with the correct message type.

If you use a real laser, ROS might already have a driver for it. In any case, if you are using a laser that has no ROS driver and want to write a node to publish the data as sensor_msgs/LaserScan messages, an example template to do it is shown in the following section.

But first, remember the structure of the sensor_msgs/LaserScan message. Use the following command to see it:

$ rosmsg show sensor_msgs/LaserScan
std_msgs/Header header
  uint32 seq
  time stamp
  string frame_id
float32 angle_min
float32 angle_max
float32 angle_increment
float32 time_increment
float32 scan_time
float32 range_min
float32 range_max
float32[] ranges
float32[] intensities

Creating the laser node

Now we will create a new file in chapter7_tutorials/src with the name laser.cpp and put the following code in it:

#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>

int main(int argc, char** argv){
  ros::init(argc, argv, "laser_scan_publisher");
  ros::NodeHandle n;

  ros::Publisher scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50);

  unsigned int num_readings = 100;
  double laser_frequency = 40;
  double ranges[num_readings];
  double intensities[num_readings];

  int count = 0;
  ros::Rate r(1.0);
  while(n.ok()){
    //generate some fake data for our laser scan
    for(unsigned int i = 0; i < num_readings; ++i){
      ranges[i] = count;
      intensities[i] = 100 + count;
    }
    ros::Time scan_time = ros::Time::now();

    //populate the LaserScan message
    sensor_msgs::LaserScan scan;
    scan.header.stamp = scan_time;
    scan.header.frame_id = "base_link";
    scan.angle_min = -1.57;
    scan.angle_max = 1.57;
    scan.angle_increment = 3.14 / num_readings;
    scan.time_increment = (1 / laser_frequency) / (num_readings);
    scan.range_min = 0.0;
    scan.range_max = 100.0;

    scan.ranges.resize(num_readings);
    scan.intensities.resize(num_readings);
    for(unsigned int i = 0; i < num_readings; ++i){
      scan.ranges[i] = ranges[i];
      scan.intensities[i] = intensities[i];
    }

    scan_pub.publish(scan);
    ++count;
    r.sleep();
  }
}

As you can see, we create a new topic with the name scan and the message type sensor_msgs/LaserScan. The name of the topic must be unique; when you configure the navigation stack, you will select this topic to be used for navigation. The following line shows how the topic is created with the correct name and type:

ros::Publisher scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50);

It is important to fill in the header of the message with the stamp, the frame_id, and the other fields because, if you do not, the navigation stack could fail with such data:

scan.header.stamp = scan_time;
scan.header.frame_id = "base_link";

Another important field in the header is frame_id. It must be one of the frames created in the .urdf file and must be published on the tf transform tree. The navigation stack will use this information to know the real position of the sensor and to make transforms, such as the one between the sensor data and the obstacles.

[Figure: the laser scan data]
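
If you want to double-check that the frame you put in frame_id is actually available on tf, you can print its transform from the command line; here base_laser is the frame we broadcast earlier:

$ rosrun tf tf_echo base_link base_laser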

With this template, you can use any laser, even if it has no ROS driver; you only have to replace the fake data with the real data from your laser.

This template can also be used to create something that looks like a laser but is not. For example, you could simulate a laser using stereoscopy or using a sensor such as a sonar.
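
As an illustration of that idea, here is a minimal sketch that republishes a single sonar reading as a LaserScan, spreading the measured distance across the sonar's cone; the node name sonar_to_scan and the sonar topic are made-up names for this example:

#include <ros/ros.h>
#include <sensor_msgs/LaserScan.h>
#include <sensor_msgs/Range.h>

ros::Publisher scan_pub;

void sonarCallback(const sensor_msgs::Range& range_msg){
  const unsigned int num_readings = 30;
  sensor_msgs::LaserScan scan;
  scan.header.stamp = range_msg.header.stamp;
  scan.header.frame_id = range_msg.header.frame_id;
  // spread the rays across the sonar's cone
  scan.angle_min = -range_msg.field_of_view / 2;
  scan.angle_max = range_msg.field_of_view / 2;
  scan.angle_increment = range_msg.field_of_view / num_readings;
  scan.range_min = range_msg.min_range;
  scan.range_max = range_msg.max_range;
  // every ray in the cone reports the same measured distance
  scan.ranges.assign(num_readings, range_msg.range);
  scan_pub.publish(scan);
}

int main(int argc, char** argv){
  ros::init(argc, argv, "sonar_to_scan");
  ros::NodeHandle n;
  scan_pub = n.advertise<sensor_msgs::LaserScan>("scan", 50);
  ros::Subscriber sub = n.subscribe("sonar", 10, sonarCallback);
  ros::spin();
}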