
How-To Tutorials - Robotics

35 Articles

Amazon re:MARS Day 1 kicks off showcasing Amazon’s next-gen AI robots; Spot, the robo-dog and a guest appearance from ‘Iron Man’

Savia Lobo
06 Jun 2019
11 min read
Amazon’s inaugural re:MARS event kicked off on Tuesday, June 4 at the Aria in Las Vegas. This 4-day event is inspired by MARS, a yearly invite-only event hosted by Jeff Bezos that brings together innovative minds in Machine learning, Automation, Robotics, and Space to share new ideas across these rapidly advancing domains. re:MARS featured a lot of announcements revealing a range of robots each engineered for a different purpose. Some of them include helicopter drones for delivery, two robot dogs by Boston Dynamics, Autonomous human-like acrobats by Walt Disney Imagineering, and much more. Amazon also revealed Alexa’s new Dialog Modeling for Natural, Cross-Skill Conversations. Let us have a brief look at each of the announcements. Robert Downey Jr. announces ‘The Footprint Coalition’ project to clean up the environment using Robotics Popularly known as the “Iron Man”, Robert Downey Jr.’s visit was one of the exciting moments where he announced a new project called The Footprint Coalition to clean up the planet using advanced technologies at re:MARS. “Between robotics and nanotechnology we could probably clean up the planet significantly, if not entirely, within a decade,” he said. According to The Forbes, “Amazon did not immediately respond to questions about whether it was investing financially or technologically in Downey Jr.’s project.” “At this point, the effort is severely light on details, with only a bare-bones website to accompany Downey’s public statement, but the actor said he plans to officially launch the project by April 2020,” Forbes reports. A recent United Nations report found that humans are having an unprecedented and devastating effect on global biodiversity, and researchers have found microplastics polluting the air, ocean, and soil. The announcement of this project has been opened to the public because the “company itself is under fire for its policies around the environment and climate change”. Additionally, Morgan Pope and Tony Dohi of Walt Disney Imagineering, also demonstrated their work to create autonomous acrobats. https://twitter.com/jillianiles/status/1136082571081555968 https://twitter.com/thesullivan/status/1136080570549563393 Amazon will soon deliver orders using drones On Wednesday, Amazon unveiled a revolutionary new drone that will test deliver toothpaste and other household goods starting within months. This drone is “part helicopter and part science-fiction aircraft” with built-in AI features and sensors that will help it fly robotically without threatening traditional aircraft or people on the ground. Gur Kimchi, vice president of Amazon Prime Air, said in an interview to Bloomberg, “We have a design that is amazing. It has performance that we think is just incredible. We think the autonomy system makes the aircraft independently safe.” However, he refused to provide details on where the delivery tests will be conducted. Also, the drones have received a year’s approval from the FAA to test the devices in limited ways that still won't allow deliveries. According to a Bloomberg report, “It can take years for traditional aircraft manufacturers to get U.S. Federal Aviation Administration approval for new designs and the agency is still developing regulations to allow drone flights over populated areas and to address national security concerns. The new drone presents even more challenges for regulators because there aren’t standards yet for its robotic features”. 
Competitors to Amazon’s unnamed drone include Alphabet Inc.’s Wing, which became the first drone to win an FAA approval to operate as a small airline, in April. Also, United Parcel Service Inc. and drone startup Matternet Inc. began using drones to move medical samples between hospitals in Raleigh, North Carolina, in March. Amazon’s drone is about six feet across with six propellers that lift it vertically off the ground. It is surrounded by a six-sided shroud that will protect people from the propellers, and also serves as a high-efficiency wing such that it can fly more horizontally like a plane. Once it gets off the ground, the craft tilts and flies sideways -- the helicopter blades becoming more like airplane propellers. Kimchi said, “Amazon’s business model for the device is to make deliveries within 7.5 miles (12 kilometers) from a company warehouse and to reach customers within 30 minutes. It can carry packages weighing as much as five pounds. More than 80% of packages sold by the retail behemoth are within that weight limit.” According to the company, one of the things the drone has mastered is detecting utility wires and clotheslines. They have been notoriously difficult to identify reliably and pose a hazard for a device attempting to make deliveries in urban and suburban areas. To know more about these high-tech drones in detail, head over to Amazon’s official blogpost. Boston Dynamics’ first commercial robot, Spot Boston Dynamics revealed its first commercial product, a quadrupedal robot named Spot.  Boston Dynamics’ CEO Marc Raibert told The Verge, “Spot is currently being tested in a number of “proof-of-concept” environments, including package delivery and surveying work.” He also said that although there’s no firm launch date for the commercial version of Spot, it should be available within months, certainly before the end of the year. “We’re just doing some final tweaks to the design. We’ve been testing them relentlessly”, Raibert said. These Spot robots are capable of navigating environments autonomously, but only when their surroundings have been mapped in advance. They can withstand kicks and shoves and keep their balance on tricky terrain, but they don’t decide for themselves where to walk. These robots are simple to control; using a D-pad, users can steer the robot as just like an RC car or mechanical toy. A quick tap on the video feed streamed live from the robot’s front-facing camera allows to select a destination for it to walk to, and another tap lets the user assume control of a robot arm mounted on top of the chassis. With 3D cameras mounted atop, a Spot robot can map environments like construction sites, identifying hazards and work progress. It also has a robot arm which gives it greater flexibility and helps it open doors and manipulate objects. https://twitter.com/jjvincent/status/1136096290016595968 The commercial version will be “much less expensive than prototypes [and] we think they’ll be less expensive than other peoples’ quadrupeds”, Raibert said. Here’s a demo video of the Spot robot at the re:MARS event. https://youtu.be/xy_XrAxS3ro Alexa gets new dialog modeling for improved natural, cross-skill conversations Amazon unveiled new features in Alexa that would help the conversational agent to answer more complex questions and carry out more complex tasks. 
Rohit Prasad, Alexa vice president and head scientist, said, “We envision a world where customers will converse more naturally with Alexa: seamlessly transitioning between skills, asking questions, making choices, and speaking the same way they would with a friend, family member, or co-worker. Our objective is to shift the cognitive burden from the customer to Alexa.” This new update to Alexa is a set of AI modules that work together to generate responses to customers’ questions and requests. With every round of dialog, the system produces a vector — a fixed-length string of numbers — that represents the context and the semantic content of the conversation. “With this new approach, Alexa will predict a customer’s latent goal from the direction of the dialog and proactively enable the conversation flow across topics and skills,” Prasad says. “This is a big leap for conversational AI.” At re:MARS, Prasad also announced the developer preview of Alexa Conversations, a new deep learning-based approach for skill developers to create more-natural voice experiences with less effort, fewer lines of code, and less training data than before. The preview allows skill developers to create natural, flexible dialogs within a single skill; upcoming releases will allow developers to incorporate multiple skills into a single conversation. With Alexa Conversations, developers provide: (1) application programming interfaces, or APIs, that provide access to their skills’ functionality; (2) a list of entities that the APIs can take as inputs, such as restaurant names or movie times;  (3) a handful of sample dialogs annotated to identify entities and actions and mapped to API calls. Alexa Conversations’ AI technology handles the rest. “It’s way easier to build a complex voice experience with Alexa Conversations due to its underlying deep-learning-based dialog modeling,” Prasad said. To know more about this announcement in detail, head over to Alexa’s official blogpost. Amazon Robotics unveiled two new robots at its fulfillment centers Brad Porter, vice president of robotics at Amazon, announced two new robots, one is, code-named Pegasus and the other one, Xanthus. Pegasus, which is built to sort packages, is a 3-foot-wide robot equipped with a conveyor belt on top to drop the right box in the right location. “We sort billions of packages a year. The challenge in package sortation is, how do you do it quickly and accurately? In a world of Prime one-day [delivery], accuracy is super-important. If you drop a package off a conveyor, lose track of it for a few hours  — or worse, you mis-sort it to the wrong destination, or even worse, if you drop it and damage the package and the inventory inside — we can’t make that customer promise anymore”, Porter said. Porter said Pegasus robots have already driven a total of 2 million miles, and have reduced the number of wrongly sorted packages by 50 percent. Porter said the Xanthus, represents the latest incarnation of Amazon’s drive robot. Amazon uses tens of thousands of the current-generation robot, known as Hercules, in its fulfillment centers. Amazon unveiled Xanthus Sort Bot and Xanthus Tote Mover. “The Xanthus family of drives brings innovative design, enabling engineers to develop a portfolio of operational solutions, all of the same hardware base through the addition of new functional attachments. We believe that adding robotics and new technologies to our operations network will continue to improve the associate and customer experience,” Porter says. 
To know more about these new robots watch the video below: https://youtu.be/4MH7LSLK8Dk StyleSnap: An AI-powered shopping Amazon announced StyleSnap, a recent move to promote AI-powered shopping. StyleSnap helps users pick out clothes and accessories. All they need to do is upload a photo or screenshot of what they are looking for, when they are unable to describe what they want. https://twitter.com/amazonnews/status/1136340356964999168 Amazon said, "You are not a poet. You struggle to find the right words to explain the shape of a neckline, or the spacing of a polka dot pattern, and when you attempt your text-based search, the results are far from the trend you were after." To use StyleSnap, just open the Amazon app, click the camera icon in the upper right-hand corner, select the StyleSnap option, and then upload an image of the outfit. Post this, StyleSnap provides recommendations of similar outfits on Amazon to purchase, with users able to filter across brand, pricing, and reviews. Amazon's AI system can identify colors and edges, and then patterns like floral and denim. Using this information, its algorithm can then accurately pick a matching style. To know more about StyleSnap in detail, head over to Amazon’s official blog post. Amazon Go trains cashierless store algorithms using synthetic data Amazon at the re:MARS shared more details about Amazon Go, the company’s brand for its cashierless stores. They said Amazon Go uses synthetic data to intentionally introduce errors to its computer vision system. Challenges that had to be addressed before opening stores to avoid queues include the need to make vision systems that account for sunlight streaming into a store, little time for latency delays, and small amounts of data for certain tasks. Synthetic data is being used in a number of ways to power few-shot learning, improve AI systems that control robots, train AI agents to walk, or beat humans in games of Quake III. Dilip Kumar, VP of Amazon Go, said, “As our application improved in accuracy — and we have a very highly accurate application today — we had this interesting problem that there were very few negative examples, or errors, which we could use to train our machine learning models.” He further added, “So we created synthetic datasets for one of our challenging conditions, which allowed us to be able to boost the diversity of the data that we needed. But at the same time, we have to be careful that we weren’t introducing artifacts that were only visible in the synthetic data sets, [and] that the data translates well to real-world situations — a tricky balance.” To know more about this news in detail, check out this video: https://youtu.be/jthXoS51hHA The Amazon re:MARS event is still ongoing and will have many more updates. To catch live updates from Vegas visit Amazon’s blog. World’s first touch-transmitting telerobotic hand debuts at Amazon re:MARS tech showcase Amazon introduces S3 batch operations to process millions of S3 objects Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available


Boston Dynamics adds military-grade mortar (parkour) skills to its popular humanoid Atlas Robot

Natasha Mathur
12 Oct 2018
2 min read
Boston Dynamics, a robotics design company, has now added parkour skills to its popular and advanced humanoid robot, named Atlas. Parkour is a training discipline that involves using movement developed by the military obstacle course training. The company posted a video on YouTube yesterday that shows Atlas jumping over a log, climbing and leaping up staggered tall boxes mimicking a parkour runner in the military. “The control software (in Atlas) uses the whole body including legs, arms, and torso, to marshal the energy and strength for jumping over the log and leaping up the steps without breaking its pace.  (Step height 40 cm.) Atlas uses computer vision to locate itself with respect to visible markers on the approach to hit the terrain accurately”, mentioned Boston Dynamics in yesterday’s video. Atlas Parkour  The original version of Atlas was made public, back in 2013, and was created for the United States Defense Advanced Research Projects Agency (DARPA). It quickly became famous for its control system. This advanced control system robot coordinates the motion of its arms, torso, and legs to achieve whole-body mobile manipulation. Boston Dynamics then unveiled the next generation of its Atlas robot, back in 2016. This next-gen electrically powered and hydraulically actuated Atlas Robot was capable of walking on the snow, picking up boxes, and getting up by itself after a fall. It was designed mainly to operate outdoors and inside buildings. Atlas, the next-generation Atlas consists of sensors embedded in its body and legs to balance. It also comprises LIDAR and stereo sensors in its head. This helps it avoid obstacles, assess the terrain well and also help it with navigation. Boston Dynamics has a variety of other robots such as Handle, SpotMini, Spot, LS3, WildCat, BigDog, SandFlea, and Rhex. These robots are capable of performing actions that range from doing backflips, opening (and holding) doors, washing the dishes, trail running, and lifting boxes among others. For more information, check out the official Boston Dynamics Website. Boston Dynamics’ ‘Android of robots’ vision starts with launching 1000 robot dogs in 2019 Meet CIMON, the first AI robot to join the astronauts aboard ISS What we learned at the ICRA 2018 conference for robotics & automation


How to assemble a DIY selfie drone with Arduino and ESP8266

Vijin Boricha
29 May 2018
10 min read
Have you ever thought of something that can take a photo from the air, or perhaps take a selfie from it? How about we build a drone for taking selfies and recording videos from the air? Taking photos from the sky is one of the most exciting things in photography this year. You can shoot from helicopters, planes, or even from satellites. Unless you own a personal air vehicle or someone you know does, you know this is a costly affair sure to burn through your pockets. Drones can come in handy here. Have ever googled drone photography? If you did, I am sure you'd want to build or buy a drone for photography, because of the amazing views of the common subjects taken from the sky. Today, we will learn to build a drone for aerial photography and videography. This tutorial is an excerpt from Building Smart Drones with ESP8266 and Arduino written by Syed Omar Faruk Towaha. Assuming you know how to build your customized frame if not you can refer to our book, or you may buy HobbyKing X930 glass fiber frame and connect the parts together, as directed in the manual. However, I have a few suggestions to help you carry out a better assembly of the frame: Firstly, connect the motor mounted with the legs or wings or arms of the frame. Tighten them firmly, as they will carry and hold the most important equipment of the drone. Then, connect them to the base and, later other parts with firm connections. Now, we will calibrate our ESCs. We will take the signal cable from an ESC (the motor is plugged into the ESC; careful, don't connect the propeller) and connect it to the throttle pins on the radio. Make sure the transmitter is turned on and the throttle is in the lowest position. Now, plug the battery into the ESC and you will hear a beep. Now, gradually increase the throttle from the transmitter. Your motor will start spinning at any position. This is because the ESC is not calibrated. So, you need to tell the ESC where the high point and the low point of the throttle are. Disconnect the battery first. Increase the throttle of the transmitter to the highest position and power the ESC. Your ESC will now beep once and beep 3 times in every 4 seconds. Now, move the throttle to the bottommost position and you will hear the ESC beep as if it is ready and calibrated. Now, you can increase the throttle of the transmitter and will see from lower to higher, the throttle will work. Now, mount the motors, connect them to the ESCs, and then connect them to the ArduPilot, changing the pins gradually. Now, connect your GPS to the ArduPilot and calibrate it. Now, our drone is ready to fly. I would suggest you fly the drone for about 10-15 minutes before connecting the camera. Connecting the camera For a photography drone, connecting the camera and controlling the camera is one of the most important things. Your pictures and videos will be spoiled if you cannot adjust the camera and stabilize it properly. In our case, we will use a camera gimbal to hold the camera and move it from the ground. Choosing a gimbal The camera gimbal holds the camera for you and can move the camera direction according to your command. There are a number of camera gimbals out there. You can choose any type, depending on your demand and camera size and specification. If you want to use a DSLR camera, you should use a bigger gimbal and, if you use a point and shoot type camera or action camera, you may use small- or medium-sized gimbals. There are two types of gimbals, a brushless gimbal, and a standard gimbal. 
The standard gimbal has servo motors and gears. If you use an FPV camera, then a standard gimbal with a 2-axis manual mount is the best option. The standard gimbal is lightweight and not expensive. The best thing is that you will not need an external controller board for your standard camera gimbal. The brushless gimbal is for professional aerial photographers. It is smooth and can shoot videos or photos with better quality. The brushless gimbal needs an external controller board for your drone, and it is heavier than the standard gimbal.

Choosing the best gimbal is one of the hard things for a photographer, as stabilization of the image is a must for photo shoots. If you cannot control the camera from the ground, then using a gimbal is worthless. The following picture shows a number of gimbals:

After choosing your camera and the gimbal, the first thing to do is mount the gimbal and the camera on the drone. Make sure the mount is firm, but not too hard, because that will make the camera shake while flying the drone. You may use the Styrofoam or rubber pieces that came with the gimbal to reduce the vibration and make the image stable.

Configuring the camera with the ArduPilot

Configuring the camera with the ArduPilot is easy. Before going any further, let us learn a few things about the camera gimbal's Euler angles:

Tilt: This moves the camera to a sloping position (range -90 degrees to +90 degrees); it is the (clockwise-anticlockwise) motion in the vertical plane
Roll: This is a motion ranging from 0 degrees to 360 degrees parallel to the horizontal axis
Pan: This is the same type of motion as roll, ranging from 0 degrees to 360 degrees, but about the vertical axis
Shutter: This is a switch that triggers a click or sends a signal

Firstly, we are going to use the standard gimbal. Basically, there are two servos in a standard gimbal: one is for pitch (or tilt) and the other is for roll. So, a standard gimbal gives you two-dimensional motion of the camera viewpoint.

Connection

Follow these steps to connect the camera to the ArduPilot: Take the pitch servo's signal pin and connect it to the 11th pin of the ArduPilot (A11), and the roll signal to the 10th pin (A10). Make sure you connect only the signal (S pin) cable of the servos to the pin, not the other two pins (ground and VCC). The signal cables must be connected to the innermost pins of the A11 and A10 pins (two pins make a row; see the following picture for clarification):

My suggestion is to add an extra battery for your gimbal's servos. If you connect your servos directly to the ArduPilot, your ArduPilot will not perform well, as the servos will draw power.

Now, connect your ArduPilot to your PC using a wire or telemetry. Go to the Initial Setup menu and, under Optional Hardware, you will find an option called Camera Gimbal. Click on this and you will see the following screen:

For the Tilt, change the pin to RC11; for the Roll, change the pin to RC10; and for the Shutter, change it to CH7. If you want to change the Tilt during flight from the transmitter, you need to change the Input Ch of the Tilt. See the following screenshot:

Now, you need to change an option on the Configuration | Extended Tuning page. Set Ch6 Opt to None, as in the following screenshot, and hit the Write Params button:

We need to align the minimum and maximum PWM values for the servos of the gimbal.
To do that, we can tilt the frame of the gimbal to the leftmost position and from the transmitter, move the knob to the minimum position and start increasing, your servo will start to move at any time, then stop moving the knob. For the maximum calibration, move the Tilt to the rightmost position and do the same thing for the knob with the maximum position. Do the same thing for the pitch with the forward and backward motion. We also need to level the gimbal for better performance. To do that, you need to keep the gimbal frame level to the ground and set the Camera Gimbal option, the Servo Limits, and the Angle Limits. Change them as per the level of the frame. Controlling the camera Controlling the camera to take selfies or record video is easy. You can use the shutter pin we used before or the camera's mobile app for controlling the camera. My suggestion is to use the camera's app to take shots because you will get a live preview of what you are shooting and it will be easy to control the camera shots. However, if you want to use the Shutter button manually from the transmitter then you can do this too. We have connected the RC7 pin for controlling a servo. You can use a servo or a receiver switch for your camera to manually trigger the shutter. To do that, you can buy a receiver controller on/off switch. You can use this switch for various purposes. Clicking the shutter of your camera is one of them. Manually triggering the camera is easy. It is usually done for point and shoot cameras. To do that, you need to update the firmware of your cameras. You can do this in many ways, but the easiest one will be discussed here. Your RECEIVER CONTROLLED ON/OFF SWITCH may look like the following: You can see five wires in the picture. The three wires together are, as usual, pins of the servo motor. Take out the signal cable (in this case, this is the yellow cable) and connect it to the RC7 pin of the ArduPilot. Then, connect the positive to one of the thick red wires. Take the camera's data cable and connect the other tick wire to the positive of the USB cable and the negative wire will be connected to the negative of the three connected wires. Then, an output of the positive and negative wire will go to the battery (an external battery is suggested for the camera). To upgrade the camera firmware, you need to go to the camera's website and upgrade the firmware for the remote shutter option. In my case, the website is http://chdk.wikia.com/wiki/CHDK . I have downloaded it for a Canon point and shoot camera. You can also use action cameras for your drones. They are cheap and can be controlled remotely via mobile applications. Flying and taking shots Flying the photography drone is not that difficult. My suggestion is to lock the altitude and fly parallel to the ground. If you use a camera remote controller or an app, then it is really easy to take the photo or record a video. However, if you use the switch, as we discussed, then you need to open and connect your drone to the mission planner via telemetry. Go to the flight data, right click on the map, and then click the Trigger Camera Now option. It will trigger the Camera Shutter button and start recording or take a photo. You can do this when your drone is in a locked position and, using the timer, take a shot from above, which can be a selfie too. Let's try it. Let me know what happens and whether you like it or not. 
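The ESC calibration steps described earlier in this tutorial are performed with the radio transmitter. If you would rather calibrate an ESC on the bench, the same high-point/low-point sequence can be sent from an Arduino using the Servo library. The sketch below is only a rough sketch of that alternative approach, not code from this excerpt: the signal pin and the 1000-2000 microsecond end points are assumptions, so check your ESC's manual, and always remove the propellers first.

#include <Servo.h>

// Bench ESC calibration sketch (hypothetical wiring): one ESC signal wire on
// pin 9, the ESC powered from its own battery, propeller removed.
const int ESC_PIN = 9;
Servo esc;

void setup() {
  Serial.begin(9600);
  esc.attach(ESC_PIN, 1000, 2000);   // assumed 1000-2000 us throttle range

  // Step 1: send the high point, then connect the ESC battery and wait for the beeps.
  esc.writeMicroseconds(2000);
  Serial.println("Full throttle sent. Connect the ESC battery and wait for the beeps...");
  delay(5000);

  // Step 2: send the low point so the ESC stores both end points.
  esc.writeMicroseconds(1000);
  Serial.println("Low throttle sent. The ESC should confirm calibration with a final beep.");
  delay(3000);
}

void loop() {
  // Optional check: sweep gently to verify the motor responds across the range.
  for (int pulse = 1100; pulse <= 1300; pulse += 50) {
    esc.writeMicroseconds(pulse);
    delay(1000);
  }
  esc.writeMicroseconds(1000);   // back to idle
  delay(3000);
}

Once an ESC responds smoothly over the whole range, repeat the procedure (either this sketch or the transmitter-based method above) for the remaining ESCs.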
Next, learn to build other drones like a mission control drone or gliding drones from our book Building Smart Drones with ESP8266 and Arduino. Drones: Everything you ever wanted to know! How to build an Arduino based ‘follow me’ drone Tips and tricks for troubleshooting and flying drones safely


How to build an Arduino based 'follow me' drone

Vijin Boricha
23 May 2018
9 min read
In this tutorial, we will learn how to train the drone to do something, or give the drone artificial intelligence, by coding from scratch. There are several ways to build Follow Me-type drones. We will learn easy and quick ways in this article. Before going any further, let's learn the basics of a Follow Me drone. This is a book excerpt from Building Smart Drones with ESP8266 and Arduino written by Syed Omar Faruk Towaha.

If you are a hardcore programmer and hardware enthusiast, you can build an Arduino drone and make it a Follow Me drone by enabling a few extra features. For this section, you will need the following things:

Motors
ESCs
Battery
Propellers
Radio controller
Arduino Nano
HC-05 Bluetooth module
GPS
MPU6050 or GY-86 gyroscope
Some wires

Connections are simple. You need to connect the motors to the ESCs, and the ESCs to the battery. You can use a four-way connector (power distribution board) for this, like in the following diagram:

Now, connect the radio to the Arduino Nano with the following pin configuration:

Arduino pin    Radio pin
D3             CH1
D5             CH2
D2             CH3
D4             CH4
D12            CH5
D6             CH6

Now, connect the gyroscope to the Arduino Nano with the following configuration:

Arduino pin    Gyroscope pin
5V             5V
GND            GND
A4             SDA
A5             SCL

You are left with the four wires of the ESC signals; let's connect them to the Arduino Nano now, as shown in the following configuration:

Arduino pin    Motor signal pin
D7             Motor 1
D8             Motor 2
D9             Motor 3
D10            Motor 4

Our connection is almost complete. Now we need to power the Arduino Nano and the ESCs. Before doing that, make the grounds common, which means connecting both grounds together. Before going any further, we need to upload the code to the brain of our drone, which is the Arduino Nano. The code is a little bit big. I am going to explain the code after installing the necessary library. You will need a library installed in the Arduino library folder before going to the programming part. The library's name is PinChangeInt. Install the library and write the code for the drone. The full code can be found on GitHub.

Let's explain the code a little bit. In the code, you will find lots of functions with calculations. For our gyroscope, we needed to define all the axes, sensor data, pin configuration, temperature synchronization data, I2C data, and so on. In the following function, we have declared two structures for the accelerometer and gyroscope data with all the directions:

typedef union accel_t_gyro_union {
  struct {
    uint8_t x_accel_h;
    uint8_t x_accel_l;
    uint8_t y_accel_h;
    uint8_t y_accel_l;
    uint8_t z_accel_h;
    uint8_t z_accel_l;
    uint8_t t_h;
    uint8_t t_l;
    uint8_t x_gyro_h;
    uint8_t x_gyro_l;
    uint8_t y_gyro_h;
    uint8_t y_gyro_l;
    uint8_t z_gyro_h;
    uint8_t z_gyro_l;
  } reg;
  struct {
    int x_accel;
    int y_accel;
    int z_accel;
    int temperature;
    int x_gyro;
    int y_gyro;
    int z_gyro;
  } value;
};

In the void setup() function of our code, we have declared the pins we have connected to the motors:

myservoT.attach(7);   // 7 - TOP
myservoR.attach(8);   // 8 - RIGHT
myservoB.attach(9);   // 9 - BACK
myservoL.attach(10);  // 10 - LEFT

We also called our test_gyr_acc() and test_radio_reciev() functions, for testing the gyroscope and receiving data from the remote respectively.
In our test_gyr_acc() function, we have checked whether it can detect our gyroscope sensor or not, and set a condition so that if there is an error getting gyroscope data, pin 13 is set high to give a signal:

void test_gyr_acc() {
  error = MPU6050_read (MPU6050_WHO_AM_I, &c, 1);
  if (error != 0) {
    while (true) {
      digitalWrite(13, HIGH);
      delay(300);
      digitalWrite(13, LOW);
      delay(300);
    }
  }
}

We need to calibrate our gyroscope after testing that it is connected. To do that, we need the help of mathematics. We add rad_tilt_LR and rad_tilt_TB to x_a and y_a respectively and multiply by 2.4. Then we need to do some more calculations to get the correct x_adder and y_adder:

void stabilize() {
  P_x = (x_a + rad_tilt_LR) * 2.4;
  P_y = (y_a + rad_tilt_TB) * 2.4;
  I_x = I_x + (x_a + rad_tilt_LR) * dt_ * 3.7;
  I_y = I_y + (y_a + rad_tilt_TB) * dt_ * 3.7;
  D_x = x_vel * 0.7;
  D_y = y_vel * 0.7;
  P_z = (z_ang + wanted_z_ang) * 2.0;
  I_z = I_z + (z_ang + wanted_z_ang) * dt_ * 0.8;
  D_z = z_vel * 0.3;
  if (P_z > 160) { P_z = 160; }
  if (P_z < -160) { P_z = -160; }
  if (I_x > 30) { I_x = 30; }
  if (I_x < -30) { I_x = -30; }
  if (I_y > 30) { I_y = 30; }
  if (I_y < -30) { I_y = -30; }
  if (I_z > 30) { I_z = 30; }
  if (I_z < -30) { I_z = -30; }
  x_adder = P_x + I_x + D_x;
  y_adder = P_y + I_y + D_y;
}

We then checked that our ESCs are connected properly with the escRead() function. We also called elevatorRead() and aileronRead() to configure our drone's elevator and aileron. We called test_radio_reciev() to test whether the radio we have connected is working, and then we called check_radio_signal() to check whether the signal is working. We called all the stated functions from the void loop() function of our Arduino code. In the void loop() function, we also needed to configure the power distribution of the system. We added a condition like the following:

if (main_power > 750) {
  stabilize();
} else {
  zero_on_zero_throttle();
}

We also set a boundary: if main_power is greater than 750 (which is a stabilizing value for our case), then we stabilize the system; otherwise we call zero_on_zero_throttle(), which initializes the values for all the directions. After uploading this, you can control your drone by sending signals from your remote control.

Now, to make it a Follow Me drone, you need to connect a Bluetooth module or a GPS. You can connect your smartphone to the drone by using a Bluetooth module (HC-05 preferred) together with another Bluetooth module in a master-slave configuration. And, of course, to make the drone follow you, you need the GPS. So, let's connect them to our drone.

To connect the Bluetooth module, follow this configuration:

Arduino pin    Bluetooth module pin
TX             RX
RX             TX
5V             5V
GND            GND

See the following diagram for clarification:

For the GPS, connect it as shown in the following configuration:

Arduino pin    GPS pin
D11            TX
D12            RX
GND            GND
5V             5V

See the following diagram for clarification:

Since all the sensors use 5V power, I would recommend using an external 5V power supply for better communication, especially for the GPS. If we use the Bluetooth module, we need to make the drone's module the slave module and the other module the master module. To do that, you can set a pin mode for the master and then set the baud rate to at least 38,400, which is the minimum operating baud rate for the Bluetooth module. Then, we need to check whether one module can hear the other module.
For that, we can write our void loop() function as follows:

if (Serial.available() > 0) {
  state = Serial.read();
}
if (state == '0') {
  digitalWrite(Pin, LOW);
  state = 0;
} else if (state == '1') {
  digitalWrite(Pin, HIGH);
  state = 0;
}

And do the opposite for the other module, connecting it to another Arduino. Remember, you only need to send and receive signals, so refrain from using the other utilities of the Bluetooth module, for the sake of power consumption and swiftness.

If we use the GPS, we need to calibrate the compass and make it able to communicate with another GPS module. We need to read the long value from the I2C, as follows:

float readLongFromI2C() {
  unsigned long tmp = 0;
  for (int i = 0; i < 4; i++) {
    unsigned long tmp2 = Wire.read();
    tmp |= tmp2 << (i*8);
  }
  return tmp;
}

float readFloatFromI2C() {
  float f = 0;
  byte* p = (byte*)&f;
  for (int i = 0; i < 4; i++)
    p[i] = Wire.read();
  return f;
}

Then, we have to get the geo distance, as follows, where DEGTORAD is a variable that converts degrees to radians:

float geoDistance(struct geoloc &a, struct geoloc &b) {
  const float R = 6371000; // Earth radius
  float p1 = a.lat * DEGTORAD;
  float p2 = b.lat * DEGTORAD;
  float dp = (b.lat-a.lat) * DEGTORAD;
  float dl = (b.lon-a.lon) * DEGTORAD;
  float x = sin(dp/2) * sin(dp/2) + cos(p1) * cos(p2) * sin(dl/2) * sin(dl/2);
  float y = 2 * atan2(sqrt(x), sqrt(1-x));
  return R * y;
}

We also need to write a function for the geo bearing, where lat and lon are the latitude and longitude respectively, gained from the raw data of the GPS sensor:

float geoBearing(struct geoloc &a, struct geoloc &b) {
  float y = sin(b.lon-a.lon) * cos(b.lat);
  float x = cos(a.lat)*sin(b.lat) - sin(a.lat)*cos(b.lat)*cos(b.lon-a.lon);
  return atan2(y, x) * RADTODEG;
}

You can also use a mobile app to communicate with the GPS and make the drone move with you. Then the process is simple. Connect the GPS to your drone, get the TX and RX data from the Arduino, spread it through the radio, receive it through the telemetry, and then use the GPS from the phone with DroidPlanner or Tower. You also need to add a few lines to the main code to calibrate the compass. You can see the previous calibration code. The calibration of the compass varies from location to location, so I would suggest you use the trial-and-error method. In the following section, I will discuss how you can use an ESP8266 to make a GPS tracker that can be used with your drone.

We learned to build a Follow Me-type drone and also used DroidPlanner 2 and Tower to configure it. Know more about using a smartphone to enable the follow me feature of ArduPilot and a GPS tracker using the ESP8266 from this book Building Smart Drones with ESP8266 and Arduino.
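As a quick, self-contained way to sanity-check the two geo helpers above, the following sketch (not from the book) feeds them two hypothetical GPS fixes and prints the results. It assumes the geoloc struct simply holds lat and lon in decimal degrees, as the function signatures suggest, that geoDistance() and geoBearing() are pasted into the same sketch, and that DEGTORAD and RADTODEG (variables in the full code) can be defined here as constants. Note that, as written in the excerpt, geoDistance() converts its inputs to radians itself while geoBearing() does not, so radian values are passed to the latter.

#include <math.h>

#define DEGTORAD 0.017453293f   // pi / 180
#define RADTODEG 57.29578f      // 180 / pi

struct geoloc {
  float lat;   // decimal degrees
  float lon;   // decimal degrees
};

// geoDistance() and geoBearing() from the excerpt above go here.

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Hypothetical fixes: in the real drone these would come from the on-board
  // GPS and from the phone's position received over Bluetooth or telemetry.
  geoloc dronePos  = {23.7808f, 90.2792f};
  geoloc targetPos = {23.7815f, 90.2801f};

  float distance = geoDistance(dronePos, targetPos);   // metres

  // geoBearing() applies no DEGTORAD conversion itself, so convert first.
  geoloc droneRad  = {dronePos.lat * DEGTORAD,  dronePos.lon * DEGTORAD};
  geoloc targetRad = {targetPos.lat * DEGTORAD, targetPos.lon * DEGTORAD};
  float bearing = geoBearing(droneRad, targetRad);      // degrees from north

  Serial.print("Distance to target (m): ");
  Serial.println(distance);
  Serial.print("Bearing to target (deg): ");
  Serial.println(bearing);

  // A follow-me loop would yaw toward 'bearing' and move forward until
  // 'distance' drops below a chosen stand-off radius.
  delay(1000);
}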


Setting up Intel Edison

Packt
21 Jun 2017
8 min read
In this article by Avirup Basu, the author of the book Intel Edison Projects, we will be covering the following topics: Setting up the Intel Edison Setting up the developer environment (For more resources related to this topic, see here.) In every Internet of Things(IoT) or robotics project, we have a controller that is the brain of the entire system. Similarly we have Intel Edison. The Intel Edison computing module comes in two different packages. One of which is a mini breakout board the other of which is an Arduino compatible board. One can use the board in its native state as well but in that case the person has to fabricate his/hers own expansion board. The Edison is basically a size of a SD card. Due to its tiny size, it's perfect for wearable devices. However it's capabilities makes it suitable for IoT application and above all, the powerful processing capability makes it suitable for robotics application. However we don't simply use the device in this state. We hook up the board with an expansion board. The expansion board provides the user with enough flexibility and compatibility for interfacing with other units. The Edison has an operating system that is running the entire system. It runs a Linux image. Thus, to setup your device, you initially need to configure your device both at the hardware and at software level. Initial hardware setup We'll concentrate on the Edison package that comes with an Arduino expansion board. Initially you will get two different pieces: The Intel® Edison board The Arduino expansion board The following given is the architecture of the device: Architecture of Intel Edison. Picture Credits: https://software.intel.com/en-us/ We need to hook these two pieces up in a single unit. Place the Edison board on top of the expansion board such that the GPIO interfaces meet at a single point. Gently push the Edison against the expansion board. You will get a click sound. Use the screws that comes with the package to tighten the set up. Once, this is done, we'll now setup the device both at hardware level and software level to be used further. Following are the steps we'll cover in details: Downloading necessary software packages Connecting your Intel® Edison to your PC Flashing your device with the Linux image Connecting to a Wi-Fi network SSH-ing your Intel® Edison device Downloading necessary software packages To move forward with the development on this platform, we need to download and install a couple of software which includes the drivers and the IDEs. Following is the list of the software along with the links that are required: Intel® Platform Flash Tool Lite (https://01.org/android-ia/downloads/intel-platform-flash-tool-lite) PuTTY (http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html) Intel XDK for IoT (https://software.intel.com/en-us/intel-xdk) Arduino IDE (https://www.arduino.cc/en/Main/Software) FileZilla FTP client (https://filezilla-project.org/download.php) Notepad ++ or any other editor (https://notepad-plus-plus.org/download/v7.3.html) Drivers and miscellaneous downloads Latest Yocto* Poky image Windows standalone driver for Intel Edison FTDI drivers (http://www.ftdichip.com/Drivers/VCP.htm) The 1st and the 2nd packages can be downloaded from (https://software.intel.com/en-us/iot/hardware/edison/downloads) Plugging in your device After all the software and drivers installation, we'll now connect the device to a PC. You need two Micro-B USB Cables(s) to connect your device to the PC. 
You can also use a 9V power adapter and a single Micro-B USB cable, but for now we will not use the power adapter:

Different sections of the Arduino expansion board of Intel Edison

A small switch exists between the USB port and the OTG port. This switch must be towards the OTG port, because we're going to power the device from the OTG port and not through the DC power port. Once it is connected to your PC, open your device manager and expand the Ports section. If all the driver installations were successful, you should see two ports:

Intel Edison virtual com port
USB serial port

Flashing your device

Once your device is successfully detected and installed, you need to flash your device with the Linux image. For this we'll use the flash tool provided by Intel:

Open the Flash Lite tool and connect your device to the PC:

Intel Phone Flash Lite tool

Once the flash tool is opened, click on Browse... and browse to the .zip file of the Linux image you have downloaded. After you click on OK, the tool will automatically unzip the file. Next, click on Start to flash:

Intel® Phone Flash Lite tool – stage 1

You will be asked to disconnect and reconnect your device. Do as the tool says and the board should start flashing. It may take some time before the flashing is completed. You are requested not to tamper with the device during the process. Once the flashing is completed, we'll configure the device:

Intel® Phone Flash Lite tool – complete

Configuring the device

After flashing has completed successfully, we'll now configure the device. We're going to use the PuTTY console for the configuration. PuTTY is an SSH and telnet client, developed originally by Simon Tatham for the Windows platform. We're going to use the serial section here. Before opening the PuTTY console, open up the device manager and note the port number of the USB serial port. This will be used in your PuTTY console:

Ports for Intel® Edison in PuTTY

Next, select Serial in the PuTTY console and enter the port number. Use a baud rate of 115200. Press Open to open the window for communicating with the device:

PuTTY console – login screen

Once you are in the PuTTY console, you can execute commands to configure your Edison. The following is the set of tasks we'll do in the console to configure the device:

Provide your device a name
Provide a root password (SSH into your device)
Connect your device to Wi-Fi

Initially, when in the console, you will be asked to log in. Type in root and press Enter. Once entered, you will see root@edison, which means that you are logged in as root:

PuTTY console – login success

Now we are in the Linux terminal of the device. First, we'll enter the following command for setup:

configure_edison --setup

Press Enter after entering the command and the rest of the configuration is fairly straightforward:

PuTTY console – set password

First, you will be asked to set a password. Type in a password and press Enter. You need to type in your password again for confirmation. Next, we'll set up a name for the device:

PuTTY console – set name

Give a name to your device. Please note that this is not the login name for your device; it's just an alias for your device. Also, the name should be at least 5 characters long. Once you have entered the name, it will ask for confirmation; press y to confirm. Then it will ask you to set up Wi-Fi. Again, select y to continue. It's not mandatory to set up Wi-Fi, but it's recommended. We need the Wi-Fi for file transfer, downloading packages, and so on:

PuTTY console – set Wi-Fi

Once the scanning is completed, we'll get a list of available networks. Select the number corresponding to your network and press Enter. In this case it is 5, which corresponds to avirup171, my Wi-Fi network. Enter the network credentials. After you do that, your device will get connected to the Wi-Fi. You should get an IP address once your device is connected:

PuTTY console – set Wi-Fi - 2

After a successful connection you should get this screen. Make sure your PC is connected to the same network. Open up the browser on your PC and enter the IP address shown in the console. You should get a screen similar to this:

Wi-Fi setup – completed

Now we are done with the initial setup. However, the Wi-Fi setup normally doesn't happen in one go. Sometimes your device doesn't get connected to the Wi-Fi, and sometimes you cannot get the page shown before. In those cases, you need to start wpa_cli to manually configure the Wi-Fi. Refer to the following link for the details: http://www.intel.com/content/www/us/en/support/boards-and-kits/000006202.html

Summary

In this article, we have covered the initial setup of the Intel Edison and connecting it to the network. We have also covered how to transfer files to the Edison and vice versa.

Resources for Article: Further resources on this subject: Getting Started with Intel Galileo [article] Creating Basic Artificial Intelligence [article] Using IntelliTrace to Diagnose Problems with a Hosted Service [article]


Configuring the ESP8266

Packt
14 Jun 2017
10 min read
In this article by Marco Schwartz the authors of the book ESP8266 Internet of Things Cookbook, we will learn following recipes: Setting up the Arduino development environment for the ESP8266 Choosing an ESP8266 Required additional components (For more resources related to this topic, see here.) Setting up the Arduino development environment for the ESP8266 To start us off, we will look at how to set up Arduino IDE development environment so that we can use it to program the ESP8266. This will involve installing the Arduino IDE and getting the board definitions for our ESP8266 module. Getting ready The first thing you should do is download the Arduino IDE if you do not already have it installed in your computer. You can do that from this link: https://www.arduino.cc/en/Main/Software. The webpage will appear as shown. It features that latest version of the Arduino IDE. Select your operating system and download the latest version that is available when you access the link (it was 1.6.13 at when this articlewas being written): When the download is complete, install the Arduino IDE and run it on your computer. Now that the installation is complete it is time to get the ESP8266 definitions. Open the preference window in the Arduino IDE from File|Preferences or by pressing CTRL+Comma. Copy this URL: http://arduino.esp8266.com/stable/package_esp8266com_index.json. Paste it in the filed labelled additional board manager URLs as shown in the figure. If you are adding other URLs too, use a comma to separate them: Open the board manager from the Tools|Board menu and install the ESP8266 platform. The board manager will download the board definition files from the link provided in the preferences window and install them. When the installation is complete the ESP8266 board definitions should appear as shown in the screenshot. Now you can select your ESP8266 board from Tools|Board menu: How it works… The Arduino IDE is an open source development environment used for programming Arduino boards and Arduino-based boards. It is also used to upload sketches to other open source boards, such as the ESP8266. This makes it an important accessory when creating Internet of Things projects. Choosing an ESP8266 board The ESP8266 module is a self-contained System On Chip (SOC) that features an integrated TCP/IP protocol stack that allows you to add Wi-Fi capability to your projects. The module is usually mounted on circuit boards that breakout the pins of the ESP8266 chip, making it easy for you program the chip and to interface with input and output devices. ESP8266 boards come in different forms depending on the company that manufactures them. All the boards use Espressif’s ESP8266 chip as the main controller, but have different additional components and different pin configurations, giving each board unique additional features. Therefore, before embarking on your IoT project, take some time to compare and contrast the different types of ESP8266 boards that are available. This way, you will be able to select the board that has features best suited for your project. Available options The simple ESP8266-01 module is the most basic ESP8266 board available in the market. It has 8 pins which include 4 General Purpose Input/Output (GPIO) pins, serial communication TX and RX pins, enable pin and power pins VCC and GND. Since it only has 4 GPIO pins, you can only connect three inputs or outputsto it. The 8-pin header on the ESP8266-01 module has a 2.0mm spacing which is not compatible with breadboards. 
Therefore, you have to look for another way to connect the ESP8266-01 module to your setup when prototyping. You can use female to male jumper wires to do that: The ESP8266-07 is an improved version of the ESP8266-01 module. It has 16 pins which comprise of 9 GPIO pins, serial communication TX and RX pins, a reset pin, an enable pin and power pins VCC and GND. One of the GPIO pins can be used as an analog input pin.The board also comes with a U.F.L. connector that you can use to plug an external antenna in case you need to boost Wi-Fi signal. Since the ESP8266 has more GPIO pins you can have more inputs and outputs in your project. Moreover, it supports both SPI and I2C interfaces which can come in handy if you want to use sensors or actuators that communicate using any of those protocols. Programming the board requires the use of an external FTDI breakout board based on USB to serial converters such as the FT232RL chip. The pads/pinholes of the ESP8266-07 have a 2.0mm spacing which is not breadboard friendly. To solve this, you have to acquire a plate holder that breaks out the ESP8266-07 pins to a breadboard compatible pin configuration, with 2.54mm spacing between the pins. This will make prototyping easier. This board has to be powered from a 3.3V which is the operating voltage for the ESP8266 chip: The Olimex ESP8266 module is a breadboard compatible board that features the ESP8266 chip. Just like the ESP8266-07 board, it has SPI, I2C, serial UART and GPIO interface pins. In addition to that it also comes with Secure Digital Input/Output (SDIO) interface which is ideal for communication with an SD card. This adds 6 extra pins to the configuration bringing the total to 22 pins. Since the board does not have an on-board USB to serial converter, you have to program it using an FTDI breakout board or a similar USB to serial board/cable. Moreover it has to be powered from a 3.3V source which is the recommended voltage for the ESP8266 chip: The Sparkfun ESP8266 Thing is a development board for the ESP8266 Wi-Fi SOC. It has 20 pins that are breadboard friendly, which makes prototyping easy. It features SPI, I2C, serial UART and GPIO interface pins enabling it to be interfaced with many input and output devices.There are 8 GPIO pins including the I2C interface pins. The board has a 3.3V voltage regulator which allows it to be powered from sources that provide more than 3.3V. It can be powered using a micro USB cable or Li-Po battery. The USB cable also charges the attached Li-Po battery, thanks to the Li-Po battery charging circuit on the board. Programming has to be done via an external FTDI board: The Adafruit feather Huzzah ESP8266 is a fully stand-alone ESP8266 board. It has built in USB to serial interface that eliminates the need for using an external FTDI breakout board to program it. Moreover, it has an integrated battery charging circuit that charges any connected Li-Po battery when the USB cable is connected. There is also a 3.3V voltage regulator on the board that allows the board to be powered with more than 3.3V. Though there are 28 breadboard friendly pins on the board, only 22 are useable. 10 of those pins are GPIO pins and can also be used for SPI as well as I2C interfacing. One of the GPIO pins is an analog pin: What to choose? All the ESP8266 boards will add Wi-Fi connectivity to your project. However, some of them lack important features and are difficult to work with. So, the best option would be to use the module that has the most features and is easy to work with. 
The Adafruit ESP8266 fits the bill. The Adafruit ESP8266 is completely stand-alone and easy to power, program and configure due to its on-board features. Moreover, it offers many input/output pins that will enable you to add more features to your projects. It is affordable andsmall enough to fit in projects with limited space. There’s more… Wi-Fi isn’t the only technology that we can use to connect out projects to the internet. There are other options such as Ethernet and 3G/LTE. There are shields and breakout boards that can be used to add these features to open source projects. You can explore these other options and see which works for you. Required additional components To demonstrate how the ESP8266 works we will use some addition components. These components will help us learn how to read sensor inputs and control actuators using the GPIO pins. Through this you can post sensor data to the internet and control actuators from the internet resources such as websites. Required components The components we will use include: Sensors DHT11 Photocell Soil humidity Actuators Relay Powerswitch tail kit Water pump Breadboard Jumper wires Micro USB cable Sensors Let us discuss the three sensors we will be using. DHT11 The DHT11 is a digital temperature and humidity sensor. It uses a thermistor and capacitive humidity sensor to monitor the humidity and temperature of the surrounding air and produces a digital signal on the data pin. A digital pin on the ESP8266 can be used to read the data from the sensor data pin: Photocell A photocell is a light sensor that changes its resistance depending on the amount of incident light it is exposed to. They can be used in a voltage divider setup to detect the amount of light in the surrounding. In a setup where the photocell is used in the Vcc side of the voltage divider, the output of the voltage divider goes high when the light is bright and low when the light is dim. The output of the voltage divider is connected to an analog input pin and the voltage readings can be read: Soil humidity sensor The soil humidity sensor is used for measuring the amount of moisture in soil and other similar materials. It has two large exposed pads that act as a variable resistor. If there is more moisture in the soil the resistance between the pads reduces, leading to higher output signal. The output signal is connected to an analog pin from where its value is read: Actuators Let’s discuss about the actuators. Relays A relay is a switch that is operated electrically. It uses electromagnetism to switch large loads using small voltages. It comprises of three parts: a coil, spring and contacts. When the coil is energized by a HIGH signal from a digital pin of the ESP8266 it attracts the contacts forcing them closed. This completes the circuit and turns on the connected load. When the signal on the digital pin goes LOW, the coil is no longer energized and the spring pulls the contacts apart. This opens the circuit and turns of the connected load: Power switch tail kit A power switch tail kit is a device that is used to control standard wall outlet devices with microcontrollers. It is already packaged to prevent you from having to mess around with high voltage wiring. Using it you can control appliances in your home using the ESP8266: Water pump A water pump is used to increase the pressure of fluids in a pipe. It uses a DC motor to rotate a fan and create a vacuum that sucks up the fluid. 
The sucked fluid is then forced to move by the fan, creating a vacuum again that sucks up the fluid behind it. This in effect moves the fluid from one place to another: Breadboard A breadboard is used to temporarily connect components without soldering. This makes it an ideal prototyping accessory that comes in handy when building circuits: Jumper wires Jumper wires are flexible wires that are used to connect different parts of a circuit on a breadboard: Micro USB cable A micro USB cable will be used to connect the Adafruit ESP8266 board to the compute: Summary In this article we have learned how to setting up the Arduino development environment for the ESP8266,choosing an ESP8266, and required additional components.  Resources for Article: Further resources on this subject: Internet of Things with BeagleBone [article] Internet of Things Technologies [article] BLE and the Internet of Things [article]
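To round off these recipes: once the board definitions from the first recipe are installed and a board such as the Adafruit Feather Huzzah is selected, a short test sketch is a good way to confirm the toolchain and the Wi-Fi connection before wiring up the rest of the components. The following is only a minimal sketch under a few assumptions: the network credentials are placeholders, and the photocell voltage divider described above is assumed to be wired to the analog pin A0.

#include <ESP8266WiFi.h>

// Hypothetical network credentials - replace with your own.
const char* WIFI_SSID = "your-network";
const char* WIFI_PASS = "your-password";

// Assumed wiring: photocell voltage divider output on the ADC pin (A0).
const int LIGHT_PIN = A0;

void setup() {
  Serial.begin(115200);

  WiFi.begin(WIFI_SSID, WIFI_PASS);
  Serial.print("Connecting to Wi-Fi");
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println();
  Serial.print("Connected, IP address: ");
  Serial.println(WiFi.localIP());
}

void loop() {
  // Read the voltage divider: brighter light gives a higher reading (0-1023).
  int light = analogRead(LIGHT_PIN);
  Serial.print("Light level: ");
  Serial.println(light);
  delay(2000);
}

If the sketch uploads, joins your network, and prints sensible light readings, the development environment and the board are ready for the projects that follow.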
article-image-systems-and-logics
Packt
06 Apr 2017
19 min read
Save for later

Systems and Logics

Packt
06 Apr 2017
19 min read
In this article by Priya Kuber, Rishi Gaurav Bhatnagar, and Vijay Varada, authors of the book Arduino for Kids, we explain the structure and various components of code:

How does code work
What is code
What is a system
How to download, save, and access a file in the Arduino IDE

(For more resources related to this topic, see here.)

What is a System? Imagine a system as a box in which a process is completed. Every system solves a larger problem, and can be broken down into smaller problems that can be solved and assembled. Sort of like a Lego set! Each small process has 'logic' as the backbone of the solution. Logic can be expressed as an algorithm and implemented in code. You can design a system to arrive at solutions to a problem. Another advantage of breaking down a system into small processes is that, in case your solution fails to work, you can easily spot the source of your problem by checking whether your individual processes work.

What is Code? Code is a simple set of written instructions, given to a specific program in a computer, to perform a desired task. Code is written in a computer language. As we all know by now, a computer is an intelligent, electronic device capable of solving logical problems with a given set of instructions. Some examples of computer languages are Python, Ruby, C, C++, and so on. Find out some more examples of languages from the internet and write them down in your notebook.

What is an Algorithm? A logical step-by-step process, guided by the boundaries (or constraints) defined by a problem, followed to find a solution, is called an algorithm. In a more pictorial form, it can be represented as follows: (Logic + Control = Algorithm) (A picture depicting this equation.) What does that even mean? Let's understand what an algorithm means with the help of an example. It's your friend's birthday and you have been invited to the party (Isn't this exciting already?). You decide to gift her something. Since it's a gift, let's wrap it. What would you do to wrap the gift? How would you do it?

Look at the size of the gift
Fetch the gift wrapping paper
Fetch the scissors
Fetch the tape

Then you would proceed to place the gift inside the wrapping paper. You will start folding the corners in a way that efficiently covers the gift. In the meanwhile, to make sure that your wrapping is tight, you would use scotch tape. You keep working on the wrapper till the whole gift is covered (and mind you, neatly! you don't want mommy scolding you, right?). What did you just do? You used a logical step-by-step process to solve a simple task given to you. Again, coming back to the sentence: (Logic + Control = Algorithm). 'Logic' here is the set of instructions given to a computer to solve the problem. 'Control' are the words making sure that the computer understands all your boundaries.

Logic Logic is the study of reasoning, and when we add it to control structures, we get algorithms. Have you ever watered the plants using a water pipe or washed a car with it? How do you think it works? The pipe guides the water from the water tap to the car. It makes sure an optimum amount of water reaches the end of the pipe. A pipe is a control structure for water in this case. We will understand more about control structures in the next topic.

How does a control structure work? A very good example to understand how a control structure works is taken from wikiversity.
(https://en.wikiversity.org/wiki/Control_structures)

A precondition is the state of a variable before entering a control structure. In the gift wrapping example, the size of the gift determines the amount of gift wrapping paper you will use. Hence, it is a condition that you need to follow to successfully finish the task. In programming terms, such a condition is called a precondition. Similarly, a post condition is the state of the variable after exiting the control structure. And a variable, in code, is an alphabetic character, or a set of alphabetic characters, representing or storing a number or a value. Some examples of variables are x, y, z, a, b, c, kitten, dog, and robot.

Let us analyze flow control by using traffic flow as a model. A vehicle is arriving at an intersection. Thus, the precondition is that the vehicle is in motion. Suppose the traffic light at the intersection is red. The control structure must determine the proper course of action to assign to the vehicle:

Precondition: The vehicle is in motion.
Control structure: Is the traffic light green? If so, then the vehicle may stay in motion. Is the traffic light red? If so, then the vehicle must stop.
End of control structure.
Post condition: The vehicle comes to a stop.

Thus, upon exiting the control structure, the vehicle is stopped. If you wonder where you learnt to wrap the gift, you would know that you learnt it by observing other people doing a similar task through your eyes. Since our microcontroller does not have eyes, we need to teach it logical thinking using code. The series of logical steps that lead to a solution is called an algorithm, as we saw in the previous task. Hence, all the instructions we give to a microcontroller are in the form of an algorithm. A good algorithm solves the problem in a fast and efficient way. Blocks of small algorithms form larger algorithms. But an algorithm is just code! What will happen when you try to add sensors to your code? A combination of electronics and code can be called a system. Picture: (block diagram of sensors + Arduino + code written in a dialogue box.) Logic is universal. Just like there can be multiple ways to fold the wrapping paper, there can be multiple ways to solve a problem too! A microcontroller takes instructions only in certain languages. The instructions then go to a compiler that translates the code that we have written for the machine.

What language does your Arduino understand? For Arduino, we will use the language 'Processing'. Quoting from processing.org: Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Processing is an open source programming language and integrated development environment (IDE). Processing was originally built for designers, and it was extensively used in the electronic arts and visual design communities with the sole purpose of teaching the fundamentals of computer science in a visual context. This also served as the foundation of electronic sketchbooks. From the previous example of gift wrapping, you noticed that before you bring in the paper and other stationery needed, you have to see the size of the problem at hand (the gift).

What is a Library? In computer language, the stationery needed to complete your task is called a "library". A library is a collection of reusable code that a programmer can 'call' instead of writing everything again.
Now imagine if you had to cut a tree, make paper, and then color the paper into the beautiful wrapping paper that you used, when I asked you to wrap the gift. How tiresome would it be? (If you are inventing a new type of paper, sure, go ahead and chop some wood!) So before writing a program, you make sure that you have 'called' all the right libraries. Can you search the internet and make a note of a few Arduino libraries in your inventor's diary? Please remember that libraries are also made up of code! As our next activity, we will together learn more about how a library is created.

Activity: Understanding the Morse code In the times before two-way mobile communication, people used a one-way communication method called the Morse code. The following image shows the experimental setup for Morse code. Do not worry; we will not get into how you would perform it physically, but through this example you will understand how your Arduino works. We will show you the bigger picture first and then dissect it systematically so that you understand what the code contains. The Morse code is made up of two components: "short" and "long" signals. The signals could be in the form of a light pulse or sound. The following image shows what the Morse code looks like. A dot is a short signal and a dash is a long signal. Interesting, right? Try encrypting a message for your friend with these dots and dashes. For example, "Hello" would be: The image below shows what the Arduino code for Morse code looks like. The piece of code in dots and dashes is the message SOS which, as I am sure you all know, is an urgent appeal for help. SOS in Morse goes: dot dot dot; dash dash dash; dot dot dot. Since this is a library that is being created using dots and dashes, it is important that we first define how the dot becomes a dot and the dash becomes a dash. The following sections will take smaller sections or pieces of the main code and explain to you how they work. We will also introduce some interesting concepts using the same.

What is a function? Functions have instructions in a single line of code, telling the values in the brackets how to act. Let us see which one is the function in our code. Can you try to guess from the following screenshot? No? Let me help you! digitalWrite() in the above code is a function that, as you understand, 'writes' on the correct pin of the Arduino. delay is a function that tells the controller how frequently it should send the message. The higher the delay number, the slower the message will be (imagine it as a way to slow down a friend who speaks too fast, helping you to understand him better!). Look up the internet to find out the maximum number that you can stuff into delay.

What is a constant? A constant is an identifier with a pre-defined, non-changeable value. What is an identifier, you ask? An identifier is a name that labels the identity of a unique object or value. As you can see from the above piece of code, HIGH and LOW are constants. Q: What is the opposite of a constant? Ans: A variable. The above food for thought brings us to the next section.

What is a variable? A variable is a symbolic name for information. In plain English, a 'teacher' can have any name; hence, 'teacher' could be a variable. A variable is used to store a piece of information temporarily. The value of a variable changes if any action is taken on it, for example: add, subtract, multiply, and so on. (Imagine how your teacher praises you when you complete your assignment on time and scolds you when you do not!)

What is a Datatype?
Datatypes are sets of data that have pre-defined properties. Now look at the first block of the example program in the following image: int, as shown in the above screenshot, is a datatype. The following table shows some examples of datatypes.

Datatype | Use | Example
int | describes an integer number; used to represent whole numbers | 1, 2, 13, 99, and so on
float | used to represent numbers that have decimals | 0.66, 1.73, and so on
char | represents any character; strings are written in single quotes | 'A', 65, and so on
str | represents a string | "This is a good day!"

With the above definition, can we recognize what pinMode is? Every time you have a doubt about a command or want to learn more about it, you can always look it up on the Arduino.cc website. You could do the same for digitalWrite() as well! From the pinMode page of Arduino.cc, we can define it as a command that configures the specified pin to behave either as an input or an output. Let us now see something more interesting.

What is a control structure? We have already seen how a control structure works. In this section, we will be more specific to our code. Now I draw your attention towards this specific block from the main example above: Do you see void setup() followed by code in the brackets? Similarly, void loop()? These form the basics of the structure of an Arduino program sketch. A structure holds the program together and helps the compiler make sense of the commands entered. A compiler is a program that turns code understood by humans into code that is understood by machines. There are other loop and control structures, as you can see in the following screenshot. These control structures are explained next.

How do you use control structures? Imagine you are teaching your friend to build a 6 cm high Lego wall. You ask her to place one layer of Lego bricks, and then you ask her to place another layer of Lego bricks on top of the bottom layer. You ask her to repeat the process until the wall is 6 cm high. This process of repeating instructions until a desired result is achieved is called a loop. A microcontroller is only as smart as you program it to be. Hence, we will move on to the different types of loops:

While loop: As the name suggests, it repeats a statement (or a group of statements) while the given condition is true. The condition is tested before executing the loop body.
For loop: Executes a sequence of statements multiple times and abbreviates the code that manages the loop variable.
Do while loop: Like a while statement, except that it tests the condition at the end of the loop body.
Nested loop: You can use one or more loops inside any other while, for, or do..while loop.

Now you were able to successfully tell your friend when to stop, but how do you control the microcontroller? Do not worry, the magic is on its way! You introduce control statements:

Break statement: Breaks the flow of the loop or switch statement and transfers execution to the statement immediately following the loop or switch.
Continue statement: Causes the loop to skip the remainder of its body and immediately retest its condition before reiterating.
Goto statement: Transfers control to a labeled statement. It is not advised to use goto statements in your programs.

Quiz time: What is an infinite loop? Look it up on the internet and note it in your inventor's notebook.

The Arduino IDE The full form of IDE is Integrated Development Environment.
The IDE uses a compiler to translate code into a simple language that the computer understands. The compiler is the program that reads all your code and translates your instructions for your microcontroller. In the case of the Arduino IDE, it also verifies whether your code makes sense to it or not. The Arduino IDE is like a friend who helps you finish your homework and reviews it before you give it in for submission; if there are any errors, it helps you identify and resolve them.

Why should you love the Arduino IDE? I am sure by now things look too technical. You have been introduced to SO many new terms to learn and understand. The important thing here is not to forget to have fun while learning. Understanding how the IDE works is very useful when you are trying to modify or write your own code. If you make a mistake, it will tell you which line is giving you trouble. Isn't that cool? The Arduino IDE also comes with loads of cool examples that you can plug and play. It also has a long list of libraries for you to access. Now let us learn how to get the library onto your computer. Ask an adult to help you with this section if you are unable to succeed. Make a note of the answers to the following questions in your inventor's notebook before downloading the IDE. Get your answers from Google or ask an adult.

What is an operating system?
What is the name of the operating system running on your computer?
What is the version of your current operating system?
Is your operating system 32 bit or 64 bit?
What is the name of the Arduino board that you have?

Now that we have done our homework, let us start playing!

How to download the IDE Let us now go further and understand how to download something that's going to be our playground. I am sure you'd be eager to see the place you'll be working in for building new and interesting stuff! For those of you wanting to learn and do everything by themselves, open any browser and search for "Arduino IDE" followed by the name of your operating system with "32 bits" or "64 bits", as learnt in the previous section. Click to download the latest version and install! Else, the step-by-step instructions are here:

Open your browser (Firefox, Chrome, Safari).
Go to www.arduino.cc as shown in the following screenshot.
Click on the 'Download' section of the homepage, which is the third option from your left, as shown in the following screenshot.
From the options, locate the name of your operating system and click on the right version (32 bits or 64 bits).
Then click on 'Just Download' after the new page appears.

After clicking on the desired link and saving the files, you should be able to double-click on the Arduino icon and install the software. If you have managed to install it successfully, you should see the following screens. If not, go back to step 1 and follow the procedure again. The next screenshot shows you how the program will look when it is loading. This is how the IDE looks when no code is written into it.

Your first program Now that you have your IDE ready and open, it is time to start exploring. As promised before, the Arduino IDE comes with many examples, libraries, and helping tools to get curious minds such as yours started soon. Let us now look at how you can access your first program via the Arduino IDE. A large number of examples can be accessed in the File > Examples option, as shown in the following screenshot. Just like we all have nicknames in school, a program written in Processing is called a 'sketch'.
Whenever you write any program for Arduino, it is important that you save your work. Programs written in Processing are saved with the extension .ino. The name .ino is derived from the last three letters of the word ArduINO. What other extensions are you aware of? (Hint: .doc, .ppt, and so on.) Make a note in your inventor's notebook. Now ask yourself why so many extensions exist. An extension tells the computer which software should open the file, so that when the contents are displayed, they make sense. As we learnt above, a program written in the Arduino IDE is called a 'sketch'. Your first sketch is named 'blink'. What does it do? Well, it makes your Arduino blink! For now, we can concentrate on the code. Click on File | Examples | Basics | Blink. Refer to the next image for this. When you load an example sketch, this is how it will look. In the image below you will be able to identify the structure of the code and recall the meaning of functions and integers from the previous section. We learnt that the Arduino IDE is a compiler too! After opening your first example, we can now learn how to hide information from the compiler. If you want to insert any extra information in plain English, you can do so by using symbols as follows: /* your text here */ OR // your text here. Comments can also be individually inserted above lines of code, explaining the functions. It is good practice to write comments, as they are useful when you are revisiting your old code to modify it at a later date. Try editing the contents of the comment section by spelling out your name. The following screenshot shows how your edited code will look.

Verifying your first sketch Now that you have your first complete sketch in the IDE, how do you confirm that your microcontroller will understand it? You do this by clicking on the easy-to-locate Verify button, which has a small checkmark icon. You will now see that the IDE informs you "Done compiling", as shown in the screenshot below. It means that you have successfully written and verified your first code. Congratulations, you did it!

Saving your first sketch As we learnt above, it is very important to save your work. We now learn the steps to make sure that your work does not get lost. Now that you have your first code inside your IDE, click on File > Save As. The following screenshot shows how to save the sketch. Give an appropriate name to your project file and save it just like you would save your files from Paintbrush or any other software that you use. The file will be saved in the .ino format.

Accessing your first sketch Open the folder where you saved the sketch. Double-click on the .ino file. The program will open in a new window of the IDE. The following screenshot has been taken from Mac OS; the file will look different on a Linux or Windows system.

Summary Now we know about systems and how logic is used to solve problems. We can write and modify simple code. We also know the basics of the Arduino IDE and studied how to verify, save, and access a program. A minimal, hand-written Morse-style sketch that pulls these ideas together appears after the resource list below.

Resources for Article: Further resources on this subject: Getting Started with Arduino [article] Connecting Arduino to the Web [article] Functions with Arduino [article]
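To connect these ideas, here is a minimal, hand-written Morse-style sketch in the spirit of the blink example above. It is not the library from the example; the pin number (13, the usual on-board LED) and the 200 ms time unit are just assumptions for illustration.

// Blink the letters S-O-S on pin 13, using the functions,
// constants, and loops described in this article.

const int ledPin = 13;          // constant: the on-board LED pin
const int unit   = 200;         // constant: length of one Morse unit in ms

void dot() {                    // a short signal
  digitalWrite(ledPin, HIGH);
  delay(unit);
  digitalWrite(ledPin, LOW);
  delay(unit);
}

void dash() {                   // a long signal = three units
  digitalWrite(ledPin, HIGH);
  delay(unit * 3);
  digitalWrite(ledPin, LOW);
  delay(unit);
}

void setup() {
  pinMode(ledPin, OUTPUT);      // configure the LED pin as an output
}

void loop() {
  for (int i = 0; i < 3; i++) dot();    // S
  for (int i = 0; i < 3; i++) dash();   // O
  for (int i = 0; i < 3; i++) dot();    // S
  delay(unit * 7);                      // pause before repeating
}

You can verify and save this sketch exactly as described above; the for loops and the dot() and dash() functions are small examples of the control structures and functions from the earlier sections.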

article-image-ros-architecture-and-concepts
Packt
28 Dec 2016
24 min read
Save for later

ROS Architecture and Concepts

Packt
28 Dec 2016
24 min read
In this article by Anil Mahtani, Luis Sánchez, Enrique Fernández, and Aaron Martinez, authors of the book Effective Robotics Programming with ROS, Third Edition, you will learn the structure of ROS and the parts it is made up of. Furthermore, you will start to create nodes and packages and use ROS with examples using Turtlesim. The ROS architecture has been designed and divided into three sections or levels of concepts:

The Filesystem level
The Computation Graph level
The Community level

(For more resources related to this topic, see here.)

The first level is the Filesystem level. In this level, a group of concepts are used to explain how ROS is internally formed, the folder structure, and the minimum number of files that it needs to work. The second level is the Computation Graph level, where communication between processes and systems happens. In this section, we will see all the concepts and mechanisms that ROS has to set up systems, handle all the processes, communicate with more than a single computer, and so on. The third level is the Community level, which comprises a set of tools and concepts to share knowledge, algorithms, and code between developers. This level is of great importance; as with most open source software projects, having a strong community not only improves the ability of newcomers to understand the intricacies of the software and solve the most common issues, but is also the main force driving its growth.

Understanding the ROS Filesystem level The ROS Filesystem is one of the strangest concepts to grasp when starting to develop projects in ROS, but with time and patience, the reader will easily become familiar with it and realize its value for managing projects and their dependencies. The main goal of the ROS Filesystem is to centralize the build process of a project while at the same time providing enough flexibility and tooling to decentralize its dependencies. Similar to an operating system, an ROS program is divided into folders, and these folders have files that describe their functionalities:

Packages: Packages form the atomic level of ROS. A package has the minimum structure and content to create a program within ROS. It may have ROS runtime processes (nodes), configuration files, and so on.
Package manifests: Package manifests provide information about a package, licenses, dependencies, compilation flags, and so on. A package manifest is managed with a file called package.xml.
Metapackages: When you want to aggregate several packages in a group, you will use metapackages. In ROS Fuerte, this form of ordering packages was called Stacks. To maintain the simplicity of ROS, the stacks were removed, and now metapackages fulfil this function. In ROS, there exist a lot of these metapackages; for example, the navigation stack.
Metapackage manifests: Metapackage manifests (package.xml) are similar to a normal package, but with an export tag in XML. They also have certain restrictions in their structure.
Message (msg) types: A message is the information that a process sends to other processes. ROS has a lot of standard types of messages. Message descriptions are stored in my_package/msg/MyMessageType.msg.
Service (srv) types: Service descriptions, stored in my_package/srv/MyServiceType.srv, define the request and response data structures for services provided by each process in ROS.

In the following screenshot, you can see the content of the turtlesim package. What you see is a series of files and folders with code, images, launch files, services, and messages.
Keep in mind that the screenshot was edited to show a short list of files; the real package has more.

The workspace In general terms, the workspace is a folder which contains packages; those packages contain our source files, and the environment or workspace provides us with a way to compile those packages. It is useful when you want to compile various packages at the same time, and it is a good way to centralize all of our development. A typical workspace is shown in the following screenshot. Each folder is a different space with a different role:

The source space: In the source space (the src folder), you put your packages, projects, clone packages, and so on. One of the most important files in this space is CMakeLists.txt. The src folder has this file because it is invoked by cmake when you configure the packages in the workspace. This file is created with the catkin_init_workspace command.
The build space: In the build folder, cmake and catkin keep the cache information, configuration, and other intermediate files for our packages and projects.
The development (devel) space: The devel folder is used to keep the compiled programs. This is used to test the programs without the installation step. Once the programs are tested, you can install or export the package to share it with other developers.

You have two options with regard to building packages with catkin. The first one is to use the standard CMake workflow. With this, you can compile one package at a time, as shown in the following commands:

$ cmake packageToBuild/
$ make

If you want to compile all your packages, you can use the catkin_make command line, as shown in the following commands:

$ cd workspace
$ catkin_make

Both commands build the executables in the build space directory configured in ROS. Another interesting feature of ROS is its overlays. When you are working with a ROS package, for example, turtlesim, you can do it with the installed version, or you can download the source file and compile it to use your modified version. ROS permits you to use your version of this package instead of the installed version. This is very useful if you are working on an upgrade of an installed package.

Packages Usually, when we talk about packages, we refer to a typical structure of files and folders. This structure looks as follows:

include/package_name/: This directory includes the headers of the libraries that you would need.
msg/: If you develop nonstandard messages, put them here.
scripts/: These are executable scripts that can be in Bash, Python, or any other scripting language.
src/: This is where the source files of your programs are present. You can create a folder for nodes and nodelets or organize it as you want.
srv/: This represents the service (srv) types.
CMakeLists.txt: This is the CMake build file.
package.xml: This is the package manifest.

To create, modify, or work with packages, ROS gives us tools for assistance, some of which are as follows:

rospack: This command is used to get information or find packages in the system.
catkin_create_pkg: This command is used when you want to create a new package.
catkin_make: This command is used to compile a workspace.
rosdep: This command installs the system dependencies of a package.
rqt_dep: This command is used to see the package dependencies as a graph. If you want to see the package dependencies as a graph, you will find a plugin called package graph in rqt. Select a package and see the dependencies.
To move between packages and their folders and files, ROS gives us a very useful package called rosbash, which provides commands that are very similar to Linux commands. The following are a few examples:

roscd: This command helps us change the directory. It is similar to the cd command in Linux.
rosed: This command is used to edit a file.
roscp: This command is used to copy a file from a package.
rosd: This command lists the directories of a package.
rosls: This command lists the files of a package. It is similar to the ls command in Linux.

Every package must contain a package.xml file, as it is used to specify information about the package. If you find this file inside a folder, it is very likely that this folder is a package or a metapackage. If you open the package.xml file, you will see information about the name of the package, dependencies, and so on. All of this is to make the installation and distribution of these packages easy. Two typical tags that are used in the package.xml file are <build_depend> and <run_depend>. The <build_depend> tag shows which packages must be installed before installing the current package. This is because the new package might use functionality contained in another package. The <run_depend> tag shows the packages that are necessary to run the code of the package. The following screenshot is an example of the package.xml file:

Metapackages As we have shown earlier, metapackages are special packages with only one file inside; this file is package.xml. This package does not have other files, such as code, includes, and so on. Metapackages are used to refer to other packages that are normally grouped following a feature-like functionality, for example, navigation stack, ros_tutorials, and so on. You can convert your stacks and packages from ROS Fuerte to Kinetic and catkin using certain rules for migration. These rules can be found at http://wiki.ros.org/catkin/migrating_from_rosbuild. In the following screenshot, you can see the content from the package.xml file in the ros_tutorials metapackage. You can see the <export> tag and the <run_depend> tag. These are necessary in the package manifest, which is also shown in the following screenshot: If you want to locate the ros_tutorials metapackage, you can use the following command:

$ rosstack find ros_tutorials

The output will be a path, such as /opt/ros/kinetic/share/ros_tutorials. To see the code inside, you can use the following command line:

$ vim /opt/ros/kinetic/ros_tutorials/package.xml

Remember that Kinetic uses metapackages, not stacks, but the rosstack find command-line tool is also capable of finding metapackages.

Messages ROS uses a simplified message description language to describe the data values that ROS nodes publish. With this description, ROS can generate the right source code for these types of messages in several programming languages. ROS has a lot of messages predefined, but if you develop a new message, it will be in the msg/ folder of your package. Inside that folder, certain files with the .msg extension define the messages. A message must have two main parts: fields and constants. Fields define the type of data to be transmitted in the message, for example, int32, float32, and string, or new types that you have created earlier, such as type1 and type2. Constants define the name of the fields.
An example of an msg file is as follows:

int32 id
float32 vel
string name

In ROS, you can find a lot of standard types to use in messages, as shown in the following table:

Primitive type | Serialization | C++ | Python
bool (1) | unsigned 8-bit int | uint8_t (2) | bool
int8 | signed 8-bit int | int8_t | int
uint8 | unsigned 8-bit int | uint8_t | int (3)
int16 | signed 16-bit int | int16_t | int
uint16 | unsigned 16-bit int | uint16_t | int
int32 | signed 32-bit int | int32_t | int
uint32 | unsigned 32-bit int | uint32_t | int
int64 | signed 64-bit int | int64_t | long
uint64 | unsigned 64-bit int | uint64_t | long
float32 | 32-bit IEEE float | float | float
float64 | 64-bit IEEE float | double | float
string | ascii string (4) | std::string | string
time | secs/nsecs signed 32-bit ints | ros::Time | rospy.Time
duration | secs/nsecs signed 32-bit ints | ros::Duration | rospy.Duration

A special type in ROS is the header type. This is used to add the time, frame, and sequence number. This permits you to have the messages numbered, to see who is sending the message, and to have more functions that are transparent for the user and that ROS is handling. The header type contains the following fields:

uint32 seq
time stamp
string frame_id

You can see the structure using the following command:

$ rosmsg show std_msgs/Header

Thanks to the header type, it is possible to record the timestamp and frame of what is happening with the robot. ROS provides certain tools to work with messages. The rosmsg tool prints out the message definition information and can find the source files that use a message type. In upcoming sections, we will see how to create messages with the right tools.

Services ROS uses a simplified service description language to describe ROS service types. This builds directly upon the ROS msg format to enable request/response communication between nodes. Service descriptions are stored in .srv files in the srv/ subdirectory of a package. To call a service, you need to use the package name along with the service name; for example, you will refer to the sample_package1/srv/sample1.srv file as sample_package1/sample1. Several tools exist to perform operations on services. The rossrv tool prints out the service descriptions and the packages that contain the .srv files, and finds source files that use a service type. If you want to create a service, ROS can help you with the service generator. These tools generate code from an initial specification of the service. You only need to add the gensrv() line to your CMakeLists.txt file. In upcoming sections, you will learn how to create your own services.

Understanding the ROS Computation Graph level ROS creates a network where all the processes are connected. Any node in the system can access this network, interact with other nodes, see the information that they are sending, and transmit data to the network. The basic concepts in this level are nodes, the master, Parameter Server, messages, services, topics, and bags, all of which provide data to the graph in different ways and are explained in the following list:

Nodes: Nodes are processes where computation is done. If you want to have a process that can interact with other nodes, you need to create a node with this process to connect it to the ROS network. Usually, a system will have many nodes to control different functions. You will see that it is better to have many nodes that provide only a single functionality, rather than a large node that does everything in the system. Nodes are written with an ROS client library, for example, roscpp or rospy.
The master: The master provides the registration of names and the lookup service to the rest of the nodes. It also sets up connections between the nodes. If you don't have it in your system, you can't communicate with nodes, services, messages, and others. In a distributed system, you will have the master on one computer, and you can execute nodes on this or other computers.
Parameter Server: Parameter Server gives us the possibility of using keys to store data in a central location. With this, it is possible to configure nodes while they are running or to change the working parameters of a node.
Messages: Nodes communicate with each other through messages. A message contains data that provides information to other nodes. ROS has many types of messages, and you can also develop your own type of message using standard message types.
Topics: Each message must have a name to be routed by the ROS network. When a node is sending data, we say that the node is publishing a topic. Nodes can receive topics from other nodes by simply subscribing to the topic. A node can subscribe to a topic even if there aren't any other nodes publishing to this specific topic. This allows us to decouple the production from the consumption. It's important that topic names are unique to avoid problems and confusion between topics with the same name.
Services: When you publish topics, you are sending data in a many-to-many fashion, but when you need a request or an answer from a node, you can't do it with topics. Services give us the possibility of interacting with nodes. Also, services must have a unique name. When a node has a service, all the nodes can communicate with it, thanks to ROS client libraries.
Bags: Bags are a format to save and play back ROS message data. Bags are an important mechanism to store data, such as sensor data, that can be difficult to collect but is necessary to develop and test algorithms. You will use bags a lot while working with complex robots.

In the following diagram, you can see the graphic representation of this level. It represents a real robot working in real conditions. In the graph, you can see the nodes, the topics, which node is subscribed to a topic, and so on. This graph does not represent messages, bags, Parameter Server, and services. Other tools are necessary to see a graphic representation of them. The tool used to create the graph is rqt_graph. These concepts are implemented in the ros_comm repository.

Nodes and nodelets Nodes are executables that can communicate with other processes using topics, services, or the Parameter Server. Using nodes in ROS provides us with fault tolerance and separates the code and functionalities, making the system simpler. ROS has another type of node called nodelets. These special nodes are designed to run multiple nodes in a single process, with each nodelet being a thread (light process). This way, we avoid using the ROS network among them, but still permit communication with other nodes. With that, nodes can communicate more efficiently, without overloading the network. Nodelets are especially useful for camera systems and 3D sensors, where the volume of data transferred is very high. A node must have a unique name in the system. This name is used to permit the node to communicate with another node using its name without ambiguity. A node can be written using different libraries, such as roscpp and rospy; roscpp is for C++ and rospy is for Python. Throughout, we will use roscpp.
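To make the node concept concrete, here is a minimal publisher node written with roscpp. This is only a sketch: the node name, topic name, and message content are arbitrary, and building it would still require the usual catkin package boilerplate (CMakeLists.txt and package.xml) that is not shown here.

// talker.cpp -- a minimal roscpp node that publishes a topic
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char **argv)
{
  ros::init(argc, argv, "talker");      // register this node with the master
  ros::NodeHandle nh;

  // Advertise a topic named "chatter" carrying std_msgs/String messages
  ros::Publisher pub = nh.advertise<std_msgs::String>("chatter", 10);

  ros::Rate rate(1);                    // publish once per second
  while (ros::ok())
  {
    std_msgs::String msg;
    msg.data = "hello from talker";
    pub.publish(msg);
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}

Once compiled with catkin_make and started with rosrun, this node would appear in the output of rosnode list, and its topic in rostopic list.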
ROS has tools to handle nodes and give us information about them, such as rosnode. The rosnode tool is a command-line tool used to display information about nodes, such as listing the currently running nodes. The supported commands are as follows:

rosnode info NODE: This prints information about a node.
rosnode kill NODE: This kills a running node or sends a given signal.
rosnode list: This lists the active nodes.
rosnode machine hostname: This lists the nodes running on a particular machine, or lists machines.
rosnode ping NODE: This tests the connectivity to the node.
rosnode cleanup: This purges the registration information of unreachable nodes.

A powerful feature of ROS nodes is the possibility of changing parameters while you start the node. This feature gives us the power to change the node name, topic names, and parameter names. We use this to reconfigure the node without recompiling the code so that we can use the node in different scenes. An example of changing a topic name is as follows:

$ rosrun book_tutorials tutorialX topic1:=/level1/topic1

This command will change the topic name topic1 to /level1/topic1. To change parameters in the node, you can do something similar to changing the topic name. For this, you only need to add an underscore (_) to the parameter name; for example:

$ rosrun book_tutorials tutorialX _param:=9.0

The preceding command will set param to the float number 9.0. Bear in mind that you cannot use names that are reserved by the system. They are as follows:

__name: This is a special, reserved keyword for the name of the node.
__log: This is a reserved keyword that designates the location where the node's log file should be written.
__ip and __hostname: These are substitutes for ROS_IP and ROS_HOSTNAME.
__master: This is a substitute for ROS_MASTER_URI.
__ns: This is a substitute for ROS_NAMESPACE.

Topics Topics are buses used by nodes to transmit data. Topics can be transmitted without a direct connection between nodes, which means that the production and consumption of data are decoupled. A topic can have various subscribers and can also have various publishers, but you should be careful when publishing the same topic with different nodes as it can create conflicts. Each topic is strongly typed by the ROS message type used to publish it, and nodes can only receive messages from a matching type. A node can subscribe to a topic only if it has the same message type. The topics in ROS can be transmitted using TCP/IP and UDP. The TCP/IP-based transport is known as TCPROS and uses a persistent TCP/IP connection. This is the default transport used in ROS. The UDP-based transport is known as UDPROS and is a low-latency, lossy transport, so it is best suited to tasks such as teleoperation. ROS has a tool to work with topics called rostopic. It is a command-line tool that gives us information about a topic or publishes data directly on the network. This tool has the following parameters:

rostopic bw /topic: This displays the bandwidth used by the topic.
rostopic echo /topic: This prints messages to the screen.
rostopic find message_type: This finds topics by their type.
rostopic hz /topic: This displays the publishing rate of the topic.
rostopic info /topic: This prints information about the topic, such as its message type, publishers, and subscribers.
rostopic list: This prints information about active topics.
rostopic pub /topic type args: This publishes data to the topic. It allows us to create and publish data on whatever topic we want, directly from the command line.
rostopic type /topic: This prints the topic type, that is, the type of message it publishes.

We will learn to use this command-line tool in upcoming sections.

Services When you need to communicate with nodes and receive a reply, in an RPC fashion, you cannot do it with topics; you need to do it with services. Services are developed by the user, and standard services don't exist for nodes. The files with the source code of the services are stored in the srv folder. Similar to topics, services have an associated service type that is the package resource name of the .srv file. As with other ROS filesystem-based types, the service type is the package name plus the name of the .srv file. ROS has two command-line tools to work with services: rossrv and rosservice. With rossrv, we can see information about the services' data structures, and it has exactly the same usage as rosmsg. With rosservice, we can list and query services. The supported commands are as follows:

rosservice call /service args: This calls the service with the arguments provided.
rosservice find msg-type: This finds services by service type.
rosservice info /service: This prints information about the service.
rosservice list: This lists the active services.
rosservice type /service: This prints the service type.
rosservice uri /service: This prints the ROSRPC URI of the service.

Messages A node publishes information using messages, which are linked to topics. A message has a simple structure that uses standard types or types developed by the user. Message types use the following standard ROS naming convention: the name of the package, then /, and then the name of the .msg file. For example, std_msgs/msg/String.msg has the std_msgs/String message type. ROS has the rosmsg command-line tool to get information about messages. The accepted parameters are as follows:

rosmsg show: This displays the fields of a message.
rosmsg list: This lists all messages.
rosmsg package: This lists all of the messages in a package.
rosmsg packages: This lists all of the packages that have the message.
rosmsg users: This searches for code files that use the message type.
rosmsg md5: This displays the MD5 sum of a message.

Bags A bag is a file created by ROS with the .bag format to save all of the information of the messages, topics, services, and others. You can use this data later to visualize what has happened; you can play, stop, rewind, and perform other operations with it. A bag file can be reproduced in ROS just as a real session can, sending the topics at the same time with the same data. Normally, we use this functionality to debug our algorithms. To use bag files, we have the following tools in ROS:

rosbag: This is used to record, play, and perform other operations.
rqt_bag: This is used to visualize data in a graphic environment.
rostopic: This helps us see the topics sent to the nodes.

The ROS master The ROS master provides naming and registration services to the rest of the nodes in the ROS system. It tracks publishers and subscribers to topics as well as services. The role of the master is to enable individual ROS nodes to locate one another. Once these nodes have located each other, they communicate with each other in a peer-to-peer fashion. In the following diagram, you can see a graphic example of the steps performed in ROS to advertise a topic, subscribe to a topic, and publish a message. The master also provides Parameter Server. The master is most commonly run using the roscore command, which loads the ROS master along with other essential components.
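Before moving on to Parameter Server, here is the subscriber side of the earlier talker sketch, again as an illustrative roscpp fragment rather than code from the book. The node and topic names simply match the assumed talker example above.

// listener.cpp -- a minimal roscpp subscriber matching the talker sketch
#include <ros/ros.h>
#include <std_msgs/String.h>

void chatterCallback(const std_msgs::String::ConstPtr& msg)
{
  // Called by ROS every time a new message arrives on the topic
  ROS_INFO("I heard: [%s]", msg->data.c_str());
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "listener");
  ros::NodeHandle nh;

  // Subscribe to "chatter"; the message type is deduced from the callback
  ros::Subscriber sub = nh.subscribe("chatter", 10, chatterCallback);

  ros::spin();                 // hand control to ROS until the node shuts down
  return 0;
}

Running the talker and the listener together, with roscore providing the master, shows the publish/subscribe mechanics described in this section.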
Parameter Server Parameter Server is a shared, multivariable dictionary that is accessible via a network. Nodes use this server to store and retrieve parameters at runtime. Parameter Server is implemented using XMLRPC and runs inside the ROS master, which means that its API is accessible via normal XMLRPC libraries. XMLRPC is a Remote Procedure Call (RPC) protocol that uses XML to encode its calls and HTTP as a transport mechanism. Parameter Server uses XMLRPC data types for parameter values, which include the following: 32-bit integers Booleans Strings Doubles ISO8601 dates Lists Base64-encoded binary data ROS has the rosparam tool to work with Parameter Server. The supported parameters are as follows: rosparam list: This lists all the parameters in the server rosparam get parameter: This gets the value of a parameter rosparam set parameter value: This sets the value of a parameter rosparam delete parameter: This deletes a parameter rosparam dump file: This saves Parameter Server to a file rosparam load file: This loads a file (with parameters) on Parameter Server Understanding the ROS Community level The ROS Community level concepts are the ROS resources that enable separate communities to exchange software and knowledge. These resources include the following: Distributions: ROS distributions are collections of versioned metapackages that you can install. ROS distributions play a similar role to Linux distributions. They make it easier to install a collection of software, and they also maintain consistent versions across a set of software. Repositories: ROS relies on a federated network of code repositories, where different institutions can develop and release their own robot software components. The ROS Wiki: The ROS Wiki is the main forum for documenting information about ROS. Anyone can sign up for an account, contribute their own documentation, provide corrections or updates, write tutorials, and more. Bug ticket system: If you find a problem or want to propose a new feature, ROS has this resource to do it. Mailing lists: The ROS user-mailing list is the primary communication channel about new updates to ROS as well as a forum to ask questions about the ROS software. ROS Answers: Users can ask questions on forums using this resource. Blog: You can find regular updates, photos, and news at http://www.ros.org/news. Summary This article provided you with general information about the ROS architecture and how it works. You saw certain concepts and tools of how to interact with nodes, topics, and services. Remember that if you have queries about something, you can use the official resources of ROS from http://www.ros.org. Additionally, you can ask the ROS Community questions at http://answers.ros.org. Resources for Article: Further resources on this subject: The ROS Filesystem levels [article] Using ROS with UAVs [article] Face Detection and Tracking Using ROS, Open-CV and Dynamixel Servos [article]

article-image-face-detection-and-tracking-using-ros-open-cv-and-dynamixel-servos
Packt
14 Nov 2016
13 min read
Save for later

Face Detection and Tracking Using ROS, Open-CV and Dynamixel Servos

Packt
14 Nov 2016
13 min read
In this article by Lentin Joseph, the author of the book ROS Robotic Projects, we look at face detection and tracking, a capability found in most service and social robots. The robot can identify faces and move its head as the human face moves around it. There are numerous implementations of face detection and tracking systems on the web. Most trackers have a pan-and-tilt mechanism, with a camera mounted on top of the servos. In this article, we are going to see a simple tracker that has only a pan mechanism. We are going to use a USB webcam mounted on an AX-12 Dynamixel servo. (For more resources related to this topic, see here.) You can see the following topics in this article: Overview of the project; Hardware and software prerequisites.

Overview of the project The aim of the project is to build a simple face tracker that can track a face only along the horizontal axis of the camera. The tracker consists of a webcam, a Dynamixel servo called AX-12, and a supporting bracket to mount the camera on the servo. The servo tracker will follow the face until it is aligned with the center of the image obtained from the webcam. Once it reaches the center, it will stop and wait for the face to move. The face detection is done using OpenCV and the ROS interface, and the servo is controlled using the Dynamixel motor driver in ROS. We are going to create two ROS packages for this complete tracking system: one for face detection and finding the centroid of the face, and another for sending commands to the servo to track the face using the centroid values. Ok!! Let's start discussing the hardware and software prerequisites of this project.

Hardware and software prerequisites The following table lists the hardware components that can be used for building this project. You can also see a rough price for each component and a purchase link for the same.

List of hardware components:

No | Component name | Estimated price (USD) | Purchase link
1 | Webcam | 32 | https://amzn.com/B003LVZO8S
2 | Dynamixel AX-12A servo with mounting bracket | 76 | https://amzn.com/B0051OXJXU
3 | USB to Dynamixel adapter | 50 | http://www.robotshop.com/en/robotis-usb-to-dynamixel-adapter.html
4 | Extra 3-pin cables for AX-12 servos | 12 | http://www.trossenrobotics.com/p/100mm-3-Pin-DYNAMIXEL-Compatible-Cable-10-Pack
5 | Power adapter | 5 | https://amzn.com/B005JRGOCM
6 | 6-port AX/MX power hub | 5 | http://www.trossenrobotics.com/6-port-ax-mx-power-hub
7 | USB extension cable | 1 | https://amzn.com/B00YBKA5Z0
  | Total cost + shipping + tax | ~190-200 |

The URLs and prices can vary. If the links are not available, a Google search may do the job. The shipping charges and tax are excluded from the price. If you think the total cost is not affordable for you, there are cheap alternatives for doing this project too. The main heart of this project is the Dynamixel servo. We can replace this servo with RC servos, which cost only around $10, and an Arduino board costing around $20 can be used to control the servo, so you can think about porting the face tracker project to an Arduino and an RC servo. Ok, let's look at the software prerequisites of the project.
The prerequisites include the ROS framework, OS versions, and ROS packages:

No | Name of software | Estimated price (USD) | Download link
1 | Ubuntu 16.04 LTS | Free | http://releases.ubuntu.com/16.04/
2 | ROS Kinetic LTS | Free | http://wiki.ros.org/kinetic/Installation/Ubuntu
3 | ROS usb_cam package | Free | http://wiki.ros.org/usb_cam
4 | ROS cv_bridge package | Free | http://wiki.ros.org/cv_bridge
5 | ROS Dynamixel controller | Free | https://github.com/arebgun/dynamixel_motor
6 | Windows 7 or higher | ~$120 | https://www.microsoft.com/en-in/software-download/windows7
7 | RoboPlus (Windows application) | Free | http://www.robotis.com/download/software/RoboPlusWeb%28v1.1.3.0%29.exe

The above table gives you an idea of which software we are going to use for this project. We may need both Windows and Ubuntu for doing this project. It will be great if you have both operating systems on your computer. Let's see how to install this software first.

Installing dependent ROS packages We have already installed and configured Ubuntu 16.04 and ROS Kinetic on it. Here are the dependent packages we need to install for this project.

Installing the usb_cam ROS package Let's look at the use of the usb_cam package in ROS first. The usb_cam package is the ROS driver for Video4Linux (V4L) USB cameras. V4L is a collection of device drivers in Linux for real-time video capture from webcams. The usb_cam ROS package works using the V4L devices and publishes the video stream from the devices as ROS image messages. We can subscribe to it and perform our own processing using it. The official ROS page of this package is given in the above table. You can check this page for the different settings and configurations this package offers.

Creating a ROS workspace for dependencies Before starting to install the usb_cam package, let's create a ROS workspace for keeping the dependencies of all the projects mentioned in the book. We can create another workspace for keeping the project code. Create a ROS workspace called ros_project_dependencies_ws in the home folder. Clone the usb_cam package into the src folder:

$ git clone https://github.com/bosch-ros-pkg/usb_cam.git

Build the workspace using catkin_make. After building the package, install the v4l-utils Ubuntu package. It is a collection of command-line V4L utilities used by the usb_cam package:

$ sudo apt-get install v4l-utils

Configuring the webcam on Ubuntu 16.04 After installing these two, we can connect the webcam to the PC to check whether it is properly detected. Open a terminal and execute the dmesg command to check the kernel logs. If your camera is detected in Linux, it may give logs like this:

$ dmesg

Kernel logs of the webcam device. You can use any webcam that has driver support in Linux. In this project, an iBall Face2Face (http://www.iball.co.in/Product/Face2Face-C8-0--Rev-3-0-/90) webcam is used for tracking. You can also go for the popular webcam mentioned as a hardware prerequisite; you can opt for that for better performance and tracking. If our webcam has support in Ubuntu, we can open the video device using a tool called Cheese. Cheese is simply a webcam viewer. Enter the command cheese in the terminal; if it is not available, you can install it using the following command:

$ sudo apt-get install cheese

If the driver and device are proper, you will get the video stream from the webcam, like this: Webcam video streaming using Cheese. Congratulations!! Your webcam is working well in Ubuntu, but are we done with everything? No. The next thing is to test the ROS usb_cam package. We have to make sure that it is working well in ROS!!
Interfacing the webcam with ROS Let's test the webcam using the usb_cam package. The following command is used to launch the usb_cam nodes to display images from the webcam and publish ROS image topics at the same time:

$ roslaunch usb_cam usb_cam-test.launch

If everything works fine, you will get the image stream and logs in the terminal, as shown below: Working of the usb_cam package in ROS. The image is displayed using the image_view package in ROS, which subscribes to the topic called /usb_cam/image_raw. Here are the topics that the usb_cam node is publishing: Figure 4: The topics published by the usb_cam node. We are now done with interfacing a webcam with ROS. So what's next? We have to interface the AX-12 Dynamixel servo with ROS. Before proceeding to interfacing, we have to do something to configure this servo. Next, we are going to see how to configure a Dynamixel AX-12A servo.

Configuring a Dynamixel servo using RoboPlus The Dynamixel servo can be configured using a piece of software called RoboPlus, provided by ROBOTIS INC (http://en.robotis.com/index/), the manufacturer of Dynamixel servos. For configuring the Dynamixel, you have to switch your operating system to Windows; the RoboPlus tool works on Windows. In this project, we are going to configure the servo in Windows 7. Here is the link to download RoboPlus: http://www.robotis.com/download/software/RoboPlusWeb%28v1.1.3.0%29.exe. If the link is not working, you can just search on Google for RoboPlus 1.1.3. After installing the software, you will get the following window; navigate to the Expert tab in the software to get the application for configuring the Dynamixel: Dynamixel Manager in RoboPlus. Before opening the Dynamixel Wizard and starting the configuration, we have to connect the Dynamixel and power it properly. The following image shows the AX-12A servo that we are using for this project and its pin connections. AX-12A Dynamixel and its connection diagram. Unlike other RC servos, the AX-12 is an intelligent actuator with a microcontroller that can monitor every parameter of the servo and customize all of them. It has a geared drive, and the output of the servo is connected to a servo horn. We can connect links to this servo horn. There are two connection ports behind each servo. Each port has pins for VCC, GND, and Data. The ports of the Dynamixel are daisy-chained, so we can connect another servo from one servo. Here is the connection diagram of the Dynamixel with the PC. AX-12A Dynamixel and its connection diagram. The main hardware component interfacing the Dynamixel with the PC is called the USB to Dynamixel. This is a USB-to-serial adapter that can convert USB to RS-232, RS-485, and TTL. In AX-12 motors, the data communication uses TTL. From the figure AX-12A Dynamixel and its connection diagram, we can see that there are three pins in each port. The data pin is used to send data to and receive data from the AX-12, and the power pins are used to power the servo. The input voltage range of the AX-12A Dynamixel is from 9V to 12V. The second port on each Dynamixel can be used for daisy chaining. We can connect up to 254 servos using this chaining. Official links for the AX-12A servo and USB to Dynamixel: AX-12A: http://www.trossenrobotics.com/dynamixel-ax-12-robot-actuator.aspx USB to Dynamixel: http://www.trossenrobotics.com/robotis-bioloid-usb2dynamixel.aspx For working with the Dynamixel, we should know a few more things. Let's have a look at some of the important specifications of the AX-12A servo. The specifications are taken from the servo manual.
So what's next? We have to interface the AX-12A Dynamixel servo with ROS. Before proceeding to interfacing, we have to configure this servo. Next, we are going to see how to configure the Dynamixel AX-12A servo.

Configuring a Dynamixel servo using RoboPlus
The Dynamixel servo can be configured using a piece of software called RoboPlus, provided by ROBOTIS INC (http://en.robotis.com/index/), the manufacturer of Dynamixel servos. To configure the Dynamixel, you have to switch your operating system to Windows, since the RoboPlus tool works on Windows. In this project, we are going to configure the servo on Windows 7. Here is the link to download RoboPlus: http://www.robotis.com/download/software/RoboPlusWeb%28v1.1.3.0%29.exe. If the link is not working, you can search Google for RoboPlus version 1.1.3. After installing the software, you will get the following window; navigate to the Expert tab in the software to get the application for configuring the Dynamixel: Dynamixel Manager in RoboPlus Before opening the Dynamixel wizard and starting the configuration, we have to connect the Dynamixel and power it properly. The following image shows the AX-12A servo that we are using for this project and its pin connections. AX-12A Dynamixel and its connection diagram Unlike other RC servos, the AX-12 is an intelligent actuator with an onboard microcontroller that can monitor every parameter of the servo and let you customize all of them. It has a geared drive, and the output of the servo is connected to a servo horn; we can connect any link to this servo horn. There are two connection ports behind each servo, and each port has VCC, GND, and Data pins. The ports of the Dynamixel are daisy-chained, so we can connect another servo from one servo. Here is the connection diagram of the Dynamixel with the PC. AX-12A Dynamixel and its connection diagram The main hardware component interfacing the Dynamixel with the PC is called USB to Dynamixel. This is a USB-to-serial adapter that can convert USB to RS-232, RS-485, and TTL. The AX-12 motors communicate using TTL. From the connection diagram, we can see that there are three pins in each port: the data pin is used to send data to and receive data from the AX-12, and the power pins are used to power the servo. The input voltage range of the AX-12A Dynamixel is from 9V to 12V. The second port on each Dynamixel can be used for daisy chaining, and we can connect up to 254 servos using this chaining. Official links for the AX-12A servo and USB to Dynamixel: AX-12A: http://www.trossenrobotics.com/dynamixel-ax-12-robot-actuator.aspx USB to Dynamixel: http://www.trossenrobotics.com/robotis-bioloid-usb2dynamixel.aspx To work with the Dynamixel, we should know a few more things. Let's have a look at some of the important specifications of the AX-12A servo. The specifications are taken from the servo manual.

Figure 8: AX-12A Specification
The Dynamixel servos can communicate with the PC at a maximum speed of 1 Mbps. They can also give feedback on various parameters, such as position, temperature, and current load. Unlike RC servos, the AX-12A can rotate up to 300 degrees, and communication is done mainly using digital packets.

Powering and connecting the Dynamixel to the PC
Now we are going to connect the Dynamixel to the PC. Given below is a standard way of connecting the Dynamixel to the PC: Connecting Dynamixel to PC The three-pin cable is first connected to any of the ports on the AX-12, and the other side is connected to the 6-port power hub. From the 6-port power hub, connect another cable to the USB to Dynamixel. We have to set the switch on the USB to Dynamixel to TTL mode. The power can be supplied either through a 12V adapter or through a battery. The 12V adapter has a 2.1 x 5.5 mm female barrel jack, so you should check the specification of the male adapter plug while purchasing.

Setting up the USB to Dynamixel driver on the PC
As we have already discussed, the USB to Dynamixel adapter is a USB-to-serial converter with an FTDI chip (http://www.ftdichip.com/) on it. We have to install a proper FTDI driver on the PC to detect the device. The driver may be needed for Windows but not for Linux, because FTDI drivers are built into the Linux kernel. If you install the RoboPlus software, the driver may already be installed along with it; if it is not, you can install it manually from the RoboPlus installation folder. Plug the USB to Dynamixel into the Windows PC and check the Device Manager (right-click on My Computer | Properties | Device Manager). If the device is properly detected, you will see something like the following figure: Figure 10: COM Port of USB to Dynamixel If you get a COM port for the USB to Dynamixel, you can start the Dynamixel Manager from RoboPlus. You can connect to the serial port number from the list and click the Search button to scan for Dynamixels, as shown in the following figure. Select the COM port from the list; connecting to the port is marked as 1. After connecting to the COM port, select the default baud rate of 1 Mbps and click the Start searching button: COM Port of USB to Dynamixel If you get a list of servos in the left-hand panel, it means that your PC has detected a Dynamixel servo. If the servo is not detected, you can follow these steps to debug: Make sure that the supply and the connections are proper, using a multimeter. Make sure that the servo LED on the back blinks when the power is switched on; if it does not, there may be a problem with the servo or the power supply. Upgrade the firmware of the servo using the Dynamixel Manager, from the option marked as 6; the wizard is shown in the following figure. During the wizard, you may need to power the supply off and on again to detect the servo. After the servo is detected, you have to select the servo model and install the new firmware. This can help the Dynamixel Manager detect the servo if the existing servo firmware is outdated. Dynamixel recovery wizard If the servos are listed in the Dynamixel Manager, click on a servo and you can see its complete configuration. We have to modify some values inside the configuration for our current face-tracker project. Here are the parameters: ID: set the ID as 1 Baud rate: 1 Moving Speed: 100 Goal Position: 512 The modified servo settings are shown in the following figure: Modified Dynamixel firmware settings After making these settings, you can check whether the servo is working by changing its Goal Position.
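On the ROS side, the dynamixel_motor stack listed in the prerequisites exposes each configured servo through a joint controller that accepts std_msgs/Float64 position commands in radians. As a preview of what the tracker's controller package will do, here is a minimal rospy sketch that centers the servo; note that the /pan_controller/command topic name is an assumption and must match the controller name you define in your own dynamixel_controllers configuration:

#!/usr/bin/env python
import rospy
from std_msgs.msg import Float64

rospy.init_node("dynamixel_center_test")
# Hypothetical topic name: it depends on the controller name defined in
# your dynamixel_controllers configuration (for example, pan_controller)
pub = rospy.Publisher("/pan_controller/command", Float64, queue_size=1)
rospy.sleep(1.0)           # give the publisher time to register
pub.publish(Float64(0.0))  # 0.0 rad roughly corresponds to Goal Position 512
rospy.loginfo("Sent center command to the Dynamixel controller")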
Congratulations, you are now done with the Dynamixel configuration! What's next? Interfacing the Dynamixel with ROS, along the lines previewed above. Summary This article was about building a face tracker using a webcam and a Dynamixel motor. The software we used was ROS and OpenCV. First we saw how to configure the webcam and the Dynamixel motor, and after configuring them we built two packages for tracking: one package does the face detection, and the second is a controller that sends position commands to the Dynamixel to track the face. We discussed the use of all the files inside the packages and did a final run to show the complete working of the system. Resources for Article: Further resources on this subject: Using ROS with UAVs [article] Hardware Overview [article] Arduino Development [article]

article-image-using-ros-uavs
Packt
10 Nov 2016
11 min read

Using ROS with UAVs

In this article by Carol Fairchild and Dr. Thomas L. Harman, co-authors of the book ROS Robotics by Example, you will discover the field of ROS Unmanned Air Vehicles (UAVs), quadrotors in particular. The reader is invited to learn about the simulated Hector Quadrotor and take it for a flight. The ROS wiki currently contains a growing list of ROS UAVs. These UAVs are as follows: (For more resources related to this topic, see here.) AscTec Pelican and Hummingbird quadrotors Berkeley's STARMAC Bitcraze Crazyflie DJI Matrice 100 Onboard SDK ROS support Erle-copter ETH sFly Lily Camera Quadrotor Parrot AR.Drone Parrot Bebop Penn's AscTec Hummingbird Quadrotors PIXHAWK MAVs Skybotix CoaX helicopter Refer to http://wiki.ros.org/Robots#UAVs for future additions to this list and to the website http://www.ros.org/news/robots/uavs/ to get the latest ROS UAV news. The preceding list contains primarily quadrotors, except for the Skybotix helicopter. A number of universities have adopted the AscTec Hummingbird as their ROS UAV of choice. For this book, we present a simulator called Hector Quadrotor and two real quadrotors, Crazyflie and Bebop, that use ROS. Introducing Hector Quadrotor The hardest part of learning about flying robots is the constant crashing. From the first-time learning of flight control to testing new hardware or flight algorithms, the resulting failures can have a huge cost in terms of broken hardware components. To answer this difficulty, a simulated air vehicle designed and developed for ROS is ideal. A simulated quadrotor UAV for the ROS Gazebo environment has been developed by Team Hector Darmstadt of Technische Universität Darmstadt. This quadrotor, called Hector Quadrotor, is enclosed in the hector_quadrotor metapackage. This metapackage contains the URDF description for the quadrotor UAV, its flight controllers, and launch files for running the quadrotor simulation in Gazebo. Advanced uses of the Hector Quadrotor simulation allow the user to record sensor data such as Lidar, depth camera images, and more. The quadrotor simulation can also be used to test flight algorithms and control approaches in simulation. The hector_quadrotor metapackage contains the following key packages: hector_quadrotor_description: This package provides a URDF model of the Hector Quadrotor UAV and the quadrotor configured with various sensors. Several URDF quadrotor models exist in this package, each configured with specific sensors and controllers. hector_quadrotor_gazebo: This package contains launch files for executing Gazebo and spawning one or more Hector Quadrotors. hector_quadrotor_gazebo_plugins: This package contains three UAV-specific plugins, which are as follows: The simple controller gazebo_quadrotor_simple_controller subscribes to a geometry_msgs/Twist topic and calculates the required forces and torques A gazebo_ros_baro sensor plugin simulates a barometric altimeter The gazebo_quadrotor_propulsion plugin simulates the propulsion, aerodynamics, and drag from messages containing motor voltages and wind vector input hector_gazebo_plugins: This package contains generic sensor plugins not specific to UAVs, such as IMU, magnetic field, GPS, and sonar data. hector_quadrotor_teleop: This package provides a node and launch files for controlling a quadrotor using a joystick or gamepad. hector_quadrotor_demo: This package provides sample launch files that run the Gazebo quadrotor simulation and hector_slam for indoor and outdoor scenarios.
The entire list of packages for the hector_quadrotor metapackage appears in the next section. Loading Hector Quadrotor The repository for the hector_quadrotor software is at the following website: https://github.com/tu-darmstadt-ros-pkg/hector_quadrotor The following commands will install the binary packages of hector_quadrotor into the ROS package repository on your computer. If you wish to install the source files, instructions can be found at the following website: http://wiki.ros.org/hector_quadrotor/Tutorials/Quadrotor%20outdoor%20flight%20demo (It is assumed that ros-indigo-desktop-full has been installed on your computer.) For the binary packages, type the following commands to install the ROS Indigo version of Hector Quadrotor: $ sudo apt-get update $ sudo apt-get install ros-indigo-hector-quadrotor-demo A large number of ROS packages are downloaded and installed in the hector_quadrotor_demo download, with the main hector_quadrotor packages providing functionality that should now be somewhat familiar. This installation downloads the following packages: hector_gazebo_worlds hector_geotiff hector_map_tools hector_mapping hector_nav_msgs hector_pose_estimation hector_pose_estimation_core hector_quadrotor_controller hector_quadrotor_controller_gazebo hector_quadrotor_demo hector_quadrotor_description hector_quadrotor_gazebo hector_quadrotor_gazebo_plugins hector_quadrotor_model hector_quadrotor_pose_estimation hector_quadrotor_teleop hector_sensors_description hector_sensors_gazebo hector_trajectory_server hector_uav_msgs message_to_tf A number of these packages will be discussed as the Hector Quadrotor simulations are described in the next section. Launching Hector Quadrotor in Gazebo Two demonstration tutorials are available to provide the simulated applications of the Hector Quadrotor for both outdoor and indoor environments. These simulations are described in the next sections. Before you begin the Hector Quadrotor simulations, check your ROS master using the following command in your terminal window: $ echo $ROS_MASTER_URI If this variable is set to localhost or the IP address of your computer, no action is needed. If not, type the following command: $ export ROS_MASTER_URI=http://localhost:11311 This command can also be added to your .bashrc file. Be sure to delete or comment out (with a #) any other commands setting the ROS_MASTER_URI variable. Flying Hector outdoors The quadrotor outdoor flight demo software is included as part of the hector_quadrotor metapackage. Start the simulation by typing the following command: $ roslaunch hector_quadrotor_demo outdoor_flight_gazebo.launch This launch file loads a rolling landscape environment into the Gazebo simulation and spawns a model of the Hector Quadrotor configured with a Hokuyo UTM-30LX sensor. An rviz node is also started and configured specifically for the quadrotor outdoor flight. A large number of flight position and control parameters are initialized and loaded into the Parameter Server. Note that the quadrotor propulsion model parameters for the quadrotor_propulsion plugin and the quadrotor drag model parameters for the quadrotor_aerodynamics plugin are displayed. Then look for the following message: Physics dynamic reconfigure ready. The following screenshots show both the Gazebo and rviz display windows when the Hector outdoor flight simulation is launched. The view from the onboard camera can be seen in the lower left corner of the rviz window.
If you do not see the camera image on your rviz screen, make sure that Camera has been added to your Displays panel on the left and that its checkbox has been checked. If you would like to pilot the quadrotor using the camera, it is best to uncheck the checkboxes for tf and robot_model, because the visualizations sometimes block the view: Hector Quadrotor outdoor Gazebo view Hector Quadrotor outdoor rviz view The quadrotor appears on the ground in the simulation, ready for takeoff. Its forward direction is marked by a red mark on its leading motor mount. To be able to fly the quadrotor, you can launch the joystick controller software for the Xbox 360 controller. In a second terminal window, launch the joystick controller software with a launch file from the hector_quadrotor_teleop package: $ roslaunch hector_quadrotor_teleop xbox_controller.launch This launch file launches joy_node to process all joystick input from the left stick and right stick on the Xbox 360 controller, as shown in the following figure. The message published by joy_node contains the current state of the joystick axes and buttons. The quadrotor_teleop node subscribes to these messages and publishes messages on the cmd_vel topic. These messages provide the velocity and direction for the quadrotor flight. Several joystick controllers are currently supported by the ROS joy package, including PS3 and Logitech devices. For this launch, the joystick device is accessed as /dev/input/js0 and is initialized with a deadzone of 0.050000. Parameters to set the joystick axes are as follows: * /quadrotor_teleop/x_axis: 5 * /quadrotor_teleop/y_axis: 4 * /quadrotor_teleop/yaw_axis: 1 * /quadrotor_teleop/z_axis: 2 These parameters map to the Left Stick and the Right Stick controls on the Xbox 360 controller shown in the following figure. The directions these sticks control are as follows: Left Stick: Forward (up) is to ascend Backward (down) is to descend Right is to rotate clockwise Left is to rotate counterclockwise Right Stick: Forward (up) is to fly forward Backward (down) is to fly backward Right is to fly right Left is to fly left Xbox 360 joystick controls for Hector Now use the joystick to fly around the simulated outdoor environment! The pilot's view can be seen in the Camera image view on the bottom left of the rviz screen. As you fly around in Gazebo, keep an eye on the Gazebo launch terminal window. The screen will display messages as follows, depending on your flying ability: [ INFO] [1447358765.938240016, 617.860000000]: Engaging motors! [ WARN] [1447358778.282568898, 629.410000000]: Shutting down motors due to flip over! When Hector flips over, you will need to relaunch the simulation. Within ROS, a clearer understanding of the interactions between the active nodes and topics can be obtained by using the rqt_graph tool. The following diagram depicts all currently active nodes (except debug nodes) enclosed in oval shapes. These nodes publish to the topics enclosed in rectangles that are pointed to by arrows. You can use the rqt_graph command in a new terminal window to view the same display: ROS nodes and topics for Hector Quadrotor outdoor flight demo The rostopic list command will provide a long list of topics currently being published. Other command line tools such as rosnode, rosmsg, rosparam, and rosservice will help gather specific information about Hector Quadrotor's operation.
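Because quadrotor_teleop simply publishes geometry_msgs/Twist messages on the cmd_vel topic, you can also command the simulated quadrotor programmatically instead of with the gamepad. The following rospy sketch, which assumes the outdoor simulation is already running, makes Hector climb for a few seconds and then hover; depending on your hector_quadrotor version, you may also need to enable the motors first (the joystick launch normally takes care of that), so treat this as an illustration rather than a drop-in flight controller:

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("hector_climb_test")
pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
rospy.sleep(1.0)              # wait for the connection to the controller

climb = Twist()
climb.linear.z = 0.5          # ascend at 0.5 m/s
rate = rospy.Rate(10)
for _ in range(30):           # publish for roughly three seconds
    pub.publish(climb)
    rate.sleep()

pub.publish(Twist())          # all-zero twist: stop climbing and hover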
To understand the orientation of the quadrotor on the screen, use the Gazebo GUI to show the vehicle's tf reference frame. Select quadrotor in the World panel on the left, then select the translation mode on the top environment toolbar (it looks like crossed double-headed arrows). This selection will bring up the red-green-blue axes for the x-y-z axes of the tf frame, respectively. In the following figure, the x axis is pointing to the left, the y axis is pointing to the right (toward the reader), and the z axis is pointing up. Hector Quadrotor tf reference frame A YouTube video of the hector_quadrotor outdoor scenario demo shows the hector_quadrotor in Gazebo operated with a gamepad controller: https://www.youtube.com/watch?v=9CGIcc0jeuI Flying Hector indoors The quadrotor indoor SLAM demo software is included as part of the hector_quadrotor metapackage. To launch the simulation, type the following command: $ roslaunch hector_quadrotor_demo indoor_slam_gazebo.launch The following screenshots show both the rviz and Gazebo display windows when the Hector indoor simulation is launched: Hector Quadrotor indoor rviz and Gazebo views If you do not see this image for Gazebo, roll your mouse wheel to zoom out of the image. Then you will need to rotate the scene to a top-down view; to find the quadrotor, press Shift + right mouse button. The environment is the offices at Willow Garage, and Hector starts out on the floor of one of the interior rooms. Just like in the outdoor demo, the xbox_controller.launch file from the hector_quadrotor_teleop package should be executed: $ roslaunch hector_quadrotor_teleop xbox_controller.launch If the quadrotor becomes embedded in the wall, waiting a few seconds should release it and it should (hopefully) end up in an upright position, ready to fly again. If you lose sight of it, zoom out from the Gazebo screen and look from a top-down view. Remember that the Gazebo physics engine is applying minor environment conditions as well; this can create some drifting out of position. The rqt graph of the active nodes and topics during the Hector indoor SLAM demo is shown in the following figure. As Hector is flown around the office environment, the hector_mapping node will be performing SLAM and creating a map of the environment. ROS nodes and topics for Hector Quadrotor indoor SLAM demo The following screenshot shows Hector Quadrotor mapping an interior room of Willow Garage: Hector mapping indoors using SLAM The 3D robot trajectory is tracked by the hector_trajectory_server node and can be shown in rviz. The map, along with the trajectory information, can be saved to a GeoTIFF file with the following command: $ rostopic pub syscommand std_msgs/String "savegeotiff" The saved GeoTIFF map can be found in the hector_geotiff/map directory. A YouTube video of the hector_quadrotor stack indoor SLAM demo shows hector_quadrotor in Gazebo operated with a gamepad controller: https://www.youtube.com/watch?v=IJbJbcZVY28 Summary In this article, we learned about Hector Quadrotor: loading it, launching it in Gazebo, and flying it both outdoors and indoors. Resources for Article: Further resources on this subject: Working On Your Bot [article] Building robots that can walk [article] Detecting and Protecting against Your Enemies [article]
article-image-connecting-arduino-web
Packt
27 Sep 2016
6 min read

Connecting Arduino to the Web

In this article by Marco Schwartz, author of Internet of Things with Arduino Cookbook, we will focus on getting you started by connecting an Arduino board to the web. This article will really be the foundation for the projects that follow, so make sure to carefully follow the instructions so you are ready to complete the exciting projects we'll see in the rest of the article. (For more resources related to this topic, see here.) You will first learn how to set up the Arduino IDE development environment, and add Internet connectivity to your Arduino board. After that, we'll see how to connect a sensor and a relay to the Arduino board, for you to understand the basics of the Arduino platform. Then, we are actually going to connect an Arduino board to the web, and use it to grab content from the web and to store data online. Note that all the projects in this article use the Arduino MKR1000 board. This is an Arduino board released in 2016 that has an on-board Wi-Fi connection. You can make all the projects in the article with other Arduino boards, but you might have to change parts of the code. Setting up the Arduino development environment In this first recipe of the article, we are going to see how to completely set up the Arduino IDE development environment, so that you can later use it to program your Arduino board and build Internet of Things projects. How to do it… The first thing you need to do is to download the latest version of the Arduino IDE from the following address: https://www.arduino.cc/en/Main/Software This is what you should see, and you should be able to select your operating system: You can now install the Arduino IDE and open it on your computer. The Arduino IDE will be used throughout the whole article for several tasks. We will use it to write all the code, but also to configure the Arduino boards and to read debug information back from those boards using the Arduino IDE Serial Monitor. What we need to install now is the board definition for the MKR1000 board that we are going to use in this article. To do that, open the Arduino boards manager by going to Tools | Boards | Boards Manager. In there, search for SAMD boards: To install the board definition, just click on the little Install button next to the board definition. You should now be able to select the Arduino/Genuino MKR1000 board inside the Arduino IDE: You are now completely set to develop Arduino projects using the Arduino IDE and the MKR1000 board. You can, for example, try to open an example sketch inside the IDE: How it works... The Arduino IDE is the best tool to program a wide range of boards, including the MKR1000 board that we are going to use in this article. We will see that it is a great tool to develop Internet of Things projects with Arduino. As we saw in this recipe, the Boards Manager makes it really easy to use new boards inside the IDE. See also These are really the basics of the Arduino framework that we are going to use in the whole article to develop IoT projects. Options for Internet connectivity with Arduino Most of the boards made by Arduino don't come with Internet connectivity, which is something that we really need to build Internet of Things projects with Arduino. We are now going to review all the options that are available to us with the Arduino platform, and see which one is the best to build IoT projects. How to do it… The first option, which has been available since the advent of the Arduino platform, is to use a shield.
A shield is basically an extension board that can be placed on top of the Arduino board. There are many shields available for Arduino. Inside the official collection of shields, you will find motor shields, prototyping shields, audio shields, and so on. Some shields will add Internet connectivity to the Arduino boards, for example the Ethernet shield or the Wi-Fi shield. This is a picture of the Ethernet shield: The other option is to use an external component, for example a Wi-Fi chip mounted on a breakout board, and then connect this breakout board to Arduino. There are many Wi-Fi chips available on the market. For example, Texas Instruments has a chip called the CC3000 that is really easy to connect to Arduino. This is a picture of a breakout board for the CC3000 Wi-Fi chip: Finally, there is the possibility of using one of the few Arduino boards that have an onboard Wi-Fi chip or Ethernet connectivity. The first board of this type introduced by Arduino was the Arduino Yun board. It is a really powerful board, with an onboard Linux machine. However, it is also a bit complex to use compared to other Arduino boards. Then, Arduino introduced the MKR1000 board, which integrates a powerful ARM Cortex M0+ processor and a Wi-Fi chip on the same board, all in a small form factor. Here is a picture of this board: What to choose? All the solutions above would work to build powerful IoT projects using Arduino. However, as we want to easily build those projects and possibly integrate them into projects that are battery-powered, I chose to use the MKR1000 board for all the projects in this article. This board is really simple to use, powerful, and doesn't require any extra connections to hook it up with a Wi-Fi chip. Therefore, I believe this is the perfect board for IoT projects with Arduino. There's more... Of course, there are other options to connect Arduino boards to the Web. One option that's becoming more and more popular is to use 3G or LTE to connect your Arduino projects to the Web, again using either shields or breakout boards. This solution has the advantage of not requiring an active Internet connection like a Wi-Fi router, and can be used anywhere, for example outdoors. See also Now that we have chosen a board that we will use in our IoT projects with Arduino, you can move on to the next recipe to actually learn how to use it. Resources for Article: Further resources on this subject: Building a Cloud Spy Camera and Creating a GPS Tracker [article] Internet Connected Smart Water Meter [article] Getting Started with Arduino [article]

article-image-how-build-your-own-futuristic-robot
Packt
13 Sep 2016
5 min read

How to Build your own Futuristic Robot

In this article by Richard Grimmett, author of the book Raspberry Pi Robotic Projects - Third Edition, we will start with a simple but impressive project where you'll take a toy robot and give it much more functionality. You'll start with an R2D2 toy robot and modify it to add a webcam, voice recognition, and motors so that it can get around. Creating your own R2D2 will require a bit of mechanical work; you'll need a drill and perhaps a Dremel tool, but most of the mechanical work will be removing the parts you don't need so you can add some exciting new capabilities. (For more resources related to this topic, see here.) Modifying the R2D2 There are several R2D2 toys that can provide the basis for this project, and they are available from online retailers. This project will use one that is inexpensive but also provides such interesting features as a top that turns and a wonderful place to put a webcam. It is the Imperial Toy R2D2 bubble machine. Here is a picture of the unit: The unit can be purchased at amazon.com, toysrus.com, and a number of other retailers. It is normally used as a bubble machine that uses a canister of soap bubbles to produce bubbles, but you'll take all of that capability out to make your R2D2 much more like the original robot. Adding wheels and motors In order to make your R2D2 a reality, the first thing you'll want to do is add wheels to the robot. In order to do this, you'll need to take the robot apart, separating the two main plastic pieces that make up the lower body of the robot. Once you have done this, both the right and left arms can be removed from the body. You'll need to add two wheels that are controlled by DC motors to these arms. Perhaps the best way to do this is to purchase a simple, two-wheeled car that is available at many online electronics stores like amazon.com, ebay.com, or banggood.com. Here is a picture of the parts that come with the car: You'll be using these pieces to add mobility to your robot. The two yellow pieces are DC motors, so let's start with those. To add these to the two arms on either side of the robot, you'll need to separate the two halves of each arm, and then remove material from one of the halves, like this: You can use a Dremel tool to do this, or any kind of device that can cut plastic. This will leave a place for your wheel. Now you'll want to cut the plastic kit of your car up to provide a platform to connect to your R2D2 arm. You'll cut your plastic car up using this as a pattern; you'll want to end up with the two pieces that have the + sign cutouts, as this is where you'll mount your wheels and also the piece you'll attach to the R2D2 arm. The image below will help you understand this better. On the cut-out side that has not been removed, mark and drill two holes to fix the clear plastic to the bottom of the arm. Then fix the wheel to the plastic, and the plastic to the bottom of the arm, as shown in the picture. You'll connect two wires, one to each of the polarities on the motor, and then run the wires up to the top of the arm and out the small holes. These wires will eventually go into the body of the robot through small holes that you will drill where the arms connect to the body, like this: You'll repeat this process for the other arm. For the third, center arm, you'll want to connect the small, spinning white wheel to the bottom of the arm. Here is a picture: Now that you have motors and wheels connected to the bottoms of the arms, you'll need to connect these to the Raspberry Pi.
There are several different ways to connect and drive these two DC motors, but perhaps the easiest is to add a shield that can directly drive a DC motor. This motor shield is an additional piece of hardware that installs on top of the Raspberry Pi and can source the voltage and current to power both motors. The RaspiRobot Board V3 is available online and can provide these signals. The specifics of the board can be found at http://www.monkmakes.com/rrb3/. Here is a picture of the board: The board will provide the drive signals for the motors on each of the wheels. The following are the steps to connect the Raspberry Pi to the board: First, connect the battery power connector to the power connector on the side of the board. Next, connect the two wires from one of the motors to the L motor connectors on the board. Connect the other two wires from the other motor to the R motor connectors on the board. Once completed, your connections should look like this: The red and black wires go to the battery, the green and yellow to the left motor, and the blue and white to the right motor. Now you will be able to control both the speed and the direction of the motors through the motor control board. Summary Thus we have covered some aspects of building your first project, your own R2D2. You can now move it around, program it to respond to voice commands, or run it remotely from a computer, tablet, or phone. Following in this theme, your next robot will look and act like WALL-E. Resources for Article: Further resources on this subject: The Raspberry Pi and Raspbian [article] Building Our First Poky Image for the Raspberry Pi [article] Raspberry Pi LED Blueprints [article]

article-image-building-our-first-poky-image-raspberry-pi
Packt
14 Apr 2016
12 min read

Building Our First Poky Image for the Raspberry Pi

In this article by Pierre-Jean TEXIER, the author of the book Yocto for Raspberry Pi, we cover the basic concepts of the Poky workflow. Using the Linux command line, we will proceed through the different steps: download, configure, and prepare the Poky Raspberry Pi environment, and generate an image that can be used by the target. (For more resources related to this topic, see here.) Installing the required packages for the host system The steps necessary for the configuration of the host system depend on the Linux distribution used. It is advisable to use one of the Linux distributions maintained and supported by Poky. This is to avoid wasting time and energy in setting up the host system. Currently, the Yocto project is supported on the following distributions: Ubuntu 12.04 (LTS) Ubuntu 13.10 Ubuntu 14.04 (LTS) Fedora release 19 (Schrödinger's Cat) Fedora release 21 CentOS release 6.4 CentOS release 7.0 Debian GNU/Linux 7.0 (Wheezy) Debian GNU/Linux 7.1 (Wheezy) Debian GNU/Linux 7.2 (Wheezy) Debian GNU/Linux 7.3 (Wheezy) Debian GNU/Linux 7.4 (Wheezy) Debian GNU/Linux 7.5 (Wheezy) Debian GNU/Linux 7.6 (Wheezy) openSUSE 12.2 openSUSE 12.3 openSUSE 13.1 Even if your distribution is not listed here, it does not mean that Poky will not work, but the outcome cannot be guaranteed. If you want more information about Linux distributions, you can visit this link: http://www.yoctoproject.org/docs/current/ref-manual/ref-manual.html Poky on Ubuntu The following list shows the required packages by function, given a supported Ubuntu or Debian Linux distribution. The dependencies for a compatible environment include: Download tools: wget and git-core Decompression tools: unzip and tar Compilation tools: gcc-multilib, build-essential, and chrpath String-manipulation tools: sed and gawk Document-management tools: texinfo, xsltproc, docbook-utils, fop, dblatex, and xmlto Patch-management tools: patch and diffstat Here is the command to type on a headless system: $ sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath Poky on Fedora If you want to use Fedora, you just have to type this command: $ sudo yum install gawk make wget tar bzip2 gzip python unzip perl patch diffutils diffstat git cpp gcc gcc-c++ glibc-devel texinfo chrpath ccache perl-Data-Dumper perl-Text-ParseWords perl-Thread-Queue socat Downloading the Poky metadata After having installed all the necessary packages, it is time to download the Poky sources. This is done through the git tool, as follows: $ git clone git://git.yoctoproject.org/poky (branch master) Another method is to download the tar.bz2 file directly from this repository: https://www.yoctoproject.org/downloads To avoid any hazardous and problematic manipulations, it is strongly recommended to create and switch to a specific local branch. Use these commands: $ cd poky $ git checkout daisy -b work_branch Downloading the Raspberry Pi BSP metadata At this stage, we only have the base of the reference system (Poky), and we have no support for the Broadcom BCM SoC.
Basically, the BSP proposed by Poky only offers the following targets: $ ls meta/conf/machine/*.conf beaglebone.conf edgerouter.conf genericx86-64.conf genericx86.conf mpc8315e-rdb.conf This is in addition to those provided by OE-Core: $ ls meta/conf/machine/*.conf qemuarm64.conf qemuarm.conf qemumips64.conf qemumips.conf qemuppc.conf qemux86-64.conf qemux86.conf In order to generate a compatible system for our target, download the specific layer (the BSP layer) for the Raspberry Pi: $ git clone git://git.yoctoproject.org/meta-raspberrypi If you want to learn more about git scm, you can visit the official website: http://git-scm.com/ Now we can verify whether we have the configuration metadata for our platform (the raspberrypi.conf file): $ ls meta-raspberrypi/conf/machine/*.conf raspberrypi.conf This screenshot shows the meta-raspberrypi folder: The examples and code presented in this article use Yocto Project version 1.7 and Poky version 12.0. For reference, the codename is Dizzy. Now that we have our environment freshly downloaded, we can proceed with its initialization and the configuration of our image through various configuration files. The oe-init-build-env script As can be seen in the screenshot, the Poky directory contains a script named oe-init-build-env. This is a script for the configuration/initialization of the build environment. It is not intended to be executed but must be "sourced". Its job, among other things, is to initialize a certain number of environment variables and to place you in the build directory passed as an argument. The script must be run as shown here: $ source oe-init-build-env [build-directory] Here, build-directory is an optional parameter for the name of the directory where the environment is set (for example, we can use several build directories in a single Poky source tree); in case it is not given, it defaults to build. The build-directory folder is the place where we perform the builds. But, in order to standardize the steps, we will use the following command throughout to initialize our environment: $ source oe-init-build-env rpi-build ### Shell environment set up for builds. ### You can now run 'bitbake <target>' Common targets are: core-image-minimal core-image-sato meta-toolchain adt-installer meta-ide-support You can also run generated qemu images with a command like 'runqemu qemux86' When we initialize a build environment, it creates a directory (the conf directory) inside rpi-build. This folder contains two important files: local.conf: It contains parameters to configure BitBake behavior. bblayers.conf: It lists the different layers that BitBake takes into account in its implementation. This list is assigned to the BBLAYERS variable. Editing the local.conf file The local.conf file under rpi-build/conf/ is a file that can configure every aspect of the build process. It is through this file that we can choose the target machine (the MACHINE variable), the distribution (the DISTRO variable), the type of package (the PACKAGE_CLASSES variable), and the host configuration (PARALLEL_MAKE, for example). The minimal set of variables we have to change from the default is the following: BB_NUMBER_THREADS ?= "${@oe.utils.cpu_count()}" PARALLEL_MAKE ?= "-j ${@oe.utils.cpu_count()}" MACHINE ?= "raspberrypi" The BB_NUMBER_THREADS variable determines the number of tasks that BitBake will perform in parallel (tasks under Yocto; we're not necessarily talking about compilation).
By default, in build/conf/local.conf, this variable is initialized with ${@oe.utils.cpu_count()}, corresponding to the number of cores detected on the host system (/proc/cpuinfo). The PARALLEL_MAKE variable corresponds to the -j option of make, specifying the number of processes that GNU Make can run in parallel on a compilation task. Again, it is the number of cores present that defines the default value used. The MACHINE variable is where we determine the target machine we wish to build for, the Raspberry Pi (defined in the .conf file; in our case, it is raspberrypi.conf). Editing the bblayers.conf file Now, we still have to add the specific layer for our target. This will have the effect of making recipes from this layer available to our build. Therefore, we should edit the build/conf/bblayers.conf file: # LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf # changes incompatibly LCONF_VERSION = "6" BBPATH = "${TOPDIR}" BBFILES ?= "" BBLAYERS ?= "   /home/packt/RASPBERRYPI/poky/meta   /home/packt/RASPBERRYPI/poky/meta-yocto   /home/packt/RASPBERRYPI/poky/meta-yocto-bsp   " BBLAYERS_NON_REMOVABLE ?= "   /home/packt/RASPBERRYPI/poky/meta   /home/packt/RASPBERRYPI/poky/meta-yocto " Add the meta-raspberrypi layer to the BBLAYERS list, as follows: # LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf # changes incompatibly LCONF_VERSION = "6" BBPATH = "${TOPDIR}" BBFILES ?= "" BBLAYERS ?= "   /home/packt/RASPBERRYPI/poky/meta   /home/packt/RASPBERRYPI/poky/meta-yocto   /home/packt/RASPBERRYPI/poky/meta-yocto-bsp   /home/packt/RASPBERRYPI/poky/meta-raspberrypi   " BBLAYERS_NON_REMOVABLE ?= "   /home/packt/RASPBERRYPI/poky/meta   /home/packt/RASPBERRYPI/poky/meta-yocto " Naturally, you have to adapt the absolute path (/home/packt/RASPBERRYPI here) to your own installation. Building the Poky image At this stage, we have to look at the available images (.bb files) and check whether they are compatible with our platform. Choosing the image Poky provides several predesigned image recipes that we can use to build our own binary image. We can check the list of available images by running the following command from the poky directory: $ ls meta*/recipes*/images/*.bb All the recipes provide images which are, in essence, a set of unpacked and configured packages, generating a filesystem that we can use on actual hardware (for further information about the different images, you can visit http://www.yoctoproject.org/docs/latest/mega-manual/mega-manual.html#ref-images). Here is a small representation of the available images: To these, we can add the images proposed by meta-raspberrypi: $ ls meta-raspberrypi/recipes-core/images/*.bb rpi-basic-image.bb rpi-hwup-image.bb rpi-test-image.bb Here is an explanation of the images: rpi-hwup-image.bb: This is an image based on core-image-minimal. rpi-basic-image.bb: This is an image based on rpi-hwup-image.bb, with some added features (a splash screen). rpi-test-image.bb: This is an image based on rpi-basic-image.bb, which includes some packages present in meta-raspberrypi. We will take one of these three recipes for the rest of this article. Note that these files (.bb) describe recipes, like all the others. These are organized logically, and here, we have the ones for creating an image for the Raspberry Pi.
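If you later want an image of your own instead of one of the stock ones, the usual pattern is to create a new image recipe in your own layer that reuses one of these and appends extra packages. The following .bb sketch is only an illustration of that idea; the recipe name and the added package are hypothetical, and the require path assumes meta-raspberrypi is present in your BBLAYERS:

# my-rpi-image.bb - hypothetical custom image recipe placed in your own layer
require recipes-core/images/rpi-basic-image.bb
# Add extra packages on top of rpi-basic-image (nano is just an example)
IMAGE_INSTALL += "nano"

Building it would then simply be a matter of running bitbake with the new recipe name, exactly as with rpi-basic-image below.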
Running BitBake At this point, what remains for us is to start the build engine BitBake, which will parse all the recipes needed by the image you pass as a parameter (as an initial example, we can take rpi-basic-image): $ bitbake rpi-basic-image Loading cache: 100% |########################################################################################################################################################################| ETA:  00:00:00 Loaded 1352 entries from dependency cache. NOTE: Resolving any missing task queue dependencies Build Configuration: BB_VERSION        = "1.25.0" BUILD_SYS         = "x86_64-linux" NATIVELSBSTRING   = "Ubuntu-14.04" TARGET_SYS        = "arm-poky-linux-gnueabi" MACHINE           = "raspberrypi" DISTRO            = "poky" DISTRO_VERSION    = "1.7" TUNE_FEATURES     = "arm armv6 vfp" TARGET_FPU        = "vfp" meta              meta-yocto        meta-yocto-bsp    = "master:08d3f44d784e06f461b7d83ae9262566f1cf09e4" meta-raspberrypi  = "master:6c6f44136f7e1c97bc45be118a48bd9b1fef1072" NOTE: Preparing RunQueue NOTE: Executing SetScene Tasks NOTE: Executing RunQueue Tasks Once launched, BitBake begins by browsing all the (.bb and .bbclass) files that the environment provides access to and stores the information in a cache. Because the parser of BitBake is parallelized, the first execution will always be longer because it has to build the cache (only about a few seconds longer). However, subsequent executions will be almost instantaneous, because BitBake will load the cache. As we can see from the previous command, before executing the task list, BitBake displays a trace that details the versions used (target, version, OS, and so on). Finally, BitBake starts the execution of tasks and shows us the progress. Depending on your setup, you can go drink some coffee or even eat some pizza. Usually after this, if all goes well, you will be pleased to see that the tmp/ subdirectory of the build directory (rpi-build) has been populated. The build directory (rpi-build) contains about 20 GB after the creation of the image. After a few hours of baking, we can rejoice at the result: the creation of the system image for our target: $ ls rpi-build/tmp/deploy/images/raspberrypi/*sdimg rpi-basic-image-raspberrypi.rpi-sdimg This is the file that we will use to create our bootable SD card. Creating a bootable SD card Now that our environment is complete, you can create a bootable SD card with the following command (remember to change /dev/sdX to the proper device name and be careful not to kill your hard disk by selecting the wrong device name): $ sudo dd if=rpi-basic-image-raspberrypi.rpi-sdimg of=/dev/sdX bs=1M Once the copying is complete, you can check whether the operation was successful using the following command (look at mmcblk0): $ lsblk NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT mmcblk0     179:0    0   3,7G  0 disk ├─mmcblk0p1 179:1    0    20M  0 part /media/packt/raspberrypi └─mmcblk0p2 179:2    0   108M  0 part /media/packt/f075d6df-d8b8-4e85-a2e4-36f3d4035c3c You can also look at the left-hand side of your interface: Booting the image on the Raspberry Pi This is surely the most anticipated moment of this article: the moment when we boot our Raspberry Pi with a fresh Poky image.
You just have to insert your SD card in the slot, connect the HDMI cable to your monitor, and connect the power supply (it is also recommended to use a mouse and a keyboard to shut down the device, unless you plan on just pulling the plug and possibly corrupting the boot partition). After connecting the power supply, you should see the Raspberry Pi splash screen: The login for the Yocto/Poky distribution is root. Summary In this article, we learned the steps needed to set up Poky and get our first image built. We ran that image on the Raspberry Pi, which gave us a good overview of the available capabilities. Resources for Article: Further resources on this subject: Programming on Raspbian [article] Working with a Webcam and Pi Camera [article] Creating a Supercomputer [article]
article-image-building-cloud-spy-camera-and-creating-gps-tracker
Packt
30 Oct 2015
4 min read

Building a Cloud Spy Camera and Creating a GPS Tracker

In this article by Marco Schwartz, author of the book Arduino for Secret Agents, we will build a GPS location tracker and use a spy camera for live streaming. Building a GPS location tracker It's now time to build a real GPS location tracker. For this project, we'll get the location just as before, using the GPS if available and the GPRS location otherwise. However, here we are going to use the GPRS capabilities of the shield to send the latitude and longitude data to Dweet.io, which is a service we have already used before. Then, we'll plot this data in Google Maps, allowing you to follow your device in real time from anywhere in the world. (For more resources related to this topic, see here.) You need to define a name for the Thing that will contain the GPS location data: String dweetThing = "mysecretgpstracker"; Then, after getting the current location, we prepare the data to be sent to Dweet.io: uint16_t statuscode; int16_t length; char url[80]; String request = "www.dweet.io/dweet/for/" + dweetThing + "?latitude=" + latitude + "&longitude=" + longitude; request.toCharArray(url, request.length()); After that, we actually send the data to Dweet.io: if (!fona.HTTP_GET_start(url, &statuscode, (uint16_t *)&length)) { Serial.println("Failed!"); } while (length > 0) { while (fona.available()) { char c = fona.read(); // Serial.write is too slow, we'll write directly to the Serial register! #if defined(__AVR_ATmega328P__) || defined(__AVR_ATmega168__) loop_until_bit_is_set(UCSR0A, UDRE0); /* Wait until data register empty. */ UDR0 = c; #else Serial.write(c); #endif length--; } } fona.HTTP_GET_end(); Now, before testing the project, we are going to prepare our dashboard that will host the Google Maps widget. We are going to use Freeboard.io for this purpose. If you don't have an account yet, go to http://freeboard.io/. Create a new dashboard, and also a new datasource. Insert the name of your Thing inside the THING NAME field: Then, create a new Pane with a Google Maps widget. Link this widget to the latitude and longitude of your Location datasource: It's now time to test the project. Make sure to grab all the code, for example from the GitHub repository of the book. Also, don't forget to modify the Thing name, as well as your GPRS settings. Then, upload the sketch to the board and open the Serial Monitor. This is what you should see: The most important line is the last one, which confirms that data has been sent to Dweet.io and has been stored there. Now, simply go back to the dashboard you just created: you can now see that the location on the map has been updated: Note that this map is also updated in real time, as new measurements arrive from the board. You can also modify the delay between two updates of the position of the tracker by changing the delay() function in the sketch. Congratulations, you just built your own GPS tracking device! Live streaming from the spy camera We are now going to use the camera to stream live video in a web browser. This stream will be accessible from any device connected to the same WiFi network as the Yun. To start with this project, log into your Yun using the following command (changing the name of the board to the name of your Yun): ssh root@arduinoyun.local Then, type: mjpg_streamer -i "input_uvc.so -d /dev/video0 -r 640x480 -f 25" -o "output_http.so -p 8080 -w /www/webcam" & This will start the streaming from your Yun. You can now simply go to the URL of your Yun, and add ':8080' at the end. For example, http://arduinoyun.local:8080.
You should arrive at the streaming interface: You can now stream this video live to your mobile phone or any other device within the same network. It's the perfect project to spy on a room while you are sitting outside, for example. Summary In this article, we built a device that allows us to track the GPS location of any object it is attached to, and we built a spy camera project that can send pictures to the cloud whenever motion is detected. Resources for Article: Further resources on this subject: Getting Started with Arduino [article] Arduino Development [article] Prototyping Arduino Projects using Python [article]

article-image-working-your-bot
Packt
28 Oct 2015
8 min read

Working On Your Bot

In this article by Kassandra Perch, the author of Learning JavaScript Robotics, we will learn how to wire up servos and motors, how to create a project with a motor and use the REPL, and how to create a project with a servo and a sensor (For more resources related to this topic, see here.) Wiring up servos and motors Wiring up servos will look similar to wiring up sensors, except the signal maps to an output. Wiring up a motor is similar to wiring an LED. Wiring up servos To wire up a servo, you'll have to use a setup similar to the following figure: A servo wiring diagram The wire colors may vary for your servo. If your wires are red, brown, and orange, red is 5V, brown is GND, and orange is signal. When in doubt, check the data sheet that came with your servo. After wiring up the servo, plug the board in and listen to your servo. If you hear a clicking noise, quickly unplug the board; this means your servo is trying to place itself in a position it cannot reach. Usually, there is a small screw at the bottom of most servos that you can use to calibrate them. Use a small screwdriver to rotate this until it stops clicking when the power is turned on. This procedure is the same for continuous servos; the diagram does not change much either. Just replace the regular servo with a continuous one and you're good to go. Wiring up motors Wiring up motors looks like the following diagram: A motor wiring diagram Again, you'll want the signal pin to go to a PWM pin. As there are only two pins, it can be confusing where the power pin goes; it goes to a PWM pin because, similar to our LED getting its power from the PWM pin, the same pin will provide the power to run the motor. Now that we know how to wire these up, let's work on a project involving a motor and Johnny-Five's REPL. Creating a project with a motor and using the REPL Grab your motor and board, and follow the diagram in the previous section to wire a motor. Let's use pin 6 for the signal pin, as shown in the preceding diagram. What we're going to do in our code is create a Motor object and inject it into the REPL, so we can play around with it in the command line. Create a motor.js file and put in the following code: var five = require('johnny-five'); var board = new five.Board(); board.on('ready', function(){ var motor = new five.Motor({ pin: 6 }); this.repl.inject({ motor: motor }); }); Then, plug in your board and run the program with node motor.js. Exploring the motor API If we take a look at the documentation on the Johnny-Five website, there are a few things we can try here. First, let's turn our motor on at about half speed: > motor.start(125); The .start() method takes a value between 0 and 255. Sound familiar? That's because these are the values we can assign to a PWM pin! Okay, let's tell our motor to coast to a stop: > motor.stop(); Note that while this function will cause the motor to coast to a stop, there is a dedicated .brake() method. However, this requires a dedicated brake pin, which can be available using shields and certain motors. If you happen to have a directional motor, you can tell the motor to run in reverse using .reverse() with a value between 0 and 255: > motor.reverse(125); This will cause a directional motor to run in reverse at half speed. Note that this requires a shield. And that's about it. Operating motors isn't difficult, and Johnny-Five makes it even easier. Now that we've learned how this operates, let's try a servo.
Creating a project with a servo and a sensor Let's start with just a servo and the REPL; then we can add in a sensor. Use the diagram from the previous section as a reference to wire up a servo, and use pin 6 for signal. Before we write our program, let's take a look at some of the options the Servo object constructor gives us. You can set an arbitrary range by passing [min, max] to the range property. This is great for low-quality servos that have trouble at very low and very high values. The type property is also important. We'll be using a standard servo, but you'll need to set this to continuous if you're using a continuous servo. Since standard is the default, we can leave this out for now. The offset property is important for calibration. If your servo is set too far in one direction, you can change the offset to make sure it can programmatically reach every angle it was meant to. If you hear clicking at very high or low values, try adjusting the offset. You can invert the direction of the servo with the invert property or initialize the servo at the center with center. Centering the servo helps you know whether you need to calibrate it. If you center it and the arm isn't centered, try adjusting the offset property. Now that we've got a good grasp on the constructor, let's write some code. Create a file called servo-repl.js and enter the following: var five = require('johnny-five'); var board = new five.Board(); board.on('ready', function(){ var servo = new five.Servo({ pin: 6 }); this.repl.inject({ servo: servo }); }); This code simply constructs a standard servo object for pin 6 and injects it into the REPL. Then, run it using the following command line: > node servo-repl.js Your servo should jump to its initialization point. Now, let's figure out how to write the code that makes the servo move. Exploring the servo API with the REPL The most basic thing we can do with a servo is set it to a specific angle. We do this by calling the .to() function with a degree, as follows: > servo.to(90); This should center the servo. You can also set a time on the .to() function, so the move takes a certain amount of time: > servo.to(20, 500); This will move the servo from 90 degrees to 20 degrees over 500 ms. You can even determine how many steps the servo takes to get to the new angle, as follows: > servo.to(120, 500, 10); This will move the servo to 120 degrees over 500 ms in 10 discrete steps. The .to() function is very powerful and will be used in a majority of your Servo objects. However, there are many other useful functions. For instance, checking whether a servo is calibrated correctly is easier when you can see all angles quickly. For this, we can use the .sweep() function, as follows: > servo.sweep(); This will sweep the servo back and forth between its minimum and maximum values, which are 0 and 180, unless set in the constructor via the range property. You can also specify a range to sweep, as follows: > servo.sweep({ range: [20, 120] }); This will sweep the servo from 20 to 120 repeatedly. You can also set the interval property, which will change how long the sweep takes, and a step property, which sets the number of discrete steps taken, as follows: > servo.sweep({ range: [20, 120], interval: 1000, step: 10 }); This will cause the servo to sweep from 20 to 120 every second in 10 discrete steps.
You can stop a servo's movement with the .stop() method, as follows: > servo.stop(); For continuous servos, you can use .cw() and .ccw() with a speed between 0 and 255 to move the continuous servo back and forth. Now that we've seen the Servo object API at work, let's hook our servo up to a sensor. In this case, we'll use a photocell. This code is a good example for a few reasons: it shows off Johnny-Five's event API, allows us to use a servo with an event, and gets us used to wiring inputs to outputs using events. First, let's add a photocell to our project using the following diagram: A servo and photoresistor wiring diagram Then, create a photoresistor-servo.js file, and add the following: var five = require('johnny-five'); var board = new five.Board(); board.on('ready', function(){ var servo = new five.Servo({ pin: 6 }); var photoresistor = new five.Sensor({ pin: "A0", freq: 250 }); photoresistor.scale(0, 180).on('change', function(){ servo.to(this.value); }); }); How this works is as follows: on every change event, we tell our servo to move to the correct position based on the scaled data from our photoresistor. Now, run the following command line: > node photoresistor-servo.js Then, try turning the light on and covering up your photoresistor, and watch the servo move! Summary We now know how to use the servos and motors that help move small robots. Wheeled robots are good to go! But what about more complex projects, such as the hexapod? Walking takes timing. As we mentioned when discussing the .to() function, we can time the servo movement, thanks to the Animation library. Resources for Article: Further resources on this subject: Internet-Connected Smart Water Meter [article] Raspberry Pi LED Blueprints [article] Welcome to JavaScript in the full stack [article]