How-To Tutorials - IoT and Hardware


IoT Analytics for the Cloud

Packt
14 Jun 2017
In this article by Andrew Minteer, author of the book Analytics for the Internet of Things (IoT), that you understand how your data is transmitted back to the corporate servers, you feel you have more of a handle on it. You also have a reference frame in your head on how it is operating out in the real world. (For more resources related to this topic, see here.) Your boss stops by again. "Is that rolling average job done running yet?", he asks impatiently. It used to run fine and finished in an hour three months ago. It has steadily taken longer and longer and now sometimes does not even finish. Today, it has been going on six hours and you are crossing your fingers. Yesterday it crashed twice with what looked like out of memory errors. You have talked to your IT group and finance group about getting a faster server with more memory. The cost would be significant and likely will take months to complete the process of going through purchasing, putting it on order, and having it installed. Your friend in finance is hesitant to approve it. The money was not budgeted for this fiscal year. You feel bad especially since this is the only analytic job causing you problems. It just runs once a month but produces key data. Not knowing what else to say, you give your boss a hopeful, strained smile and show him your crossed fingers. “It’s still running...that’s good, right?” This article is about the advantages to cloud based infrastructure for handling and analyzing IoT data. We will discuss cloud services including Amazon Web Services (AWS), Microsoft Azure, and Thingworx. You will learn how to implement analytics elastically to enable a wide variety of capabilities. This article will cover: Building elastic analytics Designing for scale Cloud security and analytics Key Cloud Providers Amazon AWS Microsoft Azure PTC ThingWorx Building elastic analytics IoT data volumes increase quickly. Analytics for IoT is particularly compute intensive at times that are difficult to predict. Business value is uncertain and requires a lot of experimentation to find the right implementation. Combine all that together and you need something that scales quickly, is dynamic and responsive to resource needs, and virtually unlimited capacity at just the right time. And all of that needs to be implemented quickly with a low cost and low maintenance needs. Enter the cloud. IoT Analytics and cloud infrastructure fit together like a hand in a glove. What is the cloud infrastructure? The National Institute of Standards and Technology defines five essential characteristics: On-demand self-service: You can provision things like servers and storage as needed and without interacting with someone. Broad network access: Your cloud resources are accessible over the internet (if enabled) by various methods such as web browser or mobile phone. Resource pooling: Cloud providers pool their servers and storage capacity across many customers using a multi-tenant model. Resources, both physical and virtual, are dynamically assigned and reassigned as needed. Specific location of resources is unknown and generally unimportant. Rapid elasticity: Your resources can be elastically created and destroyed. This can happen automatically as needed to meet demand. You can scale outward rapidly. You can also contract rapidly. Supply of resources is effectively unlimited from your viewpoint. Measured service: Resource usage is monitored, controlled, and reported by the cloud provider. 
You have access to the same information, providing transparency to your utilization. Cloud systems continuously optimize resources automatically. There is a notion of private clouds that exist on premises or custom built by a third party for a specific organization. For our concerns, we will be discussing public clouds only. By and large, most analytics will be done on public clouds so we will concentrate our efforts there. The capacity available at your fingertips on public clouds is staggering. AWS, as of June 2016, has an estimated 1.3 million servers online. These servers are thought to be three times more efficient than enterprise systems. Cloud providers own the hardware and maintain the network and systems required for the available services. You just have to provision what you need to use, typically through a web application. Providers offer different levels of abstractions. They offer lower level servers and storage where you have fine grained control. They also offer managed services that handle the provisioning of servers, networking, and storage for you. These are used in conjunction with each other without much distinction between the two. Hardware failures are handled automatically. Resources are transferred to new hardware and brought back online. The physical components become unimportant when you design for the cloud, it is abstracted away and you can focus on resource needs. The advantages to using the cloud: Speed: You can bring cloud resources online in minutes. Agility: The ability to quickly create and destroy resources leads to ease of experimentation. This increases the agility of analytics organizations. Variety of services: Cloud providers have many services available to support analytics workflows that can be deployed in minutes. These services manage hardware and storage needs for you. Global reach: You can extend the reach of analytics to the other side of the world with a few clicks. Cost control: You only pay for the resources you need at the time you need them. You can do more for less. To get an idea of the power that is at your fingertips, here is an architectural diagram of something NASA built on AWS as part of an outreach program to school children. Source: Amazon Web Services; https://aws.amazon.com/lex/ By speaking voice commands, it will communicate with a Mars Rover replica to retrieve IoT data such as temperature readings. The process includes voice recognition, natural speech generation from text, data storage and processing, interaction with IoT device, networking, security, and ability to send text messages. This was not a years worth of development effort, it was built by tying together cloud based services already in place. And it is not just for big, funded government agencies like NASA. All of these services and many more are available to you today if your analytics runs in the cloud. Elastic analytics concepts What do we mean by Elastic Analytics? Let’s define it as designing your analytics processes so scale is not a concern. You want your focus to be on the analytics and not on the underlying technology. You want to avoid constraining your analytics capability so it will fit within some set hardware limitations. Focus instead on potential value versus costs. Trade hardware constraints for cost constraints. You also want your analytics to be able to scale. It should go from supporting 100 IoT devices to 1 Million IoT devices without requiring any fundamental changes. All that should happen if the costs increase. 
This reduces complexity and increases maintainability. That translates into lower costs which enables you to do more analytics. More analytics increases the probability of finding value. Finding more value enables even more analytics. Virtuous circle! Some core Elastic Analytics concepts: Separate compute from storage: We are used to thinking about resources like laptop specifications. You buy one device that has 16GB memory and 500GB hard drive because you think that will meet 90% of your needs and it is the top of your budget. Cloud infrastructure abstracts that away. Doing analytics in the cloud is like renting a magic laptop where you can change 4GB memory into 16GB by snapping your fingers. Your rental bill increases for only the time you have it at 16GB. You snap your fingers again and drop it back down to 4GB to save some money. Your hard drive can grow and shrink independently of the memory specification. You are not stuck having to choose a good balance between them. You can match compute needs with requirements. Build for scale from the start: Use software, services, and programming code that can scale from 1 to 1 million without changes. Each analytic process you put in production has continuing maintenance efforts to it that will build up over time as you add more and more. Make it easy on yourself later on. You do not want to have to stop what you are doing to re-architect a process you built a year ago because it hit limits of scale. Make your bottleneck wetware not hardware: By wetware, we mean brain power. “My laptop doesn’t have enough memory to run the job” should never be the problem. It should always be “I haven’t figured it out yet, but I have several possibilities in test as we speak.” Manage to a spend budget not to available hardware: Use as many cloud resources as you need as long as it fits within your spend budget. There is no need to limit analytics to fit within a set number of servers when you run analytics in the cloud. Traditional enterprise architecture purchases hardware ahead of time which incurs a capital expense. Your finance guy does not (usually) like capital expense. You should not like it either, as it means a ceiling has just been set on what you can do (at least in the near term). Managing to spend means keeping an eye on costs, not on resource limitations. Expand when needed and make sure to contract quickly to keep costs down. Experiment, experiment, experiment: Create resources, try things out, kill them off if it does not work. Then try something else. Iterate to the right answer. Scale out resources to run experiments. Stretch when you need to. Bring it back down when you are done. If Elastic Analytics is done correctly, you will find your biggest limitations are Time and Wetware. Not hardware and capital. Design with the endgame in mind Consider how the analytics you develop in the cloud would end up if successful. Would it turn into a regularly updated dashboard? Would it be something deployed to run under certain conditions to predict customer behavior? Would it periodically run against a new set of data and send an alert if an anomaly is detected? When you list out the likely outcomes, think about how easy it would be to transition from the analytics in development to the production version that will be embedded in your standard processes. Choose tools and analytics that make that transition quick and easy. Designing for scale Following some key concepts will help keep changes to your analytics processes to a minimum, as your needs scale. 
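To make the "build for scale from the start" idea concrete before we move on to specific design techniques, here is a minimal Python sketch of an analytics pipeline written as size-agnostic steps. The field names, the temperature conversion, and the anomaly threshold are invented for illustration and are not taken from the book:

def transform(readings):
    """Normalize raw readings; a lazy pass that works on any iterable, including a stream."""
    for r in readings:
        yield {"device_id": r["device_id"],
               "temperature_c": (r["temperature_f"] - 32) * 5.0 / 9.0}

def detect_anomalies(readings, threshold_c=85.0):
    """Flag readings above a threshold; again a lazy, size-agnostic pass."""
    for r in readings:
        if r["temperature_c"] > threshold_c:
            yield r

def act(anomalies):
    """Reaction step kept separate so it can be swapped out (alert, store, and so on)."""
    for a in anomalies:
        print("Anomaly:", a)

# Wire the encapsulated steps together; each stage could later move to its own
# process behind a message queue without rewriting the logic.
sample = [{"device_id": "dev-001", "temperature_f": 190.0},
          {"device_id": "dev-002", "temperature_f": 72.0}]
act(detect_anomalies(transform(sample)))

Because each stage is a lazy pass over an iterable, the same functions can be pointed at a 100-row sample on a laptop or at a stream of millions of readings; only the resources behind them need to change.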
Decouple key components Decoupling means separating functional groups into components so they are not dependent upon each other to operate. This allows functionality to change or new functionality to be added with minimal impact on other components. Encapsulate analytics Encapsulate means grouping together similar functions and activity into distinct units. It is a core principle of object oriented programming and you should employ it in analytics as well. The goal is to reduce complexity and simplify future changes. As your analytics develop, you will have a list of actions that is either transforming the data, running it through a model or algorithm, or reacting to the result. It can get complicated quickly. By encapsulating the analytics, it is easier to know where to make changes when needed down the road. You will also be able reconfigure parts of the process without affecting the other components. Encapsulation process is carried out in the following steps: Make a list of the steps. Organize them into groups. Think which groups are likely to change together. Separate the groups that are independent into their own process It is a good idea to have the data transformation steps separate from the analytical steps if possible. Sometimes the analysis is tightly tied to the data transformation and it does not make sense to separate, but in most cases it can be separated. The action steps based on the analysis results almost always should be separate. Each group of steps will also have its own resource needs. By encapsulating them and separating the processes, you can assign resources independently and scale more efficiently where you need it. You can do more with less. Decouple with message queues Decoupling encapsulated analytics processes with message queues has several advantages. It allows for change in any process without requiring the other ones to adjust. This is because there is no direct link between them. It also builds in some robustness in case one process has a failure. The queue can continue to expand without losing data while the down process restarts and nothing will be lost after things get going again. What is a message queue? Simple diagram of a message queue New data comes into a queue as a message, it goes into line for delivery, and then is delivered to the end server when it gets its turn. The process adding a message is called the publisher and the process receiving the message is called the subscriber. The message queue exists regardless of if the publisher or subscriber is connected and online. This makes it robust against intermittent connections (intentional or unintentional). The subscriber does not have to wait until the publisher is willing to chat and vice versa. The size of the queue can also grow and shrink as needed. If the subscriber gets behind, the queue just grows to compensate until it can catch up. This can be useful if there is a sudden burst in messages by the publisher. The queue will act as a buffer and expand to capture the messages while the subscriber is working through the sudden influx. There is a limit, of course. If the queue reaches some set threshold, it will reject (and you will most likely lose) any incoming messages until the queue gets back under control. A contrived but real world example of how this can happen: Joe Cut-rate (the developer): Hey, when do you want this doo-hickey device to wake up and report? Jim Unawares (the engineer): Every 4 hours Joe Cut-rate: No sweat. 
I’ll program it to start at 12am UTC, then every 4 hours after. How many of these you gonna sell again? Jim Unawares: About 20 million Joe Cut-rate: Um….friggin awesome! I better hardcode that 12am UTC then, huh? 4 months later Jim Unawares: We’re only getting data from 10% of the devices. And it is never the same 10%. What the heck? Angela the analyst: Every device in the world reports at exactly the same time, first thing I checked. The message queues are filling up since our subscribers can’t process that fast, new messages are dropped. If you hard coded the report time, we’re going to have to get the checkbook out to buy a ton of bandwidth for the queues. And we need to do it NOW since we are losing 90% of the data every 4 hours. You guys didn’t do that, did you? Although queues in practice typically operate with little lag, make sure the origination time of the data is tracked and not just the time the data was pulled off the queue. It can be tempting to just capture the time the message was processed to save space but that can cause problems for your analytics. Why is this important for analytics? If you only have the date and time the message was received by the subscribing server, it may not be as close as you think to the time the message was generated at the originating device. If there are recurring problems with message queues, the spread in time difference would ebb and flow without you being aware of it. You will be using time values extensively in predictive modeling. If the time values are sometimes accurate and sometimes off, the models will have a harder time finding predictive value in your data. Your potential revenue from repurposing the data can also be affected. Customers are unlikely to pay for a service tracking event times for them if it is not always accurate. There is a simple solution. Make sure the time the device sends the data is tracked along with the time the data is received. You can monitor delivery times to diagnose issues and keep a close eye on information lag times. For example, if you notice the delivery time steadily increases just before you get a data loss, it is probably the message queue filling up. If there is no change in delivery time before a loss, it is unlikely to be the queue. Another benefit to using the cloud is (virtually) unlimited queue sizes when use a managed queue service. This makes the situation described much less likely to occur. Distributed computing Also called cluster computing, distributed computing refers to spreading processes across multiple servers using frameworks that abstract the coordination of each individual server. The frameworks make it appear as if you are using one unified system. Under the covers, it could be a few servers (called nodes) to thousands. The framework handles that orchestration for you. Avoid containing analytics to one server The advantage to this for IoT analytics is in scale. You can add resources by adding nodes to the cluster, no change to the analytics code is required. Try and avoid containing analytics to one server (with a few exceptions). This puts a ceiling on scale. When to use distributed and when to use one server There is a complexity cost to distributed computing though. It is not as simple as single server analytics. Even though the frameworks handle a lot of the complexity for you, you still have to think and design your analytics to work across multiple nodes. 
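Before moving on to the single-server guidelines, here is a minimal sketch of the publisher/subscriber pattern and the origination-timestamp advice described above, using Amazon SQS through the boto3 library. The queue name, AWS region, device ID, and payload fields are illustrative assumptions rather than values prescribed by the book:

import json
import time

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # region is an assumption
queue_url = sqs.get_queue_url(QueueName="iot-device-readings")["QueueUrl"]

def publish_reading(device_id, temperature_c):
    """Publisher side: send one reading, stamped with the origination time."""
    message = {
        "device_id": device_id,
        "temperature_c": temperature_c,
        # Track when the device generated the data, not just when the
        # subscriber eventually pulls it off the queue.
        "device_timestamp_utc": time.time(),
    }
    sqs.send_message(QueueUrl=queue_url, MessageBody=json.dumps(message))

def consume_one():
    """Subscriber side: read one message and note the delivery lag."""
    response = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=10)
    for msg in response.get("Messages", []):
        body = json.loads(msg["Body"])
        lag_seconds = time.time() - body["device_timestamp_utc"]
        print("Reading from", body["device_id"], "- delivery lag", round(lag_seconds, 1), "s")
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])

Watching the gap between device_timestamp_utc and the time a message is pulled off the queue is exactly how you would notice a queue that is backing up, as described earlier.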
Some guidelines on when to keep it simple on one server: There is not much need for scale: Your analytics needs little change even if the number of IoT devices and data explodes. For example, the analytics runs a forecast on data already summarized by month. The volume of devices makes little difference in that case. Small data instead of big data: The analytics runs on a small subset of data without much impact from data size. Analytics on random samples is an example. Resource needs are minimal: Even at orders of magnitude more data, you are unlikely to need more than what is available with a standard server. In that case, keep it simple. Assuming change is constant The world of IoT analytics moves quickly. The analytics you create today will change many times over as you get feedback on results and adapt to changing business conditions. Your analytics processes will need to change. Assume this will happen continuously and design for change. That brings us to the concept of continuous delivery. Continuous delivery is a concept from software development. It automates the release of code into production. The idea is to make change a regular process. Bring this concept into your analytics by keeping a set of simultaneous copies that you use to progress through three stages: Development: Keep a copy of your analytics for improving and trying out new things. Test: When ready, merge your improvements into this copy where the functionality stays the same but it is repeatedly tested. The testing ensures it is working as intended. Keeping a separate copy for test allows development to continue on other functionality. Master: This is the copy that goes into production. When you merge things from test to the Master copy, it is the same as putting it into live use. Cloud providers often have a continuous delivery service that can make this process simpler. For any software developer readers out there, this is a simplification of the Git Flow method, which is a little outside the scope of this article. If the author can drop a suggestion, it is worth some additional research to learn Git Flow and apply it to your analytics development in the cloud. Leverage managed services Cloud infrastructure providers, like AWS and Microsoft Azure, offer services for things like message queues, big data storage, and machine learning processing. The services handle the underlying resource needs like server and storage provisioning and also network requirements. You do not have to worry about how this happens under the hood and it scales as big as you need it. They also manage global distribution of services to ensure low latency. The following image shows the AWS regional data center locations combined with the underwater internet cabling. AWS Regional Data Center Locations and Underwater Internet Cables. Source: http://turnkeylinux.github.io/aws-datacenters/ This reduces the amount of things you have to worry about for analytics. It allows you to focus more on the business application and less on the technology. That is a good thing and you should take advantage of it. An example of a managed service is Amazon Simple Queue Service (SQS). SQS is a message queue where the underlying server, storage, and compute needs is managed automatically by AWS systems. You only need to setup and configure it which takes just a few minutes. Summary In this article, we reviewed what is meant by elastic analytics and the advantages to using cloud infrastructure for IoT analytics. 
Designing for scale was discussed, along with distributed computing. The two main cloud providers, Amazon Web Services and Microsoft Azure, were introduced. We also reviewed ThingWorx, a purpose-built software platform for IoT devices, communications, and analysis. Resources for Article: Further resources on this subject: Building Voice Technology on IoT Projects [article] IoT and Decision Science [article] Introducing IoT with Particle's Photon and Electron [article]


Systems and Logics

Packt
06 Apr 2017
In this article by Priya Kuber, Rishi Gaurav Bhatnagar, and Vijay Varada, authors of the book Arduino for Kids explains structure and various components of a code: How does a code work What is code What is a System How to download, save and access a file in the Arduino IDE (For more resources related to this topic, see here.) What is a System? Imagine system as a box which in which a process is completed. Every system is solving a larger problem, and can be broken down into smaller problems that can be solved and assembled. Sort of like a Lego set! Each small process has 'logic' as the backbone of the solution. Logic, can be expressed as an algorithm and implemented in code. You can design a system to arrive at solutions to a problem. Another advantage to breaking down a system into small processes is that in case your solution fails to work, you can easily spot the source of your problem, by checking if your individual processes work. What is Code? Code is a simple set of written instructions, given to a specific program in a computer, to perform a desired task. Code is written in a computer language. As we all know by now, a computer is an intelligent, electronic device capable of solving logical problems with a given set of instructions. Some examples of computer languages are Python, Ruby, C, C++ and so on. Find out some more examples of languages from the internet and write it down in your notebook. What is an Algorithm? A logical set by step process, guided by the boundaries (or constraints) defined by a problem, followed to find a solution is called an algorithm. In a better and more pictorial form, it can be represented as follows: (Logic + Control = Algorithm) (A picture depicting this equation) What does that even mean? Look at the following example to understand the process. Let's understand what an algorithm means with the help of an example. It's your friend's birthday and you have been invited for the party (Isn't this exciting already?). You decide to gift her something. Since it's a gift, let's wrap it. What would you do to wrap the gift? How would you do it? Look at the size of the gift Fetch the gift wrapping paper Fetch the scissors Fetch the tape Then you would proceed to place the gift inside the wrapping paper. You will start start folding the corners in a way that it efficiently covers the Gift. In the meanwhile, to make sure that your wrapping is tight, you would use a scotch tape. You keep working on the wrapper till the whole gift is covered (and mind you, neatly! you don't want mommy scolding you, right?). What did you just do? You used a logical step by step process to solve a simple task given to you. Again coming back to the sentence: (Logic + Control = Algorithm) 'Logic' here, is the set of instructions given to a computer to solve the problem. 'Control' are the words making sure that the computer understands all your boundaries. Logic Logic is the study of reasoning and when we add this to the control structures, they become algorithms. Have you ever watered the plants using a water pipe or washed a car with it? How do you think it works? The pipe guides the water from the water tap to the car. It makes sure optimum amount of water reaches the end of the pipe. A pipe is a control structure for water in this case. We will understand more about control structures in the next topic. How does a control structure work? A very good example to understand how a control structure works, is taken from wikiversity. 
(https://en.wikiversity.org/wiki/Control_structures) A precondition is the state of a variable before entering a control structure. In the gift wrapping example, the size of the gift determines the amount of gift wrapping paper you will use. Hence, it is a condition that you need to follow to successfully finish the task. In programming terms, such condition is called precondition. Similarly, a post condition is the state of the variable after exiting the control structure. And a variable, in code, is an alphabetic character, or a set of alphabetic characters, representing or storing a number, or a value. Some examples of variables are x, y, z, a, b, c, kitten, dog, robot Let us analyze flow control by using traffic flow as a model. A vehicle is arriving at an intersection. Thus, the precondition is the vehicle is in motion. Suppose the traffic light at the intersection is red. The control structure must determine the proper course of action to assign to the vehicle. Precondition: The vehicle is in motion. Control Structure: Is the traffic light green? If so, then the vehicle may stay in motion. Is the traffic light red? If so, then the vehicle must stop. End of Control Structure: Post condition: The vehicle comes to a stop. Thus, upon exiting the control structure, the vehicle is stopped. If you wonder where you learnt to wrap the gift, you would know that you learnt it by observing other people doing a similar task through your eyes. Since our microcontroller does not have eyes, we need to teach it to have a logical thinking using Code. The series of logical steps that lead to a solution is called algorithm as we saw in the previous task. Hence, all the instructions we give to a micro controller are in the form of an algorithm. A good algorithm solves the problem in a fast and efficient way. Blocks of small algorithms form larger algorithms. But algorithm is just code! What will happen when you try to add sensors to your code? A combination of electronics and code can be called a system. Picture: (block diagram of sensors + Arduino + code written in a dialogue box ) Logic is universal. Just like there can be multiple ways to fold the wrapping paper, there can be multiple ways to solve a problem too! A micro controller takes the instructions only in certain languages. The instructions then go to a compiler that translates the code that we have written to the machine. What language does your Arduino Understand? For Arduino, we will use the language 'processing'. Quoting from processing.org, Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts. Processing is an open source programming language and integrated development environment (IDE). Processing was originally built for designers and it was extensively used in electronics arts and visual design communities with the sole purpose of teaching the fundamentals of computer sciences in a visual context. This also served as the foundations of electronic sketchbooks. From the previous example of gift wrapping, you noticed that before you need to bring in the paper and other stationery needed, you had to see the size of the problem at hand (the gift). What is a Library? In computer language, the stationery needed to complete your task, is called "Library". A library is a collection of reusable code that a programmer can 'call' instead of writing everything again. 
Now imagine if you had to cut a tree, make paper, then color the paper into the beautiful wrapping paper that you used, when I asked you to wrap the gift. How tiresome would it be? (If you are inventing a new type of paper, sure, go ahead chop some wood!) So before writing a program, you make sure that you have 'called' all the right libraries. Can you search the internet and make a note of a few arduino libraries in your inventor's diary? Please remember, that libraries are also made up of code! As your next activity, we will together learn more about how a library is created. Activity: Understanding the Morse Code During the times before the two-way mobile communication, people used a one-way communication called the Morse code. The following image is the experimental setup of a Morse code. Do not worry; we will not get into how you will perform it physically, but by this example, you will understand how your Arduino will work. We will show you the bigger picture first and then dissect it systematically so that you understand what a code contains. The Morse code is made up of two components "Short" and "Long" signals. The signals could be in the form of a light pulse or sound. The following image shows how the Morse code looks like. A dot is a short signal and a dash is a long signal. Interesting, right? Try encrypting your message for your friend with this dots and dashes. For example, "Hello" would be: The image below shows how the Arduino code for Morse code will looks like. The piece of code in dots and dashes is the message SOS that I am sure you all know, is an urgent appeal for help. SOS in Morse goes: dot dot dot; dash dash dash; dot dot dot. Since this is a library, which is being created using dots and dashes, it is important that we define how the dot becomes dot, and dash becomes dash first. The following sections will take smaller sections or pieces of main code and explain you how they work. We will also introduce some interesting concepts using the same. What is a function? Functions have instructions in a single line of code, telling the values in the bracket how to act. Let us see which one is the function in our code. Can you try to guess from the following screenshot? No? Let me help you! digitalWrite() in the above code is a Function, that as you understand, 'writes' on the correct pin of the Arduino. delay is a Function that tells the controller how frequently it should send the message. The higher the delay number, the slower will be the message (Imagine it as a way to slow down your friend who speaks too fast, helping you to understand him better!) Look up the internet to find out what is the maximum number that you stuff into delay. What is a constant? A constant is an identifier with pre-defined, non-changeable values. What is an identifier you ask? An identifier is a name that labels the identity of a unique object or value. As you can see from the above piece of code, HIGH and LOW are Constants. Q: What is the opposite of Constant? Ans: Variable The above food for thought brings us to the next section. What is a variable? A variable is a symbolic name for information. In plain English, a 'teacher' can have any name; hence, the 'teacher' could be a variable. A variable is used to store a piece of information temporarily. The value of a variable changes, if any action is taken on it for example; Add, subtract, multiply etc. (Imagine how your teacher praises you when you complete your assignment on time and scolds you when you do not!) What is a Datatype? 
Datatypes are sets of data that have a pre-defined value. Now look at the first block of the example program in the following image: int as shown in the above screenshot, is a Datatype The following table shows some of the examples of a Datatype. Datatype Use Example int describes an integer number is used to represent whole numbers 1, 2, 13, 99 etc float used to represent that the numbers are decimal 0.66, 1.73 etc char represents any character. Strings are written in single quotes 'A', 65 etc str represent string "This is a good day!" With the above definition, can we recognize what pinMode is? Every time you have a doubt in a command or you want to learn more about it, you can always look it up at Arduino.cc website. You could do the same for digitalWrite() as well! From the pinMode page of Arduino.cc we can define it as a command that configures the specified pin to behave either as an input or an output. Let us now see something more interesting. What is a control structure? We have already seen the working of a control structure. In this section, we will be more specific to our code. Now I draw your attention towards this specific block from the main example above: Do you see void setup() followed by a code in the brackets? Similarly void loop() ? These make the basics of the structure of an Arduino program sketch. A structure, holds the program together, and helps the compiler to make sense of the commands entered. A compiler is a program that turns code understood by humans into the code that is understood by machines. There are other loop and control structures as you can see in the following screenshot: These control structures are explained next. How do you use Control Structures? Imagine you are teaching your friend to build 6 cm high lego wall. You ask her to place one layer of lego bricks, and then you further ask her to place another layer of lego bricks on top of the bottom layer. You ask her to repeat the process until the wall is 6 cm high. This process of repeating instructions until a desired result is achieved is called Loop. A micro-controller is only as smart as you program it to be. Hence, we will move on to the different types of loops. While loop: Like the name suggests, it repeats a statement (or group of statements) while the given condition is true. The condition is tested before executing the loop body. For loop: Execute a sequence of statements multiple times and abbreviates the code that manages the loop variable. Do while loop: Like a while statement, except that it tests the condition at the end of the loop body Nested loop: You can use one or more loop inside any another while, for or do..while loop. Now you were able to successfully tell your friend when to stop, but how to control the micro controller? Do not worry, the magic is on its way! You introduce Control statements. Break statements: Breaks the flow of the loop or switch statement and transfers execution to the statement that is immediately following the loop or switch. Continue statements: This statement causes the loop to skip the remainder of its body and immediately retest its condition before reiterating. Goto statements: This transfers control to a statement which is labeled . It is no advised to use goto statement in your programs. Quiz time: What is an infinite loop? Look up the internet and note it in your inventor-notebook. The Arduino IDE The full form of IDE is Integrated Development Environment. 
IDE uses a Compiler to translate code in a simple language that the computer understands. Compiler is the program that reads all your code and translates your instructions to your microcontroller. In case of the Arduino IDE, it also verifies if your code is making sense to it or not. Arduino IDE is like your friend who helps you finish your homework, reviews it before you give it for submission, if there are any errors; it helps you identify them and resolve them. Why should you love the Arduino IDE? I am sure by now things look too technical. You have been introduced to SO many new terms to learn and understand. The important thing here is not to forget to have fun while learning. Understanding how the IDE works is very useful when you are trying to modify or write your own code. If you make a mistake, it would tell you which line is giving you trouble. Isn't it cool? The Arduino IDE also comes with loads of cool examples that you can plug-and-play. It also has a long list of libraries for you to access. Now let us learn how to get the library on to your computer. Ask an adult to help you with this section if you are unable to succeed. Make a note of the following answers in your inventor's notebook before downloading the IDE. Get your answers from google or ask an adult. What is an operating system?What is the name of the operating system running on your computer?What is the version of your current operating system?Is your operating system 32 bit or 64 bit?What is the name of the Arduino board that you have? Now that we did our homework, let us start playing! How to download the IDE? Let us now, go further and understand how to download something that's going to be our playground. I am sure you'd be eager to see the place you'll be working in for building new and interesting stuff! For those of you wanting to learn and do everything my themselves, open any browser and search for "Arduino IDE" followed by the name of your operating system with "32 bits" or "64 bits" as learnt in the previous section. Click to download the latest version and install! Else, the step-by-step instructions are here: Open your browser (Firefox, Chrome, Safari)(Insert image of the logos of firefox, chrome and safari) Go to www.arduino.ccas shown in the following screenshot Click on the 'Download' section of the homepage, which is the third option from your left as shown in the following screenshot. From the options, locate the name of your operating system, click on the right version (32 bits or 64 bits) Then click on 'Just Download' after the new page appears. After clicking on the desired link and saving the files, you should be able to 'double click' on the Arduino icon and install the software. If you have managed to install successfully, you should see the following screens. If not, go back to step 1 and follow the procedure again. The next screenshot shows you how the program will look like when it is loading. This is how the IDE looks when no code is written into it. Your first program Now that you have your IDE ready and open, it is time to start exploring. As promised before, the Arduino IDE comes with many examples, libraries, and helping tools to get curious minds such as you to get started soon. Let us now look at how you can access your first program via the Arduino IDE. A large number of examples can be accessed in the File > Examples option as shown in the following screenshot. Just like we all have nicknames in school, a program, written in in processing is called a 'sketch'. 
Whenever you write any program for Arduino, it is important that you save your work. Programs written in processing are saved with the extension .ino The name .ino is derived from the last 3 letters of the word ArduINO. What are the other extensions are you aware of? (Hint: .doc, .ppt etc) Make a note in your inventor's notebook. Now ask yourself why do so many extensions exist. An extension gives the computer, the address of the software which will open the file, so that when the contents are displayed, it makes sense. As we learnt above, that the program written in the Arduino IDE is called a 'Sketch'. Your first sketch is named 'blink'. What does it do? Well, it makes your Arduino blink! For now, we can concentrate on the code. Click on File | Examples | Basics | Blink. Refer to next image for this. When you load an example sketch, this is how it would look like. In the image below you will be able to identify the structure of code, recall the meaning of functions and integers from the previous section. We learnt that the Arduino IDE is a compiler too! After opening your first example, we can now learn how to hide information from the compiler. If you want to insert any extra information in plain English, you can do so by using symbols as following. /* your text here*/ OR // your text here Comments can also be individually inserted above lines of code, explaining the functions. It is good practice to write comments, as it would be useful when you are visiting back your old code to modify at a later date. Try editing the contents of the comment section by spelling out your name. The following screenshot will show you how your edited code will look like. Verifying your first sketch Now that you have your first complete sketch in the IDE, how do you confirm that your micro-controller with understand? You do this, by clicking on the easy to locate Verify button with a small . You will now see that the IDE informs you "Done compiling" as shown in the screenshot below. It means that you have successfully written and verified your first code. Congratulations, you did it! Saving your first sketch As we learnt above, it is very important to save your work. We now learn the steps to make sure that your work does not get lost. Now that you have your first code inside your IDE, click on File > SaveAs. The following screenshot will show you how to save the sketch. Give an appropriate name to your project file, and save it just like you would save your files from Paintbrush, or any other software that you use. The file will be saved in a .ino format. Accessing your first sketch Open the folder where you saved the sketch. Double click on the .ino file. The program will open in a new window of IDE. The following screen shot has been taken from a Mac Os, the file will look different in a Linux or a Windows system. Summary Now we know about systems and how logic is used to solve problems. We can write and modify simple code. We also know the basics of Arduino IDE and studied how to verify, save and access your program. Resources for Article:  Further resources on this subject: Getting Started with Arduino [article] Connecting Arduino to the Web [article] Functions with Arduino [article]


Conditional Statements, Functions, and Lists

Packt
05 Apr 2017
In this article, by Sai Yamanoor and Srihari Yamanoor, author of the book Python Programming with Raspberry Pi Zero, you will learn about conditional statements and how to make use of logical operators to check conditions using conditional statements. Next, youwill learn to write simple functions in Python and discuss interfacing inputs to the Raspberry Pi's GPIO header using a tactile switch (momentary push button). We will also discuss motor control (this is a run-up to the final project) using the Raspberry Pi Zero and control the motors using the switch inputs. Let's get to it! In this article, we will discuss the following topics: Conditional statements in Python Using conditional inputs to take actions based on GPIO pin states Breaking out of loops using conditional statement Functions in Python GPIO callback functions Motor control in Python (For more resources related to this topic, see here.) Conditional statements In Python, conditional statements are used to determine if a specific condition is met by testing whether a condition is True or False. Conditional statements are used to determine how a program is executed. For example,conditional statements could be used to determine whether it is time to turn on the lights. The syntax is as follows: if condition_is_true: do_something() The condition is usually tested using a logical operator, and the set of tasks under the indented block is executed. Let's consider the example,check_address_if_statement.py where the user input to a program needs to be verified using a yesor no question: check_address = input("Is your address correct(yes/no)? ") if check_address == "yes": print("Thanks. Your address has been saved") if check_address == "no": del(address) print("Your address has been deleted. Try again")check_address = input("Is your address correct(yes/no)? ") In this example, the program expects a yes or no input. If the user provides the input yes, the condition if check_address == "yes"is true, the message Your address has been savedis printed on the screen. Likewise, if the user input is no, the program executes the indented code block under the logical test condition if check_address == "no" and deletes the variable address. An if-else statement In the precedingexample, we used an ifstatement to test each condition. In Python, there is an alternative option named the if-else statement. The if-else statement enables testing an alternative condition if the main condition is not true: check_address = input("Is your address correct(yes/no)? ") if check_address == "yes": print("Thanks. Your address has been saved") else: del(address) print("Your address has been deleted. Try again") In this example, if the user input is yes, the indented code block under if is executed. Otherwise, the code block under else is executed. An if-elif-else statement In the precedingexample, the program executes any piece of code under the else block for any user input other than yesthat is if the user pressed the return key without providing any input or provided random characters instead of no, the if-elif-else statement works as follows: check_address = input("Is your address correct(yes/no)? ") if check_address == "yes": print("Thanks. Your address has been saved") elifcheck_address == "no": del(address) print("Your address has been deleted. Try again") else: print("Invalid input. Try again") If the user input is yes, the indented code block under the ifstatement is executed. 
If the user input is no, the indented code block under elif (else-if) is executed. If the user input is something else, the program prints the message: Invalid input. Try again. It is important to note that the code block indentation determines the block of code that needs to be executed when a specific condition is met. We recommend modifying the indentation of the conditional statement block and find out what happens to the program execution. This will help understand the importance of indentation in python. In the three examples that we discussed so far, it could be noted that an if-statement does not need to be complemented by an else statement. The else and elif statements need to have a preceding if statement or the program execution would result in an error. Breaking out of loops Conditional statements can be used to break out of a loop execution (for loop and while loop). When a specific condition is met, an if statement can be used to break out of a loop: i = 0 while True: print("The value of i is ", i) i += 1 if i > 100: break In the precedingexample, the while loop is executed in an infinite loop. The value of i is incremented and printed onthe screen. The program breaks out of the while loop when the value of i is greater than 100 and the value of i is printed from 1 to 100. The applications of conditional statements: executing tasks using GPIO Let's discuss an example where a simple push button is pressed. A button press is detected by reading the GPIO pin state. We are going to make use of conditional statements to execute a task based on the GPIO pin state. Let us connect a button to the Raspberry Pi's GPIO. All you need to get started are a button, pull-up resistor, and a few jumper wires. The figure given latershows an illustration on connecting the push button to the Raspberry Pi Zero. One of the push button's terminals is connected to the ground pin of the Raspberry Pi Zero's GPIO pin. The schematic of the button's interface is shown here: Raspberry Pi GPIO schematic The other terminal of the push button is pulled up to 3.3V using a 10K resistor.The junction of the push button terminal and the 10K resistor is connected to the GPIO pin 2. Interfacing the push button to the Raspberry Pi Zero's GPIO—an image generated using Fritzing Let's review the code required to review the button state. We make use of loops and conditional statements to read the button inputs using the Raspberry Pi Zero. We will be making use of the gpiozero. For now, let’s briefly discuss the concept of classes for this example. A class in Python is a blueprint that contains all the attributes that define an object. For example, the Button class of the gpiozero library contains all attributes required to interface a button to the Raspberry Pi Zero’s GPIO interface.These attributes include button states and functions required to check the button states and so on. In order to interface a button and read its states, we need to make use of this blueprint. The process of creating a copy of this blueprint is called instantiation. Let's get started with importing the gpiozero library and instantiate the Button class of the gpiozero. The button is interfaced to GPIO pin 2. 
We need to pass the pin number as an argument during instantiation: from gpiozero import Button #button is interfaced to GPIO 2 button = Button(2) The gpiozero library's documentation is available athttp://gpiozero.readthedocs.io/en/v1.2.0/api_input.html.According to the documentation, there is a variable named is_pressed in the Button class that could be tested using a conditional statement to determine if the button is pressed: if button.is_pressed: print("Button pressed") Whenever the button is pressed, the message Button pressed is printed on the screen. Let's stick this code snippet inside an infinite loop: from gpiozero import Button #button is interfaced to GPIO 2 button = Button(2) while True: if button.is_pressed: print("Button pressed") In an infinite while loop, the program constantly checks for a button press and prints the message as long as the button is being pressed. Once the button is released, it goes back to checking whether the button is pressed. Breaking out a loop by counting button presses Let's review another example where we would like to count the number of button presses and break out of the infinite loop when the button has received a predetermined number of presses: i = 0 while True: if button.is_pressed: button.wait_for_release() i += 1 print("Button pressed") if i >= 10: break In this example, the program checks for the state of the is_pressed variable. On receiving a button press, the program can be paused until the button is released using the method wait_for_release.When the button is released, the variable used to store the number of presses is incremented by 1. The program breaks out of the infinite loop, when the button has received 10 presses. A red momentary push button interfaced to Raspberry Pi Zero GPIO pin 2 Functions in Python We briefly discussed functions in Python. Functions execute a predefined set of task. print is one example of a function in Python. It enables printing something to the screen. Let's discuss writing our own functions in Python. A function can be declared in Python using the def keyword. A function could be defined as follows: defmy_func(): print("This is a simple function") In this function my_func, the print statement is written under an indented code block. Any block of code that is indented under the function definition is executed when the function is called during the code execution. The function could be executed as my_func(). Passing arguments to a function: A function is always defined with parentheses. The parentheses are used to pass any requisite arguments to a function. Arguments are parameters required to execute a function. In the earlierexample, there are no arguments passed to the function. Let's review an example where we pass an argument to a function: defadd_function(a, b): c = a + b print("The sum of a and b is ", c) In this example, a and b are arguments to the function. The function adds a and b and prints the sum onthe screen. When the function add_function is called by passing the arguments 3 and 2 as add_function(3,2) where a=3 and b=2, respectively. Hence, the arguments a and b are required to execute function, or calling the function without the arguments would result in an error. Errors related to missing arguments could be avoided by setting default values to the arguments: defadd_function(a=0, b=0): c = a + b print("The sum of a and b is ", c) The preceding function expects two arguments. If we pass only one argument to thisfunction, the other defaults to zero. 
For example,add_function(a=3), b defaults to 0, or add_function(b=2), a defaults to 0. When an argument is not furnished while calling a function, it defaults to zero (declared in the function). Similarly,the print function prints any variable passed as an argument. If the print function is called without any arguments, a blank line is printed. Returning values from a function Functions can perform a set of defined operations and finally return a value at the end. Let's consider the following example: def square(a): return a**2 In this example, the function returns a square of the argument. In Python, the return keyword is used to return a value requested upon completion of execution. The scope of variables in a function There are two types of variables in a Python program:local and global variables. Local variables are local to a function,that is, it is a variable declared within a function is accessible within that function.Theexample is as follows: defadd_function(): a = 3 b = 2 c = a + b print("The sum of a and b is ", c) In this example, the variables a and b are local to the function add_function. Let's consider an example of a global variable: a = 3 b = 2 defadd_function(): c = a + b print("The sum of a and b is ", c) add_function() In this case, the variables a and b are declared in the main body of the Python script. They are accessible across the entire program. Now, let's consider this example: a = 3 defmy_function(): a = 5 print("The value of a is ", a) my_function() print("The value of a is ", a) In this case, when my_function is called, the value of a is5 and the value of a is 3 in the print statement of the main body of the script. In Python, it is not possible to explicitly modify the value of global variables inside functions. In order to modify the value of a global variable, we need to make use of the global keyword: a = 3 defmy_function(): global a a = 5 print("The value of a is ", a) my_function() print("The value of a is ", a) In general, it is not recommended to modify variables inside functions as it is not a very safe practice of modifying variables. The best practice would be passing variables as arguments and returning the modified value. Consider the following example: a = 3 defmy_function(a): a = 5 print("The value of a is ", a) return a a = my_function(a) print("The value of a is ", a) In the preceding program, the value of a is 3. It is passed as an argument to my_function. The function returns 5, which is saved to a. We were able to safely modify the value of a. GPIO callback functions Let's review some uses of functions with the GPIO example. Functions can be used in order tohandle specific events related to the GPIO pins of the Raspberry Pi. For example,the gpiozero library provides the capability of calling a function either when a button is pressed or released: from gpiozero import Button defbutton_pressed(): print("button pressed") defbutton_released(): print("button released") #button is interfaced to GPIO 2 button = Button(2) button.when_pressed = button_pressed button.when_released = button_released while True: pass In this example, we make use of the attributeswhen_pressed and when_releasedof the library's GPIO class. When the button is pressed, the function button_pressed is executed. Likewise, when the button is released, the function button_released is executed. We make use of the while loop to avoid exiting the program and keep listening for button events. 
The pass keyword is used to avoid an error and nothing happens when a pass keyword is executed. This capability of being able to execute different functions for different events is useful in applications like Home Automation. For example, it could be used to turn on lights when it is dark and vice versa. DC motor control in Python In this section, we will discuss motor control using the Raspberry Pi Zero. Why discuss motor control? In order to control a motor, we need an H Bridge motor driver (Discussing H bridge is beyond our scope. There are several resources for H bridge motor drivers: http://www.mcmanis.com/chuck/robotics/tutorial/h-bridge/). There are several motor driver kits designed for the Raspberry Pi. In this section, we will make use of the following kit: https://www.pololu.com/product/2753. The Pololu product page also provides instructions on how to connect the motor. Let's get to writing some Python code tooperate the motor: from gpiozero import Motor from gpiozero import OutputDevice import time motor_1_direction = OutputDevice(13) motor_2_direction = OutputDevice(12) motor = Motor(5, 6) motor_1_direction.on() motor_2_direction.on() motor.forward() time.sleep(10) motor.stop() motor_1_direction.off() motor_2_direction.off() Raspberry Pi based motor control In order to control the motor, let's declare the pins, the motor's speed pins and direction pins. As per the motor driver's documentation, the motors are controlled by GPIO pins 12,13 and 5,6, respectively. from gpiozero import Motor from gpiozero import OutputDevice import time motor_1_direction = OutputDevice(13) motor_2_direction = OutputDevice(12) motor = Motor(5, 6) Controlling the motor is as simple as turning on the motor using the on() method and moving the motor in the forward direction using the forward() method: motor.forward() Similarly, reversing the motor direction could be done by calling the method reverse(). Stopping the motor could be done by: motor.stop() Some mini-project challenges for the reader: In this article, we discussed interfacing inputs for the Raspberry Pi and controlling motors. Think about a project where we could drive a mobile robot that reads inputs from whisker switches and operate a mobile robot. Is it possible to build a wall following robot in combination with the limit switches and motors? We discussed controlling a DC motor in this article. How do we control a stepper motor using a Raspberry Pi? How can we interface a motion sensor to control the lights at home using a Raspberry Pi Zero. Summary In this article, we discussed conditional statements and the applications of conditional statements in Python. We also discussed functions in Python, passing arguments to a function, returning values from a function and scope of variables in a Python program. We discussed callback functions and motor control in Python. Resources for Article: Further resources on this subject: Sending Notifications using Raspberry Pi Zero [article] Raspberry Pi Gaming Operating Systems [article] Raspberry Pi LED Blueprints [article]

Preparing the Initial Two Nodes

Packt
02 Mar 2017
8 min read
In this article by Carlos R. Morrison the authors of the book Build Supercomputers with Raspberry Pi 3, we will learn following topics: Preparing master node Transferring the code Preparing slave node (For more resources related to this topic, see here.) Preparing master node During boot-up, select the US or (whatever country you reside in) keyboard option located at the middle-bottom of the screen. After boot-up completes, start with the following figure. Click on Menu, then Preferences. Select Raspberry Pi Configuration, refer the following screenshot: Menu options The System tab appears (see following screenshot). Type in a suitable Hostname for your master Pi. Auto login is already checked: System tab Go ahead and change the password to your preference. Refer the following screenshot: Change password Select the Interfaces tab (see following screenshot). Click Enable on all options, especially Secure Shell (SSH), as you will be remotely logging into the Pi2 or Pi3 from your main PC. The other options are enabled for convenience, as you may be using the Pi2 or Pi3 in other projects outside of supercomputing: Interface options The next important step is to boost the processing speed from 900 MHz to 1000 MHz or 1 GHz (this option is only available on the Pi2). Click on the Performance tab (see following screenshot) and select High (1000MHz). You are indeed building a supercomputer, so you need to muster all the available computing muscle. Leave the GPU Memory as is (default). The author used a 32-GB SD card in the master Pi2, hence the 128 Mb default setting: Performance, overclock option Changing the processor clock speed requires reboot of the Pi2. Go ahead and click the Yes button. After reboot, the next step is to update and upgrade the Pi2 or Pi3 software. Go ahead and click on the Terminal icon located at the top left of the Pi2 or Pi3 monitor. Refer the following screenshot: Terminal icon After the terminal window appears, enter sudo apt-get update at the “$” prompt. Refer the following screenshot: Terminal screen This update process takes several minutes. After update concludes, enter sudo apt-get upgrade; again, this upgrade process takes several minutes. At the completion of the upgrade, enter at the $ prompt each of the following commands: sudo apt-get install build-essential sudo apt-get install manpages-dev sudo apt-get install gfortran sudo apt-get install nfs-common sudo apt-get install nfs-kernel-server sudo apt-get install vim sudo apt-get install openmpi-bin sudo apt-get install libopenmpi-dev sudo apt-get install openmpi-doc sudo apt-get install keychain sudo apt-get install nmap These updates and installs will allow you to edit (vim), and run Fortran, C, MPI codes, and allow you to manipulate and further configure your master Pi2 or Pi3. We will now transfer codes from the main PC to the master Pi2 or Pi3. Transferring the code IP address The next step is to transfer your codes from the main PC to the master Pi2 or Pi3, after which you will again compile and run the codes, same as you did earlier. But before we proceed, you need to ascertain the IP address of the master Pi2 or Pi3. Go ahead and enter the command ifconfig, in the Pi terminal window: The author’s Pi2 IP address is 192.168.0.9 (see second line of the displayed text). The MAC address is b8:27:eb:81:e5:7d (see first line of displayed text), and the net mask address is 255.255.255.0. Write down these numbers, as they will be needed later when configuring the Pis and switch. 
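If you prefer to script this step instead of reading the ifconfig output by eye, the following short Python sketch, run on the Pi itself, prints the address of the interface the Pi uses to reach the network. This is a convenience sketch and is not part of the book's procedure; the 8.8.8.8 address is only used so the operating system picks an outgoing interface, and no reply from it is needed:

import socket

# A UDP "connect" sends no traffic; it only makes the OS choose the
# outgoing interface, whose address we can then read back.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
print("This Pi's IP address is", s.getsockname()[0])
s.close()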
Your IP address may be similar, possibly except for the last two highlighted numbers depicted previously. Return to your main PC, and list the contents of the code folder located in the Desktop directory; that is, type and enter ls -la. The list of the folder's contents is displayed. You can now use Secure File Transfer Protocol (SFTP) to send your codes to your master Pi. Note the example shown in the following screenshot. The author's processor name is gamma in the Ubuntu/Linux environment: If you are not in the correct code folder, change directory by typing cd Desktop, followed by the path to your code files. The author's files are stored in the Book_b folder. The requisite files are highlighted in red. At the $ prompt, enter sftp pi@192.168.0.9, using, of course, your own Pi IP address following the @ character. You will be prompted for a password. Enter the password for your Pi. At the sftp> prompt, enter put MPI_08_b.c, again replacing the author's file name with your own, if so desired. Go ahead and sftp the other files also. Next, enter Exit. You should now be back at your code folder. Now, here comes the fun part. You will, from here onward, communicate remotely with your Pi from your main PC, the same way hackers do with remote computers. So, go ahead now and log into your master Pi. Enter ssh pi@192.168.0.9 at the $ prompt, and enter your password. Now do a listing of the files in the home directory; enter ls -la, and refer to the following screenshot: You should see the files (the files shown in the previous figure) in your home directory that you recently transferred over SFTP from your main PC. You can now roam around freely inside your Pi, and see what secret data can be stolen... never mind, I'm getting a little carried away here. Go ahead and compile the requisite files, call-procs.c, C_PI.c, and MPI_08_b.c, by entering the command mpicc [file name.c] -o [file name] -lm in each case. The -lm option is required when compiling C files that include the math header file <math.h>. Execute the file call-procs, or the file name you created, using the command mpiexec -n 1 call-procs. After the first execution, use a different process number in each subsequent execution. Your runs should approximate the ones depicted following. Note that, because of the multithreaded nature of the program, the processes execute in random order, and there is clearly no dependence of one on another: Note that, from here onward, the run data was from a Pi2 computer. The Pi3 computer will give a faster run time (its processor clock speed is 1.2 GHz, compared to the Pi2, which has an overclocked speed of 1 GHz). Execute the serial Pi code C_PI as depicted following: Execute the MPI code MPI_08_b as depicted following: Note the difference in execution time between the four cores on gamma (0m59.076s) and the four cores on Mst0 (16m35.140s). Each core in gamma is running at 4 GHz, while each core in Mst0 is running at 1 GHz. Core clock speed matters. Preparing slave node You will now prepare or configure the first slave node of your supercomputer. Switch the HDMI monitor cable from the master Pi to the slave Pi. Check to see whether the SD NOOBS/Raspbian card is inserted in the drive. The label side should face outwards. The drive is spring-loaded, so you must gently insert the card. To remove the card, apply gentle inward pressure to unlock it. Insert the slave power cord into a USB power slot adjacent to the master Pi's, in the rapid charger.
Follow the same procedure outlined previously for installing, updating, and upgrading the Raspbian OS on the master Pi, this time naming the slave node Slv1, or any other name you desire. Use the same password for all the Pi2s or Pi3s in the cluster, as it simplifies the configuration procedure. At the completion of the update, upgrade, and installs, acquire the IP address of the slave1 Pi, as you will be remotely logging into it. Use the command ifconfig to obtain its IP address. Return to your main PC, and use sftp to transfer the requisite files from your main computer, same as you did for the master Pi2. Compile the transferred .c files. Test or run the codes as discussed earlier. You are now ready to configure the master Pi and slave Pi for communication between themselves and the main PC. We now discuss configuring the static IP addresses for the Pis, and the switch they are connected to. Summary In this article, we learned how to prepare the master node and the slave node, and how to transfer code to them for the supercomputer. Resources for Article: Further resources on this subject: Bug Tracking [article] API with MongoDB and Node.js [article] Basic Website using Node.js and MySQL database [article]

Functions with Arduino

Packt
02 Mar 2017
13 min read
In this article by Syed Omar Faruk Towaha, the author of the book Learning C for Arduino, we will learn about functions and file handling with Arduino. We learned about loops and conditions. Let’s begin out journey into Functions with Arduino. (For more resources related to this topic, see here.) Functions Do you know how to make instant coffee? Don’t worry; I know. You will need some water, instant coffee, sugar, and milk or creamer. Remember, we want to drink coffee, but we are doing something that makes coffee. This procedure can be defined as a function of coffee making. Let’s finish making coffee now. The steps can be written as follows: Boil some water. Put some coffee inside a mug. Add some sugar. Pour in boiled water. Add some milk. And finally, your coffee is ready! Let’s write a pseudo code for this function: function coffee (water, coffee, sugar, milk) { add coffee beans; add sugar; pour boiled water; add milk; return coffee; } In our pseudo code we have four items (we would call them parameters) to make coffee. We did something with our ingredients, and finally, we got our coffee, right? Now, if anybody wants to get coffee, he/she will have to do the same function (in programming, we will call it calling a function) again. Let’s move into the types of functions. Types of functions A function returns something, or a value, which is called the return value. The return values can be of several types, some of which are listed here: Integers Float Double Character Void Boolean Another term, argument, refers to something that is passed to the function and calculated or used inside the function. In our previous example, the ingredients passed to our coffee-making process can be called arguments (sugar, milk, and so on), and we finally got the coffee, which is the return value of the function. By definition, there are two types of functions. They are, a system-defined function and a user-defined function. In our Arduino code, we have often seen the following structure: void setup() { } void loop() { } setup() and loop() are also functions. The return type of these functions is void. Don’t worry, we will discuss the type of function soon. The setup() and loop() functions are system-defined functions. There are a number of system-defined functions. The user-defined functions cannot be named after them. Before going deeper into function types, let’s learn the syntax of a function. Functions can be as follows: void functionName() { //statements } Or like void functionName(arg1, arg2, arg3) { //statements } So, what’s the difference? Well, the first function has no arguments, but the second function does. There are four types of function, depending on the return type and arguments. They are as follows: A function with no arguments and no return value A function with no arguments and a return value A function with arguments and no return value A function with arguments and a return value Now, the question is, can the arguments be of any type? Yes, they can be of any type, depending on the function. They can be Boolean, integers, floats, or characters. They can be a mixture of data types too. We will look at some examples later. Now, let’s define and look at examples of the four types of function we just defined. Function with no arguments and no return value, these functions do not accept arguments. The return type of these functions is void, which means the function returns nothing. Let me clear this up. As we learned earlier, a function must be named by something. 
The naming of a function will follow the rule for the variable naming. If we have a name for a function, we need to define its type also. It’s the basic rule for defining a function. So, if we are not sure of our function’s type (what type of job it will do), then it is safe to use the void keyword in front of our function, where void means no data type, as in the following function: void myFunction(){ //statements } Inside the function, we may do all the things we need. Say we want to print I love Arduino! ten times if the function is called. So, our function must have a loop that continues for ten times and then stops. So, our function can be written as follows: void myFunction() { int i; for (i = 0; i < 10; i++) { Serial.println(“I love Arduino!“); } } The preceding function does not have a return value. But if we call the function from our main function (from the setup() function; we may also call it from the loop() function, unless we do not want an infinite loop), the function will print I love Arduino! ten times. No matter how many times we call, it will print ten times for each call. Let’s write the full code and look at the output. The full code is as follows: void myFunction() { int i; for (i = 0; i < 10; i++) { Serial.println(“I love Arduino!“); } } void setup() { Serial.begin(9600); myFunction(); // We called our function Serial.println(“................“); //This will print some dots myFunction(); // We called our function again } void loop() { // put your main code here, to run repeatedly: } In the code, we placed our function (myFunction) after the loop() function. It is a good practice to declare the custom function before the setup() loop. Inside our setup() function, we called the function, then printed a few dots, and finally, we called our function again. You can guess what will happen. Yes, I love Arduino! will be printed ten times, then a few dots will be printed, and finally, I love Arduino! will be printed ten times. Let’s look at the output on the serial monitor: Yes. Your assumption is correct! Function with no arguments and a return value In this type of function, no arguments are passed, but they return a value. You need to remember that the return value depends on the type of the function. If you declare a function as an integer function, the return value’s type will have to have be an integer also. If you declare a function as a character, the return type must be a character. This is true for all other data types as well. Let’s look at an example. We will declare an integer function, where we will define a few integers. We will add them and store them to another integer, and finally, return the addition. The function may look as follows: int addNum() { int a = 3, b = 5, c = 6, addition; addition = a + b + c; return addition; } The preceding function should return 14. Let’s store the function’s return value to another integer type of variable in the setup() function and print in on the serial monitor. The full code will be as follows: void setup() { Serial.begin(9600); int fromFunction = addNum(); // added values to an integer Serial.println(fromFunction); // printed the integer } void loop() { } int addNum() { int a = 3, b = 5, c = 6, addition; //declared some integers addition = a + b + c; // added them and stored into another integers return addition; // Returned the addition. 
} The output will look as follows: Function with arguments and no return value This type of function processes some arguments inside the function, but does not return anything directly. We can do the calculations inside the function, or print something, but there will be no return value. Say we need find out the sum of two integers. We may define a number of variables to store them, and then print the sum. But with the help of a function, we can just pass two integers through a function; then, inside the function, all we need to do is sum them and store them in another variable. Then we will print the value. Every time we call the function and pass our values through it, we will get the sum of the integers we pass. Let’s define a function that will show the sum of the two integers passed through the function. We will call the function sumOfTwo(), and since there is no return value, we will define the function as void. The function should look as follows: void sumOfTwo(int a, int b) { int sum = a + b; Serial.print(“The sum is “ ); Serial.println(sum); } Whenever we call this function with proper arguments, the function will print the sum of the number we pass through the function. Let’s look at the output first; then we will discuss the code: We pass the arguments to a function, separating them with commas. The sequence of the arguments must not be messed up while we call the function. Because the arguments of a function may be of different types, if we mess up while calling, the program may not compile and will not execute correctly: Say a function looks as follows: void myInitialAndAge(int age, char initial) { Serial.print(“My age is “); Serial.println(age); Serial.print(“And my initial is “); Serial.print(initial); } Now, we must call the function like so: myInitialAndAge(6,’T’); , where 6 is my age and T is my initial. We should not do it as follows: myInitialAndAge(‘T’, 6);. We called the function and passed two values through it (12 and 24). We got the output as The sum is 36. Isn’t it amazing? Let’s go a little bit deeper. In our function, we declared our two arguments (a and b) as integers. Inside the whole function, the values (12 and 24) we passed through the function are as follows: a = 12 and b =24; If we called the function this sumOfTwo(24, 12), the values of the variables would be as follows: a = 24 and b = 12; I hope you can now understand the sequence of arguments of a function. How about an experiment? Call the sumOfTwo() function five times in the setup() function, with different values of a and b, and compare the outputs. Function with arguments and a return value This type of function will have both the arguments and the return value. Inside the function, there will be some processing or calculations using the arguments, and later, there would be an outcome, which we want as a return value. Since this type of function will return a value, the function must have a type. Let‘s look at an example. We will write a function that will check if a number is prime or not. From your math class, you may remember that a prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself. The basic logic behind checking whether a number is prime or not is to check all the numbers starting from 2 to the number before the number itself by dividing the number. Not clear? Ok, let’s check if 9 is a prime number. No, it is not a prime number. Why? Because it can be divided by 3. 
And according to the definition, the prime number cannot be divisible by any number other than 1 and the number itself. So, we will check if 9 is divisible by 2. No, it is not. Then we will divide by 3 and yes, it is divisible. So, 9 is not a prime number, according to our logic. Let’s check if 13 is a prime number. We will check if the number is divisible by 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12. No, the number is not divisible by any of those numbers. We may also shorten our checking by only checking the number that is half of the number we are checking. Look at the following code: int primeChecker(int n) // n is our number which will be checked { int i; // Driver variable for the loop for (i = 2; i <= n / 2; i++) // Continued the loop till n/2 { if (n % i == 0) // If no reminder return 1; // It is not prime } return 0; // else it is prime } The code is quite simple. If the remainder is equal to zero, our number is fully divisible by a number other than 1 and itself, so it is not a prime number. If there is any remainder, the number is a prime number. Let’s write the full code and look at the output for the following numbers: 23 65 235 4,543 4,241 The full source code to check if the numbers are prime or not is as follows: void setup() { Serial.begin(9600); primeChecker(23); // We called our function passing our number to test. primeChecker(65); primeChecker(235); primeChecker(4543); primeChecker(4241); } void loop() { } int primeChecker(int n) { int i; //driver variable for the loop for (i = 2; i <= n / 2; ++i) //loop continued until the half of the numebr { if (n % i == 0) return Serial.println(“Not a prime number“); // returned the number status } return Serial.println(“A prime number“); } This is a very simple code. We just called our primeChecker() function and passed our numbers. Inside our primeChecker() function, we wrote the logic to check our number. Now let’s look at the output: From the output, we can see that, other than 23 and 4,241, none of the numbers are prime. Let’s look at an example, where we will write four functions: add(), sub(), mul(), and divi(). Into these functions, we will pass two numbers, and print the value on the serial monitor. The four functions can be defined as follows: float sum(float a, float b) { float sum = a + b; return sum; } float sub(float a, float b) { float sub = a - b; return sub; } float mul(float a, float b) { float mul = a * b; return mul; } float divi(float a, float b) { float divi = a / b; return divi; } Now write the rest of the code, which will give the following outputs: Usages of functions You may wonder which type of function we should use. The answer is simple. The usages of the functions are dependent on the operations of the programs. But whatever the function is, I would suggest to do only a single task with a function. Do not do multiple tasks inside a function. This will usually speed up your processing time and the calculation time of the code. You may also want to know why we even need to use functions. Well, there a number of uses of functions, as follows: Functions help programmers write more organized code Functions help to reduce errors by simplifying code Functions make the whole code smaller Functions create an opportunity to use the code multiple times Exercise To extend your knowledge of functions, you may want to do the following exercise: Write a program to check if a number is even or odd. (Hint: you may remember the % operator). Write a function that will find the largest number among four numbers. 
The numbers will be passed as arguments to the function. (Hint: use if-else conditions.) Suppose you work in a garment factory. They need to know the area of a piece of cloth. The area can be a float. They will provide you with the length and height of the cloth. Now write a program using functions to find the area of the cloth. (Use basic calculation in the user-defined function.) Summary This article gave us an overview of the functions we can write for Arduino. With this information, we can create even more capable, better-structured programs for the Arduino. Resources for Article: Further resources on this subject: Connecting Arduino to the Web [article] Getting Started with Arduino [article] Arduino Development [article]

Lightweight messaging with MQTT 3.1.1 and Mosquitto

Packt
01 Mar 2017
10 min read
In this article by Gastón C. Hillar, author of the book MQTT Essentials, we will start our journey towards the usage of the preferred IoT publish-subscribe lightweight messaging protocol in diverse IoT solutions combined with mobile apps and web applications. We will learn how MQTT and its lightweight messaging system work. We will learn MQTT basics, the specific vocabulary for MQTT, and its working modes. We will use different utilities and diagrams to understand the most important concepts related to MQTT. We will: Understand convenient scenarios for the MQTT protocol Work with the publish-subscribe pattern Work with message filtering (For more resources related to this topic, see here.) Understanding convenient scenarios for the MQTT protocol Imagine that we have dozens of different devices that must exchange data between them. These devices have to request data from other devices, and the devices that receive the requests must respond with the demanded data. The devices that requested the data must process the data received from the devices that responded with the demanded data. The devices are IoT (Internet of Things) boards that have dozens of sensors wired to them. We have the following IoT boards with different processing powers: Raspberry Pi 3, Raspberry Pi Model B, Intel Edison, and Intel Joule 570x. Each of these boards has to be able to send and receive data. In addition, we want many mobile devices to be able to send and receive data, some of them running iOS and others Android. We have to work with many programming languages. We want to send and receive data in near real time through the Internet, and we might face some network problems, that is, our wireless networks are somewhat unreliable and we have some high-latency environments. Some devices have low power, many of them are powered by batteries, and their resources are scarce. In addition, we must be careful with the network bandwidth usage because some of the devices use metered connections. A metered connection is a network connection in which we have a limited amount of data usage per month. If we go over this amount of data, we get billed extra charges. We can use HTTP requests and build a publish-subscribe model to exchange data between different devices. However, there is a protocol that has been specifically designed to be lighter than the HTTP 1.1 protocol and work better when unreliable networks are involved and connectivity is intermittent. MQTT (short for MQ Telemetry Transport) is better suited for this scenario in which many devices have to exchange data between themselves in near real time through the Internet and we need to consume the least possible network bandwidth. The MQTT protocol is an M2M (Machine-to-Machine) and IoT connectivity protocol. MQTT is a lightweight messaging protocol that works with a broker-based publish-subscribe mechanism and runs on top of TCP/IP (Transmission Control Protocol/Internet Protocol). The following diagram shows the MQTT protocol on top of the TCP/IP stack: The most popular versions of MQTT are 3.1 and 3.1.1. In this article, we will work with MQTT 3.1.1. Whenever we reference MQTT, we are talking about MQTT 3.1.1, that is, the newest version of the protocol. The MQTT 3.1.1 specification has been standardised by the OASIS consortium. In addition, MQTT 3.1.1 became an ISO standard (ISO/IEC 20922) in 2016.
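To give an idea of how little client code this lightweight protocol needs, the following is a minimal sketch, assuming the Eclipse Paho Python client (paho-mqtt) and a Mosquitto broker running on localhost with the default port 1883; the topic name is only an illustration:

import paho.mqtt.client as mqtt

def on_message(client, userdata, message):
    # called by the network loop whenever a subscribed topic receives data
    print(message.topic, message.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883, 60)   # broker host, port, keepalive in seconds
client.subscribe("sensor1/altitude")
client.publish("sensor1/altitude", "100 feet")
client.loop_forever()                   # process network traffic and dispatch callbacks

A single client can act as both publisher and subscriber, as it does in this sketch; in the scenarios described next, the two roles usually live on different devices.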
MQTT is lighter than the HTTP 1.1 protocol, and therefore, it is a very interesting option whenever we have to send and receive data in near real time with a publish-subscribe model while requiring the lowest possible footprint. MQTT is very popular in IoT, M2M, and embedded projects, but it is also gaining presence in web applications and mobile apps that require assured messaging and an efficient message distribution. As a summary, MQTT is suitable for the following application domains in which data exchange is required: Asset tracking and management Automotive telematics Chemical detection Environment and traffic monitoring Field force automation Fire and gas testing Home automation IVI (In-Vehicle Infotainment) Medical POS (Point of Sale) kiosks Railway RFID (Radio-Frequency Identification) SCADA (Supervisory Control and Data Acquisition) Slot machines As a summary, MQTT was designed to be suitable to support the following typical challenges in IoT, M2M, embedded, and mobile applications: Be lightweight to make it possible to transmit high volumes of data without huge overheads Distribute minimal packets of data in huge volumes Support an event-oriented paradigm with asynchronous bidirectional low latency push delivery of messages Easily emit data from one client to many clients Make it possible to listen for events whenever they happen (event-oriented architecture) Support always-connected and sometimes-connected models Publish information over unreliable networks and provide reliable deliveries over fragile connections Work very well with battery-powered devices or require low power consumption Provide responsiveness to make it possible to achieve near real-time delivery of information Offer security and privacy for all the data Be able to provide the necessary scalability to distribute data to hundreds of thousands of clients Working with the publish-subscribe pattern Before we dive deep into MQTT, we must understand the publish-subscribe pattern, also known as the pub-sub pattern. In the publish-subscribe pattern, a client that publishes a message is decoupled from the other client or clients that receive the message. The clients don’t know about the existence of the other clients. A client can publish messages of a specific type and only the clients that are interested in specific types of messages will receive the published messages. The publish-subscribe pattern requires a broker, also known as server. All the clients establish a connection with the broker. The client that sends a message through the broker is known as the publisher. The broker filters the incoming messages and distributes them to the clients that are interested in the type of received messages. The clients that register to the broker as interested in specific types of messages are known as subscribers. Hence, both publishers and subscribers establish a connection with the broker. It is easy to understand how things work with a simple diagram. The following diagram shows one publisher and two subscribers connected to a broker: A Raspberry Pi 3 board with an altitude sensor wired to it is a publisher that establishes a connection with the broker. An iOS smartphone and an Android tablet are two subscribers that establish a connection with the broker. The iOS smartphone indicates the broker that it wants to subscribe to all the messages that belong to the sensor1/altitude topic. The Android tablet indicates the same to the broker. 
Hence, both the iOS smartphone and the Android tablet are subscribed to the sensor1/altitude topic. A topic is a named logical channel, and it is also referred to as a channel or subject. The broker will send subscribers only the messages published to topics to which they are subscribed. The Raspberry Pi 3 board publishes a message with 100 feet as the payload and sensor1/altitude as the topic. The board, that is, the publisher, sends the publish request to the broker. The data for a message is known as the payload. A message includes the topic to which it belongs and the payload. The broker distributes the message to the two clients that are subscribed to the sensor1/altitude topic: the iOS smartphone and the Android tablet. Publishers and subscribers are decoupled in space because they don't know each other. Publishers and subscribers don't have to run at the same time. The publisher can publish a message and the subscriber can receive it later. In addition, the publish operation isn't synchronized with the receive operation. A publisher requests the broker to publish a message, and the different clients that have subscribed to the appropriate topic can receive the message at different times. The publisher can perform asynchronous requests to avoid being blocked until the clients receive the messages. However, it is also possible to perform a synchronous request to the broker and to continue execution only after the request was successful. In most cases, we will want to take advantage of asynchronous requests. A publisher that needs to send a message to hundreds of clients can do it with a single publish operation to a broker. The broker is responsible for sending the published message to all the clients that have subscribed to the appropriate topic. Because publishers and subscribers are decoupled, the publisher doesn't know whether there is any subscriber that is going to listen to the messages it is going to send. Hence, sometimes it is necessary to make the subscriber become a publisher too and publish a message indicating that it has received and processed a message. The specific requirements depend on the kind of solution we are building. MQTT offers many features that make our lives easier in many of the scenarios we have been analyzing. Working with message filtering The broker has to make sure that subscribers only receive the messages they are interested in. It is possible to filter messages based on different criteria in a publish-subscribe pattern. We will focus on analyzing topic-based filtering, also known as subject-based filtering. Consider that each message belongs to a topic. When a publisher requests the broker to publish a message, it must specify both the topic and the message. The broker receives the message and delivers it to all the subscribers that have subscribed to the topic to which the message belongs. The broker doesn't need to check the payload of a message to deliver it to the corresponding subscribers; it just needs to check the topic of each message that arrives and filter it before publishing it to the corresponding subscribers. A subscriber can subscribe to more than one topic. In this case, the broker has to make sure that the subscriber receives the messages that belong to all the topics to which it has subscribed. It is easy to understand how things work with another simple diagram.
The following diagram shows two future publishers that haven’t published any message yet, a broker and two subscribers connected to the broker: A Raspberry Pi 3 board with an altitude sensor wired to it and an Intel Edison board with a temperature sensor wired to it will be two publishers. An iOS smartphone and an Android tablet are two subscribers that establish a connection to the broker. The iOS smartphone indicates the broker that it wants to subscribe to all the messages that belong to the sensor1/altitude topic. The Android tablet indicates the broker that it wants to subscribe to all the messages that belong to any of the following two topics: sensor1/altitude and sensor42/temperature. Hence, the Android tablet is subscribed to two topics while the iOS smartphone is subscribed to just one topic. The following diagram shows what happens after the two publishers connect and publish messages to different topics through the broker: The Raspberry Pi 3 board publishes a message with 120 feet as the payload and sensor1/altitude as the topic. The board, that is, the publisher, sends the publish request to the broker. The broker distributes the message to the two clients that are subscribed to the sensor1/altitude topic: the iOS smartphone and the Android tablet. The Intel Edison board publishes a message with 75 F as the payload and sensor42/temperature as the topic. The board, that is, the publisher, sends the publish request to the broker. The broker distributes the message to the only client that is subscribed to the sensor42/temperature topic: the Android Tablet. Thus, the Android tablet receives two messages. Summary In this article, we started our journey towards the MQTT protocol. We understood convenient scenarios for this protocol, the details of the publish-subscribe pattern and message filtering. We learned basic concepts related to MQTT and understood the different components: clients, brokers, and connections. Resources for Article: Further resources on this subject: All About the Protocol [article] Analyzing Transport Layer Protocols [article] IoT and Decision Science [article]

BLE and the Internet of Things

Packt
02 Feb 2017
11 min read
In this article by Muhammad Usama bin Aftab, the author of the book Building Bluetooth Low Energy (BLE) Systems, we offer a practical guide to the world of the Internet of Things (IoT), where readers will not only learn the theoretical concepts of the Internet of Things but also get a number of practical examples. The purpose of this article is to bridge the gap between the knowledge base and its interpretation. Much literature is available for understanding this domain, but it is difficult to find something that follows a hands-on approach to the technology. In this article, the readers will get an introduction to the Internet of Things with a special focus on Bluetooth Low Energy (BLE). It is easy to justify that the most important technology for the Internet of Things is Bluetooth Low Energy, as it is widely available throughout the world and almost every cell phone user keeps this technology in his or her pocket. The article will then go beyond Bluetooth Low Energy and will discuss many other technologies available for the Internet of Things. In this article we'll explore the following topics: Introduction to Internet of Things Current statistics about IoT and how we are living in a world which is going towards Machine to Machine (M2M) communication Technologies in IoT (Bluetooth Low Energy, Bluetooth beacons, Bluetooth mesh and wireless gateways and so on) Typical examples of IoT devices (catering wearables, sports gadgets and autonomous vehicles and so on) (For more resources related to this topic, see here.) Internet of Things The Internet is a system of interconnected devices which uses a full stack of protocols over a number of layers. In the late 1960s, the first packet-switched network, ARPANET, was introduced by the United States Department of Defense (DOD), and it used a variety of protocols. Later, with the invention of the TCP/IP protocols, the possibilities were infinite. Many standards evolved over time to facilitate the communication between devices over a network. Application layer protocols, routing layer protocols, access layer protocols, and physical layer protocols were designed to successfully transfer Internet packets from the source address to the destination address. Security risks were also taken care of during this process, and now we live in a world where the Internet is an essential part of our lives. The world has progressed quite far from ARPANET, and the scientific communities realized that the need to connect more and more devices was inevitable. Thus came the need for more Internet addresses. Internet Protocol version 6 (IPv6) was developed to give support to an almost infinite number of devices. It uses a 128-bit address, allowing 2^128 (about 3.4 e38) devices to successfully transmit packets over the internet. With this powerful addressing mechanism, it was now possible to think beyond traditional communication over the Internet. The availability of more addresses opened the way to connect more and more devices. Although there are other limitations in expanding the number of connected devices, the addressing scheme opened up significant possibilities. Modern Day IoT The idea of the modern day Internet of Things is not significantly old. In 2013, the perception of the Internet of Things evolved, the reasons being the merger of wireless technologies, an increase in the range of wireless communication, and significant advancements in embedded technology.
It was now possible to connect devices, buildings, light bulbs, and theoretically any device which has a power source and can be connected wirelessly. The combination of electronics, software, and network connectivity has already shown enough marvels in the computer industry in the last century, and the Internet of Things is no different. The Internet of Things is a network of connected devices that are aware of their surroundings. Those devices are constantly or eventually transferring data to their neighboring devices in order to fulfil certain responsibilities. These devices can be automobiles, sensors, lights, solar panels, refrigerators, heart monitoring implants, or any day-to-day device. These things have their own dedicated software and electronics to support the wireless connectivity. They also implement the protocol stack and the application-level programming to achieve the required functionality: An illustration of connected devices in the Internet of Things Real life examples of the Internet of Things The Internet of Things is fascinatingly spread in our surroundings, and the best way to check it is to go to a shopping mall and turn on your Bluetooth. The devices you will see are merely a drop in the bucket of the Internet of Things. Cars, watches, printers, jackets, cameras, light bulbs, street lights, and other devices that were too simple before are now connected and continuously transferring data. It should be kept in mind that this progress in the Internet of Things is only 3 years old, and it is not improbable to expect that the adoption rate of this technology will be something that we have never seen before. The last decade tells us that the increase in internet users was exponential: the first billion was reached in 2005, the second in 2010, and the third in 2014. Currently, there are 3.4 billion internet users present in the world. Although this trend looks unrealistic, the adoption rate of the Internet of Things is even steeper. The reports say that by 2020, there will be 50 billion connected devices in the world and 90 percent of vehicles will be connected to the Internet. This expansion will bring $19 trillion in profits by the same year. By the end of this year, wearables will become a $6 billion market with 171 million devices sold. As the article suggests, we will discuss different kinds of IoT devices available in the market today. The article will not cover them all, but to an extent where the reader will get an idea about the possibilities in the future. The reader will also be able to define and identify potential candidates for future IoT devices. Wearables The most important and widely recognized form of the Internet of Things is wearables. In the traditional definition, wearables can be any item that can be worn. Wearable technology can range from fashion accessories to smart watches. The Apple Watch is a prime example of wearable technology. It contains fitness tracking and health-oriented sensors/apps which work with iOS and other Apple products. A competitor of the Apple Watch is the Samsung Gear S2, which provides compatibility with Android devices and fitness sensors. Likewise, there are many other manufacturers who are building smart watches, including Motorola, Pebble, Sony, Huawei, Asus, LG and Tag Heuer. These devices are more than just watches, as they form a part of the Internet of Things: they can now transfer data, talk to your phone, read your heart rate, and connect directly to Wi-Fi.
For example, a watch can now keep track of your steps and transfer this information to the cellphone: Fitbit Blaze and Apple Watch The fitness tracker The fitness tracker is another important example of the Internet of Things, where the physical activities of an athlete are monitored and maintained. Fitness wearables are not confined to bands; there are smart shirts that monitor the fitness goals and progress of the athlete. We will discuss two examples of fitness trackers in this article: Fitbit and Athos smart apparel. The Blaze is a new product from Fitbit which resembles a smart watch. Although it resembles a smart watch, it is a fitness-first watch targeted at the fitness market. It provides step tracking, sleep monitoring, and 24/7 heart rate monitoring. Some of Fitbit's competitors, like Garmin's vívoactive watch, provide a built-in GPS capability as well. Athos apparel is another example of a fitness wearable, which provides heart rate and EMG sensors. Unlike a watch-style fitness tracker, its sensors are spread across the apparel. The theoretical definition of wearables may include augmented and virtual reality headsets and Bluetooth earphones/headphones in the list. Smart home devices The evolution of the Internet of Things is transforming the way we live, as people use wearables and other Internet of Things devices in their daily lives. Another growing technology in the field of the Internet of Things is the smart home. Home automation, sometimes referred to as smart homes, results from extending the home by including automated controls for things like heating, ventilation, lighting, air-conditioning, and security. This concept is fully supported by the Internet of Things, which demands the connection of devices in an environment. Although the concept of smart homes has existed for several decades, it remained a niche technology that was either too expensive to deploy or too limited in its capabilities. In the last decade, many smart home devices have been introduced into the market by major technology companies, lowering costs and opening the doors to mass adoption. Amazon Echo A significant development in the world of home automation was the launch of the Amazon Echo in late 2014. The Amazon Echo is a voice-enabled device that performs tasks just by recognizing voice commands. The device responds to the name Alexa, a keyword that can be used to wake up the device and perform a number of tasks. This keyword can be followed by a command to perform specific tasks. Some basic commands that can be used to fulfil home automation tasks are: Alexa, play some Adele. Alexa, play playlist XYZ. Alexa, turn the bedroom lights on (Bluetooth-enabled light bulbs (for example, Philips Hue) must be present in order to fulfil this command). Alexa, turn the heat up to 80 (a connected thermostat must be present to execute this command). Alexa, what is the weather? Alexa, what is my commute? Alexa, play audiobook a Game of Thrones. Alexa, Wikipedia Packt Publishing. Alexa, How many teaspoons are in one cup? Alexa, set a timer for 10 minutes. With these voice commands, Alexa is fully operable: Amazon Echo, Amazon Tap and Amazon Dot (From left to right) The Amazon Echo's main connectivity is through Bluetooth and Wi-Fi. Wi-Fi connectivity enables it to connect to the Internet and to other devices present on the network or worldwide. Bluetooth Low Energy, on the other hand, is used to connect to other devices in the home which are Bluetooth Low Energy capable.
For example, Philips Hue bulbs and the connected thermostat are controlled through Bluetooth Low Energy. At Google I/O 2016, Google announced a competing smart home device that will use Google as a backbone to perform various tasks, similar to Alexa. Google intends to use this device to further increase its presence in the smart home market, challenging Amazon and Alexa. Amazon also launched the Amazon Dot and Amazon Tap. The Amazon Dot is a smaller version of the Echo which does not have speakers. External speakers can be connected to the Dot in order to get full access to Alexa. The Amazon Tap is a more affordable, wireless version of the Amazon Echo. Wireless bulbs The Philips Hue wireless bulb is another example of a smart home device. It is a Bluetooth Low Energy connected light bulb that gives the user full control through his or her smartphone. These colored bulbs can display millions of colors and can also be controlled remotely through the away-from-home feature. The lights are also smart enough to sync with music: Illustration of controlling Philips Hue bulbs with smartphones Smart refrigerators Our discussion of home automation would not be complete without discussing kitchen and other household electronics, as several major vendors such as Samsung have begun offering smart appliances for a smarter home. The Family Hub refrigerator is a smart fridge that lets you access the Internet and run applications. It is also categorized as an Internet of Things device, as it is fully connected to the Internet and provides various controls to the users: Samsung Family Hub refrigerator with touch controls Summary In this article we spoke about the Internet of Things technology and how it is taking root in our daily lives. The introduction to the Internet of Things discussed wearable devices, autonomous vehicles, smart light bulbs, and portable media streaming devices. Internet of Things technologies like Wireless Local Area Network (WLAN), Mobile Ad-hoc Networks (MANETs) and Zigbee were discussed in order to have a better understanding of the available choices in the IoT. Resources for Article: Further resources on this subject: Get Connected – Bluetooth Basics [article] IoT and Decision Science [article] Building Voice Technology on IoT Projects [article]

Internet of Things Technologies

Packt
30 Jan 2017
9 min read
In this article by Rubén Oliva Ramos author of the book Internet of Things Programming with JavaScript we will understand different platform for the use of Internet of Things, and how to install Raspbian on micro SD card. (For more resources related to this topic, see here.) Technology has played a huge role in increasing efficiency in the work place, improving living conditions at home, monitoring health and environmental conditions and saving energy and natural resources. This has been made possible through continuous development of sensing and actuation devices. Due to the huge data handling requirements of these devices, the need for a more sophisticated, yet versatile data handling and storage medium, such as the Internet, has arisen. That’s why many developers are adopting the different Internet of Things platforms for prototyping that are available. There are several different prototyping platforms that one can use to add internet connectivity to his or her prototypes. It is important for a developer to understand the capabilities and limitations of the different platforms if he or she wants to make the best choice. So, here is a quick look at each platform. Arduino Arduino is a popular open-source hardware prototyping platform that is used in many devices. It comprises several boards including the UNO, Mega, YUN and DUE among several others. However, out of all the boards, only the Arduino Yun has built-in capability to connect to Wi-Fi and LAN networks. The rest of the boards rely on the shields, such as the Wi-Fi shield and Ethernet shield, to connect to the Internet. The official Arduino Ethernet shield allows you to connect to a network using an RJ45 cable. It has a Wiznet W5100 Ethernet chip that provides a network IP stack that is compatible with UDP and TCP. Using the Ethernet library provided by Arduino you will be able to connect to your network in a few simple steps. If you are thinking of connecting to the Internet wirelessly, you could consider getting the Arduino Wi-Fi shield. It is based on the HDG204 wireless LAN 802.11b/g System in-Package and has an AT32UC3 that provides a network IP stack that is compatible with UDP and TCP. Using this shield and the Wi-Fi library provided by Arduino, you can quickly connect to your prototypes to the web. The Arduino Yun is a hybrid board that has both Ethernet and Wi-Fi connectivity. It is based on the ATmega32u4 and the Atheros AR9331 which supports OpenWrt-Yun. The Atheros processor handles the WiFi and Ethernet interfaces while the ATmega32u4 handles the USB communication. So, if you are looking for a more versatile Arduino board for an Internet of Things project, this is it. There are several advantages to using the Arduino platform for internet of things projects. For starters the Arduino platform is easy to use and has a huge community that you can rely on for technical support. It also is easy to create a prototype using Arduino, since you can design a PCB based on the boards. Moreover, apart from Arduino official shields, the Arduino platform can also work with third party Wi-Fi and Ethernet shields such as the WiFi shield. Therefore, your options are limitless. On the down side, all Arduino boards, apart from Yun, need an external module so as to connect to the internet. So, you have to invest more. In addition to this, there are very many available shields that are compatible with the Arduino. This makes it difficult for you to choose. 
Also, you still need to choose the right Internet of Things platform for your project, such as Xively or EasyIoT. Raspberry Pi The Raspberry Pi is an open source prototyping platform that features credit-card sized computer boards. The boards have USB ports for a keyboard and a mouse, a HDMI port for display, an Ethernet port for network connection and an SD card to store the operating system. There are several versions of the Raspberry Pi available in the market. They include the Raspberry Pi 1 A, B, A+ and B+ and the Raspberry Pi 2 B. When using a Raspberry Pi board you can connect to the Internet either wirelessly or via an Ethernet cable. Raspberry Pi boards, except version A and A+, have an Ethernet port where you can connect an Ethernet cable. Normally, the boards gain internet connection immediately after you connect the Ethernet cable. However, your router must be configured for Dynamic Host Configuration Protocol (DHCP) for this to happen. Otherwise, you will have to set the IP address of the Raspberry Pi manually and the restart it. To connect your Raspberry Pi to the internet wirelessly, you have to use a WiFi adapter, preferably one that supports the RTL8192cu chipset. This is because Raspbian and Ocidentalis distributions have built-in support for that chip. However, there is no need to be choosy. Almost all Wi-Fi adapters in the market, including the very low cost budget adapters will work without any trouble. Using Raspberry Pi boards for IoT projects is advantageous because you don’t need extra shields or hardware to connect to the internet. Moreover, connecting to your wireless or LAN network happens automatically, so long as the router that has DHCP configured (most routers do). Also, you don’t have to worry if you are a newbie, since there is a huge Raspberry Pi community. You can get help quickly. The disadvantage of using the Raspberry Pi platform to make IoT devices is that it is not easy to use. It would take tremendous time for newbies to learn how to set up everything and code apps. Another demerit is that the Raspberry Pi boards cannot be easily integrated into a product. There are also numerous operating systems that the Pi boards can run on and it is not easy to decide on the best operating system for the device you are creating. Which Platform to Choose? The different features that come with each platform make it ideal for certain applications, but not all. Therefore, if at all you have not yet made up your mind on the platform to use, try selecting them based on usage. The Raspberry Pi is ideal for IoT applications that are server based. This is because it has high storage space and RAM and a powerful processor. Moreover, it supports many programming languages that can create Server-Side apps, such as Node.js. You can also use the Raspberry Pi in instances where you want to access web pages and view data posted on online servers, since you can connect a display to it and view the pages on the web browser. The Raspberry Pi can connect to both LAN and Wi-Fi networks. However, it is not advisable to use the raspberry Pi for projects where you would want to integrate it into a finished product or create a custom made PCB. On the other hand, Arduino comes in handy as a client. It can log data on an online server and retrieve data from the server. It is ideal for logging sensor data and controlling actuators via commands posted on the server by another client. 
However, there are instances where you can use Arduino boards for server functions, such as hosting a simple web page that you can use to control your Arduino from the local network it is connected to. The Arduino platform can connect to both LAN and Wi-Fi networks. The ESP8266 has a very simple hardware architecture and is best used in client applications such as data logging and control of actuators from online server applications. You can use it as a web server as well, but for applications or web pages that you would want to access from the local network that the module is connected to. The Spark Core platform is ideal for both server and client functions. It can be used to log sensor data onto the Spark.io cloud or receive commands from the cloud. Moreover, you don't have to worry about getting online server space, since the Spark cloud is available for free. You can also create Graphical User Interfaces based on Node.js to visualize incoming data from sensors connected to the Spark Core and send commands to the Spark Core for activation of actuators. Setting up Raspberry Pi Zero The Raspberry Pi is a low-cost board dedicated to project work; this project will use a Raspberry Pi Zero board. See the following link for the Raspberry Pi Zero board and kit: https://www.adafruit.com/products/2816. In order to make the Raspberry Pi work, we need an operating system that acts as a bridge between the hardware and the user. We will be using Raspbian Jessie, which you can download from https://www.raspberrypi.org/downloads/; at this link you will find all the information and software you need to deploy Raspbian on your Raspberry Pi. We need a micro SD card of at least 4 GB. The kit that I used for testing the Raspberry Pi Zero includes all the necessary items for installing everything and getting the board ready. Preparing the SD card The Raspberry Pi Zero only boots from an SD card, and cannot boot from an external drive or USB stick. For now, it is recommended to use a micro SD card with at least 4 GB of space. Installing the Raspbian operating system Once we have the image file that was downloaded earlier from the page, we insert the micro SD card into the adapter and download Win32DiskImager from https://sourceforge.net/projects/win32diskimager/. In the following screenshot, you will see the files after downloading the folder. When the window appears, open the image file, select the drive letter where the micro SD card is mounted, and click on the Write button. After a few seconds, the image has been written to the micro SD card. In the next screenshot, you can see the progress of the installation. Summary In this article, we discussed the different platforms for the Internet of Things, such as Arduino and Raspberry Pi. We also learned how to set up and prepare the Raspberry Pi for further use. Resources for Article: Further resources on this subject: Classes and Instances of Ember Object Model [article] Using JavaScript with HTML [article] Introducing the Ember.JS framework [article]

Our First Program!

Packt
03 Jan 2017
5 min read
In this article by Syed Omar Faruk Towaha, author of Learning C for Arduino, we will learn how to connect our Arduino to our system and how to write a program in the Arduino IDE. (For more resources related to this topic, see here.)

First, connect your A-B cable to your Arduino and then connect the other end of the cable to the PC. Your PC will make a sound, which confirms that the Arduino has connected to the PC. Now open the Arduino IDE. From the menu, go to Tools | Board: "Arduino/Genuino Uno". You can select any board you have bought from the list. See the following screenshot for the list:

You have to select the port on which the Arduino is connected. There are several ways to find out which port your Arduino is connected to.

Hello Arduino!

Let's write our first program in the Arduino IDE. Go to File and click on New. A text editor will open with a few lines of code. Delete those lines first, and then type the following code:

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.print("Hello Arduino!\n");
}

From the menu, go to Sketch and click Upload. It is a good practice to verify or compile the code before uploading it to the Arduino board. To verify or compile the code, you need to go to Sketch and click Verify/Compile. You will see the message Done compiling. at the bottom of the IDE if the code is error free. See the following figure for the explanation:

After the successful upload, you need to open the serial monitor of the Arduino IDE. To open the serial monitor, you need to go to Tools and click on Serial Monitor. You will see the following screen:

setup() function

The setup() function is where we initialize variables, set pin modes, and make use of libraries. This function is called first when the uploaded code starts running. The setup() function runs only once after the code is uploaded to the board, and then again every time we press the reset button on the Arduino. In our code we used Serial.begin(9600);, which sets the data rate per second for the serial communication. We already know that serial communication is the process by which we send data one bit at a time over a communication channel. For the Arduino to communicate with the computer, we used the data rate 9600, which is a standard data rate. This is also known as the baud rate. We can set the baud rate to any of the following, depending on the connection speed and type. I recommend using 9600 for general data communication purposes. If you use a higher baud rate, the characters we print might be broken:

300, 1200, 2400, 4800, 9600, 19200, 38400, 57600, 74880, 115200, 230400, 250000

If we do not declare the baud rate inside the setup() function, there will be no output on the serial monitor. A baud rate of 9600 means 960 characters per second. This is because in serial communication, you need to transmit one start bit, eight data bits, and one stop bit, for a total of 10 bits, at a speed of 9600 bits per second.

loop() function

The loop() function lets the code run again and again until we disconnect the Arduino. We have written a print statement in the loop() function, which will execute indefinitely. To print something on the serial monitor, we need to write the following line:

Serial.print("Anything We Want To Print");

Between the quotation marks we can write anything we want to print. In the last code, we wrote Serial.print("Hello Arduino!\n");. That's why we saw Hello Arduino! being printed on the serial monitor over and over. We used \n after Hello Arduino!. This is called an escape sequence.
For now, just remember that we need to put this at the end of each line inside the print statement to break the line and print the next output on a new line. We can use Serial.println("Hello Arduino!"); instead of Serial.print("Hello Arduino!\n");. Both will give the same result.

Now let's see what happens if we put the print statement inside the setup() function instead. We need to put Serial.println("Hello Arduino!"); inside the setup() function. Have a look at the following figure; Hello Arduino! is printed only one time:

Since C is a case-sensitive language, we need to be careful about the casing. We cannot use serial.print("Hello Arduino!\n"); instead of Serial.print("Hello Arduino!\n");.

Summary

In this article, we learned how to connect the Arduino to our system and upload code to our board. We also learned how Arduino code works. If you are new to Arduino, this article is the most important place to start.

Resources for Article: Further resources on this subject: Zabbix Configuration [article] A Configuration Guide [article] Thorium and Salt API [article]

Programming with Linux

Packt
02 Jan 2017
20 min read
In this article by Edward Snajder, the author of Raspberry Pi Zero Cookbook, we pick up from having our operating system installed and our Raspberry Pi Zero on our home network. We can now dive into some basic Linux commands. You will find knowing these commands useful any time you are working on a Linux machine. In this article, we'll start prepping with some Linux recipes: (For more resources related to this topic, see here.)

Navigating a filesystem and viewing and searching the contents of a directory
Creating a new file, editing it in an editor, and changing ownership
Renaming and copying/moving the file/folder into a new directory
Installing and uninstalling a program

Navigating a filesystem and viewing and searching the contents of a directory

If you aren't already a Linux or Mac user, getting around the filesystem can seem pretty alien at first. Truly, if you've only used Windows Explorer, this is going to seem like a strange process. Once you start getting the hang of things, though, you'll find that getting around the Linux filesystem is easy and fun.

Getting ready

The only thing you need to get started is a client connection to your Raspberry Pi Zero. I like to use SSH, but you can certainly connect using the serial connection or a terminal in X Windows.

How to do it…

If you want to find out where you are in the filesystem, use pwd:

pi@rpz14101:~$ pwd
/home/pi

This tells me I'm in the /home/pi directory, which is the default home directory for the pi user. Generally, every user you create should get a /home/username directory to keep their own files in. This can be done automatically with user creation and the adduser command.

To look at the contents of the directory you are in, use the ls command:

pi@rpz14101:~$ ls
Desktop Downloads Pictures python_games share Videos
Documents Music Public Scratch Templates

To look in another directory, simply specify the directory you want to list (you may need to use sudo depending on where you are looking):

pi@rpz14101:~$ sudo ls /opt/
cookbook.share pigpio sonic-pi vc
minecraft-pi share testsudo.deleteme Wolfram

This is a nice quick summary of what files are in the directory, but you will usually want a little more information about the files and directories. The ls command has a ton of options, all of which can be displayed with ls --help and explained in more detail with man ls. Some of the best ones to know are as follows:

-a: show all files (regular and hidden)
-l: show long format (more file information, in columns)
-h: human readable (turns bytes into MB or GB as appropriate)
-t or -tr: order in time order, or reverse time order

My typical command when I start looking in a directory is this one:

ls -ltrh

This produces all non-hidden files, with human-readable sizes, in column format, and the newest file at the bottom:

pi@rpz14101:~$ ls -ltrh /opt/
total 513M
drwxr-xr-x 7 root root 4.0K May 27 04:11 vc
drwxr-xr-x 3 root root 4.0K May 27 04:32 Wolfram
drwxr-xr-x 3 root root 4.0K May 27 04:34 pigpio
drwxr-xr-x 4 root root 4.0K May 27 04:36 minecraft-pi
drwxr-xr-x 5 root root 4.0K May 27 04:36 sonic-pi
-rw-r--r-- 1 root root 0 Jul 4 13:41 testsudo.deleteme
drwxr-xr-x 2 root root 4.0K Jul 9 13:05 share
-rwxr-xr-x 1 root root 512M Jul 24 17:53 cookbook.share

One last trick: if you need this format but there are a lot of files in the directory you are searching, you will see a ton of text scroll by. Maybe you just need the most recent or largest files? We can do this with a pipe (|) and the tail command.
Let's take a directory with a lot of files, such as /usr/lib/. To list the five most recently modified files, I can pipe ls -ltrh to the tail command:

pi@rpz14101:~$ ls -ltrh /usr/lib/ | tail -5
lrwxrwxrwx 1 root root 22 May 27 04:40 libwiringPiDev.so -> libwiringPiDev.so.2.32
drwxr-xr-x 2 root root 4.0K Jun 5 10:38 samba
drwxr-xr-x 3 root root 4.0K Jul 4 22:48 pppd
drwxr-xr-x 65 root root 60K Jul 24 15:48 arm-linux-gnueabihf
drwxr-xr-x 2 root root 4.0K Jul 24 15:48 tmpfiles.d

What about the five largest files? Instead of the t in -ltrh, I can use S:

pi@rpz14101:~$ ls -lSrh /usr/lib/ | tail -5
-rw-r--r-- 1 root root 2.8M Sep 17 2014 libmozjs185.so.1.0.0
-rw-r--r-- 1 root root 2.8M Sep 30 2014 libqscintilla2.so.11.3.0
-rw-r--r-- 1 root root 2.9M Jun 5 2014 libcmis-0.4.so.4.0.1
-rw-r--r-- 1 root root 3.4M Jun 12 2015 libv8.so.3.14.5
-rw-r--r-- 1 root root 5.1M Aug 18 2014 libmwaw-0.3.so.3.0.1

A little creative piping and you can find exactly the file you are looking for. If not, another great tool for exploring the filesystem is tree. This gives a pseudo-graphical tree that shows how the files are structured in the system. It produces a lot of text, especially if you have it print an entire directory tree. If you are just looking into directory structures, you can use tree with the -d flag for directories only. The -L flag will reduce how deep you dive into nested directories:

pi@rpz14101:~$ tree -d -L 2 /opt/
/opt/
├── minecraft-pi
│   ├── api
│   └── data
├── pigpio
│   └── cgi
├── share
├── sonic-pi
│   ├── app
│   ├── bin
│   └── etc
├── vc
│   ├── bin
│   ├── include
│   ├── lib
│   ├── sbin
│   └── src
└── Wolfram
    └── WolframEngine

Last, we will look at a couple of searching utilities, find and grep. The find command is a powerful function that finds files in whatever directories you specify. It is great for trying to find that mystery piece of software that installed itself in an odd place or the needle-in-a-haystack file in a directory that contains hundreds of files. For example, if I were to run tree in the /opt/sonic-pi/ directory, it would run on for several seconds, and thousands of files would shoot by. I, however, am only interested in finding files with cowbell in the name. I can use the find command to look for it:

pi@rpz14101:~$ find /opt/sonic-pi/ -name *cowbell*
/opt/sonic-pi/etc/samples/drum_cowbell.flac

When looking for anything with cowbell in the filename, the find command returns the exact location of anything that matches. There are tons of options for using the find command; start with find --help, and then try man find when you want to get really deep.

The grep command can be used in a couple of different ways when searching for files, and it is one of those commands you will find yourself using constantly while both loving and hating its awesome power. Let's say you need to find something inside of a file—grep is the tool for you. It can also find things like find can, but generally, find is more efficient at finding filename patterns than grep is. If I use grep to look for cowbells in my sonic-pi directory, I'll get a different, and more colorful, output:

We don't see the file with cowbell in the name like we did using find, but we find every file that contains cowbell inside of it. The -r flag tells grep to delve into subdirectories, and -i tells it to ignore case (so Cowbell and cowbell are both found, as shown in the screenshot). As you use Linux more often, both find and grep become regularly used tools for administration and file management.
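A couple of hedged variations are worth trying as well (the exact matches you get back will depend on what is installed on your system): grep's -l flag prints only the names of matching files rather than every matching line, and find's -exec flag runs another command on each result, so you can combine searching and inspecting in one step:

pi@rpz14101:~$ grep -rli cowbell /opt/sonic-pi/
pi@rpz14101:~$ find /opt/sonic-pi/ -iname "*cowbell*" -exec ls -lh {} \;

The first lists each file under /opt/sonic-pi/ that mentions cowbell anywhere inside it, and the second locates files with cowbell in the name (case-insensitively, thanks to -iname) and shows their details.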
This won't be the last time you use them!

Creating a new file, editing it in an editor, and changing ownership

There are a lot of different text editors to use on a Linux system from the command line. The program vi is the Ubuntu default, and the program you will find installed on pretty much any Linux system. Emacs is another popular editor, and lots of Linux users get quite passionate about which one is better. My preference is vim, which is generally known as vi improved. The nano text editor is another one that is commonly installed on Linux distros, and it is one of the most lightweight editors available.

Getting ready

For this recipe, we will work with vi, since that's definitely going to be installed on your system. If you want to try out vim, you can install it using this:

sudo apt-get install vim

How to do it…

First we will go to our share directory:

cd /home/pi/share

Then, we will create an empty file using the touch command:

touch ch3_touchfile.txt

If you use the ls command from the previous directory, you can see that the size of the file is 0. You can also display the contents of the file with the cat command, which will return nothing in this case. The touch command is a great way to test whether you have permissions to create files in a specific directory.

You can also create a new file with the editor itself:

vi ch3_vifile.txt

This will open the vi editor with a blank file named ch3_vifile.txt:

Using vi or vim (or Emacs) for the first time is completely different from using something like OpenOffice or Microsoft Word. Vi works in two modes: insert (or edit) and command. Once you learn how to use command mode, vi becomes a very efficient editor for working on scripts in bash or Python. Edit mode, more or less, is the mode where you can type and edit text like a regular WYSIWYG editor. There are books written on becoming a power user of vi, well beyond the scope of this book. Getting a handle on the basics is the best place to start:

With the empty file, you can jump into edit mode by pressing the i or a keys. The editor will switch to insert mode, as shown by the -- INSERT -- in the bottom left of the screen. Then you can start typing in your text:

To get out of insert mode, press the Esc key. The :w command will save the file, and the :q command will quit. You can combine them, so :wq saves the file and quits. You can verify that the contents were saved with the cat command:

pi@rpz14101:~$ cat ch3_vifile.txt
Hello from the Raspberry Pi Zero Cookbook!

Let's take another look at the ls command and some of the information the -l format includes. We will take a look at the files we've created so far in this recipe:

pi@rpz14101:~$ ls -ltrh *.txt
-rw-r--r-- 1 pi pi 43 Jul 25 11:23 ch3_vifile.txt
-rw-r--r-- 1 pi pi 0 Jul 25 11:24 ch3_touchfile.txt

Reading from left to right, the columns are as follows:

File Permissions | Number of links | Owner:Group | Size | Modification Date | File Name
-rw-r--r-- | 1 | pi:pi | 43 | Jul 25 11:23 | ch3_vifile.txt

We can see that since we made the files as the pi user, the owner of the file and the group owner are pi. By default, when a new user is created, a group container is created as well, so root has a root group, user rpz has an rpz group, and so on. We can change the ownership settings of a file with the chown command. Be careful, since you can take away your own access, though you can always sudo your way back. The chmod command will change who is allowed to do what with a file.
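If the numeric modes used in the next examples look cryptic, here is a quick sketch of how they break down. Each digit of a numeric mode sets the permissions for the owner, the group, and everyone else, in that order, and each digit is the sum of read (4), write (2), and execute (1). For instance, on the file we just created, the following would give the owner read and write, the group read only, and others nothing, and ls -l would then show something like this:

pi@rpz14101:~$ chmod 640 ch3_vifile.txt
pi@rpz14101:~$ ls -l ch3_vifile.txt
-rw-r----- 1 pi pi 43 Jul 25 11:23 ch3_vifile.txt

By the same arithmetic, 750 works out to rwxr-x--- and 755 to rwxr-xr-x, which is exactly what appears in the listings that follow.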
Let’s look at ownership changes and what impact they will have with a few examples: pi@rpz14101:~ $ ls -ltrh *.txt -rw-r--r-- 1 pi pi 0 Jul 25 13:28 ch3_touchfile.txt -rwx------ 1 pi pi 43 Jul 25 13:28 ch3_vifile.txt pi@rpz14101:~ $ cat ch3_vifile.txt Hello from the Raspberry Pi Zero Cookbook! pi@rpz14101:~ $ sudo chown rpz:rpz ch3_vifile.txt pi@rpz14101:~ $ cat ch3_vifile.txt cat: ch3_vifile.txt: Permission denied pi@rpz14101:~ $ sudo cat ch3_vifile.txt Hello from the Raspberry Pi Zero Cookbook! pi@rpz14101:~ $ sudo chown rpz:pi ch3_vifile.txt pi@rpz14101:~ $ cat ch3_vifile.txt cat: ch3_vifile.txt: Permission denied pi@rpz14101:~ $ sudo chmod 750 ch3_vifile.txt pi@rpz14101:~ $ cat ch3_vifile.txt Hello from the Raspberry Pi Zero Cookbook! pi@rpz14101:~ $ sudo chown root:root ch3_vifile.txt pi@rpz14101:~ $ cat ch3_vifile.txt cat: ch3_vifile.txt: Permission denied pi@rpz14101:~ $ sudo chmod 755 ch3_vifile.txt pi@rpz14101:~ $ cat ch3_vifile.txt Hello from the Raspberry Pi Zero Cookbook! The chmod values are documented very well, and with a little practice, you can get your file permissions and ownership set up in a way that is both secure and easy to work with. Renaming and copying/moving the file/folder into a new directory A common activity on any filesystem is the practice of copying and moving files, and even directories, from one place to another. You might do it to make a backup copy of something, or you might decide that the contents should live in a more appropriate location. This recipe will explore how to manipulate files in the Raspbian system. Getting ready If you are still in your terminal from the last recipe, we are going to use the same files from the previous recipe. We should have the ownership back to pi:pi; if not, run the following: sudo chown pi:pi /home/pi/*.txt How to do it… First, let’s make a new directory. We’ll put it under the /home/pi/share/ folder so it is accessible to other computers on your home network. To make a directory, use the mkdir command: pi@rpz14101:~$ mkdir /home/pi/share/ch3 We can look at the new directory with the ls command: pi@rpz14101:~$ ls -ltrh /home/pi/share/ total 4.0K -rw-r--r-- 1 pi pi 0 Jul 24 15:56 helloNetwork.yes drwxr-xr-x 2 pi pi 4.0K Jul 25 13:06 ch3 A great flag to go with the mkdir command is -p. This will allow you to create directories and subdirectories in one command. Without it, if I try to create a subdirectory that doesn’t already exist, I’ll get an error: pi@rpz14101:~$ mkdir /home/pi/share/ch3/nested/folders mkdir: cannot create directory ‘/home/pi/share/ch3/nested/folders’: No such file or directory With the -p flag, it works without a problem: pi@rpz14101:~$ mkdir -p /home/pi/share/ch3/nested/folders The tree command shows the structure of our ch3 directory: pi@rpz14101:~$ tree /home/pi/share /home/pi/share ├── ch3 │ └── nested │ └── folders └── helloNetwork.yes 3 directories, 1 file Now, let’s move our files to our new ch3 directory. The copy and move commands—cp and mv, respectively—are the tools we will use. Copying a file from one place to another is as simple as indicating the file's source and destination. The following command will make a copy of vifile.txt and save it as vifile.txt.copy in the /home/pi/share/ch3/ directory: cp /home/pi/ch3_vifile.txt /home/pi/share/ch3/ch3_vifile.txt.copy We can copy files as well as directories and their contents as long as you have enough disk space. To move or rename a file, we use the mv command. 
This takes the file given in the source and moves it to the destination provided. As simple as the cp command, let's move all of our files to the share directory:

mv /home/pi/ch3_vifile.txt /home/pi/share/ch3/ch3_vifile.txt.moved
mv /home/pi/ch3_touchfile.txt /home/pi/share/ch3/ch3_touchfile.txt

If we look at the tree of our share directory, we will see everything nicely organized:

pi@rpz14101:~$ tree /home/pi/share/
/home/pi/share/
├── ch3
│   ├── ch3_touchfile.txt
│   ├── ch3_vifile.txt.copy
│   ├── ch3_vifile.txt.moved
│   └── nested
│       └── folders
└── helloNetwork.yes

3 directories, 4 files

Installing and uninstalling a program

We've installed a few programs throughout the book so far, but have yet to delve into the apt-get command and the family of software-installation utilities. Now, we will learn how to install and uninstall any program available for Raspbian as well as how to search for new software and run updates.

Getting ready

Stay in your terminal window, and get ready to install some applications!

How to do it…

The apt-* commands are a suite of utilities that allow you to do various things with installed packages. To install a package, we use the apt-get tool, and the install command, like this:

sudo apt-get install <packagename>

Let's install something cool—how about a Matrix screensaver? It is super easy and works great from the command line. To look for a package, we use the apt-cache search command. apt-cache is another tool in the apt-* family of utilities, and it checks the software database for matches. Running sudo apt-cache search matrix returns tons of results! The word "matrix" is a little too popular for us computer and math nerds—we have matrixes everywhere! It would take forever to go through that list to find what we are looking for. Fortunately, we can take advantage of grep, which we touched on in an earlier recipe, to narrow down our results. One of the fun things about using Linux and the command line is the ways you can chain commands to do cool things:

pi@rpz14101:~ $ sudo apt-cache search matrix | grep "The Matrix"
cmatrix - simulates the display from "The Matrix"
wmmatrix - View The Matrix in a Window Maker dock application

That's a bit more manageable! We could also have narrowed the list using this command:

sudo apt-cache search "The Matrix"

This returns fewer results than before, but a few more than the grep command. Whichever way you find it, we see that the cmatrix package is the one we are looking for. Installing is as simple as running this:

pi@rpz14101:~ $ sudo apt-get install cmatrix
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  cmatrix-xfont
The following NEW packages will be installed:
  cmatrix
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 16.2 kB of archives.
After this operation, 27.6 kB of additional disk space will be used.
Get:1 http://mirrordirector.raspbian.org/raspbian/ jessie/main cmatrix armhf 1.2a-5 [16.2 kB]
Fetched 16.2 kB in 1s (15.3 kB/s)
Selecting previously unselected package cmatrix.
(Reading database ... 121906 files and directories currently installed.)
Preparing to unpack .../cmatrix_1.2a-5_armhf.deb ...
Unpacking cmatrix (1.2a-5) ...
Processing triggers for man-db (2.7.0.2-5) ...
Setting up cmatrix (1.2a-5) ...

After that, we are ready to go! Channel your inner Neo and run this command:

cmatrix -s -b

You should be in the Matrix! Try it on your serial and SSH connections, and even in the terminal on VNC: you'll notice differences in the rendering behavior.
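Uninstalling is just as simple. As a quick sketch using the package we just installed: apt-get remove uninstalls a package but keeps its configuration files, purge removes those too, and autoremove clears out dependencies that nothing else needs anymore:

pi@rpz14101:~ $ sudo apt-get remove cmatrix
pi@rpz14101:~ $ sudo apt-get purge cmatrix
pi@rpz14101:~ $ sudo apt-get autoremove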
There are literally thousands of software packages available to install in the repositories of our awesome open source communities. Pretty much anything you think a computer should be able to do, someone, or a group of people, has worked on a solution and pushed it out to the repositories.  We will be using apt-get a lot throughout this cookbook; it is one of the commands you’ll find yourself using all the time as you get more interested in Raspberry Pis and the Linux operating system. Running sudo apt-get update will check all repositories to see whether there are any version updates available. Here, you can see all of the locations it checks to see whether there is anything new for Raspbian: pi@rpz14101:~ $ sudo apt-get update Get:1 http://archive.raspberrypi.org jessie InRelease [13.2 kB] Get:2 http://mirrordirector.raspbian.org jessie InRelease [14.9 kB] Get:3 http://archive.raspberrypi.org jessie/main armhf Packages [144 kB] Get:4 http://mirrordirector.raspbian.org jessie/main armhf Packages [8,981 kB] Hit http://archive.raspberrypi.org jessie/ui armhf Packages Ign http://archive.raspberrypi.org jessie/main Translation-en_GB Get:5 http://mirrordirector.raspbian.org jessie/contrib armhf Packages [37.5 kB] Ign http://archive.raspberrypi.org jessie/main Translation-en Get:6 http://mirrordirector.raspbian.org jessie/non-free armhf Packages [70.3 kB] Ign http://archive.raspberrypi.org jessie/ui Translation-en_GB Ign http://archive.raspberrypi.org jessie/ui Translation-en Get:7 http://mirrordirector.raspbian.org jessie/rpi armhf Packages [1,356 B] Ign http://mirrordirector.raspbian.org jessie/contrib Translation-en_GB Ign http://mirrordirector.raspbian.org jessie/contrib Translation-en Ign http://mirrordirector.raspbian.org jessie/main Translation-en_GB Ign http://mirrordirector.raspbian.org jessie/main Translation-en Ign http://mirrordirector.raspbian.org jessie/non-free Translation-en_GB Ign http://mirrordirector.raspbian.org jessie/non-free Translation-en Ign http://mirrordirector.raspbian.org jessie/rpi Translation-en_GB Ign http://mirrordirector.raspbian.org jessie/rpi Translation-en Fetched 9,263 kB in 34s (272 kB/s) Reading package lists... Done After updating, apt-get upgrade will look at the versions of everything you have installed and upgrade anything to the latest version if there is one available. Depending on how many updates you have, this can take quite a while: pi@rpz14101:~ $ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done The following packages will be upgraded: dpkg-dev gir1.2-gdkpixbuf-2.0 initramfs-tools libavcodec56 libavformat56 libavresample2 libavutil54 libdevmapper-event1.02.1 libdevmapper1.02.1 libdpkg-perl python-picamera python3-picamera raspberrypi-kernel raspberrypi-net-mods ssh tzdata xarchiver 40 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Need to get 57.0 MB of archives. After this operation, 415 kB of additional disk space will be used. Do you want to continue? [Y/n] y Get:1 http://archive.raspberrypi.org/debian/ jessie/main nodered armhf 0.14.5 [5,578 kB] … Adding 'diversion of /boot/overlays/w1-gpio.dtbo to /usr/share/rpikernelhack/overlays/w1-gpio.dtbo by rpikernelhack' … run-parts: executing /etc/kernel/postrm.d/initramfs-tools 4.4.11-v7+ /boot/kernel7.img Preparing to unpack .../raspberrypi-net-mods_1.2.3_armhf.deb ... Unpacking raspberrypi-net-mods (1.2.3) over (1.2.2) ... Processing triggers for man-db (2.7.0.2-5) ... 
… Setting up libssl1.0.0:armhf (1.0.1t-1+deb8u2) ... Setting up libxml2:armhf (2.9.1+dfsg1-5+deb8u2) ... … Removing 'diversion of /boot/overlays/w1-gpio-pullup.dtbo to /usr/share/rpikernelhack/overlays/w1-gpio-pullup.dtbo by rpikernelhack' … Setting up raspberrypi-net-mods (1.2.3) ... Modified /etc/network/interfaces detected. Leaving unchanged and writing new file as interfaces.new. Processing triggers for libc-bin (2.19-18+deb8u4) ... Processing triggers for initramfs-tools (0.120+deb8u2) ... You don’t really have to understand the details of what’s going on during the upgrade, and it will let you know if there were any problems at the end (and often what to do to fix them). Regularly updating and upgrading will keep all of your software current with all of the latest bug fixes and security patches. There’s more… You can also add and remove software from the GUI. If you log on to your Pi, either directly to a monitor or over VNC Server (a recipe we covered earlier), you can find the Add / Remove Software option under Menu | Preferences: The PiPackages utility makes it very easy to find software when you only have a general idea of what you are looking for. While you can do the same things with the apt commands, if you are browsing, this is a little easier on the eyes. The utility provides categorizations so you don’t have to scroll through every package. Clicking on a package provides a detailed description: Simply check the box and click on Apply or OK, and the software will be installed. Now you can install software on your Raspberry Pi Zero from the command line or GUI. Summary In this article, we looked at the basic file manipulation functionalities of Linux on a Raspberry Pi. We saw how to navigate the filesystem and create, edit, rename, copy, and move files and folders. We also saw how to change ownership of a file and install and uninstall programs. Resources for Article: Further resources on this subject: Sending Notifications using Raspberry Pi Zero [Article] Raspberry Pi LED Blueprints [Article] Hacking a Raspberry Pi project? Understand electronics first! [Article]

ROS Architecture and Concepts

Packt
28 Dec 2016
24 min read
In this article by Anil Mahtani, Luis Sánchez, Enrique Fernández, and Aaron Martinez, authors of the book Effective Robotics Programming with ROS, Third Edition, you will learn the structure of ROS and the parts it is made up of. Furthermore, you will start to create nodes and packages and use ROS with examples using Turtlesim. The ROS architecture has been designed and divided into three sections or levels of concepts: The Filesystem level The Computation Graph level The Community level (For more resources related to this topic, see here.) The first level is the Filesystem level. In this level, a group of concepts are used to explain how ROS is internally formed, the folder structure, and the minimum number of files that it needs to work. The second level is the Computation Graph level where communication between processes and systems happens. In this section, we will see all the concepts and mechanisms that ROS has to set up systems, handle all the processes, and communicate with more than a single computer, and so on. The third level is the Community level, which comprises a set of tools and concepts to share knowledge, algorithms, and code between developers. This level is of great importance; as with most open source software projects, having a strong community not only improves the ability of newcomers to understand the intricacies of the software as well as solve the most common issues, it is also the main force driving its growth. Understanding the ROS Filesystem level The ROS Filesystem is one of the strangest concepts to grasp when starting to develop projects in ROS, but with time and patience, the reader will easily become familiar with it and realize its value for managing projects and its dependencies. The main goal of the ROS Filesystem is to centralize the build process of a project while at the same time provide enough flexibility and tooling to decentralize its dependencies. Similar to an operating system, an ROS program is divided into folders, and these folders have files that describe their functionalities: Packages: Packages form the atomic level of ROS. A package has the minimum structure and content to create a program within ROS. It may have ROS runtime processes (nodes), configuration files, and so on. Package manifests: Package manifests provide information about a package, licenses, dependencies, compilation flags, and so on. A package manifest is managed with a file called package.xml. Metapackages: When you want to aggregate several packages in a group, you will use metapackages. In ROS Fuerte, this form for ordering packages was called Stacks. To maintain the simplicity of ROS, the stacks were removed, and now, metapackages make up this function. In ROS, there exists a lot of these metapackages; for example, the navigation stack. Metapackage manifests: Metapackage manifests (package.xml) are similar to a normal package, but with an export tag in XML. It also has certain restrictions in its structure. Message (msg) types: A message is the information that a process sends to other processes. ROS has a lot of standard types of messages. Message descriptions are stored in my_package/msg/MyMessageType.msg. Service (srv) types: Service descriptions, stored in my_package/srv/MyServiceType.srv, define the request and response data structures for services provided by each process in ROS. In the following screenshot, you can see the content of the turtlesim package. What you see is a series of files and folders with code, images, launch files, services, and messages. 
Keep in mind that the screenshot was edited to show a short list of files; the real package has more.

The workspace

In general terms, the workspace is a folder which contains packages; those packages contain our source files, and the environment or workspace provides us with a way to compile those packages. It is useful when you want to compile various packages at the same time and it is a good way to centralize all of our developments. A typical workspace is shown in the following screenshot. Each folder is a different space with a different role:

The source space: In the source space (the src folder), you put your packages, projects, clone packages, and so on. One of the most important files in this space is CMakeLists.txt. The src folder has this file because it is invoked by cmake when you configure the packages in the workspace. This file is created with the catkin_init_workspace command.
The build space: In the build folder, cmake and catkin keep the cache information, configuration, and other intermediate files for our packages and projects.
Development (devel) space: The devel folder is used to keep the compiled programs. This is used to test the programs without the installation step. Once the programs are tested, you can install or export the package to share with other developers.

You have two options with regard to building packages with catkin. The first one is to use the standard CMake workflow. With this, you can compile one package at a time, as shown in the following commands:

$ cmake packageToBuild/
$ make

If you want to compile all your packages, you can use the catkin_make command line, as shown in the following commands:

$ cd workspace
$ catkin_make

Both commands build the executable in the build space directory configured in ROS.

Another interesting feature of ROS is its overlays. When you are working with a package of ROS, for example, turtlesim, you can do it with the installed version, or you can download the source file and compile it to use your modified version. ROS permits you to use your version of this package instead of the installed version. This is very useful information if you are working on an upgrade of an installed package.

Packages

Usually, when we talk about packages, we refer to a typical structure of files and folders. This structure looks as follows:

include/package_name/: This directory includes the headers of the libraries that you would need.
msg/: If you develop nonstandard messages, put them here.
scripts/: These are executable scripts that can be in Bash, Python, or any other scripting language.
src/: This is where the source files of your programs are present. You can create a folder for nodes and nodelets or organize it as you want.
srv/: This represents the service (srv) types.
CMakeLists.txt: This is the CMake build file.
package.xml: This is the package manifest.

To create, modify, or work with packages, ROS gives us tools for assistance, some of which are as follows:

rospack: This command is used to get information or find packages in the system.
catkin_create_pkg: This command is used when you want to create a new package.
catkin_make: This command is used to compile a workspace.
rosdep: This command installs the system dependencies of a package.
rqt_dep: This command is used to see the package dependencies as a graph. If you want to see the package dependencies as a graph, you will find a plugin called package graph in rqt. Select a package and see the dependencies.
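To make these tools concrete, here is a minimal sketch of creating a workspace and an empty package with them; the workspace path and the package name my_first_pkg are just placeholders, and the dependencies listed are common starting points:

$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws/src
$ catkin_init_workspace
$ cd ~/catkin_ws
$ catkin_make
$ source devel/setup.bash
$ cd src
$ catkin_create_pkg my_first_pkg std_msgs roscpp rospy

After catkin_make runs, the build and devel spaces described above appear next to src, and rospack find my_first_pkg should then locate the new package.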
To move between packages and their folders and files, ROS gives us a very useful package called rosbash, which provides commands that are very similar to Linux commands. The following are a few examples:

roscd: This command helps us change the directory. This is similar to the cd command in Linux.
rosed: This command is used to edit a file.
roscp: This command is used to copy a file from a package.
rosd: This command lists the directories of a package.
rosls: This command lists the files from a package. This is similar to the ls command in Linux.

Every package must contain a package.xml file, as it is used to specify information about the package. If you find this file inside a folder, it is very likely that this folder is a package or a metapackage. If you open the package.xml file, you will see information about the name of the package, dependencies, and so on. All of this is to make the installation and the distribution of these packages easy. Two typical tags that are used in the package.xml file are <build_depend> and <run_depend>. The <build_depend> tag shows which packages must be installed before installing the current package. This is because the new package might use functionality contained in another package. The <run_depend> tag shows the packages that are necessary to run the code of the package. The following screenshot is an example of the package.xml file:

Metapackages

As we have shown earlier, metapackages are special packages with only one file inside; this file is package.xml. This package does not have other files, such as code, includes, and so on. Metapackages are used to refer to other packages that are normally grouped following a feature-like functionality, for example, navigation stack, ros_tutorials, and so on. You can convert your stacks and packages from ROS Fuerte to Kinetic and catkin using certain rules for migration. These rules can be found at http://wiki.ros.org/catkin/migrating_from_rosbuild.

In the following screenshot, you can see the content from the package.xml file in the ros_tutorials metapackage. You can see the <export> tag and the <run_depend> tag. These are necessary in the package manifest, which is also shown in the following screenshot:

If you want to locate the ros_tutorials metapackage, you can use the following command:

$ rosstack find ros_tutorials

The output will be a path, such as /opt/ros/kinetic/share/ros_tutorials. To see the code inside, you can use the following command line:

$ vim /opt/ros/kinetic/ros_tutorials/package.xml

Remember that Kinetic uses metapackages, not stacks, but the rosstack find command-line tool is also capable of finding metapackages.

Messages

ROS uses a simplified message description language to describe the data values that ROS nodes publish. With this description, ROS can generate the right source code for these types of messages in several programming languages. ROS has a lot of messages predefined, but if you develop a new message, it will be in the msg/ folder of your package. Inside that folder, certain files with the .msg extension define the messages. A message must have two main parts: fields and constants. Fields define the type and name of the data to be transmitted in the message, for example, int32, float32, and string, or new types that you have created earlier, such as type1 and type2. Constants define named constant values that can be referred to when the message is used.
An example of an msg file is as follows:

int32 id
float32 vel
string name

In ROS, you can find a lot of standard types to use in messages, as shown in the following table:

Primitive type | Serialization | C++ | Python
bool | unsigned 8-bit int | uint8_t | bool
int8 | signed 8-bit int | int8_t | int
uint8 | unsigned 8-bit int | uint8_t | int
int16 | signed 16-bit int | int16_t | int
uint16 | unsigned 16-bit int | uint16_t | int
int32 | signed 32-bit int | int32_t | int
uint32 | unsigned 32-bit int | uint32_t | int
int64 | signed 64-bit int | int64_t | long
uint64 | unsigned 64-bit int | uint64_t | long
float32 | 32-bit IEEE float | float | float
float64 | 64-bit IEEE float | double | float
string | ascii string | std::string | string
time | secs/nsecs signed 32-bit ints | ros::Time | rospy.Time
duration | secs/nsecs signed 32-bit ints | ros::Duration | rospy.Duration

A special type in ROS is the header type. This is used to add the time, frame, and sequence number. This permits you to have the messages numbered, to see who is sending the message, and to have more functions that are transparent for the user and that ROS is handling. The header type contains the following fields:

uint32 seq
time stamp
string frame_id

You can see the structure using the following command:

$ rosmsg show std_msgs/Header

Thanks to the header type, it is possible to record the timestamp and frame of what is happening with the robot. ROS provides certain tools to work with messages. The rosmsg tool prints out the message definition information and can find the source files that use a message type. In upcoming sections, we will see how to create messages with the right tools.

Services

ROS uses a simplified service description language to describe ROS service types. This builds directly upon the ROS msg format to enable request/response communication between nodes. Service descriptions are stored in .srv files in the srv/ subdirectory of a package. To call a service, you need to use the package name, along with the service name; for example, you will refer to the sample_package1/srv/sample1.srv file as sample_package1/sample1. Several tools exist to perform operations on services. The rossrv tool prints out the service descriptions and packages that contain the .srv files, and finds source files that use a service type. If you want to create a service, ROS can help you with the service generator. These tools generate code from an initial specification of the service. You only need to add the gensrv() line to your CMakeLists.txt file. In upcoming sections, you will learn how to create your own services.

Understanding the ROS Computation Graph level

ROS creates a network where all the processes are connected. Any node in the system can access this network, interact with other nodes, see the information that they are sending, and transmit data to the network:

The basic concepts in this level are nodes, the master, Parameter Server, messages, services, topics, and bags, all of which provide data to the graph in different ways and are explained in the following list:

Nodes: Nodes are processes where computation is done. If you want to have a process that can interact with other nodes, you need to create a node with this process to connect it to the ROS network. Usually, a system will have many nodes to control different functions. You will see that it is better to have many nodes that provide only a single functionality, rather than have a large node that makes everything in the system. Nodes are written with an ROS client library, for example, roscpp or rospy.
The master: The master provides the registration of names and the lookup service to the rest of the nodes. It also sets up connections between the nodes. If you don't have it in your system, you can't communicate with nodes, services, messages, and others. In a distributed system, you will have the master in one computer, and you can execute nodes in this or other computers.
Parameter Server: Parameter Server gives us the possibility of using keys to store data in a central location. With these parameters, it is possible to configure nodes while they are running or to change the working parameters of a node.
Messages: Nodes communicate with each other through messages. A message contains data that provides information to other nodes. ROS has many types of messages, and you can also develop your own type of message using standard message types.
Topics: Each message must have a name to be routed by the ROS network. When a node is sending data, we say that the node is publishing a topic. Nodes can receive topics from other nodes by simply subscribing to the topic. A node can subscribe to a topic even if there aren't any other nodes publishing to this specific topic. This allows us to decouple the production from the consumption. It's important that topic names are unique to avoid problems and confusion between topics with the same name.
Services: When you publish topics, you are sending data in a many-to-many fashion, but when you need a request or an answer from a node, you can't do it with topics. Services give us the possibility of interacting with nodes. Also, services must have a unique name. When a node has a service, all the nodes can communicate with it, thanks to ROS client libraries.
Bags: Bags are a format to save and play back the ROS message data. Bags are an important mechanism to store data, such as sensor data, that can be difficult to collect but is necessary to develop and test algorithms. You will use bags a lot while working with complex robots.

In the following diagram, you can see the graphic representation of this level. It represents a real robot working in real conditions. In the graph, you can see the nodes, the topics, which node is subscribed to a topic, and so on. This graph does not represent messages, bags, Parameter Server, and services; other tools are needed to see a graphic representation of them. The tool used to create the graph is rqt_graph. These concepts are implemented in the ros_comm repository.

Nodes and nodelets

Nodes are executables that can communicate with other processes using topics, services, or the Parameter Server. Using nodes in ROS provides us with fault tolerance and separates the code and functionalities, making the system simpler. ROS has another type of node called nodelets. These special nodes are designed to run multiple nodes in a single process, with each nodelet being a thread (light process). This way, we avoid using the ROS network among them, but permit communication with other nodes. With that, nodes can communicate more efficiently, without overloading the network. Nodelets are especially useful for camera systems and 3D sensors, where the volume of data transferred is very high.

A node must have a unique name in the system. This name is used to permit the node to communicate with another node using its name without ambiguity. A node can be written using different libraries, such as roscpp and rospy; roscpp is for C++ and rospy is for Python. Throughout this article, we will use roscpp.
ROS has tools to handle nodes and give us information about them, such as rosnode. The rosnode tool is a command-line tool used to display information about nodes, such as listing the currently running nodes. The supported commands are as follows:

rosnode info NODE: This prints information about a node
rosnode kill NODE: This kills a running node or sends a given signal
rosnode list: This lists the active nodes
rosnode machine hostname: This lists the nodes running on a particular machine or lists machines
rosnode ping NODE: This tests the connectivity to the node
rosnode cleanup: This purges the registration information from unreachable nodes

A powerful feature of ROS nodes is the possibility of changing parameters while you start the node. This feature gives us the power to change the node name, topic names, and parameter names. We use this to reconfigure the node without recompiling the code so that we can use the node in different scenes. An example of changing a topic name is as follows:

$ rosrun book_tutorials tutorialX topic1:=/level1/topic1

This command will change the topic name topic1 to /level1/topic1. To change parameters in the node, you can do something similar to changing the topic name. For this, you only need to add an underscore (_) to the parameter name; for example:

$ rosrun book_tutorials tutorialX _param:=9.0

The preceding command will set param to the float number 9.0. Bear in mind that you cannot use names that are reserved by the system. They are as follows:

__name: This is a special, reserved keyword for the name of the node
__log: This is a reserved keyword that designates the location where the node's log file should be written
__ip and __hostname: These are substitutes for ROS_IP and ROS_HOSTNAME
__master: This is a substitute for ROS_MASTER_URI
__ns: This is a substitute for ROS_NAMESPACE

Topics

Topics are buses used by nodes to transmit data. Topics can be transmitted without a direct connection between nodes, which means that the production and consumption of data is decoupled. A topic can have various subscribers and can also have various publishers, but you should be careful when publishing the same topic with different nodes as it can create conflicts. Each topic is strongly typed by the ROS message type used to publish it, and nodes can only receive messages from a matching type. A node can subscribe to a topic only if it has the same message type.

The topics in ROS can be transmitted using TCP/IP and UDP. The TCP/IP-based transport is known as TCPROS and uses the persistent TCP/IP connection. This is the default transport used in ROS. The UDP-based transport is known as UDPROS and is a low-latency, lossy transport. So, it is best suited to tasks such as teleoperation.

ROS has a tool to work with topics called rostopic. It is a command-line tool that gives us information about the topic or publishes data directly on the network. This tool has the following parameters:

rostopic bw /topic: This displays the bandwidth used by the topic.
rostopic echo /topic: This prints messages to the screen.
rostopic find message_type: This finds topics by their type.
rostopic hz /topic: This displays the publishing rate of the topic.
rostopic info /topic: This prints information about the topic, such as its message type, publishers, and subscribers.
rostopic list: This prints information about active topics.
rostopic pub /topic type args: This publishes data to the topic. It allows us to create and publish data in whatever topic we want, directly from the command line.
rostopic type /topic: This prints the topic type, that is, the type of message it publishes.

We will learn to use this command-line tool in upcoming sections.

Services

When you need to communicate with nodes and receive a reply, in an RPC fashion, you cannot do it with topics; you need to do it with services. Services are developed by the user, and standard services don't exist for nodes. The files with the source code of the services are stored in the srv folder. Similar to topics, services have an associated service type that is the package resource name of the .srv file. As with other ROS filesystem-based types, the service type is the package name and the name of the .srv file. ROS has two command-line tools to work with services: rossrv and rosservice. With rossrv, we can see information about the services' data structure, and it has exactly the same usage as rosmsg. With rosservice, we can list and query services. The supported commands are as follows:

rosservice call /service args: This calls the service with the arguments provided
rosservice find msg-type: This finds services by service type
rosservice info /service: This prints information about the service
rosservice list: This lists the active services
rosservice type /service: This prints the service type
rosservice uri /service: This prints the ROSRPC URI of the service

Messages

A node publishes information using messages which are linked to topics. The message has a simple structure that uses standard types or types developed by the user. Message types use the following standard ROS naming convention: the name of the package, then /, and then the name of the .msg file. For example, std_msgs/msg/String.msg has the std_msgs/String message type.

ROS has the rosmsg command-line tool to get information about messages. The accepted parameters are as follows:

rosmsg show: This displays the fields of a message
rosmsg list: This lists all messages
rosmsg package: This lists all of the messages in a package
rosmsg packages: This lists all of the packages that have the message
rosmsg users: This searches for code files that use the message type
rosmsg md5: This displays the MD5 sum of a message

Bags

A bag is a file created by ROS with the .bag format to save all of the information of the messages, topics, services, and others. You can use this data later to visualize what has happened; you can play, stop, rewind, and perform other operations with it. The bag file can be reproduced in ROS just as a real session can, sending the topics at the same time with the same data. Normally, we use this functionality to debug our algorithms. To use bag files, we have the following tools in ROS:

rosbag: This is used to record, play, and perform other operations
rqt_bag: This is used to visualize data in a graphic environment
rostopic: This helps us see the topics sent to the nodes

The ROS master

The ROS master provides naming and registration services to the rest of the nodes in the ROS system. It tracks publishers and subscribers to topics as well as services. The role of the master is to enable individual ROS nodes to locate one another. Once these nodes have located each other, they communicate with each other in a peer-to-peer fashion. You can see in a graphic example the steps performed in ROS to advertise a topic, subscribe to a topic, and publish a message, in the following diagram:

The master also provides Parameter Server. The master is most commonly run using the roscore command, which loads the ROS master, along with other essential components.
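As a quick, hedged sketch of how these pieces fit together in practice (assuming the turtlesim package shown earlier is installed), you can start the master in one terminal, a node in a second, and then inspect the graph from a third:

$ roscore
$ rosrun turtlesim turtlesim_node
$ rosnode list
$ rostopic list
$ rostopic echo /turtle1/pose
$ rosservice list

rosnode list should show /turtlesim alongside /rosout, and the topic and service listings show what that single node publishes and offers.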
Parameter Server Parameter Server is a shared, multivariable dictionary that is accessible via a network. Nodes use this server to store and retrieve parameters at runtime. Parameter Server is implemented using XMLRPC and runs inside the ROS master, which means that its API is accessible via normal XMLRPC libraries. XMLRPC is a Remote Procedure Call (RPC) protocol that uses XML to encode its calls and HTTP as a transport mechanism. Parameter Server uses XMLRPC data types for parameter values, which include the following: 32-bit integers Booleans Strings Doubles ISO8601 dates Lists Base64-encoded binary data ROS has the rosparam tool to work with Parameter Server. The supported parameters are as follows: rosparam list: This lists all the parameters in the server rosparam get parameter: This gets the value of a parameter rosparam set parameter value: This sets the value of a parameter rosparam delete parameter: This deletes a parameter rosparam dump file: This saves Parameter Server to a file rosparam load file: This loads a file (with parameters) on Parameter Server Understanding the ROS Community level The ROS Community level concepts are the ROS resources that enable separate communities to exchange software and knowledge. These resources include the following: Distributions: ROS distributions are collections of versioned metapackages that you can install. ROS distributions play a similar role to Linux distributions. They make it easier to install a collection of software, and they also maintain consistent versions across a set of software. Repositories: ROS relies on a federated network of code repositories, where different institutions can develop and release their own robot software components. The ROS Wiki: The ROS Wiki is the main forum for documenting information about ROS. Anyone can sign up for an account, contribute their own documentation, provide corrections or updates, write tutorials, and more. Bug ticket system: If you find a problem or want to propose a new feature, ROS has this resource to do it. Mailing lists: The ROS user-mailing list is the primary communication channel about new updates to ROS as well as a forum to ask questions about the ROS software. ROS Answers: Users can ask questions on forums using this resource. Blog: You can find regular updates, photos, and news at http://www.ros.org/news. Summary This article provided you with general information about the ROS architecture and how it works. You saw certain concepts and tools of how to interact with nodes, topics, and services. Remember that if you have queries about something, you can use the official resources of ROS from http://www.ros.org. Additionally, you can ask the ROS Community questions at http://answers.ros.org. Resources for Article: Further resources on this subject: The ROS Filesystem levels [article] Using ROS with UAVs [article] Face Detection and Tracking Using ROS, Open-CV and Dynamixel Servos [article]

Exploring a New Reality with the Oculus Rift

Packt
21 Dec 2016
7 min read
In this article by Jack Donovan, the author of the book Mastering Oculus Rift Development, we will explore virtual reality. What made you feel like you were truly immersed in a game world for the first time? Was it graphics that looked impressively realistic, ambient noise that perfectly captured the environment and mood, or the way the game's mechanics just started to feel like a natural reflex? Game developers constantly strive to replicate scenarios that are as real and as emotionally impactful as possible, and they've never been as close as they are now with the advent of virtual reality.

Virtual reality has been a niche market since the early 1950s, often failing to evoke a meaningful sense of presence that the concept hinges on—that is, until the first Oculus Rift prototype was designed in 2010 by Oculus founder Palmer Luckey. The Oculus Rift proved that modern rendering and display technology was reaching a point that an immersive virtual reality could be achieved, and that's when the new era of VR development began. Today, virtual reality development is as accessible as ever, comprehensively supported in the most popular off-the-shelf game development engines such as Unreal Engine and Unity 3D. In this article, you'll learn all of the essentials that go into a complete virtual reality experience, and master the techniques that will enable you to bring any idea you have into VR.

This article will cover everything you need to know to get started with virtual reality, including the following points:

The concept of virtual reality
The importance of intent in VR design
Common limitations of VR games

(For more resources related to this topic, see here.)

The concept of virtual reality

Virtual reality has taken many forms and formats since its inception, but this article will be focused on modern virtual reality experienced with a Head-Mounted Display (HMD). HMDs like the Oculus Rift are typically treated like an extra screen attached to your computer (more on that later) but with some extra components that enable it to capture its own orientation (and position, in some cases). This essentially amounts to a screen that sits on your head and knows how it moves, so it can mirror your head movements in the VR experience and enable you to look around. In the following example from the Oculus developer documentation, you can see how the HMD translates this rotational data into the game world:

Depth perception

Depth perception is another big principle of VR. Because the display of the HMD is always positioned right in front of the user's eyes, the rendered image is typically split into two images: one per eye, with each individual image rendered from the position of that eye. You can observe the difference between normal rendering and VR rendering in the following two images. This first image is how normal 3D video games are rendered to a computer screen, created based on the position and direction of a virtual camera in the game world:

This next image shows how VR scenes are rendered, using a different virtual camera for each eye to create a stereoscopic depth effect:

Common limitations of VR games

While virtual reality provides the ability to immerse a player's senses like never before, it also creates some new, unique problems that must be addressed by responsible VR developers.

Locomotion sickness

Virtual reality headsets are meant to make you feel like you're somewhere else, and it only makes sense that you'd want to be able to explore that somewhere.
Unfortunately, common game mechanics like traditional joystick locomotion are problematic for VR. Our inner ears and muscular system are accustomed to sensing inertia while we move from place to place, so if you were to push a joystick forward to walk in virtual reality, your body would get confused when it sensed that you're still in a chair. Typically, when there's a mismatch between what we're seeing and what we're feeling, our bodies assume that a nefarious poison or illness is at work, and they prepare to rid the body of the culprit; that's the motion sickness you feel when reading in a car, standing on a boat, and yes, moving in virtual reality. This doesn't mean we have to prevent users from moving in VR; we just need to be more clever about it (more on that later). The primary cause of nausea with traditional joystick movement in VR is acceleration: your brain gets confused when picking up speed or slowing down, but not so much when it's moving at a constant rate (think of sitting still in a car that's moving at a constant speed). Rotation is even more complicated, because rotating smoothly causes nausea even at a constant speed. Some developers get around this by using hard increments instead of gradual acceleration, such as rotating in 30-degree "snaps" once per second instead of rotating smoothly.
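To make the snap idea concrete, here is a minimal, engine-agnostic sketch of the technique; the class, constants, and update loop are hypothetical names for illustration, not any Oculus SDK or engine API:

# Minimal sketch of "snap" rotation (hypothetical names; not an engine API).
import time

SNAP_DEGREES = 30.0      # rotate in hard increments to avoid smooth-rotation nausea
SNAP_COOLDOWN = 1.0      # allow at most one snap per second
STICK_THRESHOLD = 0.5    # ignore small, accidental stick deflections

class SnapTurner:
    def __init__(self):
        self.yaw_degrees = 0.0
        self._last_snap = 0.0

    def update(self, stick_x, now=None):
        """Call once per frame with the horizontal stick axis in [-1, 1]."""
        now = time.monotonic() if now is None else now
        if abs(stick_x) < STICK_THRESHOLD:
            return self.yaw_degrees            # stick is centred, do nothing
        if now - self._last_snap < SNAP_COOLDOWN:
            return self.yaw_degrees            # still cooling down from the last snap
        direction = 1.0 if stick_x > 0 else -1.0
        self.yaw_degrees = (self.yaw_degrees + direction * SNAP_DEGREES) % 360.0
        self._last_snap = now
        return self.yaw_degrees

A comfort option like this is usually exposed as a user setting, since tolerance for smooth rotation varies from player to player.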
Lack of real-world vision
One of the potentially clumsiest aspects of virtual reality is getting your hands where they need to be without being able to see them. Whether you're using a gamepad, keyboard, or motion controller, you'll likely need to use your hands to interact with VR, which you can't see with an HMD sitting over your eyes. It's good practice to centralize input around resting positions (that is, the buttons naturally closest to your thumbs on a gamepad or the home row of a computer keyboard), but shy away from anything that requires complex, precise input, like writing sentences on a keyboard or hitting button combos on a controller. Some VR headsets, such as the HTC Vive, have a forward-facing camera (sometimes called a passthrough camera) that users can choose to view in VR, enabling basic interaction with the real world without taking the headset off. The Oculus Rift doesn't feature a built-in camera, but you could still display the feed from an external camera on any surface in virtual reality. Even before modern VR, developers were creating applications that overlay smart information on what a camera is seeing; that's called augmented reality (AR). Experiences that ride the line between camera output and virtual environments are called mixed reality (MR).
Unnatural head movements
You may not have thought about it before, but looking around in a traditional first-person shooter (FPS) is quite different from looking around using your head. The right analog stick is typically used to direct the camera and make adjustments as necessary, but in VR, players actually move their head instead of using their thumb to move their virtual head. Don't expect players in VR to be able to make the same snappy pivots and 180-degree turns on a dime that are trivial in a regular console game.
Summary
In this article, we approached the topic of virtual reality from a fundamental level. The HMD is the crux of modern VR simulation, and it uses motion tracking components as well as peripherals like the constellation system to create immersive experiences that transport the player into a virtual world.
Now that we've scratched the surface of the hardware, development techniques, and use cases of virtual reality (particularly the Oculus Rift), you're probably beginning to think about what you'd like to create in virtual reality yourself.
Resources for Article:
Further resources on this subject: Cardboard is Virtual Reality for Everyone [article] Virtually Everything for Everyone [article] Customizing the Player Character [article]

article-image-setting-development-environment-android-wear-applications
Packt
16 Nov 2016
8 min read
Save for later

Setting up Development Environment for Android Wear Applications

Packt
16 Nov 2016
8 min read
"Give me six hours to chop down a tree and I will spend the first four sharpening the axe." -Abraham Lincoln In this article by Siddique Hameed, author of the book, Mastering Android Wear Application Development, they have discussed the steps, topics and process involved in setting up the development environment using Android Studio. If you have been doing Android application development using Android Studio, some of the items discussed here might already be familiar to you. However, there are some Android Wear platform specific items that may be of interest to you. (For more resources related to this topic, see here.) Android Studio Android Studio Integrated Development Environment (IDE) is based on IntelliJ IDEA platform. If you had done Java development using IntelliJ IDEA platform, you'll be feeling at home working with Android Studio IDE. Android Studio platform comes bundled with all the necessary tools and libraries needed for Android application development. If this is the first time you are setting up Android Studio on your development system, please make sure that you have satisfied all the requirements before installation. Please refer developer site (http://developer.android.com/sdk/index.html#Requirements) for checking on the items needed for the operating system of your choice. Please note that you need at least JDK version 7 installed on your machine for Android Studio to work. You can verify your JDK's version that by typing following commands shown in the following terminal window: If your system does not meet that requirement, please upgrade it using the method that is specific to your operating system. Installation Android Studio platform includes Android Studio IDE, SDK Tools, Google API Libraries and systems images needed for Android application development. Visit the http://developer.android.com/sdk/index.html URL for downloading Android Studio for your corresponding operating system and following the installation instruction. Git and GitHub Git is a distributed version control system that is used widely for open-source projects. We'll be using Git for sample code and sample projects as we go along the way. Please make sure that you have Git installed on your system by typing the command as shown in the following a terminal window. If you don't have it installed, please download and install it using this link for your corresponding operating system by visting https://git-scm.com/downloads. If you are working on Apple's Macintosh OS like El Capitan or Yosemite or Linux distributions like Ubuntu, Kubuntu or Mint, chances are you already have Git installed on it. GitHub (http://github.com) is a free and popular hosting service for Git based open-source projects. They make checking out and contributing to open-source projects easier than ever. Sign up with GitHub for a free account if you don't have an account already. We don't need to be an expert on Git for doing Android application development. But, we do need to be familiar with basic usages of Git commands for working with the project. Android Studio comes by default with Git and GitHub integration. It helps importing sample code from Google's GitHub repository and helps you learn by checking out various application code samples. Gradle Android application development uses Gradle (http://gradle.org/)as the build system. It is used to build, test, run and package the apps for running and testing Android applications. Gradle is declarative and uses convention over configuration for build settings and configurations. 
It manages all the library dependencies for compiling and building the code artifacts. Fortunately, Android Studio abstracts most of the common Gradle tasks and operations needed for development. However, there may be cases where some extra knowledge of Gradle is very helpful. We won't dig into Gradle now; we'll discuss it as and when needed during the course of our journey.
Android SDK packages
When you install Android Studio, it doesn't include all the Android SDK packages needed for development. The Android SDK separates tools, platforms, and other components and libraries into packages that can be downloaded as needed using the Android SDK Manager. Before we start creating an application, we need to add some required packages to the Android SDK. Launch the SDK Manager from Android Studio via Tools | Android | SDK Manager. Let's quickly go over a few items from what's displayed in the preceding screenshot. As you can see, the Android SDK's location is /opt/android-sdk on my machine; it may well be different on yours, depending on what you selected during the Android Studio installation setup. The important point to note is that the Android SDK is installed in a different location than Android Studio's path (/Applications/Android Studio.app/). This is considered good practice because the Android SDK installation remains unaffected by a new installation or upgrade of Android Studio, and vice versa. On the SDK Platforms tab, select some recent Android SDK versions such as Android 6.0, 5.1.1, and 5.0.1. Depending on the Android versions you plan to support in your wearable apps, you can also select older versions. Checking the Show Package Details option at the bottom right makes the SDK Manager show all the packages that will be installed for a given Android SDK version. To be on the safe side, select all the packages. As you may have noticed, the Android Wear ARM and Intel system images are included in the package selection. Now, on the SDK Tools tab, make sure the following items are selected:
Android SDK Build Tools
Android SDK Tools 24.4.1 (latest version)
Android SDK Platform-Tools
Android Support Repository, rev 25 (latest version)
Android Support Library, rev 23.1.1 (latest version)
Google Play services, rev 29 (latest version)
Google Repository, rev 24 (latest version)
Intel x86 Emulator Accelerator (HAXM installer), rev 6.0.1 (latest version)
Documentation for Android SDK (optional)
Do not change anything on SDK Update Sites; keep the update sites as configured by default. Clicking on OK will take some time to download and install all the selected components and packages.
Android Virtual Device
Android Virtual Devices enable us to test the code using Android emulators. They let us pick and choose the various Android system target versions and form factors needed for testing. Launch the Android Virtual Device Manager from Tools | Android | AVD Manager. From the Android Virtual Device Manager window, click on the Create New Virtual Device button at the bottom left, proceed to the next screen, and select the Wear category. Select Marshmallow API Level 23 on x86 and leave everything else as default, as shown in the following screenshot. Note that the latest Android version is Marshmallow, API level 23, at the time of this writing. It may or may not be the latest version while you are reading this article; feel free to select the latest version available at that time.
Also, if you'd like to support or test earlier Android versions, feel free to add them on that screen. After the virtual device is created successfully, you should see it listed in the Android Virtual Devices list, as shown in the following screenshot. Although a real Android Wear device is not required during development, it can sometimes be more convenient and faster to develop on a physical device.
Let's build a skeleton app
Since we have all the components and configuration needed for building a wearable app, let's build a skeleton app and test what we have so far. From Android Studio's Quick Start menu, click on the Import an Android code sample tab, then select the Skeleton Wearable App from the Wearable category, as shown in the following screenshot. Click Next and select your preferred project location. As you can see, the skeleton project is cloned from Google's sample code repository on GitHub. Clicking the Finish button will pull the source code, and Android Studio will compile and build the code and get it ready for execution. The following screenshot indicates that the Gradle build finished successfully without any errors. Click on the Run configuration to run the app. When the app starts running, Android Studio will prompt us to select the deployment target; we can select the emulator we created earlier and click OK. After the code is compiled and uploaded to the emulator, the main activity of the skeleton app is launched as shown below. Clicking on SHOW NOTIFICATION shows the notification, clicking on START TIMER starts a timer that runs for five seconds, and clicking on Finish Activity closes the activity and takes the emulator to the home screen.
Summary
We discussed the process involved in setting up the Android Studio development environment, covering the installation instructions, requirements, SDK tools, packages, and other components needed for Android Wear development. We also checked out the source code for the Skeleton Wearable App from Google's sample code repository and successfully ran and tested it on an Android device emulator.
Resources for Article:
Further resources on this subject: Building your first Android Wear Application [Article] Getting started with Android Development [Article] The Art of Android Development Using Android Studio [Article]
article-image-face-detection-and-tracking-using-ros-open-cv-and-dynamixel-servos
Packt
14 Nov 2016
13 min read
Save for later

Face Detection and Tracking Using ROS, Open-CV and Dynamixel Servos

Packt
14 Nov 2016
13 min read
In this article by Lentin Joseph, the author of the book ROS Robotic Projects, we look at face detection and tracking, a capability found in most service and social robots. The robot can identify faces and move its head to follow a human face as it moves around. There are numerous implementations of face detection and tracking systems on the web. Most trackers have a pan-and-tilt mechanism with a camera mounted on top of the servos. In this article, we are going to build a simple tracker that has only a pan mechanism, using a USB webcam mounted on an AX-12 Dynamixel servo. (For more resources related to this topic, see here.) This article covers the following topics: Overview of the project, and Hardware and software prerequisites.
Overview of the project
The aim of the project is to build a simple face tracker that tracks a face only along the horizontal axis of the camera. The tracker consists of a webcam, a Dynamixel servo called the AX-12, and a supporting bracket to mount the camera on the servo. The servo will follow the face until it is aligned with the center of the image coming from the webcam. Once the face reaches the center, the servo stops and waits for the next face movement. Face detection is done using OpenCV and its ROS interface, and the servo is controlled using the Dynamixel motor driver in ROS. We are going to create two ROS packages for the complete tracking system: one for detecting the face and finding its centroid, and another for sending commands to the servo to track the face using the centroid values. Let's start by discussing the hardware and software prerequisites of this project.
Hardware and software prerequisites
The following list shows the hardware components that can be used for building this project, along with a rough price for each component and a purchase link. List of hardware components (number, component name, estimated price in USD, purchase link):
1. Webcam, 32, https://amzn.com/B003LVZO8S
2. Dynamixel AX-12A servo with mounting bracket, 76, https://amzn.com/B0051OXJXU
3. USB to Dynamixel adapter, 50, http://www.robotshop.com/en/robotis-usb-to-dynamixel-adapter.html
4. Extra 3-pin cables for AX-12 servos, 12, http://www.trossenrobotics.com/p/100mm-3-Pin-DYNAMIXEL-Compatible-Cable-10-Pack
5. Power adapter, 5, https://amzn.com/B005JRGOCM
6. 6-port AX/MX power hub, 5, http://www.trossenrobotics.com/6-port-ax-mx-power-hub
7. USB extension cable, 1, https://amzn.com/B00YBKA5Z0
Total cost plus shipping and tax: approximately 190 to 200 USD
The URLs and prices can vary; if a link is no longer available, a Google search for the component should do the job. Shipping charges and tax are excluded from the prices. If the total cost is not affordable for you, there are cheaper alternatives for this project too. The heart of the project is the Dynamixel servo. It can be replaced with an RC servo, which costs only around $10, controlled by an Arduino board costing around $20, so you could consider porting the face tracker to work with an Arduino and an RC servo. Now, let's look at the software prerequisites of the project.
The prerequisites include the ROS framework, the OS version, and several ROS packages. List of software (number, name, estimated price in USD, download link):
1. Ubuntu 16.04 LTS, free, http://releases.ubuntu.com/16.04/
2. ROS Kinetic LTS, free, http://wiki.ros.org/kinetic/Installation/Ubuntu
3. ROS usb_cam package, free, http://wiki.ros.org/usb_cam
4. ROS cv_bridge package, free, http://wiki.ros.org/cv_bridge
5. ROS Dynamixel controller, free, https://github.com/arebgun/dynamixel_motor
6. Windows 7 or higher, approximately $120, https://www.microsoft.com/en-in/software-download/windows7
7. RoboPlus (Windows application), free, http://www.robotis.com/download/software/RoboPlusWeb%28v1.1.3.0%29.exe
The list above gives you an idea of the software we are going to use for this project. We need both Windows and Ubuntu, so it helps to have both operating systems on your computer. Let's see how to install this software first.
Installing dependent ROS packages
We have already installed and configured Ubuntu 16.04 and ROS Kinetic. Here are the dependent packages we need to install for this project.
Installing the usb_cam ROS package
Let's first look at the use of the usb_cam package in ROS. The usb_cam package is the ROS driver for Video4Linux (V4L) USB cameras. V4L is a collection of device drivers in Linux for real-time video capture from webcams. The usb_cam ROS package works with V4L devices and publishes the video stream from the device as ROS image messages. We can subscribe to these messages and do our own processing on them. The official ROS page of this package is given in the list above; you can check this page for the different settings and configurations the package offers.
Creating a ROS workspace for dependencies
Before installing the usb_cam package, let's create a ROS workspace for keeping the dependencies of all the projects mentioned in the book. We can create another workspace for the project code itself. Create a ROS workspace called ros_project_dependencies_ws in the home folder and clone the usb_cam package into its src folder:
$ git clone https://github.com/bosch-ros-pkg/usb_cam.git
Build the workspace using catkin_make. After building the package, install the v4l-utils Ubuntu package, a collection of command-line V4L utilities used by the usb_cam package:
$ sudo apt-get install v4l-utils
Configuring the webcam on Ubuntu 16.04
After installing these two, connect the webcam to the PC to check that it is properly detected. Open a terminal and execute the dmesg command to check the kernel logs. If your camera is detected in Linux, you should see logs like this:
$ dmesg
Kernel logs of the webcam device
You can use any webcam that has driver support in Linux. In this project, an iBall Face2Face webcam (http://www.iball.co.in/Product/Face2Face-C8-0--Rev-3-0-/90) is used for tracking. You can also go for the popular webcam mentioned in the hardware prerequisites for better performance and tracking. If the webcam is supported in Ubuntu, you can open the video device using a tool called Cheese, which is simply a webcam viewer. Enter the command cheese in the terminal; if it is not available, you can install it using the following command:
$ sudo apt-get install cheese
If the driver and device are working properly, you will get the video stream from the webcam: Webcam video streaming using cheese
Congratulations, your webcam is working well in Ubuntu. But are we done with everything? No. The next thing is to test the ROS usb_cam package and make sure it works well in ROS.
Interfacing the webcam with ROS
Let's test the webcam using the usb_cam package. The following command launches the usb_cam nodes, displaying images from the webcam and publishing ROS image topics at the same time:
$ roslaunch usb_cam usb_cam-test.launch
If everything works fine, you will get the image stream and logs in the terminal as shown below: Working of usb_cam package in ROS
The image is displayed using the image_view package in ROS, which subscribes to the topic called /usb_cam/image_raw. Here are the topics that the usb_cam node is publishing: Figure 4: The topics published by the usb_cam node
We have just finished interfacing a webcam with ROS.
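With the image topics available, we can already sketch the first of the two packages described earlier: a node that subscribes to /usb_cam/image_raw, detects a face with an OpenCV Haar cascade, and publishes the face centroid. This is only a minimal illustration of the idea; the centroid topic name, the use of geometry_msgs/Point for the centroid, and the cascade file path are assumptions made for this sketch, not the exact package layout built in the book:

#!/usr/bin/env python
# Minimal sketch of a face-centroid node (topic names, message type, and
# cascade path are illustrative assumptions; adjust them to your setup).
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from geometry_msgs.msg import Point

class FaceCentroidNode(object):
    def __init__(self):
        # Path to the Haar cascade file; this location is an assumption.
        cascade_path = rospy.get_param(
            '~cascade_path',
            '/usr/share/opencv/haarcascades/haarcascade_frontalface_default.xml')
        self.detector = cv2.CascadeClassifier(cascade_path)
        self.bridge = CvBridge()
        self.centroid_pub = rospy.Publisher('/face_centroid', Point, queue_size=1)
        rospy.Subscriber('/usb_cam/image_raw', Image, self.image_cb, queue_size=1)

    def image_cb(self, msg):
        # Convert the ROS image message to an OpenCV image and detect faces.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = self.detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces) == 0:
            return
        x, y, w, h = faces[0]   # track the first detected face
        self.centroid_pub.publish(Point(x=x + w / 2.0, y=y + h / 2.0, z=0.0))

if __name__ == '__main__':
    rospy.init_node('face_detector')
    FaceCentroidNode()
    rospy.spin()

The centroid published here is what the second package will consume to drive the servo.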
So what's next? We have to interface the AX-12 Dynamixel servo with ROS. Before proceeding to the interfacing, we first have to configure the servo. Next, we are going to see how to configure an AX-12A Dynamixel servo.
Configuring a Dynamixel servo using RoboPlus
The Dynamixel servo can be configured using a piece of software called RoboPlus, provided by ROBOTIS INC (http://en.robotis.com/index/), the manufacturer of Dynamixel servos. To configure the Dynamixel, you have to switch your operating system to Windows, as RoboPlus runs on Windows. In this project, we are going to configure the servo in Windows 7. Here is the link to download RoboPlus: http://www.robotis.com/download/software/RoboPlusWeb%28v1.1.3.0%29.exe. If the link is not working, a Google search for RoboPlus 1.1.3 should find it. After installing the software, you will get the following window; navigate to the Expert tab to find the application for configuring the Dynamixel: Dynamixel Manager in RoboPlus
Before opening the Dynamixel Wizard and starting the configuration, we have to connect the Dynamixel and power it properly. The following image shows the AX-12A servo that we are using for this project and its pin connections: AX-12A Dynamixel and its connection diagram
Unlike ordinary RC servos, the AX-12 is an intelligent actuator with a microcontroller that can monitor every parameter of the servo and allow them all to be customized. It has a geared drive, and the output of the servo is connected to a servo horn; we can attach links to this servo horn. There are two connection ports behind each servo, and each port has VCC, GND, and Data pins. The ports are daisy chained, so we can connect another servo from one servo. Here is the connection diagram of the Dynamixel with a PC: AX-12A Dynamixel and its connection diagram
The main hardware component interfacing the Dynamixel with the PC is called USB to Dynamixel. This is a USB-to-serial adapter that can convert USB to RS-232, RS-485, and TTL. AX-12 motors communicate using TTL. From the connection diagram, we can see that there are three pins in each port: the data pin is used to send data to and receive data from the AX-12, and the power pins are used to power the servo. The input voltage range of the AX-12A Dynamixel is 9V to 12V. The second port on each Dynamixel can be used for daisy chaining, and up to 254 servos can be connected this way.
Official links for the AX-12A servo and USB to Dynamixel:
AX-12A: http://www.trossenrobotics.com/dynamixel-ax-12-robot-actuator.aspx
USB to Dynamixel: http://www.trossenrobotics.com/robotis-bioloid-usb2dynamixel.aspx
Before working with the Dynamixel, we should know a few more things, so let's have a look at some of the important specifications of the AX-12A servo. The specifications are taken from the servo manual.
Figure 8: AX-12A specifications
Dynamixel servos can communicate with a PC at a maximum speed of 1 Mbps. They can also give feedback on various parameters such as position, temperature, and current load. Unlike RC servos, the AX-12A can rotate up to 300 degrees, and communication is mainly through digital packets.
Powering and connecting the Dynamixel to a PC
Now we are going to connect the Dynamixel to a PC. A standard way of connecting it is shown below: Connecting Dynamixel to PC
The three-pin cable is first connected to either port of the AX-12, and the other end is connected to the 6-port power hub. From the 6-port power hub, connect another cable to the USB to Dynamixel adapter, and set the adapter's switch to TTL mode. Power can be supplied either through a 12V adapter or through a battery. The 12V adapter has a 2.1 x 5.5 mm female barrel jack, so you should check the specification of the male plug when purchasing one.
Setting up the USB to Dynamixel driver on the PC
As we have already discussed, the USB to Dynamixel adapter is a USB-to-serial converter with an FTDI chip (http://www.ftdichip.com/) on it. We have to install a proper FTDI driver on the PC to detect the device. The driver is needed for Windows but not for Linux, because the FTDI drivers are built into the Linux kernel. If you install the RoboPlus software, the driver may already be installed along with it; if not, you can install it manually from the RoboPlus installation folder. Plug the USB to Dynamixel adapter into the Windows PC and check the Device Manager (right-click on My Computer | Properties | Device Manager). If the device is properly detected, you will see something like the following figure: Figure 10: COM port of USB to Dynamixel
If you get a COM port for the USB to Dynamixel adapter, you can start the Dynamixel Manager from RoboPlus. Connect to the serial port number from the list and click the Search button to scan for Dynamixels, as shown in the following figure. Select the COM port from the list; connecting to the port is marked as 1. After connecting to the COM port, select the default baud rate of 1 Mbps and click the Start searching button: COM port of USB to Dynamixel
If a servo appears in the left-side panel, your PC has detected a Dynamixel servo. If the servo is not detected, you can follow these steps to debug:
Make sure that the supply and the connections are correct, using a multimeter.
Make sure that the servo LED on the back blinks when the power is on. If it does not, there could be a problem with the servo or the power supply.
Upgrade the firmware of the servo using the Dynamixel Manager, from the option marked as 6. The wizard is shown in the following figure. During the wizard, you may need to power the supply off and on again so that the servo is detected. After the servo is detected, select the servo model and install the new firmware. This can help the Dynamixel Manager detect the servo if the existing firmware is outdated. Dynamixel recovery wizard
If the servos are listed in the Dynamixel Manager, click on a servo to see its complete configuration. We have to modify some of these values for our face tracker project. Here are the parameters:
ID: Set the ID as 1
Baud rate: 1
Moving Speed: 100
Goal Position: 512
The modified servo settings are shown in the following figure: Modified Dynamixel firmware settings
After making these settings, you can check whether the servo is working by changing its Goal Position. Yes, the Dynamixel configuration is now complete. Congratulations!
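Now that the servo is configured, it helps to look ahead at the second package: a small node that consumes the face centroid and nudges the pan servo toward the face. The sketch below assumes the servo will be exposed to ROS through the dynamixel_motor stack listed in the software prerequisites, with a joint position controller named pan_controller that accepts std_msgs/Float64 commands in radians; the centroid topic, image width, step size, and limits are likewise illustrative assumptions rather than the book's exact code:

#!/usr/bin/env python
# Minimal sketch of the tracking controller (assumes a dynamixel_controllers
# joint position controller named 'pan_controller' is running; names, gains,
# and limits are illustrative assumptions).
import rospy
from std_msgs.msg import Float64
from geometry_msgs.msg import Point

IMAGE_WIDTH = 640        # width of the usb_cam image in pixels (assumption)
CENTER_TOLERANCE = 30    # dead band in pixels around the image centre
STEP = 0.02              # pan step in radians per update
PAN_LIMIT = 2.0          # keep the command inside a safe range (radians)

class FaceTracker(object):
    def __init__(self):
        self.position = 0.0
        self.cmd_pub = rospy.Publisher('/pan_controller/command', Float64, queue_size=1)
        rospy.Subscriber('/face_centroid', Point, self.centroid_cb, queue_size=1)

    def centroid_cb(self, msg):
        error = msg.x - IMAGE_WIDTH / 2.0
        if abs(error) < CENTER_TOLERANCE:
            return                              # face is already centred; hold position
        # Move the servo a small step toward the face. The sign of the step
        # depends on how the camera is mounted, so expect to flip it in testing.
        self.position -= STEP if error > 0 else -STEP
        self.position = max(-PAN_LIMIT, min(PAN_LIMIT, self.position))
        self.cmd_pub.publish(Float64(self.position))

if __name__ == '__main__':
    rospy.init_node('face_tracker_controller')
    FaceTracker()
    rospy.spin()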
What's next? We need to interface the Dynamixel with ROS.
Summary
This article was about building a face tracker using a webcam and a Dynamixel motor, with ROS and OpenCV as the software. We first saw how to configure the webcam and the Dynamixel motor, and after the configuration we set out to build two packages for tracking: one package does the face detection, and the second is a controller that sends position commands to the Dynamixel to track the face. We discussed the use of all the files inside the packages and did a final run to show the complete working of the system.
Resources for Article:
Further resources on this subject: Using ROS with UAVs [article] Hardware Overview [article] Arduino Development [article]

article-image-using-ros-uavs
Packt
10 Nov 2016
11 min read
Save for later

Using ROS with UAVs

Packt
10 Nov 2016
11 min read
In this article by Carol Fairchild and Dr. Thomas L. Harman, co-authors of the book ROS Robotics by Example, you will discover the field of ROS Unmanned Air Vehicles (UAVs), quadrotors in particular. The reader is invited to learn about the simulated Hector Quadrotor and take it for a flight. The ROS wiki currently contains a growing list of ROS UAVs. These UAVs are as follows: (For more resources related to this topic, see here.)
AscTec Pelican and Hummingbird quadrotors
Berkeley's STARMAC
Bitcraze Crazyflie
DJI Matrice 100 Onboard SDK ROS support
Erle-copter
ETH sFly
Lily Camera Quadrotor
Parrot AR.Drone
Parrot Bebop
Penn's AscTec Hummingbird Quadrotors
PIXHAWK MAVs
Skybotix CoaX helicopter
Refer to http://wiki.ros.org/Robots#UAVs for future additions to this list and to http://www.ros.org/news/robots/uavs/ for the latest ROS UAV news. The preceding list contains primarily quadrotors, except for the Skybotix helicopter. A number of universities have adopted the AscTec Hummingbird as their ROS UAV of choice. For this book, we present a simulator called Hector Quadrotor and two real quadrotors, the Crazyflie and the Bebop, that use ROS.
Introducing Hector Quadrotor
The hardest part of learning about flying robots is the constant crashing. From first learning flight control to testing new hardware or flight algorithms, the resulting failures can have a huge cost in terms of broken hardware components. To address this difficulty, a simulated air vehicle designed and developed for ROS is ideal. A simulated quadrotor UAV for the ROS Gazebo environment has been developed by Team Hector Darmstadt of Technische Universität Darmstadt. This quadrotor, called Hector Quadrotor, is enclosed in the hector_quadrotor metapackage. This metapackage contains the URDF description for the quadrotor UAV, its flight controllers, and launch files for running the quadrotor simulation in Gazebo. Advanced uses of the Hector Quadrotor simulation allow the user to record sensor data such as lidar and depth camera output. The quadrotor simulation can also be used to test flight algorithms and control approaches in simulation. The hector_quadrotor metapackage contains the following key packages (a short sketch of commanding the quadrotor through its velocity interface follows this list):
hector_quadrotor_description: This package provides a URDF model of the Hector Quadrotor UAV configured with various sensors. Several URDF quadrotor models exist in this package, each configured with specific sensors and controllers.
hector_quadrotor_gazebo: This package contains launch files for executing Gazebo and spawning one or more Hector Quadrotors.
hector_quadrotor_gazebo_plugins: This package contains three UAV-specific plugins: the simple controller gazebo_quadrotor_simple_controller subscribes to a geometry_msgs/Twist topic and calculates the required forces and torques; the gazebo_ros_baro sensor plugin simulates a barometric altimeter; and the gazebo_quadrotor_propulsion plugin simulates the propulsion, aerodynamics, and drag from messages containing motor voltages and wind vector input.
hector_gazebo_plugins: This package contains generic sensor plugins not specific to UAVs, such as IMU, magnetic field, GPS, and sonar data.
hector_quadrotor_teleop: This package provides a node and launch files for controlling a quadrotor using a joystick or gamepad.
hector_quadrotor_demo: This package provides sample launch files that run the Gazebo quadrotor simulation and hector_slam for indoor and outdoor scenarios.
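As mentioned above, the quadrotor is driven by velocity commands carried in geometry_msgs/Twist messages (the teleop node publishes them on the cmd_vel topic). The following is a minimal sketch of commanding the simulated quadrotor from a script instead of a joystick once the simulation is running; the topic name, rate, timing, and velocities are illustrative assumptions and may need adjusting for your setup:

#!/usr/bin/env python
# Minimal sketch: publish Twist messages on cmd_vel to make the simulated
# quadrotor climb for a few seconds and then hold (values are illustrative).
import rospy
from geometry_msgs.msg import Twist

def hover_test():
    rospy.init_node('hector_hover_test')
    cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
    rate = rospy.Rate(10)            # 10 Hz command stream
    climb = Twist()
    climb.linear.z = 0.5             # ascend at 0.5 m/s
    hover = Twist()                  # all zeros: hold position
    start = rospy.Time.now()
    while not rospy.is_shutdown():
        elapsed = (rospy.Time.now() - start).to_sec()
        cmd_pub.publish(climb if elapsed < 3.0 else hover)
        rate.sleep()

if __name__ == '__main__':
    try:
        hover_test()
    except rospy.ROSInterruptException:
        pass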
The entire list of packages for the hector_quadrotor metapackage appears in the next section. Loading Hector Quadrotor The repository for the hector_quadrotor software is at the following website: https://github.com/tu-darmstadt-ros-pkg/hector_quadrotor The following commands will install the binary packages of hector_quadrotor into the ROS package repository on your computer. If you wish to install the source files, instructions can be found at the following website: http://wiki.ros.org/hector_quadrotor/Tutorials/Quadrotor%20outdoor%20flight%20demo (It is assumed that ros-indigo-desktop-full has been installed on your computer.) For the binary packages, type the following commands to install the ROS Indigo version of Hector Quadrotor: $ sudo apt-get update $ sudo apt-get install ros-indigo-hector-quadrotor-demo A large number of ROS packages are downloaded and installed in the hector_quadrotor_demo download with the main hector_quadrotor packages providing functionality that should now be somewhat familiar. This installation downloads the following packages: hector_gazebo_worlds hector_geotiff hector_map_tools hector_mapping hector_nav_msgs hector_pose_estimation hector_pose_estimation_core hector_quadrotor_controller hector_quadrotor_controller_gazebo hector_quadrotor_demo hector_quadrotor_description hector_quadrotor_gazebo hector_quadrotor_gazebo_plugins hector_quadrotor_model hector_quadrotor_pose_estimation hector_quadrotor_teleop hector_sensors_description hector_sensors_gazebo hector_trajectory_serve hector_uav_msgs message_to_tf A number of these packages will be discussed as the Hector Quadrotor simulations are described in the next section. Launching Hector Quadrotor in Gazebo Two demonstration tutorials are available to provide the simulated applications of the Hector Quadrotor for both outdoor and indoor environments. These simulations are described in the next sections. Before you begin the Hector Quadrotor simulations, check your ROS master using the following command in your terminal window: $ echo $ROS_MASTER_URI If this variable is set to localhost or the IP address of your computer, no action is needed. If not, type the following command: $ export ROS_MASTER_URI=http://localhost:11311 This command can also be added to your .bashrc file. Be sure to delete or comment out (with a #) any other commands setting the ROS_MASTER_URI variable. Flying Hector outdoors The quadrotor outdoor flight demo software is included as part of the hector_quadrotor metapackage. Start the simulation by typing the following command: $ roslaunch hector_quadrotor_demo outdoor_flight_gazebo.launch This launch file loads a rolling landscape environment into the Gazebo simulation and spawns a model of the Hector Quadrotor configured with a Hokuyo UTM-30LX sensor. An rviz node is also started and configured specifically for the quadrotor outdoor flight. A large number of flight position and control parameters are initialized and loaded into the Parameter Server. Note that the quadrotor propulsion model parameters for quadrotor_propulsion plugin and quadrotor drag model parameters for quadrotor_aerodynamics plugin are displayed. Then look for the following message: Physics dynamic reconfigure ready. The following screenshots show both the Gazebo and rviz display windows when the Hector outdoor flight simulation is launched. The view from the onboard camera can be seen in the lower left corner of the rviz window. 
If you do not see the camera image on your rviz screen, make sure that Camera has been added to your Displays panel on the left and that the checkbox has been checked. If you would like to pilot the quadrotor using the camera, it is best to uncheck the checkboxes for tf and robot_model because the visualizations sometimes block the view: Hector Quadrotor outdoor gazebo view Hector Quadrotor outdoor rviz view The quadrotor appears on the ground in the simulation ready for takeoff. Its forward direction is marked by a red mark on its leading motor mount. To be able to fly the quadrotor, you can launch the joystick controller software for the Xbox 360 controller. In a second terminal window, launch the joystick controller software with a launch file from the hector_quadrotor_teleop package: $ roslaunch hector_quadrotor_teleop xbox_controller.launch This launch file launches joy_node to process all joystick input from the left stick and right stick on the Xbox 360 controller as shown in the following figure. The message published by joy_node contains the current state of the joystick axes and buttons. The quadrotor_teleop node subscribes to these messages and publishes messages on the cmd_vel topic. These messages provide the velocity and direction for the quadrotor flight. Several joystick controllers are currently supported by the ROS joy package including PS3 and Logitech devices. For this launch, the joystick device is accessed as /dev/input/js0 and is initialized with a deadzone of 0.050000. Parameters to set the joystick axes are as follows: * /quadrotor_teleop/x_axis: 5 * /quadrotor_teleop/y_axis: 4 * /quadrotor_teleop/yaw_axis: 1 * /quadrotor_teleop/z_axis: 2 These parameters map to the Left Stick and the Right Stick controls on the Xbox 360 controller shown in the following figure. The direction of these sticks control are as follows: Left Stick: Forward (up) is to ascend Backward (down) is to descend Right is to rotate clockwise Left is to rotate counterclockwise Right Stick: Forward (up) is to fly forward Backward (down) is to fly backward Right is to fly right Left is to fly left Xbox 360 joystick controls for Hector Now use the joystick to fly around the simulated outdoor environment! The pilot's view can be seen in the Camera image view on the bottom left of the rviz screen. As you fly around in Gazebo, keep an eye on the Gazebo launch terminal window. The screen will display messages as follows depending on your flying ability: [ INFO] [1447358765.938240016, 617.860000000]: Engaging motors! [ WARN] [1447358778.282568898, 629.410000000]: Shutting down motors due to flip over! When Hector flips over, you will need to relaunch the simulation. Within ROS, a clearer understanding of the interactions between the active nodes and topics can be obtained by using the rqt_graph tool. The following diagram depicts all currently active nodes (except debug nodes) enclosed in oval shapes. These nodes publish to the topics enclosed in rectangles that are pointed to by arrows. You can use the rqt_graph command in a new terminal window to view the same display: ROS nodes and topics for Hector Quadrotor outdoor flight demo The rostopic list command will provide a long list of topics currently being published. Other command line tools such as rosnode, rosmsg, rosparam, and rosservice will help gather specific information about Hector Quadrotor's operation. To understand the orientation of the quadrotor on the screen, use the Gazebo GUI to show the vehicle's tf reference frame. 
Select quadrotor in the World panel on the left, then select the translation mode on the top environment toolbar (looks like crossed double-headed arrows). This selection will bring up the red-green-blue axis for the x-y-z axes of the tf frame, respectively. In the following figure, the x axis is pointing to the left, the y axis is pointing to the right (toward the reader), and the z axis is pointing up. Hector Quadrotor tf reference frame An YouTube video of hector_quadrotor outdoor scenario demo shows the hector_quadrotor in Gazebo operated with a gamepad controller: https://www.youtube.com/watch?v=9CGIcc0jeuI Flying Hector indoors The quadrotor indoor SLAM demo software is included as part of the hector_quadrotor metapackage. To launch the simulation, type the following command: $ roslaunch hector_quadrotor_demo indoor_slam_gazebo.launch The following screenshots show both the rviz and Gazebo display windows when the Hector indoor simulation is launched: Hector Quadrotor indoor rviz and gazebo views If you do not see this image for Gazebo, roll your mouse wheel to zoom out of the image. Then you will need to rotate the scene to a top-down view, in order to find the quadrotor press Shift + right mouse button. The environment was the offices at Willow Garage and Hector starts out on the floor of one of the interior rooms. Just like in the outdoor demo, the xbox_controller.launch file from the hector_quadrotor_teleop package should be executed: $ roslaunch hector_quadrotor_teleop xbox_controller.launch If the quadrotor becomes embedded in the wall, waiting a few seconds should release it and it should (hopefully) end up in an upright position ready to fly again. If you lose sight of it, zoom out from the Gazebo screen and look from a top-down view. Remember that the Gazebo physics engine is applying minor environment conditions as well. This can create some drifting out of its position. The rqt graph of the active nodes and topics during the Hector indoor SLAM demo is shown in the following figure. As Hector is flown around the office environment, the hector_mapping node will be performing SLAM and be creating a map of the environment. ROS nodes and topics for Hector Quadrotor indoor SLAM demo The following screenshot shows Hector Quadrotor mapping an interior room of Willow Garage: Hector mapping indoors using SLAM The 3D robot trajectory is tracked by the hector_trajectory_server node and can be shown in rviz. The map along with the trajectory information can be saved to a GeoTiff file with the following command: $ rostopic pub syscommand std_msgs/String "savegeotiff" The savegeotiff map can be found in the hector_geotiff/map directory. An YouTube video of hector_quadrotor stack indoor SLAM demo shows hector_quadrotor in Gazebo operated with a gamepad controller: https://www.youtube.com/watch?v=IJbJbcZVY28 Summary In this article, we learnt about Hector Quadrotors, loading Hector Quadrotors, launching Hector Quadrotor in Gazebo, and also about flying Hector outdoors and indoors. Resources for Article: Further resources on this subject: Working On Your Bot [article] Building robots that can walk [article] Detecting and Protecting against Your Enemies [article]