How-To Tutorials - Single Board Computers

Prototyping Arduino Projects using Python

Packt
04 Mar 2015
18 min read
In this article by Pratik Desai, the author of Python Programming for Arduino, we will cover the following topics:

- Working with pyFirmata methods
- Servomotor – moving the motor to a certain angle
- The Button() widget – interfacing a GUI with Arduino and LEDs

Working with pyFirmata methods

The pyFirmata package provides useful methods to bridge the gap between Python and Arduino's Firmata protocol. Although these methods are described with specific examples, you can use them in many different ways. This section also provides a detailed description of a few additional methods.

Setting up the Arduino board

To set up your Arduino board in a Python program using pyFirmata, you need to follow the steps below. The entire setup code is distributed into small snippets in each step; while writing your own program, use the snippets that are appropriate for your application. You can always refer to the example Python files containing the complete code. Before we go ahead, make sure that your Arduino board is equipped with the latest version of the StandardFirmata sketch and is connected to your computer.

Depending on the Arduino board being used, start by importing the appropriate pyFirmata class into your Python code. Currently, the built-in pyFirmata classes only support the Arduino Uno and Arduino Mega boards:

from pyfirmata import Arduino

In the case of Arduino Mega, use the following line of code:

from pyfirmata import ArduinoMega

Before we execute any methods associated with handling pins, we need to properly set up the Arduino board. To perform this task, first identify the USB port to which the Arduino board is connected and assign this location to a variable as a string object. For Mac OS X, the port string should look approximately like this:

port = '/dev/cu.usbmodemfa1331'

For Windows, use the following string structure:

port = 'COM3'

In the case of the Linux operating system, use the following line of code:

port = '/dev/ttyACM0'

The port's location might differ according to your computer's configuration; you can identify the correct location of your Arduino USB port using the Arduino IDE. Once you have imported the Arduino class and assigned the port to a variable, it's time to engage Arduino with pyFirmata and associate this connection with another variable:

board = Arduino(port)

Similarly, for Arduino Mega, use this:

board = ArduinoMega(port)

Synchronization between the Arduino board and pyFirmata takes a little time. Adding sleep time between the preceding assignment and the next set of instructions helps avoid issues related to serial port buffering. The easiest way to add sleep time is the built-in Python method sleep(time):

from time import sleep
sleep(1)

The sleep() method takes seconds as its parameter, and a floating-point number can be used to provide a specific sleep time; for example, for 200 milliseconds, it will be sleep(0.2). At this point, you have successfully synchronized your Arduino Uno or Arduino Mega board to the computer using pyFirmata.
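Putting these steps together, here is a minimal end-to-end setup sketch. The port string is an assumption for a Linux system; replace it with the port your Arduino IDE reports:

from time import sleep
from pyfirmata import Arduino

# Assumed port; use 'COM3' on Windows or '/dev/cu.usbmodem...' on Mac OS X
port = '/dev/ttyACM0'
board = Arduino(port)   # opens the serial connection and starts Firmata
sleep(1)                # gives the board time to synchronize

# Prints a (major, minor) tuple once the link is up; may print None
# if the board is still settling
print(board.get_firmata_version())
board.exit()            # closes the serial connection cleanly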
What if you want to use a different variant (other than the Arduino Uno or Arduino Mega)? Any board layout in pyFirmata is defined as a dictionary object. The following is a sample of the dictionary object for the Arduino board:

arduino = {
    'digital' : tuple(x for x in range(14)),
    'analog' : tuple(x for x in range(6)),
    'pwm' : (3, 5, 6, 9, 10, 11),
    'use_ports' : True,
    'disabled' : (0, 1) # Rx, Tx, Crystal
    }

For your variant of the Arduino board, you first have to create a custom dictionary object. To create this object, you need to know the hardware layout of your board. For example, an Arduino Nano board has a layout similar to a regular Arduino board, but it has eight analog ports instead of six. Therefore, the preceding dictionary object can be customized as follows:

nano = {
    'digital' : tuple(x for x in range(14)),
    'analog' : tuple(x for x in range(8)),
    'pwm' : (3, 5, 6, 9, 10, 11),
    'use_ports' : True,
    'disabled' : (0, 1) # Rx, Tx, Crystal
    }

As you have already synchronized the Arduino board, modify the layout of the board using the setup_layout(layout) method:

board.setup_layout(nano)

This command will change the default layout of the synchronized Arduino board to the Arduino Nano layout, or to any other variant for which you have customized the dictionary object.

Configuring Arduino pins

Once your Arduino board is synchronized, it is time to configure the digital and analog pins that are going to be used as part of your program. The Arduino board has digital I/O pins and analog input pins that can be utilized to perform various operations, and as we already know, some of the digital pins are also capable of PWM.

The direct method

Before we write or read any data to or from these pins, we first have to assign modes to them. In an Arduino sketch, we use the pinMode function, that is, pinMode(11, INPUT), for this operation. Similarly, in pyFirmata, this assignment is performed using the mode method on the board object, as shown in the following code snippet:

from pyfirmata import Arduino
from pyfirmata import INPUT, OUTPUT, PWM

# Setting up the Arduino board
port = '/dev/cu.usbmodemfa1331'
board = Arduino(port)

# Assigning modes to digital pins
board.digital[13].mode = OUTPUT
board.analog[0].mode = INPUT

The pyFirmata library includes definitions for the INPUT and OUTPUT modes, which need to be imported before you use them. The preceding example shows the assignment of digital pin 13 as an output and analog pin 0 as an input; the mode assignment is performed on the variable holding the configured Arduino board, using the digital[] and analog[] array indexes. The pyFirmata library also supports additional modes such as PWM and SERVO. The PWM mode is used to get analog results from digital pins, while the SERVO mode helps a digital pin set the angle of a servo shaft between 0 and 180 degrees. If you are using any of these modes, import their definitions from the pyFirmata library. Once imported, the modes for the appropriate pins can be assigned using the following lines of code:

board.digital[3].mode = PWM
board.digital[10].mode = SERVO
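A short sketch along these lines ties the direct method together. The port and pin choices here are assumptions, and the PWM and SERVO assignments only make sense on PWM-capable pins (3, 5, 6, 9, 10, and 11 on an Uno):

from time import sleep
from pyfirmata import Arduino, INPUT, OUTPUT, PWM, SERVO

board = Arduino('/dev/ttyACM0')   # assumed port; adjust for your system
sleep(1)

board.digital[13].mode = OUTPUT   # on-board LED
board.analog[0].mode = INPUT      # a sensor such as a potentiometer
board.digital[3].mode = PWM       # a dimmable LED
board.digital[10].mode = SERVO    # a servomotor's signal wire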
Assigning pin modes

The direct method of configuring pins is mostly used for single execution calls. In a project containing a lot of code and complex logic, it is convenient to assign a pin, with its role, to a variable object. With an assignment like this, you can utilize the assigned variable throughout the program for various actions, instead of calling the direct method every time you need that pin. In pyFirmata, this assignment is performed using the get_pin(pin_def) method:

from pyfirmata import Arduino
port = '/dev/cu.usbmodemfa1311'
board = Arduino(port)

# pin mode assignment
ledPin = board.get_pin('d:13:o')

The get_pin() method lets you assign pin modes using the pin_def string parameter, such as 'd:13:o'. The three components of pin_def are the pin type, pin number, and pin mode, separated by a colon (:). The pin types, analog and digital, are denoted by a and d respectively. The get_pin() method supports three modes: i for input, o for output, and p for PWM. In the previous code sample, 'd:13:o' specifies digital pin 13 as an output. In another example, if you want to set up analog pin 1 as an input, the parameter string will be 'a:1:i'.

Working with pins

Now that you have configured your Arduino pins, it's time to start performing actions using them. Two different types of methods are supported while working with pins: reporting methods and I/O operation methods.

Reporting data

When pins are configured as analog inputs, they start sending input values to the serial port. If the program does not consume this incoming data, the data starts getting buffered at the serial port and quickly overflows. The pyFirmata library provides the reporting and iterator methods to deal with this phenomenon. The enable_reporting() method sets an input pin to start reporting, and needs to be called before performing a reading operation on the pin:

board.analog[3].enable_reporting()

Once the reading operation is complete, the pin can be set to disable reporting:

board.analog[3].disable_reporting()

In the preceding example, we assumed that you have already set up the Arduino board and configured the mode of analog pin 3 as INPUT. The pyFirmata library also provides the Iterator() class to read and handle data over the serial port. While working with analog pins, we recommend that you start an iterator thread in the main loop to keep the pin values up to date; if the iterator is not used, the buffered data might overflow your serial port. The Iterator class is defined in the util module of the pyFirmata package and needs to be imported before it is used in the code:

from time import sleep
from pyfirmata import Arduino, util

# Setting up the Arduino board
port = 'COM3'
board = Arduino(port)
sleep(5)

# Start Iterator to avoid serial overflow
it = util.Iterator(board)
it.start()
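Combining reporting with the iterator, a small sketch like this one polls analog pin 0 once a second, using the read() method described in the next section. The port, pin, and 5 V scaling are assumptions for a typical Uno setup:

from time import sleep
from pyfirmata import Arduino, util

board = Arduino('/dev/ttyACM0')       # assumed port
sleep(1)

it = util.Iterator(board)             # drains the serial buffer in the background
it.start()
board.analog[0].enable_reporting()

try:
    for _ in range(10):
        value = board.analog[0].read()    # float between 0.0 and 1.0, or None
        if value is not None:             # None until the first report arrives
            print('A0 reads %.2f V' % (value * 5.0))
        sleep(1)
finally:
    board.analog[0].disable_reporting()
    board.exit()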
Manual operations

As we have configured the Arduino pins to suitable modes and set their reporting characteristics, we can start using them. pyFirmata provides the write() and read() methods for the configured pins.

The write() method

The write() method is used to write a value to a pin. If the pin's mode is set to OUTPUT, the value parameter is a Boolean, that is, 0 or 1:

board.digital[pin].mode = OUTPUT
board.digital[pin].write(1)

If you have used the alternative method of assigning the pin's mode, you can use write() as follows:

ledPin = board.get_pin('d:13:o')
ledPin.write(1)

In the case of a PWM signal, the Arduino accepts a value between 0 and 255, representing the duty cycle between 0 and 100 percent. The pyFirmata library provides a simplified way of dealing with PWM values: instead of a value between 0 and 255, you just provide a float value between 0 and 1.0. For example, if you want a 50 percent duty cycle (a 2.5 V analog value), you can specify 0.5 with the write() method. The pyFirmata library will take care of the translation and send the appropriate value, that is, 127, to the Arduino board via the Firmata protocol:

board.digital[pin].mode = PWM
board.digital[pin].write(0.5)

Similarly, for the indirect method of assignment, you can use code similar to the following:

pwmPin = board.get_pin('d:13:p')
pwmPin.write(0.5)

If you are using the SERVO mode, you need to provide a value in degrees between 0 and 180. Unfortunately, the SERVO mode is only applicable to pins assigned with the direct method; support for indirect assignments may come in the future:

board.digital[pin].mode = SERVO
board.digital[pin].write(90)

The read() method

The read() method provides the output value at the specified Arduino pin. When the Iterator() class is being used, the value received with this method is the latest updated value at the serial port. When you read a digital pin, you can get only one of two inputs, HIGH or LOW, which translate to 1 or 0 in Python:

board.digital[pin].read()

The analog pins of the Arduino linearly translate input voltages between 0 and +5 V to integer values between 0 and 1023. In pyFirmata, however, values between 0 and +5 V are linearly translated into float values between 0.0 and 1.0. For example, if the voltage at an analog pin is 1 V, an Arduino program will measure a value somewhere around 204, but you will receive the float value 0.2 while using pyFirmata's read() method in Python.
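To see the float-based PWM scale in action, a brief sketch (port assumed; pin 11 is PWM-capable on an Uno) fades an LED up and then back down:

from time import sleep
from pyfirmata import Arduino

board = Arduino('/dev/ttyACM0')   # assumed port
sleep(1)

led = board.get_pin('d:11:p')     # digital pin 11 in PWM mode
for duty in list(range(0, 11)) + list(range(10, -1, -1)):
    led.write(duty / 10.0)        # steps from 0.0 up to 1.0 and back
    sleep(0.1)
board.exit()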
Servomotor – moving the motor to a certain angle

Servomotors are widely used electronic components in applications such as pan-tilt camera control, robotic arms, and mobile robot movements, where precise movement of the motor shaft is required. This precise control of the shaft is possible because of the position-sensing decoder, which is an integral part of the servomotor assembly. A standard servomotor allows the angle of the shaft to be set between 0 and 180 degrees. pyFirmata provides the SERVO mode, which can be implemented on any digital pin. This prototyping exercise provides a template and guidelines for interfacing a servomotor with Python.

Connections

Typically, a servomotor has wires color-coded red, black, and yellow, to be connected to the power, ground, and signal, respectively, of the Arduino board. Connect the power and the ground of the servomotor to the 5V and ground pins of the Arduino board, and connect the yellow signal wire to digital pin 13. If you want to use any other digital pin, make sure that you change the pin number in the Python program in the next section. Once you have made the appropriate connections, let's move on to the Python program.

The Python code

The Python file containing this code is named servoCustomAngle.py and is located in the code bundle of this book, which can be downloaded from https://www.packtpub.com/books/content/support/19610. Open this file in your Python editor. Like the other examples, the starting section of the program contains the code to import the libraries and set up the Arduino board:

from pyfirmata import Arduino, SERVO
from time import sleep

# Setting up the Arduino board
port = 'COM5'
board = Arduino(port)
# Need to give some time to pyFirmata and Arduino to synchronize
sleep(5)

Now that you have Python ready to communicate with the Arduino board, let's configure the digital pin that is going to be used to connect the servomotor. We will complete this task by setting the mode of pin 13 to SERVO:

# Set mode of the pin 13 as SERVO
pin = 13
board.digital[pin].mode = SERVO

The setServoAngle(pin, angle) custom function takes the pin on which the servomotor is connected and the custom angle as input parameters. This function can be used as part of various larger projects that involve servos:

# Custom function to set Servo motor angle
def setServoAngle(pin, angle):
    board.digital[pin].write(angle)
    sleep(0.015)

In the main logic of this template, we want to incrementally move the motor shaft in one direction until it reaches the maximum achievable angle (180 degrees), and then move it back to the original position at the same incremental speed. In the while loop, we ask the user for input to continue this routine, captured using the raw_input() function. The user can enter the character y to continue the routine, or any other input to abort the loop:

# Testing the function by rotating the motor in both directions
while True:
    for i in range(0, 180):
        setServoAngle(pin, i)
    for i in range(180, -1, -1):
        setServoAngle(pin, i)

    # Continue or break the testing process
    i = raw_input("Enter 'y' to continue or any other key to quit: ")
    if i == 'y':
        pass
    else:
        board.exit()
        break
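Since the file is called servoCustomAngle.py, a natural variation, sketched here under the same board setup, drives the shaft to whatever angle the user types instead of sweeping:

# Assumes board, pin, and setServoAngle() are defined as above.
while True:
    text = raw_input("Angle (0-180, or 'q' to quit): ")
    if text == 'q':
        board.exit()
        break
    angle = max(0, min(180, int(text)))   # clamp the input to a safe range
    setServoAngle(pin, angle)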
The Button() widget – interfacing GUI with Arduino and LEDs

Now that you have had your first hands-on experience in creating a Python graphical interface, let's integrate Arduino with it. Python makes it easy to interface heterogeneous packages with each other, and that is what you are going to do. In the next coding exercise, we will use Tkinter and pyFirmata to make a GUI work with Arduino. In this exercise, we are going to use the Button() widget to control LEDs interfaced with the Arduino board.

Before we jump into the exercise, let's build the circuit that we will need for all the upcoming programs: two different colored LEDs, each with a resistor, connected to digital pins 10 and 11 on your Arduino Uno board. While working with the code provided in this section, you will have to replace the Arduino port used to define the board variable according to your operating system, and make sure that you provide the correct pin numbers in the code if you are planning to use any pins other than 10 and 11. For some exercises, you will have to use PWM pins, so make sure that you pick PWM-capable pins.

You can use the entire code snippet as a Python file and run it, although this might not be possible in the upcoming exercises due to the length and complexity of the programs. For the Button() widget exercise, open the exampleButton.py file. The code contains three main components:

- pyFirmata and Arduino configurations
- Tkinter widget definitions for a button
- The LED blink function that gets executed when you press the button

As you can see in the following code snippet, we first import the libraries and initialize the Arduino board using the pyFirmata methods. For this exercise, we are only going to work with one LED, and we have initialized only the ledPin variable for it:

import Tkinter
import pyfirmata
from time import sleep

port = '/dev/cu.usbmodemfa1331'
board = pyfirmata.Arduino(port)
sleep(5)
ledPin = board.get_pin('d:11:o')

As we are using the pyFirmata library for all the exercises in this article, make sure that you have uploaded the latest version of the standard Firmata sketch to your Arduino board.

In the second part of the code, we initialize the root Tkinter widget as top and provide a title string. We also fix the size of this window using the minsize() method. To get more familiar with the root widget, you can play around with the minimum and maximum sizes of the window:

top = Tkinter.Tk()
top.title("Blink LED using button")
top.minsize(300,30)

The Button() widget is a standard Tkinter widget, mostly used to obtain manual, external input stimulus from the user. Like the Label() widget, the Button() widget can be used to display text or images. Unlike the Label() widget, however, it can be associated with actions or methods when it is pressed: when the button is pressed, Tkinter executes the method or command specified by the command option:

startButton = Tkinter.Button(top, text="Start", command=onStartButtonPress)
startButton.pack()

In this initialization, the function associated with the button is onStartButtonPress, and the "Start" string is displayed as the title of the button. Similarly, the top object specifies the parent or root widget. Once the button is instantiated, you need to use the pack() method to make it available in the main window.

The onStartButtonPress() function includes the scripts required to blink the LED and change the state of the button. A button can be in the NORMAL, ACTIVE, or DISABLED state; if not specified, the default state is NORMAL. The ACTIVE and DISABLED states are useful in applications where repeated pressing of the button needs to be avoided. After turning the LED on using the write(1) method, we add a time delay of 5 seconds using the sleep(5) function before turning it off with the write(0) method:

def onStartButtonPress():
    startButton.config(state=Tkinter.DISABLED)
    ledPin.write(1)
    # LED stays on for the fixed amount of time specified below
    sleep(5)
    ledPin.write(0)
    startButton.config(state=Tkinter.ACTIVE)

At the end of the program, we execute the mainloop() method to initiate the Tkinter loop. Until this function is executed, the main window won't appear. To run the code, make the appropriate changes to the Arduino board variable and execute the program. A window with a Start button and a title bar will appear as the output of the program. Clicking on the Start button will turn on the LED on the Arduino board for the specified time delay; meanwhile, when the LED is on, you will not be able to click on the Start button again. In this particular program, we haven't provided sufficient code to safely disengage the Arduino board; that will be covered in upcoming exercises.
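For reference, the pieces above assemble into one runnable file along these lines; note that onStartButtonPress() must be defined before the Button() call that references it through the command option (the port string remains an assumption):

import Tkinter
import pyfirmata
from time import sleep

port = '/dev/cu.usbmodemfa1331'   # assumed port; adjust for your system
board = pyfirmata.Arduino(port)
sleep(5)
ledPin = board.get_pin('d:11:o')

def onStartButtonPress():
    startButton.config(state=Tkinter.DISABLED)
    ledPin.write(1)
    sleep(5)                      # LED on for a fixed 5 seconds
    ledPin.write(0)
    startButton.config(state=Tkinter.ACTIVE)

top = Tkinter.Tk()
top.title("Blink LED using button")
top.minsize(300, 30)

startButton = Tkinter.Button(top, text="Start", command=onStartButtonPress)
startButton.pack()

top.mainloop()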
Summary

In this article, we learned about the Python library pyFirmata, which interfaces Arduino with your computer using the Firmata protocol. We built a prototype using pyFirmata and Arduino to control a servomotor, and also developed another one with a GUI, based on the Tkinter library, to control LEDs.
Controlling DC motors using a shield

Packt
27 Feb 2015
4 min read
In this article by Richard Grimmett, author of the book Intel Galileo Essentials, let's graduate from a simple DC motor to a wheeled platform. There are several simple, two-wheeled robotics platforms. In this example, you'll use one that is available from several online electronics stores: the Magician Chassis, sourced by SparkFun.

To make this wheeled robotic platform work, you're going to control the two DC motors connected directly to the two wheels. You'll want to control both the direction and the speed of each wheel to control the direction of the robot. You'll do this with an Arduino shield designed for this purpose; the Galileo is designed to accommodate many of these shields. Specifically, you'll be interested in the connections on the front corner of the shield, which is where you will connect the two DC motors.

First, place the shield on top of the Galileo. Then mount the two boards on top of your two-wheeled robotic platform. In this case, I used a large cable tie to mount the boards to the platform, using the foam that came with the motor shield between the Galileo and the plastic platform. This particular platform comes with a 4 AA battery holder, so you'll need to connect this power source, or whatever power source you are going to use, to the motor shield. The positive and negative terminals are inserted into the motor shield by loosening the screws, inserting the wires, and then tightening the screws. The final step is to connect the motor wires to the motor controller shield; there are two sets of connections, one for each motor. Insert some batteries, connect the Galileo to the computer via the USB cable, and you are ready to start programming in order to control the motors.

Galileo code for the DC motor shield

Now that the hardware is in place, bring up the IDE, make sure that the proper port and device are selected, and enter the code described next. It is straightforward and consists of the following three blocks:

The declaration of the six variables that connect to the proper Galileo pins:

int pwmA = 3;
int pwmB = 11;
int brakeA = 9;
int brakeB = 8;
int directionA = 12;
int directionB = 13;

The setup() function, which sets the directionA, directionB, brakeA, and brakeB pins as digital outputs:

pinMode(directionA, OUTPUT);
pinMode(brakeA, OUTPUT);
pinMode(directionB, OUTPUT);
pinMode(brakeB, OUTPUT);

The loop() function. This is an example of how to make the wheeled robot go forward, then turn to the right.
At each of these steps, you use the brake to stop the robot:

// Move Forward
digitalWrite(directionA, HIGH);
digitalWrite(brakeA, LOW);
analogWrite(pwmA, 255);
digitalWrite(directionB, HIGH);
digitalWrite(brakeB, LOW);
analogWrite(pwmB, 255);
delay(2000);
digitalWrite(brakeA, HIGH);
digitalWrite(brakeB, HIGH);
delay(1000);

// Turn Right
digitalWrite(directionA, LOW);   // Establishes backward direction of Channel A
digitalWrite(brakeA, LOW);       // Disengage the brake for Channel A
analogWrite(pwmA, 128);          // Spins the motor on Channel A at half speed
digitalWrite(directionB, HIGH);  // Establishes forward direction of Channel B
digitalWrite(brakeB, LOW);       // Disengage the brake for Channel B
analogWrite(pwmB, 128);          // Spins the motor on Channel B at half speed
delay(2000);
digitalWrite(brakeA, HIGH);
digitalWrite(brakeB, HIGH);
delay(1000);

Once you have uploaded the code, the program should run in a loop. If you want to run your robot without connecting to the computer, you'll need to add a battery to power the Galileo. The Galileo will need at least 2 amps, but you might want to consider providing 3 amps or more based on your project. To supply this from a battery, you can choose from several options; my personal favorite is an emergency cell phone charging battery. If you are going to use this, you'll need a USB-to-2.1 mm DC plug cable, available at most online stores. Once you have uploaded the code, you can disconnect the computer, then press the reset button. Your robot can move all by itself!

Summary

By now, you should be feeling a bit more comfortable with configuring hardware and writing code for the Galileo. This example is fun, and provides you with a moving platform.
The Raspberry Pi and Raspbian

Packt
25 Feb 2015
14 min read
In this article by William Harrington, author of the book Learning Raspbian, we will learn about the Raspberry Pi, the Raspberry Pi Foundation, and Raspbian, the official Linux-based operating system of the Raspberry Pi. In this article, we will cover:

- The Raspberry Pi
- The history of the Raspberry Pi
- The Raspberry Pi hardware
- The Raspbian operating system
- Raspbian components

The Raspberry Pi

Despite first impressions, the Raspberry Pi is not a tasty snack. The Raspberry Pi is a small, powerful, and inexpensive single board computer developed over several years by the Raspberry Pi Foundation. If you are looking for a low-cost, small, easy-to-use computer for your next project, or are interested in learning how computers work, then the Raspberry Pi is for you. The Raspberry Pi was designed as an educational device and was inspired by the success of the BBC Micro in teaching computer programming to a generation. The Raspberry Pi Foundation set out to do the same in today's world, where you don't need to know how to write software to use a computer. At the time of printing, the Raspberry Pi Foundation had shipped over 2.5 million units, and it is safe to say that they have exceeded their expectations!

The Raspberry Pi Foundation

The Raspberry Pi Foundation is a not-for-profit charity founded in 2006 by Eben Upton, Rob Mullins, Jack Lang, and Alan Mycroft. The aim of this charity is to promote the study of computer science in a generation that didn't grow up with the BBC Micro or the Commodore 64. The founders were concerned about the lack of devices that a hobbyist could use to learn and experiment with; the home computer was often ruled out, as it was too expensive, leaving hobbyists and children with nothing on which to develop their skills.

History of the Raspberry Pi

Any new product goes through many iterations before mass production. In the case of the Raspberry Pi, it all began in 2006, when several concept versions of the Raspberry Pi based on the Atmel 8-bit ATmega644 microcontroller were developed. Another concept, based on a USB memory stick with an ARM processor (similar to what is used in the current Raspberry Pi), was created after that. It took six years of hardware development to create the Raspberry Pi that we know and love today!

It wasn't until August 2011 that 50 boards of the alpha version of the Raspberry Pi were built. These boards were slightly larger than the current version, to allow the Raspberry Pi Foundation to debug the device and confirm that it would all work as expected. Twenty-five beta versions of the Raspberry Pi were assembled in December 2011 and auctioned to raise money for the Raspberry Pi Foundation. Only a single small error was found in these boards, and it was corrected for the first production run. The first production run consisted of 10,000 Raspberry Pi boards manufactured overseas in China and Taiwan. Unfortunately, the Ethernet jack on the Raspberry Pi was incorrectly substituted with an incompatible part. This led to some minor shipping delays, but all the Raspberry Pi boards were delivered within weeks of their due date. As a bonus, the foundation was able to upgrade the Model A Raspberry Pi to 256 MB of RAM instead of the 128 MB that was planned. This upgrade in memory size allowed the Raspberry Pi to perform even more amazing tasks, such as real-time image processing.
The Raspberry Pi is now manufactured in the United Kingdom, leading to the creation of many new jobs. The release of the Raspberry Pi was met with great fanfare, and the two original retailers of the Raspberry Pi - Premier Farnell and RS Components - sold out of the first batch within minutes.

The Raspberry Pi hardware

At the heart of the Raspberry Pi is the powerful Broadcom BCM2835 "system on a chip", similar to the chips at the heart of almost every smartphone and set-top box in the world that uses the ARM architecture. The BCM2835 CPU on the Raspberry Pi runs at 700 MHz, and its performance is roughly equivalent to that of a 300 MHz Pentium II computer from 1999. To put this in perspective, the guidance computer used in the Apollo missions was less powerful than a pocket calculator! The Raspberry Pi comes with either 256 MB or 512 MB of RAM, depending on which model you buy. Hopefully, this will increase in future versions!

Graphic capabilities

Graphics in the Raspberry Pi are provided by a VideoCore 4 GPU. The performance of the graphics processing unit (GPU) is roughly equivalent to that of the original Xbox, launched in 2001, which cost many hundreds of dollars. These might seem like very low specifications, but they are enough to play Quake 3 at 1080p and full HD movies.

There are two ways to connect a display to the Raspberry Pi: using a composite video cable, or using HDMI. The composite output is useful, as you are able to use any old TV as a monitor. The HDMI output is recommended, however, as it provides superior video quality. A VGA connection is not provided on the Raspberry Pi, as it would be cost prohibitive, but it is possible to use an HDMI to VGA/DVI converter for users who have VGA or DVI monitors. The Raspberry Pi also supports an LCD touchscreen. An official version has not been released yet, although many unofficial ones are available; the Raspberry Pi Foundation says that they expect to release one this year.

The Raspberry Pi models

The Raspberry Pi has several different variants: the Model A and the Model B. The Model A is a low-cost version and unfortunately omits the USB hub chip, which also functions as a USB-to-Ethernet converter. The Raspberry Pi Foundation has also just released the Raspberry Pi Model B+, which has extra USB ports and resolves many of the power issues surrounding the Model B USB ports. The differences between the Raspberry Pi models are summarized below:

Parameters        Model A         Model B         Model B+
CPU               BCM2835         BCM2835         BCM2835
RAM               256 MB          512 MB          512 MB
USB ports         1               2               4
Ethernet ports    0               1               1
Price (USD)       ~$25            ~$35            ~$35
Available since   February 2012   February 2012   July 2014

Did you know that the Raspberry Pi is so popular that if you search for raspberry pie in Google, it will actually show you results for the Raspberry Pi!

Accessories

The success of the Raspberry Pi has encouraged many other groups to design accessories for it, ranging from a camera to a controller for an automatic CNC machine. Some of these accessories include:

Raspberry Pi camera   http://www.raspberrypi.org/tag/camera-board/
VGA board             http://www.suptronics.com/RPI.html
CNC controller        http://code.google.com/p/picnc/
Autopilot             http://www.emlid.com/
Case                  http://shortcrust.net/

Raspbian

No matter how good the hardware of the Raspberry Pi is, without an operating system it is just a piece of silicon, fiberglass, and a few other materials.
There are several different operating systems for the Raspberry Pi, including RISC OS, Pidora, Arch Linux, and Raspbian. Currently, Raspbian is the most popular Linux-based operating system for the Raspberry Pi. Raspbian is an open source operating system based on Debian that has been modified specifically for the Raspberry Pi (thus the name Raspbian). It includes customizations designed to make the Raspberry Pi easier to use, and it ships with many different software packages out of the box. Raspbian is designed to be easy to use and is the recommended operating system for beginners starting off with their Raspberry Pi.

Debian

The Debian operating system was created in August 1993 by Ian Murdock and is one of the original distributions of Linux. As Raspbian is based on the Debian operating system, it shares almost all of Debian's features, including its large repository of software packages: there are over 35,000 free software packages available for your Raspberry Pi, and they are available for use right now! An excellent resource for more information on Debian, and therefore Raspbian, is the Debian Administrator's Handbook, available at http://debian-handbook.info.

Open source software

The majority of the software that makes up Raspbian on the Raspberry Pi is open source. Open source software is software whose source code is available for modification or enhancement by anyone. The Linux kernel and most of the other software that makes up Raspbian are licensed under the GPLv2 license. This means that the software is made available to you at no cost, and that the source code behind it is available for you to do what you want with. The GPLv2 license also disclaims any warranty. The following extract from the GPLv2 license preamble gives you a good idea of the spirit of free software:

"The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users…. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things."

Raspbian components

There are many components that make up a modern Linux distribution. These components work together to provide you with all the modern features you expect from a computer. Raspbian is built from several key components:

- The Raspberry Pi bootloader
- The Linux kernel
- Daemons
- The shell
- Shell utilities
- The X.Org graphical server
- The desktop environment

The Raspberry Pi bootloader

When your Raspberry Pi is powered on, a lot happens behind the scenes. The role of the bootloader is to initialize the hardware in the Raspberry Pi to a known state and then start loading the Linux kernel. On the Raspberry Pi, this is done by a chain of bootloaders: the first stage bootloader is programmed into the ROM of the Raspberry Pi during manufacture and cannot be modified, while the second and third stage bootloaders are stored on the SD card and are automatically run by the previous stage bootloader.
The Linux kernel

The Linux kernel is one of the most fundamental parts of Raspbian. It manages every part of the operation of your Raspberry Pi, from displaying text on the screen to receiving keystrokes when you type on your keyboard. The Linux kernel was created by Linus Torvalds, who started working on the kernel in April 1991. Since then, groups of volunteers and organizations have worked together to continue its development and make it what it is today. Did you know that the cost of rewriting the Linux kernel to where it was in 2011 would be over $3 billion USD?

If you want to use a hardware device by connecting it to your Raspberry Pi, the kernel needs to know what it is and how to use it. The vast majority of devices on the market are supported by the Linux kernel, with more being added all the time. A good example of this is when you plug a USB drive into your Raspberry Pi: the kernel automatically detects the USB drive and notifies a daemon, which automatically makes the files available to you.

When the kernel has finished loading, it automatically runs a program called init. This program finishes the initialization of the Raspberry Pi and then loads the rest of the operating system, starting with all the daemons that run in the background, followed by the graphical user interface.

Daemons

A daemon is a piece of software that runs behind the scenes to provide the operating system with different features. Some examples of daemons include the Apache web server; Cron, a job scheduler used to run programs automatically at different times; and Autofs, a daemon that automatically mounts removable storage devices such as USB drives. A distribution such as Raspbian needs more than just the kernel to work: it also needs other software that allows the user to interact with the kernel and manage the rest of the operating system. The core operating system consists of a collection of programs and scripts that make this happen.

The shell

After all the daemons have loaded, init launches a shell. A shell is an interface to your Raspberry Pi that allows you to monitor and control it using commands typed on a keyboard. Don't be fooled by the fact that this interface looks exactly like what was used in computers 30 years ago: the shell is one of the most powerful parts of Raspbian. There are several shells available in Linux; Raspbian uses the Bourne again shell (bash), by far the most common shell used in Linux. Bash is an extremely powerful piece of software, and one of its most powerful features is its ability to run scripts. A script is simply a collection of commands stored in a file that can do things such as run a program, read keys from the keyboard, and many other things. Later on in this book, you will see how to use bash to make the most of your Raspberry Pi!

Shell utilities

A command interpreter is not of much use without any commands to run. While bash provides some very basic commands, all the other commands are shell utilities. Together, these shell utilities form one of the most important parts of Raspbian; without them, the system would not function. They provide features that range from copying files and creating directories to the Advanced Packaging Tool (APT), a package manager application that allows you to install and remove software from your Raspberry Pi. You will learn more about APT later in this book.
The X.Org graphical server

After the shell and daemons are loaded, the X.Org graphical server is started by default. The role of X.Org is to provide you with a common platform on which to build a graphical user interface. X.Org handles everything from moving your mouse pointer and responding to your key presses to actually drawing the applications you are running onto the screen.

The desktop environment

It is difficult to use any computer without a desktop environment. A desktop environment lets you interact with your computer using more than just your keyboard: you can surf the Internet, view pictures and movies, and many other things. A desktop environment normally uses windows, menus, and a mouse to do this. Raspbian includes a graphical user interface called the Lightweight X11 Desktop Environment, or LXDE. LXDE is used in Raspbian because it was specifically designed to run on devices with limited resources, such as the Raspberry Pi. Later in this book, you will learn how to customize and use LXDE to make the most of your Raspberry Pi.

Summary

This article introduced the Raspberry Pi and Raspbian, and discussed the history, hardware, and components of the Raspberry Pi. It is a great resource for understanding the Raspberry Pi.
Getting Your Own Video and Feeds

Packt
06 Feb 2015
18 min read
"One server to satisfy them all" could have been the name of this article by David Lewin, the author of BeagleBone Media Center. We now have a great media server where we can share any media, but we would like to be more independent so that we can choose the functionalities the server can have. The goal of this article is to let you cross the bridge, where you are going to increase your knowledge by getting your hands dirty. After all, you want to build your own services, so why not create your own contents as well. (For more resources related to this topic, see here.) More specifically, here we will begin by building a webcam streaming service from scratch, and we will see how this can interact with what we have implemented previously in the server. We will also see how to set up a service to retrieve RSS feeds. We will discuss the services in the following sections: Installing and running MJPG-Streamer Detecting the hardware device and installing drivers and libraries for a webcam Configuring RSS feeds with Leed Detecting the hardware device and installing drivers and libraries for a webcam Even though today many webcams are provided with hardware encoding capabilities such as the Logitech HD Pro series, we will focus on those without this capability, as we want to have a low budget project. You will then learn how to reuse any webcam left somewhere in a box because it is not being used. At the end, you can then create a low cost video conference system as well. How to know your webcam As you plug in the webcam, the Linux kernel will detect it, so you can read every detail it's able to retrieve about the connected device. We are going to see two ways to retrieve the webcam we have plugged in: the easy one that is not complete and the harder one that is complete. "All magic comes with a price."                                                                                     –Rumpelstiltskin, Once Upon a Time Often, at a certain point in your installation, you have to choose between the easy or the hard way. Most of the time, powerful Linux commands or tools are not thought to be easy at first but after some experiments you'll discover that they really can make your life better. Let's start with the fast and easy way, which is lsusb : debian@arm:~$ lsusb Bus 001 Device 002: ID 046d:0802 Logitech, Inc. Webcam C200 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub This just confirms that the webcam is running well and is seen correctly from the USB. Most of the time we want more details, because a hardware installation is not exactly as described in books or documentations, so you might encounter slight differences. This is why the second solution comes in. Among some of the advantages, you are able to know each step that has taken place when the USB device was discovered by the board and Linux, such as in a hardware scenario: debian@arm:~$ dmesg A UVC device (here, a Logitech C200) has been used to obtain these messages Most probably, you won't exactly have the same outputs, but they should be close enough so that you can interpret them easily when they are referred to: New USB device found: This is the main message. In case of any issue, we will check its presence elsewhere. This message indicates that this is a hardware error and not a software or configuration error that you need to investigate. idVendor and idProduct: This message indicates that the device has been detected. 
This information is interesting so you can check the constructor detail. Most recent webcams are compatible with the Linux USB Video Class (UVC), you can check yours at http://www.ideasonboard.org/uvc/#devices. Among all the messages, you should also look for the one that says Registered new interface driver interface because failing to find it can be a clue that Linux could detect the device but wasn't able to install it. The new device will be detected as /dev/video0. Nevertheless, at start, you can see your webcam as a different device name according to your BeagleBone configuration, for example, if a video capable cape is already plugged in. Setting up your webcam Now we know what is seen from the USB level. The next step is to use the crucial Video4Linux driver, which is like a Swiss army knife for anything related to video capture: debian@arm:~$ Install v4l-utils The primary use of this tool is to inquire about what the webcam can provide with some of its capabilities: debian@arm:~$ v4l2-ctl -–all There are four distinctive sections that let you know how your webcam will be used according to the current settings: Driver info (1) : This contains the following information: Name, vendor, and product IDs that we find in the system message The driver info (the kernel's version) Capabilities: the device is able to provide video streaming Video capture supported format(s) (2): This contains the following information: What resolution(s) are to be used. As this example uses an old webcam, there is not much to choose from but you can easily have a lot of choices with devices nowadays. The pixel format is all about how the data is encoded but more details can be retrieved about format capabilities (see the next paragraph). The remaining stuff is relevant only if you want to know in precise detail. Crop capabilities (3): This contains your current settings. Indeed, you can define the video crop window that will be used. If needed, use the crop settings: --set-crop-output=top=<x>,left=<y>,width=<w>,height=<h> Video input (4): This contains the following information: The input number. Here we have used 0, which is the one that we found previously. Its current status. The famous frames per second, which gives you a local ratio. This is not what you will obtain when you'll be using a server, as network latencies will downgrade this ratio value. You can grab capabilities for each parameter. For instance, if you want to see all the video formats the webcam can provide, type this command: debian@arm:~$ v4l2-ctl --list-formats Here, we see that we can also use MJPEG format directly provided by the cam. While this part is not mandatory, such a hardware tour is interesting because you know what you can do with your device. It is also a good habit to be able to retrieve diagnostics when the webcam shows some bad signs. If you would like to get more in depth knowledge about your device, install the uvcdynctrl package, which lets you retrieve all the formats and frame rates supported. Installing and running MJPG-Streamer Now that we have checked the chain from the hardware level up to the driver, we can install the software that will make use of Video4Linux for video streaming. Here comes MJPG-Streamer. This application aims to provide you with a JPEG stream on the network available for browsers and all video applications. Besides this, we are also interested in this solution as it's made for systems with less advanced CPU, so we can start MJPG-Streamer as a service. 
With this streamer, you can also use built-in hardware compression and even control webcam features such as pan, tilt, rotation, and zoom.

Installing MJPG-Streamer

Before installing MJPG-Streamer, we will install all the necessary dependencies:

debian@arm:~$ sudo apt-get install subversion libjpeg8-dev imagemagick

Next, we will retrieve the code from the project:

debian@arm:~$ svn checkout http://svn.code.sf.net/p/mjpg-streamer/code/ mjpg-streamer-code

You can now build the executable from the sources you just downloaded by performing the following steps:

Enter the local directory you have downloaded:

debian@arm:~$ cd mjpg-streamer-code/mjpg-streamer

Then enter the following command:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ make

When the compilation is complete, we end up with some new files: the executables and some plugins. That's all that is needed, so the application is now ready and we can try it out. Not so much to do after all, don't you think?

Starting the application

This section aims at getting you started quickly with MJPG-Streamer. At the end, we'll see how to start it as a service on boot. Before getting started, the server requires some plugins to be copied into the dedicated lib directory:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ sudo cp input_uvc.so output_http.so /usr/lib

The MJPG-Streamer application has to know the path where these files can be found, so we define the following environment variable:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ export LD_LIBRARY_PATH=/usr/lib:$LD_LIBRARY_PATH

Enough preparation! Time to start streaming:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www"

As the script starts, the input parameters that will be taken into consideration are displayed. You can now identify this information, as it has been explained previously:

- The detected device from V4L2
- The resolution that will be displayed, according to your settings
- Which port will be opened
- Some controls that depend on your camera's capabilities (tilt, pan, and so on)

If you need to change the port used by MJPG-Streamer, add -p xxxx at the end of the command, as follows:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www -p 1234"

Let's add some security

If you want to add some security, you should set credentials:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www -c debian:temppwd"

Credentials can always be stolen and used without your consent. The best way to ensure that your stream stays confidential is to encrypt it, so if you intend to use strong encryption for secured applications, the crypto-cape is worth a look at http://datko.net/2013/10/03/howto_crypto_beaglebone_black/.

"I'm famous" – your first stream

That's it. The webcam is now accessible to everyone across the network from your BeagleBone; you can access the video from your browser at http://192.168.0.15:8080/ (using your board's IP address and the port you chose). You will then see the default welcome screen, bravo!
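As a quick sanity check from any machine on the network, a few lines of Python can grab a single frame through MJPG-Streamer's snapshot endpoint; the IP address is the one assumed throughout this article:

# Fetch one JPEG frame from MJPG-Streamer's snapshot endpoint.
import urllib.request   # Python 3; the same idea works with urllib2 on Python 2

url = 'http://192.168.0.15:8080/?action=snapshot'
with urllib.request.urlopen(url, timeout=5) as resp:
    data = resp.read()

with open('snapshot.jpg', 'wb') as f:
    f.write(data)
print('Saved %d bytes to snapshot.jpg' % len(data))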
Using our stream across the network

Now that the webcam is available across the network, you have several options for handling it:

- You can use the direct flow available from the home page: on the left-hand side menu, just click on the stream tab.
- Using VLC, you can open the stream with the direct link available at http://192.168.0.15:8080/?action=stream. The VideoLAN menu tab is an M3U playlist link generator you can click on; it generates a playlist file you can open thereafter. VLC is efficient here, as you can transcode the webcam stream to any format you need. Although it's not mandatory, this solution is the most efficient, as it frees the BeagleBone's CPU so that your server can focus on providing services.
- Using MediaDrop, we can integrate this new stream into our shiny MediaDrop server, keeping in mind that MediaDrop currently doesn't support direct local streams. You can create a new post with the related URL link in the message body.

Starting the streaming service automatically on boot

At the beginning, we saw that MJPG-Streamer needs only one command line to be started. We could put it in a bash script, but running it as a service on boot is far better. For this, use a console text editor - nano or vim - and create a file dedicated to this service. Let's call it start_mjpgstreamer and add the following commands:

#! /bin/sh
# /etc/init.d/start_mjpgstreamer
EXEC_PATH="/home/debian/mjpg-streamer-code/mjpg-streamer"
export LD_LIBRARY_PATH="$EXEC_PATH:$LD_LIBRARY_PATH"
$EXEC_PATH/mjpg_streamer -i "input_uvc.so" -o "output_http.so -w $EXEC_PATH/www"

You can then use administrator rights to make the script executable, register it as a service, and start it:

debian@arm:~$ sudo chmod +x /etc/init.d/start_mjpgstreamer
debian@arm:~$ sudo update-rc.d start_mjpgstreamer defaults
debian@arm:~$ sudo /etc/init.d/start_mjpgstreamer start

On the next reboot, MJPG-Streamer will be started automatically.

Exploring new capabilities to install

For those about to explore, we salute you!

Plugins

Remember that at the beginning of this article, we started the demonstration with two plugins:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer -i "input_uvc.so" -o "output_http.so -w www"

The first plugin is responsible for handling the webcam directly from the driver. Simply ask for its help and options as follows:

debian@beaglebone:~/mjpg-streamer-code/mjpg-streamer$ ./mjpg_streamer --input "input_uvc.so --help"

The second plugin is about the web server settings:

- The path to the directory containing the final web server HTML pages. This implies that you can modify the existing pages with little effort, or create new ones based on those provided.
- Forcing a specific port to be used. As mentioned previously, you define here which port this service will use.

You can discover many others by asking:

debian@arm:~$ ./mjpg_streamer --output "output_http.so --help"

Apart from input_uvc and output_http, you have other plugins to play with; take a look at the plugins directory.

Another tool for the webcam

The MJPG-Streamer project is dedicated to streaming over the network, but it is not the only one. For instance, do you have any specific needs, such as monitoring your house/son/cat/Jon Snow figurine? buuuuzzz: if you answered yes to the last one, you just defined yourself as a geek. Well, in that case, the Motion project is for you; just install the motion package and start it with the default motion.conf configuration.
You will then record videos and pictures of any moving object or person that is detected. Like MJPG-Streamer, Motion aims to be a low CPU consumer, and it works very well on the BeagleBone Black.

Configuring RSS feeds with Leed

Our server can handle videos, pictures, and music from any source, and it would be cool to have another tool to retrieve news from RSS providers. This can be done with Leed, an RSS project designed for servers. This project has a "quick and easy" installation spirit, so you can give it a try without hassle. Leed (for Light Feed) allows you to access RSS feeds from any browser, so no RSS reader application is needed, and every user on your network can read them as well. You install it on the server, and the feeds are automatically updated; well, the truth behind the scenes is that a cron task does this for you. You will be guided to set up this synchronization after the installation.

Creating the environment for Leed in three steps

We already have Apache, MySQL, and PHP installed, and we need a few other prerequisites to run Leed:

1. Create a database for Leed.
2. Download the project code and set permissions.
3. Install Leed itself.

Creating a database for Leed

You will begin by opening a MySQL session:

debian@arm:~$ mysql -u root -p

What we need here is a dedicated Leed user with its own database. This user will be connected using the following:

create user 'debian_leed'@'localhost' IDENTIFIED BY 'temppwd';
create database leed_db;
use leed_db;
grant create, insert, update, select, delete on leed_db.* to debian_leed@localhost;
exit

Downloading the project code and setting permissions

We prepared our server to have its environment ready for Leed, so after getting the latest version, we'll get it working with Apache by performing the following steps:

From your home directory, retrieve the latest project code; this also creates a dedicated directory:

debian@arm:~$ git clone https://github.com/ldleman/Leed.git
debian@arm:~$ ls
mediadrop mjpg-streamer-code Leed music

Now, we need to put this new directory where the Apache server can find it:

debian@arm:~$ sudo mv Leed /var/www/

Change the permissions for the application:

debian@arm:~$ sudo chmod 777 /var/www/Leed/ -R

Installing Leed

When you go to the server address (http://192.168.0.15/Leed/install.php), you'll get the installation screen. We now need to fill in the database details that we previously defined, and add the administrator credentials as well; then save and quit. Don't worry about the explanations, we'll discuss these settings shortly. It's important that all items in the prerequisites list on the right are green; otherwise, a warning message will be displayed about wrong permission settings. After the configuration, the installation is complete: Leed is now ready for you.

Setting up a cron job for feed updates

If you want automatic updates for your feeds, you'll need to define a synchronization task with cron:

Modify the cron jobs:

debian@arm:~$ sudo crontab -e

Add the following line:

0 * * * * wget -q -O /var/www/Leed/logsCron "http://192.168.0.15/Leed/action.php?action=synchronize"

Save it, and your feeds will be refreshed every hour.
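Before relying on cron, you can trigger a synchronization by hand and check that the endpoint answers; this short Python sketch does the same job as the wget line above (the URL is the one assumed throughout this setup):

import urllib.request

url = 'http://192.168.0.15/Leed/action.php?action=synchronize'
with urllib.request.urlopen(url, timeout=60) as resp:
    print(resp.status)           # 200 means the synchronization endpoint answered
    print(resp.read()[:200])     # first bytes of the response, useful for debugging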
Finally, a little cleanup: remove install.php for security reasons:
debian@arm:~$ rm /var/www/Leed/install.php

Using Leed to add your RSS feed
When you need to add some feeds, from the Manage menu, in Feed Options (on the right-hand side), select Preferences, and you just have to paste the RSS link and add it with the button:
You might find it useful to organize your feeds into groups, as we did for movies in MediaDrop. The Rename button will serve to achieve this goal. For example, here a TV Shows category has been created, so every feed related to this type will be organized on the main screen.

Some Leed preferences settings in a server environment
You will be asked to choose between two synchronization modes: Complete and Graduated.
- Complete: This is to be used on a regular computer, as it updates all your feeds in a row, which is a CPU-consuming task.
- Graduated: This looks for the 10 oldest feeds and updates them if required.
You also have the possibility of allowing anonymous people to read your feeds. Setting Allow anonymous readers to Yes will let your guests access your feeds, but not add any.

Extending Leed with plugins
If you want to extend Leed's capabilities, you can use the Leed Market—as the author calls it—from Feed options in the Manage menu. There, you'll be directed to the Leed Market space. Installation is just a matter of downloading the ZIP file with all the plugins:
debian@arm:~/Leed$ wget https://github.com/ldleman/Leed-market/archive/master.zip
debian@arm:~/Leed$ sudo unzip master.zip
Let's use the AdBlock plugin for this example:
1. Copy the content of the AdBlock plugin directory where Leed can see it:
debian@arm:~/Leed$ sudo cp -r Leed-market-master/adblock /var/www/Leed/plugins
2. Log in and enable the plugin by navigating to Manage | Available Plugins and then activating adblock with Enable, as follows:
In this article, we covered:
- Some words about the hardware
- Getting to know your webcam
- Configuring RSS feeds with Leed

Summary
In this article, we had some good experiments with the hardware part of the server "from the ground up," ending with the webcam service successfully set up to start on boot. We discovered hardware detection and a way to "talk" with our local webcam, and were thus able to see what happens when we plug a device into the BeagleBone. Along the way, we also discovered video4linux, used it to retrieve information about the device, and learned about configuring devices. We then met MJPG-Streamer. It's better to be on our own instead of being dependent on some GUI interfaces, where you always wonder where you need to click, and our efforts have been rewarded, as we ended up with a web page we can use and modify according to our tastes. RSS news can also be provided by our server so that you can manage all your feeds in one place, read them anywhere, and even organize dedicated groups. Plenty of concepts have been covered for both hardware and software, so think of this article as a concrete example you can use and adapt to understand how Linux works. I hope you enjoyed this freedom of choice, as you drag ideas and drop them into your BeagleBone as services. We entered the DIY area, showing you ways to explore further. You can argue that we can choose the software but still use off-the-shelf commercial devices.
Resources for Article:
Further resources on this subject:
Using PVR with Raspbmc [Article]
Pulse width modulator [Article]
Making the Unit Very Mobile - Controlling Legged Movement [Article]

Advanced Programming and Control

Packt
05 Feb 2015
10 min read
In this article by Gary Garber, author of the book Learning LEGO MINDSTORMS EV3, we will explore advanced control algorithms for sensor-based navigation and tracking. We will cover:
- Proportional distance control with the Ultrasonic Sensor
- Proportional distance control with the Infrared (IR) Sensor
- Line following with the Color Sensor
- Two-level control with the Color Sensor
- Proportional control with the Color Sensor
- Proportional integral derivative control
- Precise turning and course correction with the Gyro Sensor
- Beacon tracking with the IR sensor
- Triangulation with two IR beacons
(For more resources related to this topic, see here.)

Distance controller
In this section, we will program the robot to gradually come to a stop using a proportional algorithm. In a proportional algorithm, the robot will gradually slow down as it approaches the desired stopping point. Before we begin, we need to attach a distance sensor to our robot. If you have the Home Edition, you will be using the IR sensor, whereas if you have the Educational Edition, you will use the Ultrasonic Sensor. Because these sensors use reflected beams (infrared light or sound), they need to be placed unobstructed by the other parts of the robot. You could either place the sensor high above the robot or well out in front of many parts of the robot. The design I have shown in the following screenshot allows you to place the sensor in front of the robot. If you are using the Ultrasonic Sensor for FIRST LEGO League (a competition that uses a lot of sensor-based navigation) and trying to measure the distance to the border, you will find it is a good idea to place the sensor as low as possible. This is because the perimeters of the playing fields for FIRST LEGO League are made from 3- or 4-inch-high pieces of lumber.

Infrared versus Ultrasonic
We are going to start out with a simple program and will gradually add complexity to it. If you are using the Ultrasonic Sensor, it should be plugged into port 4, and this program is on the top line. If you are using the IR sensor, it should be plugged into port 1, and this program is on the bottom line. In this program, the robot moves forward until the Wait block tells it to stop 25 units from a wall or other barrier. You will find that the Ultrasonic Sensor can be set to stop in units of inches or centimeters. The Ultrasonic Sensor emits high-frequency sound waves (above the range of human hearing) and measures the time delay between the emission of the sound waves and when the reflection off an object is measured by the sensor. In everyday conditions, we can assume that the speed of sound is constant, and thus the Ultrasonic Sensor can give precise distance measurements to the nearest centimeter. In other programming languages, you could even use the Ultrasonic Sensor to transmit data between two robots. The IR sensor emits infrared light and has an IR-sensitive camera that measures the reflected light. The sensor reading does not give exact distance units because the strength of the signal depends on environmental factors such as the reflectivity of the surface. What the IR sensor loses in proximity-measurement precision, it makes up for in the fact that you can use it to track the IR beacon, which is a source of infrared light. In other programming languages, you could actually use the IR sensor to track sources of infrared light other than the beacon (such as humans or animals).
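If you later move beyond the graphical EV3 software, the difference between the two sensors' readings is easy to demonstrate in code. The following sketch uses the community python-ev3dev2 bindings (an assumption on my part; this article itself uses the standard EV3 programming blocks) with the same port wiring as described here:

#!/usr/bin/env python3
# compare_sensors.py: print Ultrasonic and Infrared readings side by
# side (a python-ev3dev2 sketch, not the EV3 block software).
from time import sleep

from ev3dev2.sensor import INPUT_1, INPUT_4
from ev3dev2.sensor.lego import InfraredSensor, UltrasonicSensor

us = UltrasonicSensor(INPUT_4)  # port 4, as wired in this article
ir = InfraredSensor(INPUT_1)    # port 1, as wired in this article

for _ in range(20):
    # distance_centimeters is a calibrated unit; proximity is a
    # unitless 0-100 value that depends on surface reflectivity
    print("ultrasonic: %5.1f cm   infrared proximity: %3d"
          % (us.distance_centimeters, ir.proximity))
    sleep(0.5)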
In the following screenshot, we have a simple program that will tell the robot to stop a given distance from a barrier using a Wait for the sensor block. The program at the top of the screenshot uses the Ultrasonic Sensor, and the program at the bottom of the screenshot uses the IR sensor. You should only use the program for the sensor you are using. If you are downloading and executing the program from the Packt Publishing website, you should delete the program that you do not need. When you execute the program in the preceding screenshot, you will find that the robot only begins to stop at 25 units from the wall and cannot stop immediately. To do this, the robot will need to slow down before it gets to the stopping point.

Proportional algorithm
In the next program, we create a loop called Slow Down. Inside this loop, readings from the Ultrasonic or Infrared proximity sensor block are sent to a Math block (to take the negative of the position values so that the robot moves forward) and then sent to the power input of a Move Steering block. We can have the loop end when it reaches our desired stopping distance, as shown in the following screenshot:
Instead of using the exact values of the output of the sensor block, we can use the difference between the actual position and the desired position to control the Move Steering block, as shown in the following screenshot. This difference is called the error. We call the desired position the setpoint. In the following screenshot, the setpoint is 20. The power is proportional to the error, that is, the difference between the positions. When you execute this code, you will also find that if the robot is too close to the wall, it will run in reverse and back away from the wall. We are using an Advanced Math block in the following screenshot. You can see that we are writing a simple equation, -(a-b), into the block text field of the Advanced Math block:
You may have also noticed that the robot moves very slowly as it approaches the stopping point. You can change this behavior by adding gain to the algorithm. If you multiply the difference by a larger factor, the robot will approach the stopping point more quickly. When you execute this program, you will find that if you increase the gain too much, the robot will overshoot the stopping point and reverse direction. We can adjust these values using the Advanced Math block. We can type in any simple math function we need, as shown in the following screenshot. In this block, the value of a is the measured position, b is the setpoint position, and c is the gain. The equation can be seen in the following screenshot inside the block text field of the Advanced Math block:
We can also define the desired gain and setpoint position using variables. We can create two Variable blocks called Gain and Set Point. We can write the value 3 to the Gain variable block and 20 to the Set Point variable block. Inside our loop, we can then read these variables, take the output of the Read Variable block, and draw data wires into the Advanced Math block. The basic idea of the proportional algorithm is that the degree of correction needed is proportional to the error. So, when our measured value is far from our goal, a large correction is applied. When our measured value is near our goal, only a small correction is applied. The algorithm also allows overcorrection. If the robot moves past the setpoint distance, it will back up. Depending on what you are trying to do, you will need to play around with various values for the gain variable.
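For readers who want to see the same idea in text-based code, here is a rough equivalent of the proportional loop written with the python-ev3dev2 bindings (again an assumption; the book uses the graphical blocks, and the drive motors are assumed to be on ports B and C). The power applied to the motors is simply the gain multiplied by the error:

#!/usr/bin/env python3
# proportional_stop.py: proportional distance controller, a sketch of
# the Slow Down loop described above (python-ev3dev2, motors on B/C).
from ev3dev2.motor import MoveSteering, OUTPUT_B, OUTPUT_C, SpeedPercent
from ev3dev2.sensor import INPUT_4
from ev3dev2.sensor.lego import UltrasonicSensor

SETPOINT = 20.0  # desired distance from the wall, in cm
GAIN = 3.0       # proportional gain; tune this value

steer = MoveSteering(OUTPUT_B, OUTPUT_C)
us = UltrasonicSensor(INPUT_4)

while True:
    error = us.distance_centimeters - SETPOINT
    if abs(error) < 0.5:
        steer.off()
        break
    # Power is proportional to the error; a negative error (too close)
    # makes the robot back up, away from the wall
    power = max(-100.0, min(100.0, GAIN * error))
    steer.on(steering=0, speed=SpeedPercent(power))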
If the gain is too large, you will overshoot your goal and oscillate around it. If your gain is too small, you will never reach your goal. The response time of the microprocessor also affects the efficiency of the algorithm. You can experiment by inserting a Wait block into the loop and see how this affects the behavior of the robot. If we are merely using the distance sensor to approach a stationary object, then the proportional algorithm will suffice. However, if you were trying to maintain a given distance from a moving object (such as another robot), you might need a more complicated algorithm such as a Proportional Integral Derivative (PID) controller. Next we will build a line follower using the Color Sensor, which will use a PID controller. Line following using the Color Sensor When we are using the Color Sensor in Reflected Light Intensity mode, the sensor emits light and the robot measures the intensity of the reflected light. The brightness of the red LED in the sensor is a constant, but the intensity of the reflection will depend on the reflectivity of the surface, the angle of the sensor relative to the surface, and the distance of the sensor from the surface. If you shine the sensor at a surface, you will notice that a circle of light is generated. As you change the height of the sensor, the diameter of this circle will change because the light emitted from the LED diverges in a cone. As you increase the height, the size of the circle gets larger and the reflected intensity gets smaller. You might think you want the sensor to be as close as possible to the surface. Because there is a finite distance between the LED and the photo diode (which collects the light) of about 5.5 mm, it puts a constraint on the minimum diameter of your circle of light. Ideally, you want the circle of light to have a diameter of about 11 mm, which means placing the sensor about half of a centimeter above the tracking surface. For the caster-bot, you will need the sensor, an axle, two bushings, two long pins, a 5-mod beam, and two axle-pin connectors, as you can see in the following screenshot: You can assemble the sensor attachment in two steps. The sensor attachment settles into the holes in the caster attachment itself as you can see in the following screenshot. This placement is ideal as it allows the caster to do the steering while you do your line tracking. You can build the Color Sensor attachment in four steps. The Color Sensor attachment for the skid-bot will be the most complicated of our designs because we want the sensor to be in front of the robot and the skid is quite long. Again, we will need the pins, axles, bushings, and axle-pin connectors seen in the following screenshot: The Color Sensor attachment will connect directly to the EV3 brick. As you can see in the following screenshot, the attachment will be inserted from below the brick: Next I will describe the attachment for the tread-bot from the Educational kit. Because the tread-bot is slightly higher off the ground, we need to use some pieces such as the thin 1 x 4 mod lift arm that is a half mod in height. This extra millimeter in height can make a huge difference in the signal strength. The pins have trouble gripping the thin lift arm, so I like to use the pins with stop bushings to prevent the lift arm from falling off. 
The Light Sensor attachment is once again inserted into the underside of the EV3 brick, as you can see in the following screenshot:
The simplest of our Light Sensor attachments will be the one for the tread-bot from the Home Edition, and you can build this in one step. Similarly, it attaches to the underside of the EV3 brick.

Summary
In this article, we explored advanced methods of navigation. We used both the Ultrasonic Sensor and the Infrared Sensor to measure distance with a proportional algorithm.
Resources for Article:
Further resources on this subject:
eJOS – Unleashing EV3 [article]
Proportional line follower (Advanced) [article]
Making the Unit Very Mobile - Controlling Legged Movement [article]

Calling your fellow agents

Packt
04 Feb 2015
12 min read
In this article by Stefan Sjogelid, author of the book Raspberry Pi for Secret Agents Second Edition, we will set up SIP Witch, add softphones and connect them together, and then run a softphone on the Pi itself. When you're out in the field and need to call in a favor from a fellow agent or report back to HQ, you don't want to depend on the public phone network if you can avoid it. Landlines and cell phones alike can be tapped by all sorts of shady characters, and to add insult to injury, you have to pay good money for this service. We can do better. Welcome to the wonderful world of Voice over IP (VoIP). VoIP is a blanket term for any technology capable of delivering speech between two end users over IP networks. There are plenty of services and protocols out there that try to meet this demand, most of which force you to connect through a central server that you don't own or control. We're going to turn the Pi into the central server of our very own phone network. To aid us with this task, we'll deploy GNU SIP Witch—a peer-to-peer VoIP server that uses the Session Initiation Protocol (SIP) to route calls between phones. While there are many excellent VoIP servers available (Asterisk, FreeSWITCH, Yate, and so on), SIP Witch has the advantage of being very lightweight on the Pi because its only concern is connecting phones and not much else.
(For more resources related to this topic, see here.)

Setting up SIP Witch
Once we have the SIP server up and running, we'll be adding one or more software phones, or softphones. It's assumed that the server and phones will all be on the same network. Let's get started!
1. Install SIP Witch using the following command:
pi@raspberrypi ~ $ sudo apt-get install sipwitch
2. Just as the output of the previous command says, we have to define PLUGINS in /etc/default/sipwitch before running SIP Witch. Let's open it up for editing:
pi@raspberrypi ~ $ sudo nano /etc/default/sipwitch
3. Find the line that reads #PLUGINS="zeroconf scripting subscriber forward" and remove the # character to uncomment the line. This directive tells SIP Witch that we want the standard plugins to be loaded.
4. Next, we'll have a look at the main SIP Witch configuration file:
pi@raspberrypi ~ $ sudo nano /etc/sipwitch.conf
Note how some blocks of text are between <!-- and --> tags. These are comments in XML documents and are ignored by SIP Witch. Whatever changes you want to make, ensure they go outside of those tags.
5. Now we're going to add a few softphone user accounts. It's up to you how many phones you'd like on your system, but each account needs a username, an extension (short phone number), and a password. Find the <provision> tag, make a new line, and add your users:
<user id="phone1">
<extension>201</extension>
<secret>SecretSauce201</secret>
<display>Agent 201</display>
</user>
<user id="phone2">
<extension>202</extension>
<secret>SecretSauce202</secret>
<display>Agent 202</display>
</user>
The user ID will be used as a user/login name later from the softphones. In this default configuration, the extensions can be any number between 201 and 299. The secret is the password that will go together with the username on the softphones. We will look into a better way of storing passwords later in this article. Finally, the display string defines an identity to present to other phones when calling.
6. One more thing that we need to configure is how SIP Witch should treat local names. This makes it possible to call a phone by user ID in addition to the extension. Find the <stack> tag, make a new line, and add the following directive, but replace [IP address] with the IP address of your Pi:
<localnames>[IP address]</localnames>
Those are all the changes we need to make to the configuration at the moment.
Basic SIP Witch configuration for two phones
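If you plan to provision more than a couple of agents, typing this XML by hand gets old quickly. The following is a small, hypothetical helper script that prints ready-to-paste <user> entries with randomly generated secrets; random credentials are much harder to guess than a shared factory default:

#!/usr/bin/env python3
# make_sip_users.py: print SIP Witch <user> entries with random
# secrets, ready to paste inside the <provision> tag (a sketch).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def user_entry(number):
    # Build one <user> block for extension 200 + number
    extension = 200 + number
    secret = "".join(secrets.choice(ALPHABET) for _ in range(16))
    return ('<user id="phone%d">\n'
            '<extension>%d</extension>\n'
            '<secret>%s</secret>\n'
            '<display>Agent %d</display>\n'
            '</user>' % (number, extension, secret, extension))

for n in range(1, 6):  # phone1 to phone5, extensions 201 to 205
    print(user_entry(n))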
With our configuration in place, let's start up the SIP Witch service:
pi@raspberrypi ~ $ sudo service sipwitch start
The SIP Witch server runs in the background and only outputs to a log file, viewable with this command:
pi@raspberrypi ~ $ sudo cat /var/log/sipwitch.log
Now we can use the sipwitch command to interact with the running service. Type sipwitch for a list of all possible commands. Here's a short list of particularly handy ones:
- sudo sipwitch dump: Shows how the SIP Witch server is currently configured
- sudo sipwitch registry: Lists all currently registered softphones
- sudo sipwitch calls: Lists active calls
- sudo sipwitch message [extension] "[text]": Sends a text message from the server to an extension; perfect for sending status updates from the Pi through scripting

Connecting the softphones
Running your own telecommunications service is kind of boring without actual phones to make use of it. Fortunately, there are softphone applications available for most common electronic devices out there. The configuration of these phones will be pretty much identical no matter which platform they're running on. This is the basic information that will always need to be specified when configuring your softphone application:
- User / Login name: phone1 or phone2 in our example configuration
- Password / Authentication: The user's secret in our configuration
- Server / Host name / Domain: The IP address of your Pi
Once a softphone is successfully registered with the SIP Witch server, you should be able to see that phone listed using the sudo sipwitch registry command. What follows is a list of verified, decent softphones that will get the job done.

Windows (MicroSIP)
MicroSIP is an open source softphone that also supports video calls. Visit http://www.microsip.org/downloads to obtain and install the latest version (MicroSIP-3.8.1.exe at the time of writing).
Configuring the MicroSIP softphone for Windows
Right-click on either the status bar in the main application window or the system tray icon to bring up the menu that lets you access the Account settings.

Mac OS X (Telephone)
Telephone is a basic open source softphone that is easily installed through the Mac App Store.
Configuring the Telephone softphone for Mac OS X

Linux (SFLphone)
SFLphone is an open source softphone with packages available for all major distributions and client interfaces for both GNOME and KDE. Use your distribution's package manager to find and install the application.
Configuring the SFLphone GNOME client in Ubuntu

Android (CSipSimple)
CSipSimple is an excellent open source softphone available from the Google Play store. When adding your account, use the basic generic wizard.
Configuring the CSipSimple softphone on Android

iPhone/iPad (Linphone)
Linphone is an open source softphone that is easily installed through the iPhone App Store. Select I have already a SIP-account to go to the setup assistant.
Configuring Linphone on the iPhone

Running a softphone on the Pi
It's always good to be able to reach your agents directly from HQ, that is, the Pi itself. Proving once again that anything can be done from the command line, we're going to install a softphone called Linphone that will make good use of your USB microphone.
This new softphone obviously needs a user ID and password just like the others. We will take this opportunity to look at a better way of storing passwords in SIP Witch. Encrypting SIP Witch passwords Type sudo sipwitch dump to see how SIP Witch is currently configured. Find the accounts: section and note how there's already a user ID named pi with extension 200. This is the result of a SIP Witch feature that automatically assigns an extension number to certain Raspbian user accounts. You may also have noticed that the display string for the pi user looks empty. We can easily fix that by filling in the full name field for the Raspbian pi user account with the following command: pi@raspberrypi ~ $ sudo chfn -f "Agent HQ" pi Now restart the SIP Witch server with sudo service sipwitch restart and verify with sudo sipwitch dump that the display string has changed. So how do we set the password for this automatically added pi user? For the other accounts, we specified the password in clear text inside <secret> tags in /etc/sipwitch.conf. This is not the best solution from a security perspective if your Pi would happen to fall into the wrong hands. Therefore, SIP Witch supports specifying passwords in encrypted digest form. Use the following command to create an encrypted password for the pi user: pi@raspberrypi ~ $ sudo sippasswd pi We can then view the database of SIP passwords that SIP Witch knows about: pi@raspberrypi ~ $ sudo cat /var/lib/sipwitch/digests.db Now you can add digest passwords for your other SIP users as well and then delete all <secret> lines from /etc/sipwitch.conf to be completely free of clear text. Setting up Linphone With our pi user account up and ready to go, let's proceed to set up Linphone: Linphone does actually have a graphical user interface, but we'll specify that we want the command-line only client: pi@raspberrypi ~ $ sudo apt-get install linphone-nogtk Now we fire up the Linphone command-line client: pi@raspberrypi ~ $ linphonec You will immediately receive a warning that reads: Warning: Could not start udp transport on port 5060, maybe this port is already used. That is, in fact, exactly what is happening. The standard communication channel for the SIP protocol is UDP port 5060, and it's already in use by our SIP Witch server. Let's tell Linphone to use port 5062 with this command: linphonec> ports sip 5062 Next we'll want to set up our microphone. Use these three commands to list, show, and select what audio device to use for phone calls: linphonec> soundcard list linphonec> soundcard show linphonec> soundcard use [number] For the softphone to perform reasonably well on the Pi, we'll want to make adjustments to the list of codecs that Linphone will try to use. The job of a codec is to compress audio as much as possible while retaining high quality. This is a very CPU-intensive process, which is why we want to use the codec with the least amount of CPU load on the Pi, namely, PCMU or PCMA. Use the following command to list all currently supported codecs: linphonec> codec list Now use this command to disable all codecs that are not PCMU or PCMA: linphonec> codec disable [number] It's time to register our softphone to the SIP Witch server. Use the following command but replace [IP address] with the IP address of your Pi and [password] with the SIP password you set earlier for the pi user: linphonec> register sip:pi@[IP address] sip:[IP address] [password] That's all you need to start calling your fellow agents from the Pi itself. 
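Remember the sipwitch message command from the earlier table? Since it works from the shell, HQ can push scripted status updates to any registered softphone. Here is a minimal sketch (the extension and message text are just examples) that a cron job or a sensor script could call:

#!/usr/bin/env python3
# hq_message.py: push a status update from the Pi to an extension
# through the sipwitch CLI (a sketch; requires sudo privileges).
import subprocess
import sys

def send_status(extension, text):
    # Deliver a text message to a registered SIP Witch extension
    result = subprocess.run(
        ["sudo", "sipwitch", "message", str(extension), text],
        capture_output=True, text=True)
    if result.returncode != 0:
        print("delivery failed:", result.stderr.strip())
    return result.returncode

if __name__ == "__main__":
    sys.exit(send_status(201, "HQ status: all systems nominal"))

With that out of the way, let's get back to Linphone itself.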
Type help to get a list of all commands that Linphone accepts. The basic commands are call [user id] to call someone, answer to pick up incoming calls and quit to exit Linphone. All the settings that you've made will be saved to ~/.linphonerc and loaded the next time you start linphonec. Playing files with Linphone Now that you know the Linphone basics, let's explore some interesting features not offered by most other softphones. At any time (except during a call), you can switch Linphone into file mode, which lets us experiment with alternative audio sources. Use this command to enable file mode: linphonec> soundcard use files Do you remember eSpeak from earlier in this chapter? While you rest your throat, eSpeak can provide its soothing voice to carry out entire conversations with your agents. If you haven't already got it, install eSpeak first: pi@raspberrypi ~ $ sudo apt-get install espeak Now we tell Linphone what to say next: linphonec> speak english Greetings! I'm a Linphone, obviously. This sentence will be spoken as soon as there's an established call. So you can either make an outgoing call or answer an incoming call to start the conversation, after which you're free to continue the conversation in Italian: linphonec> speak italian Buongiorno! Mi chiamo Enzo Gorlami. Should you want a message to play automatically when someone calls, just toggle auto answer: linphonec> autoanswer enable How about playing a pre-recorded message or some nice grooves? If you have a WAV or MP3 file that you'd like to play over the phone, it has to be converted to a suitable format first. A simple SoX command will do the trick: pi@raspberrypi ~ $ sox "original file.mp3" -c 1 -r 48000 playme.wav Now we can tell Linphone to play the file: linphonec> play playme.wav Finally, you can also record a call to file. Note that only the remote part of the conversation can be recorded, which makes this feature more suitable for leaving messages and such. Use the following command to record: linphonec> record message.wav Summary In this article, we set up our very own phone network using SIP Witch and connected softphones running on a wide variety of platforms including the Pi itself. Resources for Article: Further resources on this subject: Our First Project – A Basic Thermometer [article] Testing Your Speed [article] Creating a 3D world to roam in [article]

Security and Interoperability

Packt
03 Feb 2015
28 min read
In this article by Peter Waher, author of the book Learning Internet of Things, we will focus on security and interoperability, and on the issues we need to address while designing the overall architecture of the Internet of Things (IoT) to avoid many of the unnecessary problems that might otherwise arise and to minimize risk. You will learn the following:
- Risks with IoT
- Modes of attacking a system and some countermeasures
- The importance of interoperability in IoT
(For more resources related to this topic, see here.)

Understanding the risks
There are many solutions and products marketed today under the label IoT that lack basic security architectures. It is very easy for a knowledgeable person to take control of devices for malicious purposes. Not only are devices at home at risk; cars, trains, airports, stores, ships, logistics applications, building automation, utility metering applications, industrial automation applications, health services, and so on, are also at risk because of the lack of security measures in their underlying architecture. It has gone so far that many western countries have identified the lack of security measures in automation applications as a risk to national security, and rightly so. It is just a matter of time before somebody is literally killed as a result of an attack by a hacker on some vulnerable equipment connected to the Internet. And what are the economic consequences for a company that rolls out a product for use on the Internet that turns out to be vulnerable to well-known attacks? How has it come to this? After all the trouble Internet companies and applications have experienced during the rollout of the first two generations of the Web, are we repeating the same mistakes with IoT?

Reinventing the wheel, but an inverted one
One reason for what we discussed in the previous section might be the dissonance between management and engineers. While management knows how to manage known risks, they don't know how to measure them in the field of IoT and computer communication. This makes them incapable of understanding the consequences of architectural decisions made by their engineers. The engineers, in turn, might not be interested in focusing on risks, but on functionality, which is the fun part. Another reason might be that the generation of engineers who tackle IoT is not the same type of engineers who tackled application development on the Internet. Electronics engineers now resolve many problems already solved by computer science engineers decades earlier. Engineers working on machine-to-machine (M2M) communication paradigms, such as industrial automation, might have considered the problem solved when they discovered that machines could talk to each other over the Internet, that is, when the message-exchanging problem was solved. This simply relabels their previous M2M solutions as IoT solutions because the transport now occurs over the IP protocol. But, in the realm of the Internet, this is when the problems start. Transport is just one of the many problems that need to be solved. The third reason is that when engineers actually reuse solutions and previous experience, they don't really fit well in many cases. The old communication patterns designed for web applications on the Internet are not applicable to IoT. So, even if the wheel in many cases is reinvented, it's not the same wheel.
In previous paradigms, publishers were relatively few centralized high-value entities that reside on the Internet, whereas consumers were many distributed low-value entities, safely situated behind firewalls and well protected by antivirus software and operating systems that automatically update themselves. But in IoT, it might be the other way around: publishers (sensors) are distributed, very low-value entities that reside behind firewalls, and consumers (server applications) might be high-value centralized entities residing on the Internet. It can also be the case that both the consumer and the publisher are distributed, low-value entities that reside behind the same or different firewalls. They are not protected by antivirus software, and they do not auto-update themselves regularly as new threats are discovered and countermeasures added. These firewalls might be installed and then expected to work for 10 years with no modification or update being made. The architectural solutions and security patterns developed for web applications do not solve these cases well.

Knowing your neighbor
When you decide to move into a new neighborhood, it might be a good idea to get to know your neighbors first. It's the same when you move an M2M application to IoT. As soon as you connect the cable, you have billions of neighbors around the world, all with access to your device. What kind of neighbors are they? Even though there are a lot of nice and ignorant neighbors on the Internet, you also have a lot of criminals, con artists, perverts, hackers, trolls, drug dealers, drug addicts, rapists, pedophiles, burglars, politicians, corrupt police, curious government agencies, murderers, demented people, agents from hostile countries, disgruntled ex-employees, adolescents with a strange sense of humor, and so on. Would you like such people to have access to your things or access to the things that belong to your children? If the answer is no (as it should be), then you must take security into account from the start of any development project aimed at IoT. Remember that the Internet is the foulest cesspit there is on this planet. When you move from the M2M way of thinking to IoT, you move from a nice, security-gated community to the roughest neighborhood in the world. Would you go unprotected or unprepared into such an area? IoT is not the same as M2M communication in a secure and controlled network. For an application to work, it needs to work for some time, not just in the laboratory or just after installation, hoping that nobody finds out about the system. It is not sufficient to just get machines to talk with each other over the Internet.

Modes of attack
To write an exhaustive list of the different modes of attack you can expect would require a book by itself. Instead, just a brief introduction to some of the most common forms of attack is provided here. It is important to keep these methods in mind when designing the communication architecture to use for IoT applications.

Denial of Service
A Denial of Service (DoS) or Distributed Denial of Service (DDoS) attack is normally used to make a service on the Internet crash or become unresponsive, and in some cases, behave in a way that can be exploited. The attack consists of making repetitive requests to a server until its resources get exhausted. In a distributed version, the requests are made by many clients at the same time, which obviously increases the load on the target. It is often used for blackmailing or political purposes.
However, while the attack gets more effective and difficult to defend against when the attack is distributed and the target centralized, it gets less effective if the solution itself is distributed. To guard against this form of attack, you need to build decentralized solutions where possible. In decentralized solutions, each target's worth is less, making it less interesting to attack.

Guessing the credentials
One way to get access to a system is to impersonate a client in the system by trying to guess the client's credentials. To make this type of attack less effective, make sure each client and each device has a long and unique, perhaps randomly generated, set of credentials. Never use preset user credentials that are the same for many clients or devices, or factory default credentials that are easy to reset. Furthermore, set a limit to the number of authentication attempts per time unit permitted by the system; also, log an event whenever this limit is reached, recording where the attempts came from and which credentials were used. This makes it possible for operators to detect systematic attempts to enter the system.

Getting access to stored credentials
One common way to illicitly enter a system is when user credentials are found somewhere else and reused. Often, people reuse credentials in different systems. There are various ways to avoid this risk. One is to make sure that credentials are not reused in different devices or across different services and applications. Another is to randomize credentials, lessening the desire to reuse memorized credentials. A third way is to never store actual credentials centrally, even encrypted if possible, and instead store hashed values of these credentials. This is often possible since authentication methods use hash values of credentials in their computations. Furthermore, these hashes should be unique to the current installation. Even though some hashing functions are vulnerable in such a way that a new string can be found that generates the same hash value, the probability that this string is equal to the original credentials is minuscule. And if the hash is computed uniquely for each installation, the probability that this string can be reused somewhere else is even more remote.

Man in the middle
Another way to gain access to a system is to try and impersonate a server component in a system instead of a client. This is often referred to as a Man in the middle (MITM) attack. The reason for the middle part is that the attacker often does not know how to act in the server role and simply forwards the messages between the real client and the server. In this process, the attacker gains access to confidential information within the messages, such as client credentials, even if the communication is encrypted. The attacker might even try to modify messages for their own purposes. To avoid this type of attack, it's important for all clients (not just a few) to always validate the identity of the server they connect to. If it is a high-value entity, it is often identified using a certificate. This certificate can be used both to verify the domain of the server and to encrypt the communication. Make sure this validation is performed correctly, and do not accept a connection that is invalid or where the certificate has been revoked, is self-signed, or has expired. Another thing to remember is to never use an unsecure authentication method when the client authenticates itself with the server.
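Returning for a moment to the advice on stored credentials: the hashing idea is short enough to sketch in a few lines of Python. The salt handling and the PBKDF2 parameters below are illustrative assumptions rather than a recipe from this article; the point is that only the salt and the hash are stored, never the password itself:

#!/usr/bin/env python3
# credential_store.py: store and verify salted credential hashes
# (an illustrative sketch; tune the parameters for real deployments).
import hashlib
import hmac
import os

INSTALL_SALT = os.urandom(16)  # unique per installation, generated once

def hash_credential(password):
    # Only this derived value is stored, never the password
    return hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), INSTALL_SALT, 100000)

def verify_credential(password, stored_hash):
    # Constant-time comparison against the stored hash
    return hmac.compare_digest(hash_credential(password), stored_hash)

stored = hash_credential("a-long-random-device-secret")
print(verify_credential("a-long-random-device-secret", stored))  # True
print(verify_credential("wrong-guess", stored))                  # False

Hashing only protects what is stored; the authentication exchange itself still needs a secure method, which brings us back to the server's behavior during connection.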
If a server has been compromised, it might try to fool clients into using a less secure authentication method when they connect. By doing so, they can extract the client credentials and reuse them somewhere else. By using a secure authentication method, the server, even if compromised, will not be able to replay the authentication again or use it somewhere else. The communication is valid only once. Sniffing network communication If communication is not encrypted, everybody with access to the communication stream can read the messages using simple sniffing applications, such as Wireshark. If the communication is point-to-point, this means the communication can be heard by any application on the sending machine, the receiving machine, or any of the bridges or routers in between. If a simple hub is used instead of a switch somewhere, everybody on that network will also be able to eavesdrop. If the communication is performed using multicast messaging service, as can be done in UPnP and CoAP, anybody within the range of the Time to live (TTL) parameter (maximum number of router hops) can eavesdrop. Remember to always use encryption if sensitive data is communicated. If data is private, encryption should still be used, even if the data might not be sensitive at first glance. A burglar can know if you're at home by simply monitoring temperature sensors, water flow meters, electricity meters, or light switches at your home. Small variations in temperature alert to the presence of human beings. Change in the consumption of electrical energy shows whether somebody is cooking food or watching television. The flow of water shows whether somebody is drinking water, flushing a toilet, or taking a shower. No flow of water or a relatively regular consumption of electrical energy tells the burglar that nobody is at home. Light switches can also be used to detect presence, even though there are applications today that simulate somebody being home by switching the lights on and off. If you haven't done so already, make sure to download a sniffer to get a feel of what you can and cannot see by sniffing the network traffic. Wireshark can be downloaded from https://www.wireshark.org/download.html. Port scanning and web crawling Port scanning is a method where you systematically test a range of ports across a range of IP addresses to see which ports are open and serviced by applications. This method can be combined with different tests to see the applications that might be behind these ports. If HTTP servers are found, standard page names and web-crawling techniques can be used to try to figure out which web resources lie behind each HTTP server. CoAP is even simpler since devices often publish well-known resources. Using such simple brute-force methods, it is relatively easy to find (and later exploit) anything available on the Internet that is not secured. To avoid any private resources being published unknowingly, make sure to close all the incoming ports in any firewalls you use. Don't use protocols that require incoming connections. Instead, use protocols that create the connections from inside the firewall. Any resources published on the Internet should be authenticated so that any automatic attempt to get access to them fails. Always remember that information that might seem trivial to an individual might be very interesting if collected en masse. 
This information might be coveted not only by teenage pranksters but by public relations and marketing agencies, burglars, and government agencies (some would say this is a repetition).

Search features and wildcards
Don't make the mistake of thinking it's difficult to find the identities of devices published on the Internet. Often, it's the reverse. For devices that use multicast communication, such as those using UPnP and CoAP, anybody can listen in and see who sends the messages. For devices that use single-cast communication, such as those using HTTP or CoAP, port-scanning techniques can be used. For devices that are protected by firewalls and use message brokers to protect against incoming attacks, such as those that use XMPP and MQTT, search features or wildcards can be used to find the identities of devices managed by the broker, and in the case of MQTT, even what they communicate. You should always assume that the identity of all devices can be found and that there's an interest in exploiting the device. For this reason, it's very important that each device authenticates any requests made to it if possible. Some protocols help you more with this than others, while others make such authentication impossible. XMPP only permits messages from accepted friends. The only thing the device needs to worry about is which friend requests to accept. This can either be configured by somebody else with access to the account or by using a provisioning server if the device cannot make such decisions by itself. The device does not need to worry about client authentication, as this is done by the brokers themselves, and XMPP brokers always propagate the authenticated identities of everybody who sends them messages. MQTT, on the other hand, resides at the other end of the spectrum. Here, devices cannot make any decision about who sees the published data or who makes a request, since identities are stripped away by the protocol. The only way to control who gets access to the data is by building a proprietary end-to-end encryption layer on top of the MQTT protocol, thereby limiting interoperability. In between the two reside protocols such as HTTP and CoAP that support some level of local client authentication but lack a good distributed identity and authentication mechanism. This is vital for IoT, even though the problem can be partially solved in local intranets.

Breaking ciphers
Many believe that by using encryption, data is secure. This is not the case, as discussed previously, since the encryption is often only done between connected parties and not between the end users of the data (so-called end-to-end encryption). At most, such encryption safeguards against eavesdropping to some extent. But even such encryption can be broken, partially or wholly, with some effort. Ciphers can be broken using known vulnerabilities in code where attackers exploit program implementations rather than the underlying algorithm of the cipher. This has been the method used in the latest spectacular breaches in code based on the OpenSSL library. To protect yourselves from such attacks, you need to be able to update code in devices remotely, which is not always possible. Other methods use irregularities in how the cipher works to figure out, partly or wholly, what is being communicated over the encrypted channel. This sometimes requires a considerable amount of effort.
To safeguard against such attacks, it's important to realize that an attacker does not spend more effort into an attack than what is expected to be gained by the attack. By storing massive amounts of sensitive data centrally or controlling massive amounts of devices from one point, you increase the value of the target, increasing the interest of attacking it. On the other hand, by decentralizing storage and control logic, the interest in attacking a single target decreases since the value of each entity is comparatively lower. Decentralized architecture is an important tool to both mitigate the effects of attacks and decrease the interest in attacking a target. However, by increasing the number of participants, the number of actual attacks can increase, but the effort that can be invested behind each attack when there are many targets also decreases, making it easier to defend each one of the attacks using standard techniques. Tools for achieving security There are a number of tools that architects and developers can use to protect against malicious use of the system. An exhaustive discussion would fill a smaller library. Here, we will mention just a few techniques and how they not only affect security but also interoperability. Virtual Private Networks A method that is often used to protect unsecured solutions on the Internet is to protect them using Virtual Private Networks (VPNs). Often, traditional M2M solutions working well in local intranets need to expand across the Internet. One way to achieve this is to create such VPNs that allow the devices to believe they are in a local intranet, even though communication is transported across the Internet. Even though transport is done over the Internet, it's difficult to see this as a true IoT application. It's rather a M2M solution using the Internet as the mode of transport. Because telephone operators use the Internet to transport long distance calls, it doesn't make it Voice over IP (VoIP). Using VPNs might protect the solution, but it completely eliminates the possibility to interoperate with others on the Internet, something that is seen as the biggest advantage of using the IoT technology. X.509 certificates and encryption We've mentioned the use of certificates to validate the identity of high-value entities on the Internet. Certificates allow you to validate not only the identity, but also to check whether the certificate has been revoked or any of the issuers of the certificate have had their certificates revoked, which might be the case if a certificate has been compromised. Certificates also provide a Public Key Infrastructure (PKI) architecture that handles encryption. Each certificate has a public and private part. The public part of the certificate can be freely distributed and is used to encrypt data, whereas only the holder of the private part of the certificate can decrypt the data. Using certificates incurs a cost in the production or installation of a device or item. They also have a limited life span, so they need to be given either a long lifespan or updated remotely during the life span of the device. Certificates also require a scalable infrastructure for validating them. For these reasons, it's difficult to see that certificates will be used by other than high-value entities that are easy to administer in a network. 
It's difficult to see a cost-effective, yet secure and meaningful, implementation of validating certificates in low-value devices such as lamps, temperature sensors, and so on, even though it's theoretically possible to do so. Authentication of identities Authentication is the process of validating whether the identity provided is actually correct or not. Authenticating a server might be as simple as validating a domain certificate provided by the server, making sure it has not been revoked and that it corresponds to the domain name used to connect to the server. Authenticating a client might be more involved, as it has to authenticate the credentials provided by the client. Normally, this can be done in many different ways. It is vital for developers and architects to understand the available authentication methods and how they work to be able to assess the level of security used by the systems they develop. Some protocols, such as HTTP and XMPP, use the standardized Simple Authentication and Security Layer (SASL) to publish an extensible set of authentication methods that the client can choose from. This is good since it allows for new authentication methods to be added. But it also provides a weakness: clients can be tricked into choosing an unsecure authentication mechanism, thus unwittingly revealing their user credentials to an impostor. Make sure clients do not use unsecured or obsolete methods, such as PLAIN, BASIC, MD5-CRAM, MD5-DIGEST, and so on, even if they are the only options available. Instead, use secure methods such as SCRAM-SHA-1 or SCRAM-SHA-1-PLUS, or if client certificates are used, EXTERNAL or no method at all. If you're using an unsecured method anyway, make sure to log it to the event log as a warning, making it possible to detect impostors or at least warn operators that unsecure methods are being used. Other protocols do not use secure authentication at all. MQTT, for instance, sends user credentials in clear text (corresponding to PLAIN), making it a requirement to use encryption to hide user credentials from eavesdroppers or client-side certificates or pre-shared keys for authentication. Other protocols do not have a standardized way of performing authentication. In CoAP, for instance, such authentication is built on top of the protocol as security options. The lack of such options in the standard affects interoperability negatively. Usernames and passwords A common method to provide user credentials during authentication is by providing a simple username and password to the server. This is a very human concept. Some solutions use the concept of a pre-shared key (PSK) instead, as it is more applicable to machines, conceptually at least. If you're using usernames and passwords, do not reuse them between devices, just because it is simple. One way to generate secure, difficult-to-guess usernames and passwords is to randomly create them. In this way, they correspond more to pre-shared keys. One problem in using randomly created user credentials is how to administer them. Both the server and the client need to be aware of this information. The identity must also be distributed among the entities that are to communicate with the device. Here, the device creates its own random identity and creates the corresponding account in the XMPP server in a secure manner. There is no need for a common factory default setting. It then reports its identity to a thing registry or provisioning server where the owner can claim it and learn the newly created identity. 
This method never compromises the credentials and does not affect the cost of production negatively. Furthermore, passwords should never be stored in clear text if it can be avoided. This is especially important on servers where many passwords are stored. Instead, hashes of the passwords should be stored. Most modern authentication algorithms support the use of password hashes. Storing hashes minimizes the risk of unwanted generation of original passwords for attempted reuse in other systems. Using message brokers and provisioning servers Using message brokers can greatly enhance security in an IoT application and lower the complexity of implementation when it comes to authentication, as long as message brokers provide authenticated identity information in messages it forwards. In XMPP, all the federated XMPP servers authenticate clients connected to them as well as the federated servers themselves when they intercommunicate to transport messages between domains. This relieves clients from the burden of having to authenticate each entity in trying to communicate with it since they all have been securely authenticated. It's sufficient to manage security on an identity level. Even this step can be relieved further by the use of provisioning. Unfortunately, not all protocols using message brokers provide this added security since they do not provide information about the sender of packets. MQTT is an example of such a protocol. Centralization versus decentralization Comparing centralized and decentralized architectures is like comparing the process of putting all the eggs in the same basket and distributing them in many much smaller baskets. The effect of a breach of security is much smaller in the decentralized case; fewer eggs get smashed when you trip over. Even though there are more baskets, which might increase the risk of an attack, the expected gain of an attack is much smaller. This limits the motivation of performing a costly attack, which in turn makes it simpler to protect it against. When designing IoT architecture, try to consider the following points: Avoid storing data in a central position if possible. Only store the data centrally that is actually needed to bind things together. Distribute logic, data, and workload. Perform work as far out in the network as possible. This makes the solution more scalable, and it utilizes existing resources better. Use linked data to spread data across the Internet, and use standardized grid computation technologies to assemble distributed data (for example, SPARQL) to avoid the need to store and replicate data centrally. Use a federated set of small local brokers instead of trying to get all the devices on the same broker. Not all brokered protocols support federation, for example, XMPP supports it but MQTT does not. Let devices talk directly to each other instead of having a centralized proprietary API to store data or interpret communication between the two. Contemplate the use of cheap small and energy-efficient microcomputers such as the Raspberry Pi in local installations as an alternative to centralized operation and management from a datacenter. The need for interoperability What has made the Internet great is not a series of isolated services, but the ability to coexist, interchange data, and interact with the users. This is important to keep in mind when developing for IoT. Avoid the mistakes made by many operators who failed during the first Internet bubble. You cannot take responsibility for everything in a service. 
The new Internet economy is based on the interaction and cooperation between services and its users. Solves complexity The same must be true with the new IoT. Those companies that believe they can control the entire value chain, from things to services, middleware, administration, operation, apps, and so on, will fail, as the companies in the first Internet bubble failed. Companies that built devices with proprietary protocols, middleware, and mobile phone applications, where you can control your things, will fail. Why? Imagine a future where you have a thousand different things in your apartment from a hundred manufacturers. Would you want to download a hundred smart phone apps to control them? Would you like five different applications just to control your lights at home, just because you have light bulbs from five different manufacturers? An alternative would be to have one app to rule them all. There might be a hundred different such apps available (or more), but you can choose which one to use based on your taste and user feedback. And you can change if you want to. But for this to be possible, things need to be interoperable, meaning they should communicate using a commonly understood language. Reduces cost Interoperability does not only affect simplicity of installation and management, but also the price of solutions. Consider a factory that uses thousands (or hundreds of thousands) of devices to control and automate all processes within. Would you like to be able to buy things cheaply or expensively? Companies that promote proprietary solutions, where you're forced to use their system to control your devices, can force their clients to pay a high price for future devices and maintenance, or the large investment made originally might be lost. Will such a solution be able to survive against competitors who sell interoperable solutions where you can buy devices from multiple manufacturers? Interoperability provides competition, and competition drives down cost and increases functionality and quality. This might be a reason for a company to work against interoperability, as it threatens its current business model. But the alternative might be worse. A competitor, possibly a new one, might provide such a solution, and when that happens, the business model with proprietary solutions is dead anyway. The companies that are quickest in adapting a new paradigm are the ones who would most probably survive a paradigm shift, as the shift from M2M to IoT undoubtedly is. Allows new kinds of services and reuse of devices There are many things you cannot do unless you have an interoperable communication model from the start. Consider a future smart city. Here, new applications and services will be built that will reuse existing devices, which were installed perhaps as part of other systems and services. These applications will deliver new value to the inhabitants of the city without the need of installing new duplicate devices for each service being built. But such multiple use of devices is only possible if the devices communicate in an open and interoperable way. However, care has to be taken at the same time since installing devices in an open environment requires the communication infrastructure to be secure as well. To achieve the goal of building smart cities, it is vitally important to use technologies that allow you to have both a secure communication infrastructure and an interoperable one. 
Combining security and interoperability

As we have seen, there are times when security is contradictory to interoperability. If security is taken to mean exclusivity, it opposes the idea of interoperability, which is by its very nature inclusive. Depending on the choice of communication infrastructure, you might have to use security measures that directly oppose the idea of an interoperable infrastructure, prohibiting third parties from accessing existing devices in a secure fashion. It is important during the architecture design phase, before implementation, to thoroughly investigate which communication technologies are available, what they provide, and what they do not provide. You might think that this is a minor issue, and that you can easily build what is missing on top of the chosen infrastructure. This is not true. All such implementation is by its very nature proprietary, and therefore not interoperable. This might drastically limit your options in the future, which in turn might drastically reduce anyone else's willingness to use your solution. The more a technology includes, in the form of global identity, authentication, authorization, different communication patterns, a common language for the interchange of sensor data, control operations and access privileges, provisioning, and so on, the more interoperable the solution becomes. If the technology at the same time provides a secure infrastructure, you have the possibility to create a solution that is both secure and interoperable without the need to build proprietary or exclusive solutions on top of it.

Summary

In this article, we presented the basic reasons why security and interoperability must be contemplated early on in a project and not added as late patchwork once they are shown to be necessary. Not only does such late addition limit interoperability and the future use of the solution, it also creates solutions that can jeopardize not only yourself, your company, and your customers, but, in the end, even national security. This article also presented some basic modes of attack and some basic defense systems to counter them.

Resources for Article:

Further resources on this subject:
Rich Internet Application (RIA) – Canvas [article]
ExtGWT Rich Internet Application: Crafting UI Real Estate [article]
Sending Data to Google Docs [article]

Beagle Boards

Packt
23 Dec 2014
10 min read
In this article by Hunyue Yau, author of Learning BeagleBone, we will provide a background on the entire family of Beagle boards, with brief highlights of what is unique about each member, such as things that favor one member over another. This article will help you identify the Beagle board that best suits your project. The following topics will be covered here:

What are Beagle boards
How do they relate to other development boards
BeagleBoard Classic
BeagleBoard-xM
BeagleBone White
BeagleBone Black

(For more resources related to this topic, see here.)

The Beagle board family

The Beagle boards are a family of low-cost, open development boards that provide everyday students, developers, and other interested people with access to current mobile processor technology on a path toward developing ideas. Prior to the invention of the Beagle family of boards, the options available to users were primarily limited to either low-computing-power boards, such as the 8-bit microcontroller-based Arduino boards, or dead-end options, such as repurposing existing products. There were other options too, such as compromising on physical size or electrical power consumption by utilizing non-mobile-oriented technology, for example, embedding a small laptop or desktop into a project. The Beagle boards attempt to address these points and more. The Beagle board family provides you with access to the technologies that were originally developed for mobile devices, such as phones and tablets, and lets you use them to develop projects and for educational purposes. By leveraging the same technology for education, students can be less reliant on obsolete technologies. All this access comes affordably. Before the Beagle boards became available, development boards of this class easily exceeded thousands of dollars. In contrast, the initial Beagle board offering was priced at a mere 150 dollars!

The Beagle boards

The Beagle family of boards began in late 2008 with the original Beagle board. The original board has quite a few characteristics common to all members of the Beagle board family. All the current boards are based on an ARM core and can be powered by a single 5V source or, to varying degrees, by a USB port. All boards have a USB port for expansion and provide direct access to the processor I/O for advanced interfacing and expansion. Examples of the processor I/O available for expansion include Serial Peripheral Interface (SPI), I2C, pulse width modulation (PWM), and general-purpose input/output (GPIO). The USB expansion path was introduced at an early stage, providing a cheap way to add features by leveraging existing desktop and laptop accessories. All the boards are designed with the beginner in mind and, as such, are impossible to brick on a software basis. To brick a board is common slang referring to damaging a board beyond recovery, thus turning the board from an embedded development system into something as useful for embedded development as a brick. This doesn't mean that the boards cannot be damaged electrically or physically. For those who are interested, the design and manufacturing material is also available for all the boards. The bill of materials is designed to be available via distribution so that the boards themselves can be customized and manufactured even in small quantities. This allows projects to be manufactured if desired. Do not power up the board on any conductive surface or near conductive materials, such as metal tools or exposed wires.
The board is fully exposed, and doing so can subject your board to electrical damage. The only exception is a proper ESD mat designed for use with electronics. Proper ESD mats are designed to be only conductive enough to discharge static electricity without damaging the circuits. The following sections highlight the specifications of each member, presented in the order they were introduced. They are based on the latest revision of each board. As these boards leverage mobile technology, availability changes and the designs are partly revised to accommodate the available parts. The design information for older versions is available at http://www.beagleboard.org/.

BeagleBoard Classic

The initial member of the Beagle board family is the BeagleBoard Classic (BBC), which features the following specs:

OMAP3530 clocked up to 720 MHz, featuring an ARM Cortex-A8 core along with integrated 3D and video decoding accelerators
256 MB of LPDDR (low-power DDR) memory with 512 MB of integrated (NAND) flash memory on board; older revisions had less memory
USB OTG (switchable between a USB device and a USB host) along with a pure USB high-speed host-only port
A low-level debug port accessible using a common desktop DB-9 adapter
Analog audio in and out
DVI-D video output to connect to a desktop monitor or a digital TV
A full-size SD card interface
A 28-pin general expansion header along with two 20-pin headers for video expansion
1.8V I/O

Only a nominal 5V is available on the expansion connector. Expansion boards should have their own regulator.

At the original release of the BBC in 2008, the OMAP3530 was comparable to the processors of mobile phones of that time. The BBC is the only member to feature a full-size SD card interface. You can see the BeagleBoard Classic in the following image:

BeagleBoard-xM

As an upgrade to the BBC, the BeagleBoard-xM (BBX) was introduced later. It features the following specs:

DM3730 clocked up to 1 GHz, featuring an ARM Cortex-A8 core along with integrated 3D and video decoding accelerators, compared to 720 MHz on the BBC.
512 MB of LPDDR but no onboard flash memory, compared to 256 MB of LPDDR with up to 512 MB of onboard flash memory.
USB OTG (switchable between a USB device and a USB host) along with an onboard hub providing four USB host ports and an onboard USB-connected Ethernet interface. The hub and Ethernet connect to the same port as the only high-speed port of the BBC. The hub allows low-speed devices to work with the BBX.
A low-level debug port accessible with a standard DB-9 serial cable. An adapter is no longer needed.
Analog audio in and out. This is the same analog audio in and out as that of the BBC.
DVI-D video output to connect to a desktop monitor or a digital TV. This is the same DVI-D video output as used in the BBC.
A microSD interface. It replaces the full-size SD interface on the BBC. The difference is mainly the physical size.
A 28-pin expansion interface and two 20-pin video expansion interfaces along with an additional camera interface board. The 28-pin and two 20-pin interfaces are physically and electrically compatible with the BBC.
1.8V I/O.

Only a nominal 5V is available on the expansion connector. Expansion boards should have their own regulator.

The BBX has a faster processor and added capabilities when compared to the BBC. The camera interface is a unique feature of the BBX and provides a direct interface for raw camera sensors.
The 28-pin interface, along with the two 20-pin video interfaces, is electrically and mechanically compatible with the BBC. The mechanical mounting holes were purposely made backward compatible. Beginning with the BBX, boards were shipped with a microSD card containing the Angström Linux distribution. The latest versions of the kernel and bootloader are shared between the BBX and BBC. The software can detect and utilize the features available on each board, as the DM3730 and OMAP3530 processors are internally very similar. You can see the BeagleBoard-xM in the following image:

BeagleBone

To simplify things and to lower the entry cost, the BeagleBone subfamily of boards was introduced. While many concepts in this article apply to the entire Beagle family, this article will focus on this subfamily. All current members of the BeagleBone family can be purchased for less than 100 dollars.

BeagleBone White

The initial member of this subfamily is the BeagleBone White (BBW). This new form factor has a footprint that allows the board itself to be stored inside an Altoids tin. The Altoids tin is conductive and can electrically damage the board if an operational BeagleBone is placed inside it without additional protection. The BBW features the following specs:

AM3358 clocked at up to 720 MHz, featuring an ARM Cortex-A8 core along with a 3D accelerator, an ARM Cortex-M3 for power management, and a unique feature—the Programmable Real-time Unit Subsystem (PRUSS)
256 MB of DDR2 memory
Two USB ports, namely, a dedicated USB host and a dedicated USB device
An onboard JTAG debugger
An onboard USB interface to access the low-level serial interfaces
A 10/100 Mbps Ethernet interface
Two 46-pin expansion interfaces with up to eight channels of analog input
A 10-pin power expansion interface
A microSD interface
3.3V digital I/O
1.8V analog I/O

As with the BBX, the BBW ships with the Angström Linux distribution. You can see the BeagleBone White in the following image:

BeagleBone Black

Intended as a lower-cost version of the BeagleBone, the BeagleBone Black (BBB) features the following specs:

AM3358 clocked at up to 1 GHz, featuring an ARM Cortex-A8 core along with a 3D accelerator, an ARM Cortex-M3 for power management, and a unique feature: the PRUSS. This is an improved revision of the same processor as in the BBW.
512 MB of DDR3 memory compared to 256 MB of DDR2 memory on the BBW.
4 GB of onboard embedded MMC (eMMC) flash memory on the latest version, compared to a complete lack of onboard flash memory on the BBW.
Two USB ports, namely, a dedicated USB host and a dedicated USB device.
A low-level serial interface available as a dedicated 6-pin header.
A 10/100 Mbps Ethernet interface.
Two 46-pin expansion interfaces with up to eight channels of analog input.
A microSD interface.
A micro HDMI interface to connect to a digital monitor or a digital TV. Digital audio is available on the same interface. This is new to the BBB.
3.3V digital I/O.
1.8V analog I/O.

The overall mechanical form factor of the BBB is the same as that of the BBW. However, due to the added features, there are some slight electrical changes in the expansion interface. The power expansion header was removed to make room for the added features. Unlike the other boards, the BBB is shipped with a Linux distribution on the internal flash memory. Early revisions shipped with Angström Linux and later revisions shipped with Debian Linux in an attempt to simplify things for new users.
Unlike the BBW, the BBB does not provide an onboard JTAG debugger or an onboard USB-to-serial converter. Both of these features were provided by a single chip on the BBW and were removed from the BBB for cost reasons. JTAG debugging is possible on the BBB by soldering a connector to the back of the board and using an external debugger. Serial port access on the BBB is provided by a serial header. This article focuses solely on the BeagleBone subfamily (BBW and BBB). The differences between them will be noted where applicable. It should be noted that for more advanced projects, the BBC/BBX should be considered, as they offer additional unique features that are not available on the BBW/BBB. Most concepts learned on the BBB/BBW boards are entirely applicable to the BBC/BBX boards. You can see the BeagleBone Black in the following image:

Summary

In this article, we looked at the Beagle board offerings and a few unique features of each offering. Then, we went through the process of setting up a BeagleBone board and understood the basics of accessing it from a laptop/desktop.

Resources for Article:

Further resources on this subject:
Protecting GPG Keys in BeagleBone [article]
Making the Unit Very Mobile - Controlling Legged Movement [article]
Pulse width modulator [article]

The Arduino Mobile Robot

Packt
17 Dec 2014
8 min read
In this article by Marco Schwartz and Stefan Buttigieg, the authors of Arduino Android Blueprints, we are going to use most of the concepts we have learned to control a mobile robot via an Android app. The robot will have two motors that we can control, and also an ultrasonic sensor at the front so that it can detect obstacles. The robot will also have a BLE chip so that it can receive commands from the Android app. The following will be the major takeaways:

Building a mobile robot based on the Arduino platform
Connecting a BLE module to the Arduino robot

(For more resources related to this topic, see here.)

Configuring the hardware

We are first going to assemble the robot itself, and then see how to connect the Bluetooth module and the ultrasonic sensor. To give you an idea of what you should end up with, the following is a front-view image of the robot when fully assembled:

The following image shows the back of the robot when fully assembled:

The first step is to assemble the robot chassis. To do so, you can watch the DFRobot assembly guide at https://www.youtube.com/watch?v=tKakeyL_8Fg. Then, you need to attach the different Arduino boards and shields to the robot. Use the spacers found in the robot chassis kit to mount the Arduino Uno board first. Then put the Arduino motor shield on top of that. At this point, use the screw header terminals to connect the two DC motors to the motor shield. This is how it should look at this point:

Finally, mount the prototyping shield on top of the motor shield. We are now going to connect the BLE module and the ultrasonic sensor to the Arduino prototyping shield. The following is a schematic diagram showing the connections between the Arduino Uno board (done via the prototyping shield in our case) and the components:

Now perform the following steps:

First, we are going to connect the BLE module. Place the module on the prototyping shield. Connect the power supply of the module as follows: GND goes to the prototyping shield's GND pin, and VIN goes to the prototyping shield's +5V. After that, you need to connect the different wires responsible for the SPI interface: SCK to Arduino pin 13, MISO to Arduino pin 12, and MOSI to Arduino pin 11. Then connect the REQ pin to Arduino pin 10. Finally, connect the RDY pin to Arduino pin 2 and the RST pin to Arduino pin 9.
For the URM37 module, connect the VCC pin of the module to Arduino +5V, GND to GND, and the PWM pin to the Arduino A3 pin. To review the pin order on the URM37 module, you can check the official DFRobot documentation at http://www.dfrobot.com/wiki/index.php?title=URM37_V3.2_Ultrasonic_Sensor_(SKU:SEN0001).

The following is a close-up image of the prototyping shield with the BLE module connected:

Finally, connect the 7.4V battery to the Arduino Uno board's power jack. The battery is simply placed below the Arduino Uno board.

Testing the robot

We are now going to write a sketch to test the different functionalities of the robot, first without using Bluetooth. As the sketch is quite long, we will look at the code piece by piece. Before you proceed, make sure that the battery is always plugged into the robot. The wiring described above is summarized in the short pin-map sketch that follows.
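The following is a minimal summary of the wiring steps above, expressed as Arduino constants; the pin numbers come from the article, but the constant names are illustrative only and are not the ones used in the article's own sketches:

// Pin assignments from the wiring steps above (names are illustrative).
// BLE module (SPI plus control lines):
const int BLE_SCK  = 13; // SPI clock
const int BLE_MISO = 12; // SPI data from the module
const int BLE_MOSI = 11; // SPI data to the module
const int BLE_REQ  = 10; // request line
const int BLE_RDY  = 2;  // ready line (interrupt-capable pin)
const int BLE_RST  = 9;  // reset line
// URM37 ultrasonic sensor:
const int URM37_PWM = A3; // PWM output of the sensor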
Now perform the following steps:

The sketch starts by including the aREST library that we will use to control the robot via serial commands:

#include <aREST.h>

Now we declare which pins the motors are connected to:

int speed_motor1 = 6;
int speed_motor2 = 5;
int direction_motor1 = 7;
int direction_motor2 = 4;

We also declare which pin the ultrasonic sensor is connected to:

int distance_sensor = A3;

Then, we create an instance of the aREST library:

aREST rest = aREST();

To store the distance data measured by the ultrasonic sensor, we declare a distance variable:

int distance;

In the setup() function of the sketch, we first initialize serial communications that we will use to communicate with the robot for this test:

Serial.begin(115200);

We also expose the distance variable to the REST API, so we can access it easily:

rest.variable("distance",&distance);

To control the robot, we are going to declare a whole set of functions that will perform the basic operations: going forward, going backward, turning on itself (left or right), and stopping. We will see the details of these functions in a moment; for now, we just need to expose them to the API:

rest.function("forward",forward);
rest.function("backward",backward);
rest.function("left",left);
rest.function("right",right);
rest.function("stop",stop);

We also give the robot an ID and a name:

rest.set_id("001");
rest.set_name("mobile_robot");

In the loop() function of the sketch, we first measure the distance from the sensor:

distance = measure_distance(distance_sensor);

We then handle the requests using the aREST library:

rest.handle(Serial);

Now, we will look at the functions for controlling the motors. They are all based on a function to control a single motor, where we need to set the motor pins, the speed, and the direction of the motor:

void send_motor_command(int speed_pin, int direction_pin, int pwm, boolean dir)
{
  analogWrite(speed_pin, pwm); // Set PWM control, 0 for stop, and 255 for maximum speed
  digitalWrite(direction_pin, dir); // dir sets the rotation direction of the motor (true or false means forward or reverse)
}

Based on this function, we can now define the different functions to move the robot, such as forward:

int forward(String command) {
  send_motor_command(speed_motor1,direction_motor1,100,1);
  send_motor_command(speed_motor2,direction_motor2,100,1);
  return 1;
}

We also define a backward function, simply inverting the direction of both motors:

int backward(String command) {
  send_motor_command(speed_motor1,direction_motor1,100,0);
  send_motor_command(speed_motor2,direction_motor2,100,0);
  return 1;
}

To make the robot turn left, we simply make the motors rotate in opposite directions:

int left(String command) {
  send_motor_command(speed_motor1,direction_motor1,75,0);
  send_motor_command(speed_motor2,direction_motor2,75,1);
  return 1;
}

We also have a function to stop the robot:

int stop(String command) {
  send_motor_command(speed_motor1,direction_motor1,0,1);
  send_motor_command(speed_motor2,direction_motor2,0,1);
  return 1;
}

There is also a function to make the robot turn right, which is not detailed in the article; a sketch of it follows at the end of this section. We are now going to test the robot. Before you do anything, ensure that the battery is always plugged into the robot. This will ensure that the motors are not trying to draw power from your computer's USB port, which could damage it. Also place some small support under the robot so that the wheels don't touch the ground. This will ensure that you can test all the commands of the robot without the robot moving too far from your computer, as it is still attached via the USB cable.
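As mentioned above, the right function is not detailed in the article. The following is a minimal sketch of what it might look like, assuming it simply mirrors the left function with the two motor directions swapped:

int right(String command) {
  // Mirror of left(): spin the motors in opposite directions,
  // with the direction values swapped relative to left().
  send_motor_command(speed_motor1,direction_motor1,75,1);
  send_motor_command(speed_motor2,direction_motor2,75,0);
  return 1;
}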
Now you can upload the sketch to your Arduino Uno board. Open the serial monitor and type the following:

/forward

This should make both the wheels of the robot turn in the same direction. You can also try the other commands to move the robot to make sure they all work properly. Then, test the ultrasonic distance sensor by typing the following:

/distance

You should get back the distance (in centimeters) in front of the sensor:

{"distance": 24, "id": "001", "name": "mobile_robot", "connected": true}

Try changing the distance by putting your hand in front of the sensor and typing the command again.

Writing the Arduino sketch

Now that we have made sure that the robot is working properly, we can write the final sketch that will receive the commands via Bluetooth. As this sketch shares many similarities with the test sketch, we are only going to see what is added compared to the test sketch. We first need to include more libraries:

#include <SPI.h>
#include "Adafruit_BLE_UART.h"
#include <aREST.h>

We also define which pins the BLE module is connected to:

#define ADAFRUITBLE_REQ 10
#define ADAFRUITBLE_RDY 2 // This should be an interrupt pin; on the Uno, that's pin 2 or 3
#define ADAFRUITBLE_RST 9

We have to create an instance of the BLE module:

Adafruit_BLE_UART BTLEserial = Adafruit_BLE_UART(ADAFRUITBLE_REQ, ADAFRUITBLE_RDY, ADAFRUITBLE_RST);

In the setup() function of the sketch, we initialize the BLE chip:

BTLEserial.begin();

In the loop() function, we check the status of the BLE chip and store it in a variable:

BTLEserial.pollACI();
aci_evt_opcode_t status = BTLEserial.getState();

If we detect that a device is connected to the chip, we handle the incoming request with the aREST library, which allows us to use the same commands as before to control the robot:

if (status == ACI_EVT_CONNECTED) {
  rest.handle(BTLEserial);
}

You can now upload the code to your Arduino board, again making sure that the battery is connected to the Arduino Uno board via the power jack. You can now move on to the development of the Android application to control the robot.

Summary

In this article, we managed to create our very own mobile robot together with a companion Android application that we can use to control the robot. We achieved this step by step by setting up an Arduino-enabled robot and coding the companion Android application. It uses the BLE software and hardware of a physical Android device running Android 4.3 or higher.

Resources for Article:

Further resources on this subject:
Our First Project – A Basic Thermometer [article]
Hardware configuration [article]
Avoiding Obstacles Using Sensors [article]

Overview of Chips

Packt
16 Dec 2014
7 min read
In this article by Olliver M. Schinagl, author of Getting Started with Cubieboard, we will go over various development boards and compare a few popular ones to help you choose a board tailored to your requirements. In the last few years, ARM-based Systems on Chips (SoCs) have become immensely popular. Compared to the regular x86 Intel-based or AMD-based CPUs, they are much more energy efficient and still perform adequately. They also incorporate a lot of peripherals, such as a Graphics Processing Unit (GPU), a Video Accelerator (VPU), an audio controller, various storage controllers, and various buses (I2C and SPI), to name a few. This immensely reduces the number of components required on a board. With the reduction in required components, there are a few obvious advantages, such as a reduction in cost and, consequently, much easier board design. Thus, many companies with electronics engineers are able to design and manufacture these boards cheaply.

(For more resources related to this topic, see here.)

So, there are many boards; does that mean there are also many SoCs? Quite a few actually, but to keep the following list short, only the most popular ones are listed:

Allwinner's A-series
Broadcom's BCM-series
Freescale's i.MX-series
MediaTek's MT-series
Rockchip's RK-series
Samsung's Exynos-series
NVIDIA's Tegra-series
Texas Instruments' AM-series and OMAP-series
Qualcomm's APQ-series and MSM-series

While many of the potential chips are interesting, Allwinner's A-series of SoCs will be the focus of this book. Due to their low price and decent availability, quite a few companies design development boards around these chips and sell them at a low cost. Additionally, the A-series is, presently, the most open source friendly series of chips available. There is a fully open source bootloader, and nearly all the hardware is supported by open source drivers. Among the A-series of chips, there are a few choices. The following is a list of the most common and most interesting devices:

A10: This is the first chip of the A-series and the best supported one, as it has been around the longest. It is able to communicate with the outside world over I2C, SPI, MMC, NAND, digital and analog video out, analog audio out, SPDIF, I2S, Ethernet MAC, USB, SATA, and HDMI. This chip initially targeted everything, such as phones, tablets, set-top boxes, and mini PC sticks. For its GPU, it features the Mali-400.
A10S: This chip followed the A10; it focused mainly on the PC stick market and left out several parts, such as SATA and analog video in/out, and it has no LCD interface. These parts were left out to reduce the cost of the chip, making it interesting for cheap TV sticks.
A13: This chip was introduced more or less simultaneously with the A10S for primary use in tablets. It lacks SATA, Ethernet MAC, and also HDMI, which reduces the chip's cost even more.
A20: This chip was introduced well after the others and is pin-compatible with the A10, which it is intended to replace. As the name hints, the A20 is a dual-core variant of the A10. The ARM cores are slightly different; Cortex-A7 cores are used instead of the A10's Cortex-A8.
A23: This chip was introduced after the A31 and A31S and is reasonably similar to the A31 in its design. It features a dual-core Cortex-A7 design and is intended to replace the A13. It is mainly intended to be used in tablets.
A31: This chip features four Cortex-A7 cores and generally has all the connections that the A10 has.
It is, however, not popular within the community because it features a PowerVR GPU that, until now, has seen no community support at all. Additionally, there are no development boards commonly available for this chip.
A31S: This chip was released slightly after the A31 to solve some issues with the A31. There are no common development boards available.

Choosing the right development board

Allwinner's A-series of SoCs was produced and sold so cheaply that many companies used these chips in their products, such as tablets, set-top boxes, and, eventually, development boards. Before the availability of development boards, people worked on and with tablets and set-top boxes. The most common and popular boards are from Cubietech and Olimex, in part because both companies handed out development boards to community developers for free.

Olimex

Olimex has released a fair number of different development boards and peripherals. A lot of its boards are open source hardware with schematics and layout files available, and Olimex is also very open source friendly. You can see the Olimex board in the following image:

Olimex offers the A10-OLinuXino-LIME, an A10-based micro board that is marketed to compete with the famous Raspberry Pi price-wise. Due to its small size, it uses less standard, 1.27 mm pitch headers for the pins, but it has nearly all of these pins exposed for use. You can see the A10-OLinuXino-LIME board in the following image:

The Olimex OLinuXino series of boards is available in A10, A13, and A20 flavors and has more standard, 2.54 mm pitch headers that are compatible with the old IDE and serial connectors. Olimex has various sensors, displays, and other peripherals that are also compatible with these headers.

Cubietech

Cubietech was formed by former Allwinner employees and offered one of the first development boards using an Allwinner SoC. While its boards are not open source hardware, it does offer the schematics for download. Cubietech released three boards: the Cubieboard1, the Cubieboard2, and the Cubieboard3—also known as the Cubietruck. Interfacing with these boards can be quite tricky, as they use 2 mm pitch headers that might be hard to find in Europe or America. You can see the Cubietech board in the following image:

Cubieboard1 and Cubieboard2 use identical boards; the only difference is that the A20 is used instead of the A10 in Cubieboard2. These boards only have a subset of the pins exposed. You can see the Cubietruck board in the following image:

The Cubietruck is quite different but a well-designed A20 board. It features everything that the previous boards offer, along with Gigabit Ethernet, VGA, Bluetooth, Wi-Fi, and an optical audio out. This comes at the cost of fewer exposed pins, to keep the size reasonably small. Compared to the Raspberry Pi or the LIME, it is almost double the size.
Also available are a LiPo battery connector and a SATA connector and two buttons, but those might not be accessible on a lot of standard cases. See the following image for the topside of the Banana Pi: Itead and Olimex Itead and Olimex both offer an interesting board, which is worth mentioning separately. The Iteaduino Plus and the Olimex A20-SoM are quite interesting concepts; the computing module, which is a board with the SoC, memory, and flash, which are plugin modules, and a separated baseboard. Both of them sell a very complete baseboard as open source hardware, but anybody can design their own baseboard and buy the computing module. You can see the following board by Itead: Refer to the following board by Olimex: Additional hardware While a development board is a key ingredient, there are several other items that are also required. A power supply, for example, is not always supplied and does have some considerations. Also, additional hardware is required for the initial communication and to debug. Summary In this article, you looked at the additional hardware and a few extra peripherals that will help you understand the stuff you require for your projects. Resources for Article: Further resources on this subject: Home Security by BeagleBone [article] Mobile Devices [article] Making the Unit Very Mobile – Controlling the Movement of a Robot with Legs [article]
Detecting Beacons – Showing an Advert

Packt
25 Nov 2014
26 min read
In this article by Craig Gilchrist, author of Learning iBeacon, we're going to expand our knowledge and get an in-depth understanding of the broadcasting triplet, and we'll expand on some of the important classes within the Core Location framework.

(For more resources related to this topic, see here.)

To help demonstrate the more in-depth concepts, we'll build an app that shows different advertisements depending on the major and minor values of the beacon that it detects. We'll be using the context of an imaginary department store called Matey's. Matey's is currently running iBeacon trials in its flagship London store and at the moment is giving offers on its themed restaurants and on its ladies' clothing to users of its branded app.

Uses of the UUID/major/minor broadcasting triplet

In the last article, we covered the reasons behind the broadcasting triplet; here, we're going to use the triplet in a more realistic scenario. Let's go over the three values again in some more detail.

UUID – Universally Unique Identifier

The UUID is meant to be unique to your app. It can be spoofed, but generally, your app would be the only app looking for that UUID. The UUID identifies a region, which is the maximum broadcast range of a beacon from its center point. Think of a region as a circle of broadcast with the beacon in the middle. If lots of beacons with the same UUID have overlapping broadcasting ranges, then the region is represented by the broadcasting range of all the beacons combined, as shown in the following figure. The combined range of all the beacons with the same UUID becomes the region.

Broadcast range

More specifically, the region is represented by an instance of the CLBeaconRegion class, which we'll cover in more detail later in this article. The following code shows how to configure CLBeaconRegion:

NSString * uuidString = @"78BC6634-A424-4E05-A2AE-A59A25CAC4A9";

NSUUID * regionUUID;
regionUUID = [[NSUUID alloc] initWithUUIDString:uuidString];

CLBeaconRegion * region;
region = [[CLBeaconRegion alloc] initWithProximityUUID:regionUUID identifier:@"My Region"];

Generally, most apps will be monitoring only one region. This is normally sufficient, since the major and minor values are 16-bit unsigned integers, which means that each value can be a number up to 65,535, giving 4,294,836,225 unique beacon combinations per UUID. Since the major and minor values are used to represent subsections of the use case, there may be a time when 65,535 combinations of a major value are not enough, and so this would be the rare occasion when your app monitors multiple regions with different UUIDs. Another more likely example is that your app has multiple use cases that are more logically split by UUID. An example where an app has multiple use cases would be a loyalty app that has offers for many different retailers when the app is within the vicinity of their retail stores. Here you can have a different UUID for every retailer.

Major

The major value further identifies your use case. The major value should separate your use case along logical categories. This could be sections in a shopping mall or exhibits in a museum. In our example, the major value represents the different types of service within a department store. In some cases, you may wish to separate logical categories into more than one major value. This would only be if each category has more than 65,535 beacons.

Minor

The minor value ultimately identifies the beacon itself.
If you consider the major value as the category, then the minor value is the beacon within that category.

Example of a use case

The example laid out in this article uses the following UUID/major/minor values to broadcast different adverts for Matey's:

Department   Food                                              Women's clothing
UUID         8F0C1DDC-11E5-4A07-8910-425941B072F9 (shared by both departments)
Major        1                                                 2
Minor 1      30 percent off on sushi at The Japanese Kitchen   50 percent off on all ladies' clothing
Minor 2      Buy one get one free at Tucci's Pizza             N/A

Understanding Core Location

The Core Location framework lets you determine the current location or heading associated with the device. The framework has been around since 2008 and was present in iOS 2.0. Up until the release of iOS 7, the framework was only used for geolocation based on GPS coordinates and so was suitable only for outdoor location. The framework got a new set of classes, and new methods were added to the existing classes, to accommodate the beacon-based location functionality. Let's explore a few of these classes in more detail.

The CLBeaconRegion class

Geo-fencing (geofencing) is a feature in a software program that uses the global positioning system (GPS) or radio frequency identification (RFID) to define geographical boundaries. A geofence is a virtual barrier. The CLBeaconRegion class defines a geofenced boundary identified by a UUID and the collective range of all physical beacons with the same UUID. When a device matching the CLBeaconRegion UUID comes in range, the region triggers the delivery of an appropriate notification. CLBeaconRegion inherits from CLRegion, which also serves as the superclass of CLCircularRegion. The CLCircularRegion class defines the location and boundaries for a circular geographic region. You can use instances of this class to define geofences for a specific location, but it shouldn't be confused with CLBeaconRegion. The CLCircularRegion class shares many of the same methods but is specifically related to a geographic location based on the GPS coordinates of the device. The following figure shows the CLRegion class and its descendants.

The CLRegion class hierarchy

The CLLocationManager class

The CLLocationManager class defines the interface for configuring the delivery of location- and heading-related events to your application. You use an instance of this class to establish the parameters that determine when location and heading events should be delivered and to start and stop the actual delivery of those events. You can also use a location manager object to retrieve the most recent location and heading data.

Creating a CLLocationManager class

The CLLocationManager class is used to track both geolocation and proximity based on beacons. To start tracking beacon regions using the CLLocationManager class, we need to do the following:

Create an instance of CLLocationManager.
Assign an object conforming to the CLLocationManagerDelegate protocol to the delegate property.
Call the appropriate start method to begin the delivery of events.

All location- and heading-related updates are delivered to the associated delegate object, which is a custom object that you provide.
Defining a CLLocationManager class line by line

Consider the following steps to define a CLLocationManager class line by line:

Every class that needs to be notified about CLLocationManager events needs to first import the Core Location framework (usually in the header file) as shown:

#import <CoreLocation/CoreLocation.h>

Then, once the framework is imported, the class needs to declare itself as implementing the CLLocationManagerDelegate protocol, like the following view controller does:

@interface MyViewController : UIViewController<CLLocationManagerDelegate>

Next, you need to create an instance of CLLocationManager and set your class as the delegate of the CLLocationManager instance, as shown:

CLLocationManager * locationManager = [[CLLocationManager alloc] init];
locationManager.delegate = self;

You then need a region for your location manager to work with:

// Create a unique ID to identify our region.
NSUUID * regionId = [[NSUUID alloc] initWithUUIDString:@"AD32373E-9969-4889-9507-C89FCD44F94E"];

// Create a region to monitor.
CLBeaconRegion * beaconRegion = [[CLBeaconRegion alloc] initWithProximityUUID:regionId identifier:@"My Region"];

Finally, you need to call the appropriate start method using the beacon region. Each start method has a different purpose, which we'll explain shortly:

// Start monitoring and ranging beacons.
[locationManager startMonitoringForRegion:beaconRegion];
[locationManager startRangingBeaconsInRegion:beaconRegion];

Once this is in place, you need to implement the methods of the CLLocationManagerDelegate protocol. Some of the most important delegate methods are explained shortly. This isn't an exhaustive list of the methods, but it does include all of the important methods we'll be using in this article.

locationManager:didEnterRegion

Whenever you enter a region that your location manager has been instructed to monitor (by calling startMonitoringForRegion), the locationManager:didEnterRegion delegate method is called. This method gives you an opportunity to do something with the region, such as start ranging the beacons within it, shown as follows:

-(void)locationManager:(CLLocationManager *)manager didEnterRegion:(CLRegion *)region {
    // Do something when we enter a region.
}

locationManager:didExitRegion

Similarly, when you exit the region, the locationManager:didExitRegion delegate method is called. Here you can do things like stop ranging beacons, shown as follows:

-(void)locationManager:(CLLocationManager *)manager didExitRegion:(CLRegion *)region {
    // Do something when we exit a region.
}

When testing your region monitoring code on a device, realize that region events may not happen immediately after a region boundary is crossed. To prevent spurious notifications, iOS does not deliver region notifications until certain threshold conditions are met. Specifically, the user's location must cross the region boundary and move away from that boundary by a minimum distance and remain at that minimum distance for at least 20 seconds before the notifications are reported.

locationManager:didRangeBeacons:inRegion

The locationManager:didRangeBeacons:inRegion method is called whenever a beacon (or a number of beacons) changes distance from the device.
For now, it's enough to know that each beacon returned in this array has a property called proximity, which returns a CLProximity enum value (CLProximityUnknown, CLProximityFar, CLProximityNear, or CLProximityImmediate), shown as follows:

-(void)locationManager:(CLLocationManager *)manager didRangeBeacons:(NSArray *)beacons inRegion:(CLBeaconRegion *)region {
    // Do something with the array of beacons.
}

locationManager:didChangeAuthorizationStatus

Finally, there's one more delegate method to cover. Whenever the users grant or deny authorization to use their location, locationManager:didChangeAuthorizationStatus is called. This method is passed a CLAuthorizationStatus enum (kCLAuthorizationStatusNotDetermined, kCLAuthorizationStatusRestricted, kCLAuthorizationStatusDenied, or kCLAuthorizationStatusAuthorized), shown as follows:

-(void)locationManager:(CLLocationManager *)manager didChangeAuthorizationStatus:(CLAuthorizationStatus)status {
    // Do something with the authorization status.
}

Understanding iBeacon permissions

It's important to understand that apps using the Core Location framework are essentially monitoring location, and therefore, they have to ask the user for their permission. The authorization status of a given application is managed by the system and determined by several factors. Applications must be explicitly authorized to use location services by the user, and location services must themselves be enabled for the system. A request for user authorization is displayed automatically when your application first attempts to use location services. Requesting the location can be a fine balancing act. Asking for permission at a point in an app when your user wouldn't think it was relevant makes it more likely that they will decline. It makes more sense to tell the users why you're requesting their location and why it benefits them before requesting it, so as not to scare away your more squeamish users. Building those kinds of information views isn't covered in this book, but to demonstrate the way a user is asked for permission, our app should show an alert like this:

Requesting location permission

If your user taps Don't Allow, then location can't be enabled through the app unless the app is deleted and reinstalled. The only way to allow location after denying it is through the settings.

Location permissions in iOS 8

Since iOS 8.0, additional steps are required to obtain location permissions. In order to request location in iOS 8.0, you must now provide a friendly message in the app's plist by using the NSLocationAlwaysUsageDescription key, and also make a call to the CLLocationManager class' requestAlwaysAuthorization method. The NSLocationAlwaysUsageDescription key describes the reason the app accesses the user's location information. Include this key when your app uses location services in a potentially nonobvious way while running in the foreground or the background. There are two types of location permission requests as of iOS 8, as specified by the following plist keys:

NSLocationWhenInUseUsageDescription: This plist key is required when you use the requestWhenInUseAuthorization method of the CLLocationManager class to request authorization for location services. If this key is not present and you call the requestWhenInUseAuthorization method, the system ignores your request and prevents your app from using location services.
NSLocationAlwaysUsageDescription: This key is required when you use the requestAlwaysAuthorization method of the CLLocationManager class to request authorization for location services. If this key is not present when you call the requestAlwaysAuthorization method, the system ignores your request.

Since iBeacon requires location services in the background, we will only ever use the NSLocationAlwaysUsageDescription key together with the call to the CLLocationManager class' requestAlwaysAuthorization method.

Enabling location after denying it

If a user denies enabling location services, you can follow the given steps to enable the service again on iOS 7:

Open the iOS device settings and tap on Privacy.
Go to the Location Services section.
Turn location services on for your app by flicking the switch next to your app name.

When your device is running iOS 8, you need to follow these steps:

Open the iOS device settings and tap on Privacy.
Go to your app in the Settings menu.
Tap on Privacy.
Tap on Location Services.
Set Allow Location Access to Always.

Building the tutorial app

To demonstrate the knowledge gained in this article, we're going to build an app for our imaginary department store Matey's. Matey's is trialing iBeacons with its app Matey's offers. People with the app get special offers in store, as we explained earlier. For the app, we're going to start a single view application containing two controllers. The first is the default view controller, which will act as our CLLocationManagerDelegate; the second is a view controller that will be shown modally and shows the details of the offer relating to the beacon we've come into proximity with. The final thing to consider is that we'll only show each offer once in a session, and we can only show an offer if one isn't already showing. Shall we begin?

Creating the app

Let's start by firing up Xcode and choosing a new single view application just as we did in the previous article. Choose these values for the new project:

Product Name: Matey's Offers
Organization Name: Learning iBeacon
Company Identifier: com.learning-iBeacon
Class Prefix: LI
Devices: iPhone

Your project should now contain your LIAppDelegate and LIViewController classes. We're not going to touch the app delegate this time round, but we'll need to add some code to the LIViewController class later, since this is where all of our CLLocationManager code will be running. For now though, let's leave it and come back to it later.

Adding LIOfferViewController

Our offer view controller will be used as a modal view controller to show the offer relating to the beacon that we come in contact with. Each of our offers is going to be represented by a different background color, a title, and an image to demonstrate the offer. Be sure to download the code relating to this article and add the three images contained therein to your project by dragging the images from Finder into the project navigator:

ladiesclothing.jpg
pizza.jpg
sushi.jpg

Next, we need to create the view controller. Add a new file, and be sure to choose the Objective-C class template from the iOS Cocoa Touch menu. When prompted, name this class LIOfferViewController and make it a subclass of UIViewController.

Setting location permission settings

We need to add our permission message to the application so that when we request permission for the location, our dialog appears:

Click on the project file in the project navigator to show the project settings.
Click the Info tab of the Matey's Offers target.
Under the Custom iOS Target Properties dictionary, add the NSLocationAlwaysUsageDescription key with the value: This app needs your location to give you wonderful offers.
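For reference, here is roughly what the resulting entry looks like in the plist source; this is a sketch only, since your Info.plist will contain other keys as well:

<key>NSLocationAlwaysUsageDescription</key>
<string>This app needs your location to give you wonderful offers.</string>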
Adding some controls

The offer view controller needs two controls to show the offer the view is representing: an image view and a label. Consider the following steps to add these controls to the view controller:

Open the LIOfferViewController.h file and add the following properties to the header:

@property (nonatomic, strong) UILabel * offerLabel;
@property (nonatomic, strong) UIImageView * offerImageView;

Now, we need to create them. Open the LIOfferViewController.m file and first, let's synthesize the controls. Add the following code just below the @implementation LIOfferViewController line:

@synthesize offerLabel;
@synthesize offerImageView;

We've declared the controls; now, we need to actually create them. Within the viewDidLoad method, we need to create the label and image view. We don't need to set the actual values or images of our controls. This will be done by LIViewController when it encounters a beacon. Create the label by adding the following code below the call to [super viewDidLoad]. This will instantiate the label, making it 300 points wide and placing it 10 points from the left and top:

UILabel * label = [[UILabel alloc] initWithFrame:CGRectMake(10, 10, 300, 100)];

Now, we need to set some properties to style the label. We want our label to be center-aligned, white in color, and with bold text. We also want it to wrap automatically when it's too wide to fit the 300-point width. Add the following code:

[label setTextAlignment:NSTextAlignmentCenter];
[label setTextColor:[UIColor whiteColor]];
[label setFont:[UIFont boldSystemFontOfSize:22.f]];
label.numberOfLines = 0; // Allow the label to auto wrap.

Now, we need to add our new label to the view and assign it to our property:

[self.view addSubview:label];
self.offerLabel = label;

Next, we need to create an image. Our image needs a nice border; to do this, we need to add the QuartzCore framework. Add the QuartzCore framework like we did with CoreLocation in the previous article, and come to mention it, we'll need CoreLocation, so add that too. Once that's done, add #import <QuartzCore/QuartzCore.h> to the top of the LIOfferViewController.m file. Now, add the following code to instantiate the image view and add it to our view:

UIImageView * imageView = [[UIImageView alloc] initWithFrame:CGRectMake(10, 120, 300, 300)];
[imageView.layer setBorderColor:[[UIColor whiteColor] CGColor]];
[imageView.layer setBorderWidth:2.f];
imageView.contentMode = UIViewContentModeScaleToFill;
[self.view addSubview:imageView];
self.offerImageView = imageView;

Setting up our root view controller

Let's jump to LIViewController now and start looking for beacons. We'll start by telling LIViewController that LIOfferViewController exists and also that the view controller should act as a location manager delegate.
Consider the following steps:

Open LIViewController.h and add the following imports to the top of the file:

#import <CoreLocation/CoreLocation.h>
#import "LIOfferViewController.h"

Now, add the CLLocationManagerDelegate protocol to the declaration:

@interface LIViewController : UIViewController<CLLocationManagerDelegate>

LIViewController also needs three things to manage its role:

A reference to the current offer on display so that we know to show only one offer at a time
An instance of CLLocationManager for monitoring beacons
A list of offers seen so that we only show each offer once

Let's add these three things to the interface in the LIViewController.m file (as they're private instances). Change the LIViewController interface to look like this:

@interface LIViewController ()
@property (nonatomic, strong) CLLocationManager * locationManager;
@property (nonatomic, strong) NSMutableDictionary * offersSeen;
@property (nonatomic, strong) LIOfferViewController * currentOffer;
@end

Configuring our location manager

Our location manager needs to be configured when the root view controller is first created, and also when the app becomes active. It makes sense, therefore, to put this logic into a method. Our reset beacons method needs to do the following things:

Clear down our list of offers seen
Request permission to use the user's location
Create a beacon region and set our LIViewController instance as the location manager delegate
Tell CLLocationManager to start monitoring and ranging beacons

Let's add the code to do this now:

-(void)resetBeacons {
    // Initialize the location manager.
    self.locationManager = [[CLLocationManager alloc] init];
    self.locationManager.delegate = self;

    // Request permission.
    [self.locationManager requestAlwaysAuthorization];

    // Clear the offers seen.
    self.offersSeen = [[NSMutableDictionary alloc] initWithCapacity:3];

    // Create a region.
    NSUUID * regionId = [[NSUUID alloc] initWithUUIDString:@"8F0C1DDC-11E5-4A07-8910-425941B072F9"];
    CLBeaconRegion * beaconRegion = [[CLBeaconRegion alloc] initWithProximityUUID:regionId identifier:@"Mateys"];

    // Start monitoring and ranging beacons.
    [self.locationManager stopRangingBeaconsInRegion:beaconRegion];
    [self.locationManager startMonitoringForRegion:beaconRegion];
    [self.locationManager startRangingBeaconsInRegion:beaconRegion];
}

Now, add the two calls to the reset beacons method to ensure that the location manager is reset when the app is first started and then every time the app becomes active. Let's add this code now by changing the viewDidLoad method and adding the applicationDidBecomeActive method:

-(void)viewDidLoad {
    [super viewDidLoad];
    [self resetBeacons];
}

- (void)applicationDidBecomeActive:(UIApplication *)application {
    [self resetBeacons];
}

Wiring up CLLocationManagerDelegate

Now, we need to wire up the delegate methods of the CLLocationManagerDelegate protocol so that LIViewController can show the offer view when beacons come into proximity. The first thing we need to do is set the background color of the view to show whether or not our app has been authorized to use the device location. If the authorization has not yet been determined, we'll use orange. If the app has been authorized, we'll use green. Finally, if the app has been denied, we'll use red. We'll be using the locationManager:didChangeAuthorizationStatus delegate method to do this.
Let's add the code now:

-(void)locationManager:(CLLocationManager *)manager didChangeAuthorizationStatus:(CLAuthorizationStatus)status {
    switch (status) {
        case kCLAuthorizationStatusNotDetermined:
        {
            // Set a lovely orange background.
            [self.view setBackgroundColor:[UIColor colorWithRed:255.f/255.f green:147.f/255.f blue:61.f/255.f alpha:1.f]];
            break;
        }
        case kCLAuthorizationStatusAuthorized:
        {
            // Set a lovely green background.
            [self.view setBackgroundColor:[UIColor colorWithRed:99.f/255.f green:185.f/255.f blue:89.f/255.f alpha:1.f]];
            break;
        }
        default:
        {
            // Set a dark red background.
            [self.view setBackgroundColor:[UIColor colorWithRed:188.f/255.f green:88.f/255.f blue:88.f/255.f alpha:1.f]];
            break;
        }
    }
}

The next thing we need to do is save battery life by stopping and starting the ranging of beacons as we enter and exit the region (except for when the app first starts). We do this by calling the startRangingBeaconsInRegion method within the locationManager:didEnterRegion delegate method, and calling the stopRangingBeaconsInRegion method within the locationManager:didExitRegion delegate method. Add the following code to do what we've just described:

-(void)locationManager:(CLLocationManager *)manager didEnterRegion:(CLRegion *)region {
    [self.locationManager startRangingBeaconsInRegion:(CLBeaconRegion*)region];
}

-(void)locationManager:(CLLocationManager *)manager didExitRegion:(CLRegion *)region {
    [self.locationManager stopRangingBeaconsInRegion:(CLBeaconRegion*)region];
}

Showing the advert

To actually show the advert, we need to capture when a beacon is ranged by adding the locationManager:didRangeBeacons:inRegion delegate method to LIViewController. This method will be called every time the distance changes from an already discovered beacon in our region or when a new beacon is found for the region. The implementation is quite long, so I'm going to explain each part of the method as we write it. Start by creating the method implementation as follows:

-(void)locationManager:(CLLocationManager *)manager didRangeBeacons:(NSArray *)beacons inRegion:(CLBeaconRegion *)region {

}

We only want to show an offer associated with a beacon if we've not seen it before and there isn't an offer currently being shown. We do this by checking the currentOffer property. If this property isn't nil, it means an offer is already being displayed, and so we need to return from the method. The locationManager:didRangeBeacons:inRegion method gets called by the location manager and gets passed the region instance and an array of beacons that are currently in range. We only want to see each advert once in a session, so we need to loop through the beacons to determine whether we've seen each one before. Let's add a for loop to iterate through the beacons, with an initial check to see whether there's an offer already showing:

for (CLBeacon * beacon in beacons) {
    if (self.currentOffer) return;
}

Our offersSeen property is an NSMutableDictionary containing all the beacons (and subsequently offers) that we've already seen. The key consists of the major and minor values of the beacon in the format {major|minor}.
Let's create a string using the major and minor values and check whether this string exists in our offersSeen property by adding the following code to the loop:

NSString * majorMinorValue = [NSString stringWithFormat:
  @"%@|%@", beacon.major, beacon.minor];
if ([self.offersSeen objectForKey:majorMinorValue]) continue;

If offersSeen contains the key, then we continue looping. If the offer hasn't been seen, then we need to add it to the offers seen before presenting the offer. Let's start by adding the key to our offers-seen dictionary and then preparing an instance of LIOfferViewController:

[self.offersSeen setObject:[NSNumber numberWithBool:YES]
  forKey:majorMinorValue];
LIOfferViewController * offerVc = [[LIOfferViewController alloc] init];
offerVc.modalPresentationStyle = UIModalPresentationFullScreen;

Now, we're going to prepare some variables to configure the offer view controller. Food offers show with a blue background while clothing offers show with a red background. We use the major value of the beacon to determine the color and then find the image and label based on the minor value:

UIColor * backgroundColor;
NSString * labelValue;
UIImage * productImage;

// Major value 1 is food, 2 is clothing.
if ([beacon.major intValue] == 1) {
    // Blue signifies food.
    backgroundColor = [UIColor colorWithRed:89.f/255.f
      green:159.f/255.f blue:208.f/255.f alpha:1.f];

    if ([beacon.minor intValue] == 1) {
        labelValue = @"30% off sushi at the Japanese Kitchen.";
        productImage = [UIImage imageNamed:@"sushi.jpg"];
    }
    else {
        labelValue = @"Buy one get one free at Tucci's Pizza.";
        productImage = [UIImage imageNamed:@"pizza.jpg"];
    }
} else {
    // Red signifies clothing.
    backgroundColor = [UIColor colorWithRed:188.f/255.f
      green:88.f/255.f blue:88.f/255.f alpha:1.f];
    labelValue = @"50% off all ladies clothing.";
    productImage = [UIImage imageNamed:@"ladiesclothing.jpg"];
}

Finally, we need to set these values on the view controller and present it modally. We also need to set our currentOffer property to the view controller so that we don't show more than one offer at the same time:

[offerVc.view setBackgroundColor:backgroundColor];
[offerVc.offerLabel setText:labelValue];
[offerVc.offerImageView setImage:productImage];
[self presentViewController:offerVc animated:YES completion:nil];
self.currentOffer = offerVc;

Dismissing the offer

Since LIOfferViewController is a modal view, we're going to need a dismiss button; however, we also need some way of telling our root view controller (LIViewController) that the offer has been dismissed. Consider the following steps:

Add the following code to the LIViewController.h interface to declare a public method:

-(void)offerDismissed;

Now, add the implementation to LIViewController.m. This method simply clears the currentOffer property, as the actual dismissal is handled by the offer view controller:

-(void)offerDismissed {
    self.currentOffer = nil;
}

Now, let's jump back to LIOfferViewController. 
Add the following code to the end of the viewDidLoad method of LIOfferViewController to create a dismiss button:

UIButton * dismissButton = [[UIButton alloc]
  initWithFrame:CGRectMake(60.f, 440.f, 200.f, 44.f)];
[self.view addSubview:dismissButton];
[dismissButton setTitle:@"Dismiss" forState:UIControlStateNormal];
[dismissButton setTitleColor:[UIColor whiteColor]
  forState:UIControlStateNormal];
[dismissButton addTarget:self
  action:@selector(dismissTapped:)
  forControlEvents:UIControlEventTouchUpInside];

As you can see, the touch up event calls @selector(dismissTapped:), which doesn't exist yet. We can get a handle on LIViewController through the app delegate (which is an instance of LIAppDelegate). In order to use it, we need to import both it and LIViewController. Add the following imports to the top of LIOfferViewController.m:

#import "LIViewController.h"
#import "LIAppDelegate.h"

Finally, let's complete the tutorial by adding the dismissTapped: method:

-(void)dismissTapped:(UIButton*)sender {
    [self dismissViewControllerAnimated:YES completion:^{
        LIAppDelegate * delegate =
          (LIAppDelegate*)[UIApplication sharedApplication].delegate;
        LIViewController * rootVc =
          (LIViewController*)delegate.window.rootViewController;
        [rootVc offerDismissed];
    }];
}

Now, let's run our app. You should be presented with the location permission request shown in the Requesting location permission figure from the Understanding iBeacon permissions section. Tap on OK and then fire up the companion app. Play around with the Chapter 2 beacon configurations by turning them on and off. What you should see is something like the following figure:

Our app working with the companion OS X app

Remember that your app should only show one offer at a time, and your beacon should only show each offer once per session.

Summary

Well done on completing your first real iBeacon-powered app, one that actually differentiates between beacons. In this article, we covered the real usage of the UUID, major, and minor values. We were also introduced to the Core Location framework, including the CLLocationManager class and its important delegate methods. We introduced the CLRegion class and discussed the permissions required when using CLLocationManager.

Resources for Article: Further resources on this subject: Interacting with the User [Article] Physics with UIKit Dynamics [Article] BSD Socket Library [Article]


Building a Beowulf Cluster

Packt
17 Nov 2014
19 min read
A Beowulf cluster is nothing more than a bunch of computers interconnected by Ethernet and running a Linux or BSD operating system. A key feature is communication over IP (Internet Protocol), which is used to distribute problems among the boards. The collection of boards or computers is called a cluster, and each board or computer is called a node. In this article, written by Andreas Joseph Reichel, the author of Building a BeagleBone Black Super Cluster, we will first see what is really required for each board to run inside a cluster environment. You will see examples of how to build cheap and scalable cluster housing and how to modify an ATX power supply in order to use it as a power source. I will then explain the network interconnection of the Beowulf cluster and have a look at its network topology. The article concludes with an introduction to microSD card usage for installation images and additional swap space as well as external network storage. The following topics will be covered:

Describing the minimally required equipment
Building a scalable housing
Modifying an ATX power source
Introducing the Beowulf network topology
Managing microSD cards
Using external network storage

We will first start with a closer look at the utilization of a single BBB and explain the minimal hardware configuration required. (For more resources related to this topic, see here.)

Minimal configuration and optional equipment

BBB is a single-board computer that has all the components needed to run Linux distributions that support the ARMhf platform. Thanks to the very powerful network utilities that come with Linux operating systems, it is not necessary to install a mouse or keyboard. Even a monitor is not required in order to install and configure a new BBB. First, we will have a look at the minimal configuration required to use a single board over a network.

Minimal configuration

A very powerful feature of Linux operating systems is their standard support for SSH. SSH is the abbreviation of Secure Shell, and it enables users to establish an authenticated and encrypted network connection to a remote PC that provides a shell. Its command line can then be utilized to make use of the PC without any local monitor or keyboard. SSH is the secure replacement for the telnet service. The following diagram shows you the typical configuration of a local area network using SSH for the remote control of a BBB board:

The minimal configuration for the SSH control

SSH is a key feature of Linux and comes preinstalled on most distributions. If you use Microsoft Windows as your host operating system, you will require additional software such as PuTTY, an SSH client that is available at http://www.putty.org. On Linux and Mac OS, there is usually an SSH client already installed, which can be started using the ssh command; for example, ssh root@192.168.1.101 opens a remote shell on the node with that IP address (the default username and the IP address depend on the installed Linux image and your network).

Using a USB keyboard

It is practical for several boards to be configured using the same network computer and an SSH client. However, if a system does not boot up, it can be hard for a beginner to figure out the reason. If you get stuck with such a problem and don't find a solution using SSH, or an SSH login is no longer possible for some reason, it might be helpful to use a local keyboard and a local monitor to control the problematic board just as you would a usual PC. Installing a keyboard is possible with the onboard USB host port. A very practical way is to use a wireless keyboard and mouse combination. In this case, you only need to plug the wireless control adapter into the USB host port. 
Using the HDMI adapter and monitor

The BBB board supports high-definition graphics and, therefore, uses a mini HDMI port for video output. In order to use a monitor, you need an adapter from mini HDMI to HDMI, DVI, or VGA, respectively.

Building a scalable board-mounting system

The following image shows you the finished board housing with its key components as well as some installed BBBs. Here, a indicates the threaded rod with the straw as the spacer, b indicates BeagleBone Black, c indicates the Ethernet cable, d indicates 3.5" hard disc cooling fans, e indicates the 5 V power cable, and f indicates the plate with drilled holes.

The finished casing with installed BBBs

One of the most important things that you have to consider before building a supercomputer is the space you require. It is not only important to provide stable and practical housing for some BBB boards, but also to keep in mind that you might want to upgrade the system to more boards in the future. This means that you require a scalable system that is easy to upgrade. You also need to keep in mind that every single board requires its own power and has to be accessible by hand (the reset, boot-selection, and power buttons as well as the memory card, and so on). The networking cables also need some space depending on their lengths; there are also flat Ethernet cables that need less space. The tidier the system is built, the easier it will be to track down errors or exchange faulty boards, cables, or memory cards.

However, there is a more important point. Although the BBB boards are very power-efficient, they get quite warm depending on their utilization. If you have 20 boards stacked onto each other and do not provide sufficient space for air flow, your system will overheat and suffer from data loss or malfunctions. Insufficient air flow can result in the burning of devices and other permanent hardware damage. Please remember that I am not liable for any damages resulting from an insufficient cooling system. Depending on your taste, you can spend a lot of money on your server housing and put some lights inside and make it glow like a Christmas tree. However, I will show you very cheap housing that is easy and fast to build and still robust enough, scalable, and practical to use.

Board-holding rods

The key idea of my board installation is to use the existing corner holes of the BBB boards and attach the boards to four rods in order to build a horizontal stack. This stack is then held by two side plates and a base plate. Usually, when I experiment and want to build a prototype, it is helpful not to fix every single measurement in advance and then invest money in the sawing and cutting of these parts. Instead, I look around in hobby markets, see what they have, and think about whether I can use those parts. However, drilling some holes is unavoidable. When it comes to drilling holes and using screws and threads, you may or may not know that there are two different systems: the metric system and the English system. The BBB board has four holes, and their size fits 1/8" screws in the English system or M3 in the metric system. In keeping with the international standard, this article will only name metric dimensions. For easy and quick installation of the boards, I used four M3 threaded rods, which are obtainable at model making or hobby shops. I got mine at Conrad Electronic. For the base plates, I went to a local raw material store. 
The following diagram shows you the mounting hole positions for the side walls with the dimensions of the BBB (dashed line). The measurements are given for both the English and metric systems.

The mounting hole positions

Board spacers

As mentioned earlier, it is important to leave enough space between the boards in order to provide finger access to the buttons and, of course, for airflow. First, I mounted each board with eight nuts. However, when you have 16 boards installed and want to uninstall the eighth board from the left, it will take you a lot of time and nerves to move the nuts along the threaded rods. A simple solution with enough stability is to use short pieces of straws. You can buy some thick drinking straws and cut them into equally long parts, each two or three centimeters in length. Then, you can slide them between the boards onto the threaded rods in order to use them as spacers. Of course, this is not the most stable approach, but it is sufficient, cheap, and widely available.

Cooling system

One nice possibility I found for cooling the system is to use hard disk fans. They are not so cheap, but I had some lying around for years. Usually, they are mounted to the lower side of 3.5" hard discs, and their width is approximately the length of one BBB. So, they are suitable for the base plate of our casing and can provide enough air flow to cool the whole system. I installed two coolers with two fans each for eight boards, and a third one for future upgrades. The following image shows you my system with eight boards installed:

A board housing with BBBs and cooling system

Once you have built housing with a cooling system, you can install your boards. The next step will be the connection of each board to a power source as well as the network interconnection. Both are described in the following sections.

Using a low-cost power source

I have seen a picture on the Web where somebody powered a dozen older Beagle Boards with a lot of single DC adapters and built everything into a portable case. The result was a huge mess of cables. You should always try to keep your cables well organized in order to save space and improve the cooling performance. Using an ATX power supply with a cable tree can save you a lot of money compared to buying several standalone power supplies. ATX supplies are stable and can also provide some protection for the hardware, which cheap DC adapters don't always do. In the following section, I will explain the power requirements and how to modify an ATX power supply to fit our needs.

Power requirements

If you do not use an additional keyboard and mouse and use only the onboard flash memory, one board needs around 500 mA at 5 V, which gives you a total power of 2.5 Watts per board. Depending on the installed memory card or other additional hardware, you might need more. Please note that the Linux distribution described in this article is not compatible with powering the board through the USB client port. You have to use the 5 V power jack.

Power cables

If you want to use an ATX power supply, you need to build an adapter from the standard PATA or SATA power plugs to a low-voltage plug that fits the 5 V jack of the board. You need a 5.5/2.1 mm low-voltage plug; these are obtainable from VOLTCRAFT with cables already attached. I got mine from Conrad Electronics (item number 710344). Once you have got your power cables, you can build a small distribution box.

Modifying the ATX power supply

ATX power supplies are widely available and power-efficient. 
They cost around 60 dollars, providing more than 500 Watts of output power. For our purpose, we will need most of the power on the 5 V rail and some for fans on the 12 V rail. It is not difficult to modify an ATX supply. The trick is to provide the soft-on signal, because ATX supplies are turned on from the mainboard via a soft-on signal on the green wire. If this green wire is connected to ground, the supply turns on. If the connection is broken, it turns off. The following image shows you which wires of the ATX mainboard plug have to be cut and attached to a manual switch in order to build a manual on/off switch:

The ATX power plug with a green and black wire, as indicated by the red circle, cut and soldered to a switch

As we are using the 5 V and most probably the 12 V rail (for the cooling fans) of the power supply, it is not necessary to add load resistors. If the output voltage of the supply is far too low, it means that not enough current is flowing for its internal regulation circuitry. If this happens, you can just add a 1/4 Watt 200 Ohm resistor between any +5 V (red) and GND (neighboring black) pin to drain a current of 25 mA. This should never happen when driving the BBB boards, as their power requirements are much higher and the supply should regulate well. The following image shows you the power cable distribution box. I soldered the power cables together with the cut ends of a PATA connector to a PCB board.

The power cable distribution box

What could happen is that the resistance of one PATA wire is too high, and the voltage drop leads to a supply voltage below 4.5 Volts. If that happens, some of the BBB boards will not power up. Either you need to retry booting these boards separately with their power buttons later, once all the others have booted up, or you need to use two PATA wires instead of one to decrease the resistance. Check whether this is possible with your power supply and make sure that the two 5 V lines you want to connect do not belong to different regulation circuits.

Setting up the network backbone

To interconnect the BBB boards via Ethernet, we need a switch or a hub. There is a difference in functionality between a switch and a hub:

With hubs, computers can communicate with each other. Every computer is connected to the hub with a separate Ethernet cable. The hub is nothing more than a multiport repeater: it simply repeats all the information it receives on all other ports, and every connected PC has to decide whether the data is addressed to it or not. This produces a lot of network traffic and can slow down the speed.
Switches, in comparison, can control the flow of network traffic based on the address information in each packet. A switch learns which traffic packets are received by which PC and then forwards them only to the proper port. This allows simultaneous communication across the switch and improves the bandwidth.

This is the reason why switches are the preferred choice of network interconnection for our BBB Beowulf cluster. The following table summarizes the main differences between a hub and a switch:

                   Hub    Switch
Traffic control    no     yes
Bandwidth          low    high

I bought a 24-port Ethernet switch on eBay with 100 Megabit/s ports. This is enough for the BBB boards. The total bandwidth of the switch is 2.4 Gigabit/s.

The network topology

The typical network topology is a star configuration. This means that every BBB board has its own connection to the switch, and the switch itself is connected to the local area network (LAN). 
On most Beowulf clusters, there is one special board called the master node. This master node is used to provide the bridge between the cluster and the rest of the LAN. All users (if more than one person uses the cluster) log in to the master node, and it is responsible only for user management and for starting the correct programs on the specified nodes. It usually doesn't contribute to any calculation tasks. However, as the BBB has only one network connector, it is not possible to use it as a bridge, because a bridge requires two network ports:

One connected to the LAN
The other connected to the switch of the cluster

Because of this, we simply define one node as the master node, providing some special software features but also contributing to the calculations of the cluster. This way, all BBBs contribute to the overall calculation power, and we do not need any special hardware to build a network bridge. Regarding security, we can manage everything with SSH login rules and the kernel firewall, if required. The following diagram shows you the network topology used in this article. Every BBB has its own IP address, and you have to reserve the required number of IP addresses in your LAN. They do not have to be consecutive; however, it makes things easier if you note down the IP of every board. You can give the boards hostnames such as node1, node2, node3, and so on to make them easier to keep track of.

The network topology

The RJ45 network cables

There is only one thing you have to keep in mind regarding RJ45 Ethernet cables and 100 Megabit/s transmission speed: there are crossover cables and normal ones. Crossover cables have crossed lines with respect to data transmission and reception, which means one such cable can be used to connect two PCs directly without a hub or switch. Most modern switches can detect when data packets collide, that is, when they are received on the transmitting ports, and then automatically switch over the lines again. This feature is called auto MDI-X or auto-uplink. If you have a newer switch, you don't need to pay attention to which sort of cable you buy. Usually, normal RJ45 cables without crossover are the preferred choice.

The Ethernet multiport switch

As described earlier, we use an Ethernet switch rather than a hub. When buying a switch, you have to decide how many ports you want. For future upgrades, you can also buy an 8-port switch, for example, and later, if you want to go from seven boards (one port is needed for the uplink) to 14 boards, you can upgrade with a second 8-port switch and connect both to the LAN. If you want to build a big system from the beginning, you might want to buy a 24-port switch or an even bigger one. The following image shows you my 24-port Ethernet switch with some connected RJ45 cables below the board housing:

A 24-port cluster switch

The storage memory

One thing you might want to think about in the beginning is the amount of space you require for applications and data. The standard version of the BBB has 2 GB of flash memory onboard, and newer ones have 4 GB. A critical feature of computational nodes is the amount of RAM they have installed; on the BBB, this is only 512 MB. If you are of the opinion that this is not enough for your tasks, you can supplement the RAM by installing Linux on an external SD card and creating a swap partition on it. However, you have to keep in mind that external swap space is much slower than the DDR3 memory (MB/s compared to GB/s). 
If the software is nicely programmed, data can always be sufficiently distributed over the nodes, and each node does not need much RAM. However, with more complicated libraries and tasks, you might want to upgrade some day.

Installing images on microSD cards

For the installation of Linux, we will need Linux root filesystem images on the microSD card. It is always a good idea to keep these cards for future repair or extension purposes. I keep one installation SD for the master node and one for all the slave nodes. When I upgrade the system with more slave nodes, I can just insert the installation SD and easily incorporate the new system with a few commands.

The swap space on an SD card

Usually, it should be possible to boot Linux from the internal memory and utilize the external microSD card solely as swap space. However, I had problems utilizing the additional space, as it was not properly recognized by the system. I obtained the best results when booting from the same card I want the swap partition on.

The external network storage

To reduce the size of the installed software, it is always a good idea to compile it dynamically. Each node you want to use for computations has to start the same program. Programs have to be accessible by each node, which means that you would have to install every program on every node. This is a lot of work when adding additional boards and is a waste of memory in general. To circumvent this problem, I use external network storage on the basis of Samba. The master node can then access the Samba share and create a share for all the client nodes by itself. This way, each node has access to the same software and data, and upgrades can be performed easily. Also, the need for local storage memory is reduced. Important libraries that have to be present on each local filesystem can be provided by symbolic links pointing to the network storage location (hard links cannot span filesystems). The following image shows you the storage system of my BBB cluster:

The storage topology

Some of you might wonder why I chose Samba over NFS, or might think that 100 Megabit networking is too slow for cluster computations. First of all, I chose Samba because I was used to it and it is well known to most hobbyists. It is very easy to install, and I have used it for over 10 years. The only thing you have to keep in mind is that Samba shares treat capital and small letters equally. So, your Linux filenames (on ext2, ext3, ext4, and so on) will behave like case-insensitive FAT/NTFS filenames. Regarding the network bandwidth, a double value requires 8 bytes of memory; with 24 ports at 100 Megabit/s, the switch's aggregate bandwidth of 2.4 Gigabit/s thus corresponds to a maximum of roughly 37.5 million double values per second. Additionally, libraries are optimized to keep the network talk as low as possible and solve as much as possible on the local CPU memory system. Thus, for most applications, the construction described earlier will be sufficient.

Summary

In this article, you were introduced to the whole cluster concept regarding its hardware and interconnection. You were shown a working system configuration using only the minimally required amount of equipment and also some optional possibilities. A description of very basic housing, including a cooling system, was given as an example of a cheap yet nicely scalable way to mount the boards. You also learned how to build a cost-efficient power supply using a widely available ATX supply, and you were shown how to modify it to power several BBBs. 
Finally, you were introduced to the network topology and the purpose of network switches. A short description of the storage system used concluded this article. If you interconnect everything as described in this article, you will have created the hardware basis of a supercomputer cluster. Resources for Article: Further resources on this subject: Protecting GPG Keys in BeagleBone [article] Making the Unit Very Mobile - Controlling Legged Movement [article] Pulse width modulator [article]


Choosing the airframe and propellers for your Multicopter

Packt
21 Aug 2014
10 min read
In this article by Ty Audronis, the author of Building Multicopter Video Drone, we will discuss the thought process required to choose a few of the components needed to build your multicopter. (For more resources related to this topic, see here.) Let's dive into the process of choosing components for your multicopter. There are a ton of choices, permutations, and combinations available. In fact, there are so many choices out there that it's highly unlikely that two do-it-yourself (DIY) multicopters are configured alike. It's very important to note before we start that this is just one example. This is only an example of the thought process involved. This configuration may not be right for your particular needs, but the thought process applies to any multicopter you may build. With all these disclaimers in mind … let's get started!

What kind of drone should I build?

It sounds obvious, but believe it or not, a lot of people venture into a project like this with one thing in mind: "big!". This is completely the wrong approach to building a multicopter. Big is expensive, big is also less stable, and moreover, when something goes wrong, big causes more damage and is harder to repair. Ask yourself what your purpose is. Is it for photography? Videography? Fun and hobby interest? What will it carry?

How many rotors should it have?

There are many configurations, but three rotor counts are the most common: four, six, and eight (quadcopters, hexacopters, and octocopters). The knee-jerk response of most people is again "big". It's about balancing stability and battery life. Although eight rotors do offer more stability, they also decrease flight time because they increase the strain on the batteries. In fact, the relation of rotor count to flight time is exponential and not linear. Having a big platform is completely useless if the batteries only last two or three minutes.

Redundancy versus stability

Once you get into hexacopters and octocopters, there are two basic configurations of the rotors: redundant and independent. In an independent (or flat) configuration, the rotors are arranged in a circular pattern, equidistant from the center of the platform, with each rotor (as you go around) turning in the opposite direction from the one before it. These look a lot like a pie with many slices. In a redundant configuration, the number of spars (poles from the center of the platform) is cut in half, and each has a rotor on top as well as underneath. Usually, all the rotors on the top spin in one direction, and all the rotors at the bottom spin in the opposite direction. The following image shows a redundant hexacopter (left) and an independent hexacopter (right):

The advantage of redundancy is apparent. If a rotor should break or fail, the motor underneath it can spin up to keep the craft in the air. However, with fewer points of lift, stress on the airframe is greater, and stability is not quite as good. If you use the right guidance system, a flat configuration can overcome a failed rotor as well. For this reason (and for battery efficiency), we're going with a flat-six (independent hexacopter) configuration over the redundant or octocopter configurations.

The calculations you'll need

There is an exorbitant amount of math involved in calculating just how you're going to make your multicopter fly. An entire book could be written on these calculations alone. However, the work has been done for you! 
There is a calculator available online at eCalc (http://www.ecalc.ch/xcoptercalc.php?ecalc&lang=en) to calculate how well your multicopter will function and for how long, based on the components you choose. The following screenshot shows the eCalc interface:

Choosing your airframe

Although we've decided to go with a flat-six airframe, the exact airframe is yet to be decided. The materials, brand, and price can vary incredibly. Let's take a quick look at some specifications you should consider.

Carbon fiber versus aluminum

Carbon fiber looks cool, sounds even cooler, but what is it? It's exactly what it sounds like: a woven fabric of carbon strands encased in an epoxy resin. It's extremely easy to form, very strong, and very light. Carbon fiber is the material they make supercars, racing motorcycles, and, yes, aircraft from. However, it's very expensive and can be brittle if it's compromised. It can also be "welded" using nothing more than a superglue-like substance known as C.A. glue (cyanoacrylate, or Superglue). Aluminum is also light and strong. However, it's bendable and more flexible. It's less expensive, readily available, and can make an effective airframe. It is also used in cars, racing motorcycles, and aircraft. It cannot be welded easily and requires very special equipment to form and machine it. Also, aluminum can be easier to drill, while drilling carbon fiber can cause cracks and compromise the strength of the airframe.

What we care about in a DIY multicopter is strength, weight, and, yes … expense. There is nothing wrong with carbon fiber (in fact, in many ways, it is superior to aluminum), but we're going with an aluminum frame as our starting point. We'll need a fairly large frame (to keep the large rotors, which we'll probably need, from hitting each other while rotating). What we really want to look at is all the stress points on the airframe. If you really think about it, the motor mounts and the points where each arm attaches to the hub are the areas we need to examine carefully. A metal plate is a must for the motor mounts. If a carbon fiber motor mount is used, a metal backplate is a must. Many a multicopter has been lost because of screws popping right through the motor mounts. The following image shows a motor mount (left) where just such a thing happened. The fix (right) is to use a backplate with carbon fiber motor mounts. This distributes the stress across the whole plate (rather than a small point the size of a screw head). Washers are usually not enough.

Similarly, because we've decided to use an airframe with long arms, leverage must be taken into account at the points where the arms attach to the hub. It's very important to have a sturdy hub that cradles the spars in a way that distributes the stress as much as possible. If a spar is merely sandwiched between two plates with a couple of bolts holding it … that may not be enough to hold the spars firmly. The following image shows a properly cradled spar:

In the preceding image, you'll notice that the spars are cradled so that stress in any direction is distributed across a lot of surface area. Furthermore, you'll notice 45-degree angles in the cradles. As the cradle is tightened down, it cinches the aluminum spar and deforms it along these angles. This also prevents the spars from rolling. Between this cradling and the aluminum motor mounts (predrilled for many motor types), we're going to use the Turnigy H.A.L. (Heavy Aerial Lift) hexacopter frame. 
It carries a 775 mm motor span (plenty of room for up to 14-inch rotors) and has a protective cover for our electronics. Best of all, this frame retails for under 70 USD at http://www.hobbyking.com/hobbyking/store/uh_viewitem.asp?idproduct=25698&aff=492101. Now that we've chosen our airframe, we know it weighs 983 grams (based on the specifications mentioned at the previous link). Let's plug this information into our calculator (refer to the following screenshot). You can see that we've set our copter to 6 rotors and our weight to 983 grams, and specified that this weight is without Drive system (not including our motors, props, ESCs, or batteries). You can leave all of the other entries alone. These specify the environment you'd be flying in. Air density can affect the efficiency of your rotors, and temperature can affect your motors. The default settings reflect a typical temperature and elevation. Unless you're flying in the desert, at high elevations, or in the cold, you can leave these alone. We're after what your typical performance will be.

Choosing your propellers

Let's skip down to the propellers. These will usually dictate which motors you choose, the motors dictate the ESCs, and the ESCs and motors combined will determine your battery. So, let's take a look at the drive system in that order. This is another huge point of stress. If you consider it, every bit of weight is supported by the props in the air. So, here it's very important to have strong props that cut the air well, with as little flex as possible, and that are very light. Flex can produce bounce, which can actually produce harmonic vibration between the guidance system and the flexing of the props (sending your drone into uncontrolled tumbles). Does one of the materials that we've already discussed sound strong, light, and very stiff? If you're thinking carbon fiber, you're right on the money. We're going to have a lot of weight here, so we'll go with pretty large props because they'll move a whole lot more air, and carbon fiber because it's strong. The larger the props, the stronger they need to be, and consequently the more powerful the motor, ESC, and battery.

Before we start shopping around for parts, let's plug in stats and see what we come up with. When we look at props, there are two stats we need to look at: diameter and pitch. The diameter is simple enough; it's just how big the props are. The pitch is another story. The pitch is the theoretical distance a propeller would pull itself forward through the air in one full revolution. The tips of a propeller are flatter in relation to the rotation than the root; in other words, the blades twist. A typical 10-inch blade would have something more like a 4.7-inch pitch. Why? Believe it or not, these motors encounter a ton of resistance. The resistance comes from the air, and a fully pitched blade may sound nice, but propulsion is really more a game of efficiency than raw power. It's all about balance. There's no doubt that we'll have to adjust our power system later, so for now, let's start big. We'll go with a 14-inch propeller (because it's the biggest that can possibly fit on that frame without the props touching), with a typical (for that size) 8-inch pitch. The following screenshot shows these entries in our calculator:

You can see we've entered 14 for Diameter and 8 for Pitch. Our propellers will be typical two-blade props. Three- and four-blade props can provide more lift, but they also have more resistance and consequently will kill our batteries faster. 
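As an aside, diameter and pitch give you a quick sanity check on a prop choice: the theoretical pitch speed, that is, how fast the prop would pull itself through the air if it didn't slip at all. The following C++ snippet shows the arithmetic; note that the 7,000 RPM figure is an assumed example value, not a number taken from eCalc or the Turnigy specifications:

#include <iostream>

int main() {
    // Assumed example values -- substitute figures for your own build.
    double rpm = 7000.0;       // hypothetical motor speed under load
    double pitchInches = 8.0;  // the 8-inch pitch chosen above

    // An ideal (non-slipping) prop advances by its pitch once per revolution.
    double inchesPerMinute = rpm * pitchInches;
    double mph = inchesPerMinute * 60.0 / 63360.0; // 63,360 inches in a mile
    double kmh = mph * 1.609344;

    std::cout << "Theoretical pitch speed: " << mph << " mph ("
              << kmh << " km/h)" << std::endl;
    return 0;
}

With these example numbers, the result is around 53 mph, comfortably above the forward speeds a video platform needs — one more hint that efficiency, not maximum pitch, should drive the choice.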
The PConst (or power constraint) value indicates how much power is absorbed by the props. The value of 1.3 is typical. Each brand and size of prop may be slightly different, and unless the specific prop you choose has those statistics available … leave this alone. A value of 1.0 would be a perfectly efficient propeller; this is an unattainable value. The gear ratio is 1:1 because we're using props directly attached to the motors. If we were using a gearbox, we'd change this value accordingly. Don't hit calculate yet; we don't have enough fields filled out. It should be said that these propellers will most likely be too large. We'll probably have to go down to a 12-inch or even 11-inch propeller (or change our pitch) for maximum efficiency. However … this is a good place to start.

Summary

In this article, we discussed the points to keep in mind when planning to build a multicopter, such as the type of multicopter, the number of rotors, and the various parameters to consider when choosing the airframe and propellers.

Resources for Article: Further resources on this subject: 3D Websites [article] Managing Adobe Connect Meeting Room [article] Getting Started with Adobe Premiere Pro CS6 Hotshot [article]

Avoiding Obstacles Using Sensors

Packt
14 Aug 2014
8 min read
In this article by Richard Grimmett, the author of Arduino Robotic Projects, we'll learn the following topics:

How to add sensors to your projects
How to add a servo to your sensor

(For more resources related to this topic, see here.)

An overview of the sensors

Before you begin, you'll need to decide which sensors to use. You require basic sensors that return information about the distance to an object, and there are two choices: sonar and infrared. Let's look at each.

Sonar sensors

The sonar sensor uses ultrasonic sound to calculate the distance to an object. The sensor consists of a transmitter and a receiver. The transmitter creates a sound wave that travels out from the sensor, as illustrated in the following diagram:

The device sends out a sound wave 10 times a second. If an object is in the path of these waves, the waves reflect off the object and return to the sensor. The sensor measures the returning sound waves and uses the time difference between when the sound wave was sent out and when it returned to calculate the distance to the object.

Infrared sensors

Another type of sensor uses infrared (IR) signals to detect distance. An IR sensor also uses both a transmitter and a receiver. The transmitter emits a narrow beam of light, and the receiver picks up the reflected beam, which arrives at an angle that depends on the distance to the object, as shown in the following diagram:

The different angles give you an indication of the distance to the object. Unfortunately, the relationship between the output of the sensor and the distance is not linear, so you'll need to do some calibration to predict the actual relationship between the sensor's output and the distance.

Connecting a sonar sensor to Arduino

Here is an image of a sonar sensor, the HC-SR04, which works well with Arduino:

These sonar sensors are available at most places that sell Arduino products, including amazon.com. In order to connect this sonar sensor to your Arduino, you'll need some female-to-male jumper cables. You'll notice that there are four pins on the sonar sensor. Two of these supply the voltage and current to the sensor. One pin, the Trig pin, triggers the sensor to send out a sound wave. The Echo pin then senses the returning echo. To access the sensor with Arduino, make the following connections using the female-to-male jumper wires:

Arduino pin    Sensor pin
5V             Vcc
GND            GND
12             Trig
11             Echo

Accessing the sonar sensor from the Arduino IDE

Now that the hardware is connected, you'll want to download a library that supports this sensor. One of the better libraries for this sensor is available at https://code.google.com/p/arduino-new-ping/. Download the NewPing library and then open the Arduino IDE. You can include the library in the IDE by navigating to Sketch | Import Library | Add Library | Downloads and selecting the NewPing ZIP file. Once you have the library installed, you can access the example program by navigating to File | Examples | NewPing | NewPingExample, as shown in the following screenshot:

You will then see the example code in the IDE (a reconstruction of the sketch appears at the end of this section). Now, upload the code to Arduino and open a serial terminal by navigating to Tools | Serial Monitor in the IDE. Initially, you will see characters that make no sense; you need to change the serial port baud rate to 115200 baud by selecting this field in the lower-right corner of Serial Monitor, as shown in the following screenshot:

Now, you should begin to see results that make sense. 
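For reference, the NewPingExample sketch bundled with the library looks roughly like the following; it is reconstructed here with the pin assignments from the wiring table above, so if it differs from the version shipped with your copy of the library, the one under File | Examples is authoritative:

#include <NewPing.h>

#define TRIGGER_PIN  12  // Arduino pin tied to the sensor's Trig pin
#define ECHO_PIN     11  // Arduino pin tied to the sensor's Echo pin
#define MAX_DISTANCE 200 // Maximum distance to ping for, in centimeters

NewPing sonar(TRIGGER_PIN, ECHO_PIN, MAX_DISTANCE);

void setup() {
  Serial.begin(115200); // matches the 115200 baud setting described above
}

void loop() {
  delay(50);                      // wait between pings; about 20 pings/second is the maximum
  unsigned int uS = sonar.ping(); // send a ping; the result is the echo time in microseconds
  Serial.print("Ping: ");
  Serial.print(uS / US_ROUNDTRIP_CM); // convert the echo time to centimeters
  Serial.println("cm");
}

The US_ROUNDTRIP_CM constant is defined by the NewPing library and represents the number of microseconds it takes sound to travel one centimeter out and back.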
If you place your hand in front of the sensor and then move it, you should see the distance results change, as shown in the following screenshot:

You can now measure the distance to an object using your sonar sensor.

Connecting an IR sensor to Arduino

One popular choice is the Sharp series of IR sensors. Here is an image of one of the models, the Sharp 2Y0A02, a unit that provides sensing up to a distance of 150 cm:

To connect this unit, you'll need to connect the three pins that are available on the bottom of the sensor. Here is the connection list:

Arduino pin    Sensor pin
5V             Vcc
GND            GND
A3             Vo

Unfortunately, there are no labels on the unit, but there is a data sheet that you can download from www.phidgets.com/documentation/Phidgets/3522_0_Datasheet.pdf. The following image shows you the pins you'll need to connect:

One of the challenges of making this connection is that the female-to-male jumpers are too big to connect directly to the sensor. You'll want to order the three-wire cable with connectors along with the sensor; you can then make the connections between this cable and your Arduino using male-to-male jumper wires. Once the pins are connected, you are ready to access the sensor via the Arduino IDE.

Accessing the IR sensor from the Arduino IDE

Now, bring up the Arduino IDE. A simple sketch provides access to the sensor and returns the distance to the object via the serial link; a reconstruction of it appears a little further below. The sketch is quite simple. The three global variables at the top set the input pin to A3 and provide storage locations for the input value and the distance. The setup() function simply sets the serial port baud rate to 9600 and prints a single line to the serial port. In the loop() function, you first get the value from the A3 input port. The next step is to convert it to a distance based on the voltage. To do this, you need to use the voltage-to-distance chart for the device; in this case, it is similar to the following diagram:

There are two parts to the curve. The first covers distances up to about 15 centimeters, and the second covers distances from 15 centimeters to 150 centimeters. This simple example ignores distances closer than 15 centimeters and models the distance from 15 centimeters outward as a decaying exponential of the form distance = C * reading^k. Thanks to teaching.ericforman.com/how-to-make-a-sharp-ir-sensor-linear/, values that work quite well for a conversion to centimeters are 30431 for the constant C and -1.169 for the exponent k. If you open the Serial Monitor tab and place an object in front of the sensor, you'll see the readings for the distance to the object, as shown in the following screenshot:

By the way, when you place the object closer than 15 cm, you will begin to see distances that seem much larger than they should be. This is due to the shape of the voltage-to-distance curve at these much shorter distances. If you truly need very short distances, you'll need a much more complex calculation.

Creating a scanning sensor platform

While knowing the distance in front of your robotic project is normally important, you might want to know other distances around the robot as well. One solution is to hook up multiple sensors, which is quite simple. However, there is another solution that may be a bit more cost-effective. To create a scanning sensor of this type, take a sensor of your choice (in this case, I'll use the IR sensor) and mount it on a servo. 
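Before going further with the servo mounting, here is the reconstruction of the IR distance sketch promised above. It follows the description in the text (pin A3, 9600 baud, and the 30431/-1.169 conversion), but the variable names are illustrative and the original may differ in detail:

// Reads a Sharp IR sensor on A3 and prints the estimated distance.
int inputPin = A3;      // analog pin connected to the sensor's Vo output
int inValue = 0;        // raw analog reading
float distance = 0.0;   // estimated distance in centimeters

void setup() {
  Serial.begin(9600);
  Serial.println("Sharp IR distance sensor");
}

void loop() {
  // Read the raw value from the sensor.
  inValue = analogRead(inputPin);

  // Power-law fit of the sensor's voltage-to-distance curve,
  // valid from roughly 15 cm outward.
  distance = 30431 * pow(inValue, -1.169);

  Serial.print("Distance (cm): ");
  Serial.println(distance);
  delay(100);
}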
I like to use a servo L bracket for this, which is mounted on the servo as follows: You'll need to connect both the IR sensor and the servo to Arduino. Now, you will need some Arduino code that will move the servo and also take the sensor readings. The following screenshot illustrates the code:

The preceding code simply moves the servo to an angle and then prints out the distance value reported by the IR sensor. The specific statements that may be of interest are as follows:

servo.attach(servoPin);: This statement attaches the servo control to the defined pin
servo.write(angle);: This statement sends the servo to this angle
inValue = analogRead(inputPin);: This statement reads the analog input value from this pin
distance = 30431 * pow(inValue, -1.169);: This statement translates the reading to a distance in centimeters

If you upload the sketch, open Serial Monitor, and enter different angle values, the servo should move and you should see something like the following screenshot:

Summary

Now that you know how to use sensors to understand the environment, you can create even more complex programs that will sense these barriers and then change the direction of your robot to avoid colliding with them. You learned how to find distance using sonar sensors and how to connect them to Arduino. You also learned about IR sensors and how they can be used with Arduino. Resources for Article: Further resources on this subject: Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [article] Using PVR with Raspbmc [article] Home Security by BeagleBone [article]


Shaping a model with Meshmixer and printing it

Packt
20 Jun 2014
4 min read
(For more resources related to this topic, see here.)

Shaping with Meshmixer

Meshmixer was designed to provide a modeling interface that frees the user from working directly with the geometry of the mesh. In most cases, the intent of the program succeeds, but in some cases, it's good to see how the underlying mesh works. We'll use some brush tools to improve our model, taking a look at how this affects the mesh structure.

Getting ready

We'll use a toy block scanned with 123D Catch.

How to do it...

We will proceed as follows:

1. Let's take a look at the model's mesh by positioning the model with a large surface visible. Go to the menu and select View. Scroll down and select Toggle Wireframe (W).
2. Choose Sculpt. From the pop-up toolbox, choose Brushes. Go to the menu and select ShrinkSmooth. Adjust your settings in the Properties section. Keep the size as 60 and the strength as 25.
3. Use the smooth tool slowly across the model, watching the change it makes to the mesh. In the following example, the keyboard shortcut W is used to toggle between mesh views:
4. Repeat using the RobustSmooth and Flatten brushes. Use a combination of these brushes to flatten one side of the toy block.
5. Rotate your model to an area with heavy distortion. Make sure your view is in the wireframe mode. Go back to Brushes and select Pinch. Adjust the Strength to 85, Size to 39, Depth to -17, and Lazyness to 95. Keep everything else at the default values. If you are uncertain of the default values, left-click on the small cogwheel icon next to the Properties heading and choose Reset to Defaults.
6. We're going to draw a line across a distorted area of the toy block to see how it affects the mesh. Using the pinch brush, draw a line across the model.
7. Save your work and then select Undo/back from the Actions menu (Ctrl + Z).
8. Now, select your entire model. Go to the toolbox and select Edit. Scroll down and select Remesh (R). You'll see an even distribution of polygons in the mesh. Keep the defaults in the pop up and click on Accept.
9. Now, go back and choose Clear Selection. Select the pinch brush again and draw a line across the model as you did before. Compare it to the other model with the unrefined mesh.
10. Let's finish cleaning up the toy block. Click on Undo/back (Ctrl + Z) to undo the pinch line that you drew. Now, use the pinch tool to refine the edges of the model. Work around it and sharpen all the edges.
11. Finish smoothing the planes on the block and click on Save.

We can see the results clearly when we compare the original toy block model to our modified model in the preceding image.

How it works...

Meshmixer works by using a mesh with a high polygon count. When a sculpting brush such as pinch is used to manipulate the surface, it rapidly increases the polygon count in the surrounding area. When the pinch tool crosses an area that has fewer and larger polygons, the interpolation of the area becomes distorted. We can see this in the following example when we compare the original and remeshed models in the wireframe view:

In the following image, when we hide the wireframe, we can see how the distortion in the mesh has given the model on the left some undesirable texture along the pinch line:

It may be a good idea to examine a model's mesh before sculpting it. Meshmixer works better with a dense polygon count that is consistent in size. By using the Remesh edit, a variety of mesh densities can be achieved by making changes in Properties. 
Experiment with the various settings and the sculpting brushes while in the wireframe view. This will help you gain a better understanding of how mesh surface modeling works.

Let's print!

When we 3D print a model, we have the option of controlling how solid the interior will be and what kind of structure will fill it. How we choose between the options is easily determined by answering the following questions:

Will it need to be structurally strong? If it's going to be used as a mechanical part or an item that will be heavily handled, then it does.
Will it be a prototype? If it's a temporary object for examination purposes or strictly for display, then a fragile form may suffice.

Depending on the use of a model, you'll have to decide where the object falls between these two extremes.