How-To Tutorials - Single Board Computers

71 Articles

Sending Data to Google Docs

Packt
16 May 2014
9 min read
(For more resources related to this topic, see here.) The first step is to set up a Google Docs spreadsheet for the project. Create a new sheet, give it a name (I named mine Power for this project, but you can name it as you wish), and set a title for the columns that we are going to use: Time, Interval, Power, and Energy (which will be calculated from the Power and Interval columns), as shown in the following screenshot:

We can also calculate the value of the energy using the other measurements. From theory, we know that over a given period of time, energy is power multiplied by time; that is, Energy = Power * Time. However, in our case, power is calculated at regular intervals, and we want to estimate the energy consumption for each of these intervals. In mathematical terms, this means we need to calculate the integral of power as a function of time. We don't have the exact function between time and power, as we sample this function at regular time intervals, but we can estimate this integral using a method called the trapezoidal rule. It means that we basically estimate the integral of the function, which is the area below the power curve, by a trapezoid. The energy in the D2 cell of the spreadsheet is then given by the formula:

Energy = (PowerMeasurement + NextPowerMeasurement) * TimeInterval / 2

Concretely, in Google Docs, you will need the formula D2 = (B2 + B3)*C2/2. The Arduino Yún board will give you the power measurement, and the time interval is given by the value we set in the sketch. However, the time between two measurements can vary from measurement to measurement, due to the delay introduced by the network. To solve this issue, we will transmit the exact interval along with the power measurement to get a much better estimate of the energy consumption.

Then, it's time to build the sketch that we will use for the project. The goal of this sketch is basically to wait for commands that come from the network, to switch the relay on or off, and to send data to the Google Docs spreadsheet at regular intervals to keep track of the energy consumption. We will build the sketch on top of the sketch we built earlier, so I will explain which components need to be added. First, you need to include your Temboo credentials using the following line of code:

#include "TembooAccount.h"

Since we can't continuously measure the power consumption data (the data transmitted would be huge, and we would quickly exceed our monthly access limit for Temboo!), as in the test sketch, we need to measure it at given intervals only. However, at the same time, we need to continuously check whether a command is received from the outside to switch the state of the relay. This is done by setting the correct timings first, as shown in the following code:

int server_poll_time = 50;
int power_measurement_delay = 10000;
int power_measurement_cycles_max = power_measurement_delay/server_poll_time;

The server poll time will be the interval at which we check for incoming connections. The power measurement delay, as you can guess, is the delay at which the power is measured. However, we can't use a simple delay function for this, as it would put the entire sketch on hold. What we are going to do instead is count the number of cycles of the main loop and then trigger a measurement when the right number of cycles has been reached, using a simple if statement. The right number of cycles is given by the power_measurement_cycles_max variable.
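As a quick illustration of the trapezoidal rule described above, here is a short Python snippet; it is illustrative only and not part of the Arduino project. The energy is simply the sum of one trapezoid per measurement interval:

# Estimate energy with the trapezoidal rule: each interval contributes
# the average of two consecutive power samples multiplied by the
# interval length (the area of one trapezoid under the power curve).
def energy_from_samples(power_w, intervals_s):
    """power_w: list of power samples in watts.
    intervals_s: time between sample i and i+1, in seconds."""
    energy_j = 0.0
    for i in range(len(power_w) - 1):
        energy_j += (power_w[i] + power_w[i + 1]) * intervals_s[i] / 2
    return energy_j

# Example: three samples taken roughly 10 seconds apart.
print(energy_from_samples([40.2, 41.0, 39.8], [10.1, 9.9]))  # result in joules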
You also need to insert your Google Docs credentials using the following lines of code:

const String GOOGLE_USERNAME = "yourGoogleUsername";
const String GOOGLE_PASSWORD = "yourGooglePass";
const String SPREADSHEET_TITLE = "Power";

In the setup() function, you need to start a date process that will keep track of the measurement date. We want to keep track of the measurements over several days, so we will transmit the date of the day as well as the time, as shown in the following code:

time = millis();
if (!date.running()) {
  date.begin("date");
  date.addParameter("+%D-%T");
  date.run();
}

In the loop() function of the sketch, we check whether it's time to perform a measurement from the current sensor, as shown in the following line of code:

if (power_measurement_cycles > power_measurement_cycles_max)

If that's the case, we measure the sensor value, as follows:

float sensor_value = getSensorValue();

We also get the exact measurement interval that we will transmit along with the measured power to get a correct estimate of the energy consumption, as follows:

measurements_interval = millis() - last_measurement;
last_measurement = millis();

We then calculate the effective power from the data we already have. The amplitude of the current is obtained from the sensor measurements. Then, we can get the effective value of the current by dividing this amplitude by the square root of 2. Finally, as we know the effective voltage and that power is current multiplied by voltage, we can calculate the effective power as well, as shown in the following code:

// Convert to current
amplitude_current = (float)(sensor_value - zero_sensor)/1024*5/185*1000000;
effective_value = amplitude_current/1.414;
// Calculate power
float effective_power = abs(effective_value * effective_voltage/1000);

After this, we send the data with the time interval to Google Docs and reset the counter for power measurements, as follows:

runAppendRow(measurements_interval, effective_power);
power_measurement_cycles = 0;

Let's quickly go into the details of this function. It starts by declaring the type of Temboo library we want to use, as follows:

TembooChoreo AppendRowChoreo;

Start with the following line of code:

AppendRowChoreo.begin();

We then need to set the data that concerns your Google account, for example, the username, as follows:

AppendRowChoreo.addInput("Username", GOOGLE_USERNAME);

The actual formatting of the data is done with the following line of code:

data = data + timeString + "," + String(interval) + "," + String(effectiveValue);

Here, interval is the time interval between two measurements, and effectiveValue is the value of the measured power that we want to log to Google Docs. The Choreo is then executed with the following line of code:

AppendRowChoreo.run();

Finally, we wait for 50 milliseconds and increment the power measurement counter each time, as follows:

delay(server_poll_time);
power_measurement_cycles++;

The complete code is available at https://github.com/openhomeautomation/geeky-projects-yun/tree/master/chapter2/energy_log.

The code for this part is complete. You can now upload the sketch, open the Google Docs spreadsheet, and then just wait until the first measurement arrives. The following screenshot shows the first measurement I got:

After a few moments, I got several measurements logged in my Google Docs spreadsheet. I also played a bit with the lamp control by switching it on and off so that we can actually see changes in the measured data.
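As an aside, the current-to-power conversion used in the sketch is easy to sanity-check offline. The following Python snippet is illustrative only; the constants mirror the ones above (a 10-bit ADC, a 5 V reference, and a 185 mV/A current sensor), while the effective voltage and zero-point reading are assumptions you would replace with your own values:

import math

ZERO_SENSOR = 512          # assumed ADC reading with no current flowing
EFFECTIVE_VOLTAGE = 230.0  # effective mains voltage in volts; use your local value

def effective_power(sensor_value):
    """Convert a raw 10-bit ADC reading into effective power in watts,
    assuming a 5 V reference and a 185 mV/A current sensor."""
    # Peak current in milliamps, from the ADC offset around the zero point.
    amplitude_current = (sensor_value - ZERO_SENSOR) / 1024 * 5 / 185 * 1000000
    # Effective (RMS) current for a sine wave is the peak divided by sqrt(2).
    effective_current = amplitude_current / math.sqrt(2)
    # Power = Irms * Vrms; divide by 1000 to go from milliamps to amps.
    return abs(effective_current * EFFECTIVE_VOLTAGE / 1000)

print(effective_power(530))  # a reading slightly above the zero point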
The following screenshot shows the first few measurements:

It's good to have some data logged in the spreadsheet, but it is even better to display this data in a graph. I used the built-in plotting capabilities of Google Docs to plot the power consumption over time on a graph, as shown in the following screenshot:

Using the same kind of graph, you can also plot the calculated energy consumption data over time, as shown in the following screenshot:

From the data you get in this Google Docs spreadsheet, it is also quite easy to derive other interesting figures. You can, for example, estimate the total energy consumption over time and the price that it will cost you. The first step is to calculate the sum of the energy consumption column using the integrated sum functionality of Google Docs. This gives you the energy consumption in Joules, but that's not what the electricity company usually charges you for. Instead, they use kWh, which is basically the Joule value divided by 3,600,000. The last thing we need is the price of a single kWh. Of course, this will depend on the country you're living in, but at the time of writing this article, the price in the USA was approximately $0.16 per kWh. To get the total price, you then just need to multiply the total energy consumption in kWh by the price per kWh. The following screenshot shows the result with the data I recorded; of course, as I only took a short sample of data, it cost me nearly nothing in the end:

You can also estimate the on/off time of the device you are measuring. For this purpose, I simply added an additional column next to Energy named On/Off, using the formula =IF(C2<2;0;1). It means that if the power is less than 2 W, we count it as an off state; otherwise, we count it as an on state. I didn't set the threshold at 0 W because of the small fluctuations over time from the current sensor. Then, when you have this data about the different on/off states, it's quite simple to count the number of occurrences of each state, for example, on states, using =COUNTIF(E:E,"1"). I applied these formulas in my Google Docs spreadsheet, and the following screenshot is the result with the sample data I recorded:

It is also very convenient to represent this data in a graph. For this, I used a pie chart, which I believe is the most suitable chart for this kind of data. The following screenshot is what I got with my measurements:

With this kind of chart, you can compare the usage of a given lamp from day to day, for example, to know whether you have left the lights on when you are not there.

Summary

In this article, we learned to send data to Google Docs, measure the energy consumption, and store this data on the Web.

Resources for Article:

Further resources on this subject: Home Security by BeagleBone [Article], Playing with Max 6 Framework [Article], Our First Project – A Basic Thermometer [Article]


Creating a 3D world to roam in

Packt
11 Apr 2014
5 min read
(For more resources related to this topic, see here.) We may be able to create models and objects within our 3D space, as well as generate backgrounds, but we may also want to create a more interesting environment within which to place them. 3D terrain maps provide an elegant way to define very complex landscapes. The terrain is defined using a grayscale image to set the elevation of the land. The following example shows how we can define our own landscape and simulate flying over it, or even walk on its surface:

A 3D landscape generated from a terrain map

Getting ready

You will need to place the Map.png file in the pi3d/textures directory of the Pi3D library. Alternatively, you can use one of the elevation maps already present—replace the reference to Map.png for another one of the elevation maps, such as testislands.jpg.

How to do it…

Create the following 3dWorld.py script:

#!/usr/bin/python3
from __future__ import absolute_import, division
from __future__ import print_function, unicode_literals
""" An example of generating a 3D environment using a elevation map """
from math import sin, cos, radians
import demo
import pi3d

DISPLAY = pi3d.Display.create(x=50, y=50)
# capture mouse and key presses
inputs = pi3d.InputEvents()

def limit(value, min, max):
    if (value < min):
        value = min
    elif (value > max):
        value = max
    return value

def main():
    CAMERA = pi3d.Camera.instance()
    tex = pi3d.Texture("textures/grass.jpg")
    flatsh = pi3d.Shader("uv_flat")
    # Create elevation map
    mapwidth, mapdepth, mapheight = 200.0, 200.0, 50.0
    mymap = pi3d.ElevationMap("textures/Map.png",
                              width=mapwidth, depth=mapdepth,
                              height=mapheight, divx=128, divy=128,
                              ntiles=20)
    mymap.set_draw_details(flatsh, [tex], 1.0, 1.0)
    rot = 0.0   # rotation of camera
    tilt = 0.0  # tilt of camera
    height = 20
    viewhight = 4
    sky = 200
    xm, ym, zm = 0.0, height, 0.0
    onGround = False
    # main display loop
    while DISPLAY.loop_running() and not inputs.key_state("KEY_ESC"):
        inputs.do_input_events()
        # Note: Some mice devices will be located on
        # get_mouse_movement(1) or (2) etc.
        mx, my, mv, mh, md = inputs.get_mouse_movement()
        rot -= (mx) * 0.2
        tilt -= (my) * 0.2
        CAMERA.reset()
        CAMERA.rotate(-tilt, rot, 0)
        CAMERA.position((xm, ym, zm))
        mymap.draw()
        if inputs.key_state("KEY_W"):
            xm -= sin(radians(rot))
            zm += cos(radians(rot))
        elif inputs.key_state("KEY_S"):
            xm += sin(radians(rot))
            zm -= cos(radians(rot))
        elif inputs.key_state("KEY_R"):
            ym += 2
            onGround = False
        elif inputs.key_state("KEY_T"):
            ym -= 2
        ym -= 0.1  # Float down!
        # Limit the movement
        xm = limit(xm, -(mapwidth/2), mapwidth/2)
        zm = limit(zm, -(mapdepth/2), mapdepth/2)
        if ym >= sky:
            ym = sky
        # Check onGround
        ground = mymap.calcHeight(xm, zm) + viewhight
        if (onGround == True) or (ym <= ground):
            ym = mymap.calcHeight(xm, zm) + viewhight
            onGround = True

try:
    main()
finally:
    inputs.release()
    DISPLAY.destroy()
    print("Closed Everything. END")
# End

How it works…

Once we have defined the display, camera, textures, and shaders that we are going to use, we can define the ElevationMap object. It works by assigning a height to the terrain image based on the pixel value of selected points of the image. For example, a single line of an image will provide a slice of the ElevationMap object and a row of elevation points on the 3D surface.
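As a rough illustration of that idea (a simplified Python sketch, not the actual Pi3D implementation), each pixel's brightness can be scaled into a height between zero and the map height:

def pixel_to_height(pixel_value, mapheight=50.0):
    """Scale an 8-bit grayscale value (0 = black, 255 = white)
    to a terrain height between 0 and mapheight."""
    return (pixel_value / 255.0) * mapheight

print(pixel_to_height(0))    # black -> 0.0  (lowest point)
print(pixel_to_height(128))  # grey  -> ~25  (mid height)
print(pixel_to_height(255))  # white -> 50.0 (highest point)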
Mapping the Map.png pixel shade to the terrain height

We create an ElevationMap object by providing the filename of the image we will use for the gradient information (textures/Map.png), and we also specify the dimensions of the map (width, depth, and height, where height is how high the white areas will be compared to the black areas). The light parts of the map will create high points and the dark ones will create low points. The Map.png texture provides an example terrain map, which is converted into a three-dimensional surface.

We also specify divx and divy, which determine how much detail of the terrain map is used (how many points from the terrain map are used to create the elevation surface). Finally, ntiles specifies that the texture used will be scaled to fit 20 times across the surface.

Within the main DISPLAY.loop_running() section, we will control the camera, draw the ElevationMap object, respond to inputs, and limit movements in our space. As before, we use an InputEvents object to capture mouse movements and translate them to control the camera. We will also use inputs.key_state() to determine if W, S, R, and T have been pressed, which allow us to move forward and backward, as well as rise and descend.

To ensure that we do not fall through the ElevationMap object when we move over it, we can use mymap.calcHeight() to provide us with the height of the terrain at a specific location (x, z). We can either follow the ground by ensuring the camera is set to equal this, or fly through the air by just ensuring that we never go below it. When we detect that we are on the ground, we ensure that we remain on the ground until we press R to rise again.

Summary

In this article, we created a 3D world by covering how to define landscapes, the use of elevation maps, and the script required to respond to particular inputs and control movements in a space.

Resources for Article:

Further resources on this subject: Testing Your Speed [Article], Pulse width modulator [Article], Web Scraping with Python [Article]


Testing Your Speed

Packt
20 Mar 2014
5 min read
(For more resources related to this topic, see here.)

Creating the game controller

In order to design a controller, we first need to know what sort of game is going to be played. I am going to explain how to make a game where the player is told a letter, and he/she has to press the button of that letter as quickly as possible. They are then told another letter. The player has to hit as many buttons correctly as they can within a 30-second time limit.

There are many ways in which this game can be varied; instead of ordering the player to press a particular button, the game could ask the player a multiple-choice question, and instead of letters, the buttons could be labeled with Yes, No, and Maybe, or with different colors. You could give the player multiple commands at once, and make sure that he/she presses all the buttons in the right order. It would even be possible to make a huge controller and treat it as more of a board game. I will leave the game design up to you, but I recommend that you follow the instructions in this article until the end, and then change things to your liking once you know how everything works.

The controller base

So now that we know how the game is going to be played, it's time to design the controller. This is what my design looks like with four different letters:

Make sure each button area is at least a little bigger than a paper clip, as these are what the buttons will be made of. I recommend a maximum of eight buttons. Draw your design onto the card, decorate it however you like, and then cut it out.

Adding buttons

Now for each button, we need to perform the following steps:

1. Poke two small holes in the card, roughly 3 cm apart (or however long your paper clips are), as shown in the following figure. Use a sharp pencil or a pair of scissors to do this.
2. Push a paper fastener through each hole and open them out, as shown in the following figures.
3. Wrap a paper clip around the head of one of the fasteners, and (if necessary) bend it so that it grips the fastener tightly, as shown in the following figure.
4. Bend the other end of the paper clip up very slightly, so it doesn't touch the second fastener unless you press down on it, as shown in the following figure.
5. Turn the card over and tape one leg of each fastener in place, making sure that they don't touch, as shown in the following figure.
6. Tape a length of wire to each of the two remaining legs of the fasteners. The ends of the wires should be exposed metal so that electricity can flow through the wire, paper fastener, and paper clip (as shown in the following figure). You may like to delay this step until later, when you have a better idea of how long the wire should be.

Connecting to the Raspberry Pi

Now that the controller is ready, it's time to connect it to the Raspberry Pi. One of the things that distinguishes the Raspberry Pi from a normal computer is its set of general purpose input/output (GPIO) pins. These are the 26 pins at the top-left corner of the Raspberry Pi, just above the logo. As the name suggests, they can be used for any purpose, and are capable of both sending and receiving signals. The preceding figure shows what each of the pins does. In order to create a (useful) circuit, we need to connect one of the power pins to one of the ground pins, with some sort of electrical component in between. The GPIO pins are particularly useful because we can make them behave like either power or ground pins, and they can also detect what they're connected to.
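Once the buttons are wired up (the next section connects each one between a 3V3 pin and a GPIO pin), reading them from Python takes only a few lines. The following is a hedged sketch rather than the book's final code; it assumes the RPi.GPIO library, the internal pull-down resistors, and the BCM pin numbers 22 to 25 used later:

import time
import RPi.GPIO as GPIO

BUTTONS = {"A": 22, "B": 23, "C": 24, "D": 25}  # letter -> BCM GPIO pin

GPIO.setmode(GPIO.BCM)
for pin in BUTTONS.values():
    # Each button connects 3V3 to its pin, so enable the internal pull-down
    # resistor and treat a high level as "pressed".
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

try:
    end_time = time.time() + 30          # 30-second game, as described above
    while time.time() < end_time:
        for letter, pin in BUTTONS.items():
            if GPIO.input(pin):
                print("Button %s pressed" % letter)
                time.sleep(0.2)          # crude debounce
finally:
    GPIO.cleanup()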
Note that there are two versions of the pin numbering system. You will almost certainly have a revision 2 Raspberry Pi. The revision 2 board has two mounting holes, while the revision 1 board has none. (These holes are surrounded by metal and are large enough to put a screw through. It's easy to spot them if they're there.) It is safest to simply not use any of the pins that have different numbers in different revisions. To connect your controller to the Raspberry Pi, connect one wire from each button to a 3V3 Power pin, and each of the remaining wires to a different GPIO pin (one with GPIO in its name as in the previous figure). In my example, I will use pins 22, 23, 24, and 25. Everything is now connected as shown in the following figure: Summary In this article, we used the Python programming language to create a game. We created an electronic circuit to act as the game controller, and used code to detect when the buttons were being pressed. We learned the basics of the Python language, and saw how separating the code into multiple functions makes it more flexible and easier to manage. Resources for Article: Further resources on this subject: Installing MAME4All (Intermediate) [Article] Creating a file server (Samba) [Article] Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [Article]

Making the Unit Very Mobile - Controlling Legged Movement

Packt
20 Dec 2013
10 min read
(for more resources related to this topic, see here.) Mission briefing We've covered creating robots using a wheeled/track base. In this article, you will be introduced to some of the basics of servo motors and using the BeagleBone Black to control the speed and direction of your legged platform. Here is an image of a finished project: Why is it awesome? Even though you've learned to make your robot mobile by adding wheels or tracks, this mobile platform will only work well on smooth, flat surfaces. Often, you'll want your robot to work in environments where it is not smooth or flat; perhaps, you'll even want your robot to go upstairs or over curbs. In this article, you'll learn how to attach your board, both mechanically and electrically, to a platform with legs, so your projects can be mobile in many more environments. Robots that can walk: what could be more amazing than that? Your objectives In this article, you will learn: Connecting the BeagleBone Black to a mobile platform using a servo controller Creating a program in Linux to control the movement of the mobile platform Making your mobile platform truly mobile by issuing voice commands   Mission checklist In this article, you'll need to add a legged platform to make your project mobile. So, here is your parts' list: A legged robot: There are a lot of choices. As before, some are completely assembled, others have some assembly required, and you may even choose to buy the components and construct your own custom mobile platform. Also, as before, I'm going to assume that you don't want to do any soldering or mechanical machining yourself, so let's look at a several choices that are available completely assembled or can be assembled by simple tools (screwdriver and/or pliers). One of the easiest legged mobile platforms is one that has two legs and four servo motors. Here is an image of this type of platform: You'll use this platform in this article because it is the simplest to program and because it is the least expensive, requiring only four servos. To construct this platform, you must purchase the parts and then assemble it yourself. Find the instructions and parts list at http://www.lynxmotion.com/images/html/build112.htm. Another easy way to get all the mechanical parts (except servos) is to purchase a biped robot kit with six degrees of freedom (DOF). This will contain the parts needed to construct your four-servo biped. These six DOF bipeds can be purchased by searching eBay or by going to http://www.robotshop.com/2-wheeled-development-platforms-1.html. You'll also need to purchase the servo motors. For this type of robot, you can use standard size servos. I like the Hitec HS-311 or HS-322 for this robot. They are inexpensive but powerful enough. You can get those on Amazon or eBay. Here is an image of an HS-311: You'll need a mobile power supply for the BeagleBone Black. Again, I personally like the 5V cell phone rechargeable batteries that are available almost anywhere that supplies cell phones. Choose one that comes with two USB connectors, just in case you want to also use the powered USB hub. This one mounts well on the biped HW platform: You'll also need a USB cable to connect your battery to the BeagleBone Black, but you can just use the cable supplied with the BeagleBone Black. If you want to connect your powered USB hub, you'll need a USB to DC jack adapter for that as well. You'll also need a way to connect your batteries to the servo motor controller. 
Here is an image of a four AA battery holder, available at most electronics parts stores or from Amazon: Now that you have the mechanical parts for your legged mobile platform, you'll need some HW that will take the control signals from your BeagleBone Black and turn them into a voltage that can control the servo motors. Servo motors are controlled using a control signal called PWM. For a good overview of this type of control, see http://pcbheaven.com/wikipages/How_RC_Servos_Works/ or https://www.ghielectronics.com/docs/18/pwm. You can find tutorials that show you how to control servos directly using the BeagleBone Black's GPIO pins, for example, here at http://learn.adafruit.com/controlling-a-servowith-a-beaglebone-black/overview and http://www.youtube.com/watch?v=6gv3gWtoBWQ. For ease of use I chose to purchase a motor controller that can talk over USB and control the servo motor. These protect my board and make controlling many servos easy. My personal favorite for this application is a simple servo motor controller utilizing USB from Pololu that can control 18 servo motors. Here is an image of the unit: Again, make sure you order the assembled version. This piece of HW will turn USB commands into voltage that control your servo motors. Pololu makes a number of different versions of this controller, each able to control a certain number of servos. Once you've chosen your legged platform, simply count the number of servos you need to control, and chose the controller that can control that number of servos. One advantage of the 18 servo controller is the ease of connecting power to the unit via screw type connectors. Since you are going to connect this controller to your BeagleBone Black via USB, you'll also need a USB A to mini-B cable. Now that you have all the HW, let's walk through a quick tutorial on how a two-legged system with servos works and then some step-by-step instructions to make your project walk. Connecting the BeagleBone Black to the mobile platform using a servo controller Now that you have a legged platform and a servo motor controller, you are ready to make your project walk! Prepare for lift off Before you begin, you'll need some background on servo motors. Servo motors are somewhat similar to DC motors; however, there is an important difference. While DC motors are generally designed to move in a continuous way—rotating 360 degrees at a given speed—servos are generally designed to move within a limited set of angles. In other words, in the DC motor world, you generally want your motors to spin with continuous rotation speed that you control. In the servo world, you want your motor to move to a specific position that you control. Engage thrusters To make your project walk, you first need to connect the servo motor controller to the servos. There are two connections you need to make: the first to the servo motors, the second to the battery holder. In this section, you'll connect your servo controller to your PC to check to see if everything is working. First, connect the servos to the controller. Here is an image of your two-legged robot, and the four different servo connections: In order to be consistent, let's connect your four servos to the connections marked 0 through 3 on the controller using this configuration: 0 – left foot, 1 – left hip, 2 – right foot, and 3 – right hip. 
Here is an image of the back of the controller; it will tell you where to connect your servos:

Connect these to the servo motor controller like this: the left foot to the top 0 connector, black cable to the outside (–); the left hip to the 1 connector, black cable out; the right foot to the 2 connector, black cable out; and the right hip to the 3 connector, black cable out. See the following image for a clearer description:

Now you need to connect the servo motor controller to your battery. If you are using a standard 4 AA battery holder, connect it to the two green screw connectors, the black cable to the outside and the red cable to the inside, as shown in the following image:

Now you can connect the motor controller to your PC to see if you can talk with it.

Objective complete – mini debriefing

Now that the HW is connected, you can use some SW provided by Pololu to control the servos. It is easiest to do this using your personal computer. First, download the Pololu SW from http://www.pololu.com/docs/0J40/3.a and install it based on the instructions on the website. Once it is installed, run the SW, and you should see the following screen:

You first will need to change the configuration in Serial Settings, so select the Serial Settings tab, and you should see a screen as shown in the following screenshot:

Make sure that the USB Chained option is selected; this will allow you to connect to and control the motor controller over USB. Now go back to the main screen by selecting the Status tab, and you can turn on the four servos. The screen should look like the following screenshot:

Now you can use the sliders to control the servos. Make sure that servo 0 moves the left foot, 1 the left hip, 2 the right foot, and 3 the right hip.

You've checked the motor controller and the servos, and you'll now connect the motor controller to the BeagleBone Black and control the servos from it. Remove the USB cable from the PC and plug it into the powered USB hub. The entire system will look like the following image:

Let's now talk to the motor controller by downloading the Linux code from Pololu at http://www.pololu.com/docs/0J40/3.b. Perhaps the best way is to log in to your BeagleBone Black by using vncserver and a vncviewer window on your PC. To do this, log in to your BeagleBone Black using PuTTY, then type vncserver at the prompt to make sure vncserver is running. On your PC, open the VNC Viewer application, enter your IP address, and then press connect. Then enter the password that you created for the vncserver, and you should see the BeagleBone Black Viewer screen, which should look like this:

Open a Firefox browser window and go to http://www.pololu.com/docs/0J40/3.b. Click on the Maestro Servo Controller Linux Software link. This will download the file maestro_linux_100507.tar.gz to the Download directory. Go to your download directory and move this file to your home directory by typing mv maestro_linux_100507.tar.gz .. and then go back to your home directory. Unpack the file by typing tar -xzvf maestro_linux_100507.tar.gz. This will create a directory called maestro_linux. Go to that directory by typing cd maestro_linux and then type ls. You should see something like this:

The document README.txt will give you explicit instructions on how to install the SW. Unfortunately, you can't run MaestroControlCenter on your BeagleBone Black; our version of windowing doesn't support the graphics, but you can control your servos using the UscCmd command-line application.
First, type ./UscCmd --list and you should see the following:

The unit sees your servo controller. If you just type ./UscCmd, you can see all the commands you could send to your controller:

Notice that you can command a servo to a specific target position; however, the target is not expressed in angle values, which makes it a bit difficult to know where you are sending your servo. Try typing ./UscCmd --servo 0, 10. The servo will most likely move to its full angle position. Type ./UscCmd --servo 0, 0 and it will stop the servo from trying to move. In the next section, you'll write some SW that will translate your angles to the commands that the servo controller will want to see. If you didn't run the Windows version of Maestro Controller and set the serial settings to USB Chained, your motor controller may not respond.
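That translation from angles to controller commands can be sketched in a few lines of Python. The following is a hedged illustration, not the book's code: it assumes the pyserial library, the Maestro's documented compact protocol (command 0x84 with the target given in quarter-microseconds), and a serial device name that will vary on your system:

import serial

# The Maestro's command port usually shows up as something like /dev/ttyACM0
# on Linux; check dmesg or /dev/serial/by-id/ on your own board.
maestro = serial.Serial("/dev/ttyACM0", 9600)

def set_angle(channel, angle_degrees):
    """Translate an angle (0-180 degrees) into a Maestro Set Target command.
    Servo pulse widths of roughly 1000-2000 us map to 0-180 degrees, and the
    Maestro expects the target in quarter-microsecond units."""
    pulse_us = 1000 + (angle_degrees / 180.0) * 1000
    target = int(pulse_us * 4)
    # Compact protocol: 0x84, channel, low 7 bits of target, high 7 bits.
    maestro.write(bytearray([0x84, channel, target & 0x7F, (target >> 7) & 0x7F]))

set_angle(0, 90)   # center the left foot servo (channel 0)
set_angle(1, 90)   # center the left hip servo (channel 1)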


Home Security by BeagleBone

Packt
13 Dec 2013
7 min read
(For more resources related to this topic, see here.) One of the best kept secrets of the security and access control industry is just how simple the monitoring hardware actually is. It is the software that runs on the monitoring hardware that makes it seem cool. The original BeagleBone or the new BeagleBone Black, have all the computing power you need to build yourself an extremely sophisticated access control, alarm panel, home automation, and network intrusion detection system. All for less than a year's worth of monitoring charges from your local alarm company! Don't get me wrong, monitored alarm systems have their place. Your elderly mother, for example, or your convenience store in a bad part of town. There is no substitute for a live human on the other end of the line. That said, if you are reading this, you are probably a builder or a hobbyist with all the skills required to do it yourself. BeagleBone is used as the development platform. The modular design of the alarm system allows the hardware to be used with any of the popular single board computers available in the market today. Any single board computer with at least eight accessible input/output pins will work. For example, the Arduino series of boards, the Gumstix line of hardware, and many others. The block diagram of the alarm system is shown in the following diagram: Block Diagram The adapter board is what is used to connect the single board computer to the alarm system. The adapter board comes with connectors for adding two more zones and four more outputs. Instructions are provided for adding zone inputs and panel outputs to the software. An alarm zone can be thought of as having two properties. The first is the actual hardware sensors connected to the panel. The second is the physical area being protected by the sensors. There are three or four types of sensors found in home and small business alarm systems. The first and most common is the magnetic door or window contact. The magnet is attached to the moving part (the window or the door) and the contacts are attached to the frame of the door or window. When the door or window is opened past a certain point the magnet can no longer hold the contacts closed, and they open to signal an alarm. The second most common sensor is the active sensor. The PIR or passive infrared motion sensor is installed in the corner of a room in order to detect the motion of a body which is warmer than the ambient temperature. Two other common sensors are temperature rise and CO detectors. These can both be thought of as life saving detectors. They are normally on a separate zone so that they are not disabled when the alarm system is not armed. The temperature rise detector senses a sudden rise in the ambient temperature and is intended to replace the old ionization type smoke detectors. No more burnt toast false alarms! The CO detector is used to detect the presence of Carbon Monoxide, which is a byproduct of combustion. Basically, faulty oil or gas furnaces and wood or coal burning stoves are the main culprit. Temperature Rise or CO Detector Physical zones are the actual physical location that the sensors are protecting. For example "ground floor windows" could be a zone. Other typical zones defended by a PIR could be garage or rear patio. In the latter case, outdoor PIR motion sensors are available at about twice the price of an indoor model. Depending on your climate, you may be able to install an indoor sensor outside, provided that it is sheltered from rain. 
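From the BeagleBone's point of view, a zone is just a digital input that changes state when a sensor trips. The following is a minimal sketch of that idea, assuming the Adafruit_BBIO Python library and illustrative header pin names; the alarm software described in this article is considerably more complete:

import Adafruit_BBIO.GPIO as GPIO

ZONE_PIN = "P8_14"   # illustrative pin wired to a zone input
SIREN_PIN = "P8_10"  # illustrative pin driving an alarm output

GPIO.setup(ZONE_PIN, GPIO.IN)
GPIO.setup(SIREN_PIN, GPIO.OUT)

# Block until the zone input changes state (a door contact opening,
# a PIR detecting motion), then drive the alarm output.
GPIO.wait_for_edge(ZONE_PIN, GPIO.RISING)
print("Zone tripped!")
GPIO.output(SIREN_PIN, GPIO.HIGH)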
The basic alarm system comes with four zone inputs and four alarm outputs. The outputs are just optically isolated phototransistors. So you can use them for anything you like. The first output is reserved in software for the siren, but you can do whatever you like with the other outputs. All four outputs are accessible from the alarm system web page, so you can remotely turn on or off any number of things. For example, you can use the left over three outputs to turn on and off lawn sprinklers, outdoor lighting or fountains and pool pumps. That's right. The alarm system has its own built in web server which provides you with access to the alarm system from anywhere with an internet connection. You could be on the other side of the world and if anything goes wrong, the alarm system will send you an e-mail telling you that something is wrong. Also, if you leave for the airport and forget to turn on or off the lights or lawn sprinkler, simply connect to the alarm system and correct the problem. You can also connect to the system via SSH or secure shell. This allows you to remotely run terminal applications on your BeagleBone. The alarm system, actually has very little to do so long as no alarms occur. The alarm system hardware generates an interrupt which is detected by the BeagleBone, so the BeagleBone spends most of its time idle. This is a waste of computing resources, so the system can also run network intrusion detection software. Not only can this alarm system protect you physical property, it can also keep your network safe as well. Can any local alarm system company claim that? Iptraf Iptraf is short for IP Traffic Monitor. This is a terminal-based program which monitors traffic on any of the interfaces connected to your network or the BeagleBone. My TraceRoute (mtr-0.85) Anyone who has ever used trace route on either Linux or Windows will know that it is used to find the path to a given IP address. MTR is a combination of trace route and ping in one single tool. Wavemon Wavemon is a simple ASCII text-based program that you can use to monitor your WiFi connections to the BeagleBone. Unlike the first two programs, Wavemon requires an Angstrom compatible WiFi adapter. In this case I used an AWUS036H wireless adapter. hcitool Bluetooth monitoring can be done in much the same way as WiFi monitoring; with hcitool. For example: hcitool scan will scan any visible Bluetooth devices in range. As with Wavemon, an external Bluetooth adapter is required. Your personal security system These are just some of the features of the security system you can build and customize for yourself. With advanced programming skills, you can create a security system with fingerprint ID access, that not only monitors and controls its physical surroundings but also the network that it is connected to. It can also provide asset tracking via RFID, barcode, or both; all for much less than the price of a commercial system. Not only that but you designed built and installed it. So tech support is free and should be very knowledgeable! Summary A block diagram of the alarm system is explained. The adapter board is what is used to connect the single board computer to the alarm system. The adapter board comes with connectors for adding two more zones and four more outputs. Instructions are provided for adding zone inputs and panel outputs to the software. Resources for Article: Further resources on this subject: Building a News Aggregating Site in Joomla! 
[Article] Breaching Wireless Security [Article] Building HTML5 Pages from Scratch [Article]


Clusters, Parallel Computing, and Raspberry Pi – A Brief Background

Packt
14 Nov 2013
12 min read
(For more resources related to this topic, see here.) So what is a cluster? Each device on this network is often referred to as a node. Thanks to the Raspberry Pi's low cost and small physical footprint, building a cluster to explore parallel computing has become far cheaper and easier for users at home to implement. Not only does it allow you to explore the software side, but also the hardware as well. While Raspberry Pis wouldn't be suitable for a fully-fledged production system, they provide a great tool for learning the technologies that professional clusters are built upon. For example, they allow you to work with industry standards, such as MPI and cutting edge open source projects such as Hadoop. This article will provide you with a basic background to parallel computing and the technologies associated with it. It will also provide you with an introduction to using the Raspberry Pi. A very short history of parallel computing The basic assumption behind parallel computing is that a larger problem can be divided into smaller chunks, which can then be operated on separately at the same time. Related to parallelism is the concept of concurrency, but the two terms should not be confused. Parallelism can be thought of as simultaneous execution and concurrency as the composition of independent processes. You will encounter both of these approaches in this article. You can find out more about the differences between the two at the following site: http://blog.golang.org/concurrency-is-not-parallelism Parallel computing and related concepts have been in use by capital-intensive industries, such as Aircraft design and Defense, since the late 1950's and early 1960's. With the cost of hardware having dropped rapidly over the past five decades and the birth of open source operating systems and applications; home enthusiasts, students, and small companies now have the ability to leverage these technologies for their own uses. Traditionally parallel computing was found within High Performance Computing (HPC) architectures, those being systems categorized by high speed and density of calculations. The term you will probably be most familiar with in this context is, of course, supercomputers, which we shall look at next. Supercomputers The genesis of supercomputing can be found in the 1960's with a company called Control Data Corporation(CDC). Seymour Cray was an electrical engineer working for CDC who became known as the father of supercomputing due to his work on the CDC 6600, generally considered to be the first supercomputer. The CDC 6600 was the fastest computer in operation between 1964 and 1969. In 1972 Cray left CDC and formed his own company, Cray Research. In 1975 Cray Research announced the Cray-1 supercomputer. The Cray-1 would go on to be one of the most successful supercomputers in history and was still in use among some institutions until the late 1980's. The 1980's also saw a number of other players enter the market including Intel via the Caltech Concurrent Computation project, which contained 64 Intel 8086/8087 CPU's and Thinking Machines Corporation's CM-1 Connection Machine. This preceded an explosion in the 1990's with regards to the number of processors being included in supercomputing machines. It was in this decade, thanks to brute-force computing power that IBM infamously beat world chess master Garry Kasparov with the Deep Blue supercomputer. The Deep Blue machine contained some 30 nodes each including IBM RS6000/SP parallel processors and numerous "chess chips". 
By the 2000's the number of processors had blossomed to tens of thousands working in parallel. As of June 2013 the fastest supercomputer title was held by the Tianhe-2, which contains 3,120,000 cores and is capable of running at 33.86 petaflops per second. Parallel computing is not just limited to the realm of supercomputing. Today we see these concepts present in multi-core and multiprocessor desktop machines. As well as single devices we also have clusters of independent devices, often containing a single core, that can be connected up to work together over a network. Since multi-core machines can be found in consumer electronic shops all across the world we will look at these next. Multi-core and multiprocessor machines Machines packing multiple cores and processors are no longer just the domain of supercomputing. There is a good chance that your laptop or mobile phone contains more than one processing core, so how did we reach this point? The mainstream adoption of parallel computing can be seen as a result of the cost of components dropping due to Moore's law. The essence of Moore's law is that the number of transistors in integrated circuits doubles roughly every 18 to 24 months. This in turn has consistently pushed down the cost of hardware such as CPU's. As a result, manufacturers such as Dell and Apple have produced even faster machines for the home market that easily outperform the supercomputers of old that once took a room to house. Computers such as the 2013 Mac Pro can contain up to twelve cores, that is a CPU that duplicates some of its key computational components twelve times. These cost a fraction of the price that the Cray-1 did at its launch. Devices that contain multiple cores allow us to explore parallel-based programming on a single machine. One method that allows us to leverage multiple cores is threads. Threads can be thought of as a sequence of instructions usually contained within a single lightweight process that the operating system can then schedule to run. From a programming perspective this could be a separate function that runs independently from the main core of the program. Thanks to the ability to use threads in application development, by the 1990's a set of standards had come to dominate the area of shared memory multiprocessor devices, these were POSIX Threads(Pthreads) and OpenMP. POSIX threads is a standardized C language interface specified in the IEEE POSIX 1003.1c standard for programming threads, that can be used to implement parallelism. The other standard specified is OpenMP. To quote the OpenMP website, it can be described as: OpenMP is a specification for a set of compiler directives, library routines, and environment variables that can be used to specify shared memory parallelism in Fortran and C/C++ programs. http://openmp.org/ What this means in practice is that OpenMP is a standard that provides an API that helps to deal with problems, such as multi-threading and memory sharing. By including OpenMP in your project, you can write multithreaded applications without having to take care of many of the low-level implementation details as with writing an application purely using Pthreads. Commodity hardware clusters As with single devices with many CPU's, we also have groups of commodity off the shelf(COTS) computers, which can be networked together into a Local Area Network(LAN). These used to be commonly referred to as Beowulf clusters. 
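Before looking at clusters of separate machines in detail, it is worth seeing what dividing a problem across the cores of a single device looks like in practice. The following short Python sketch is purely illustrative (it is not taken from the article) and uses the standard multiprocessing module to split a computation across however many cores are available:

from multiprocessing import Pool, cpu_count

def partial_sum(chunk):
    """Work on one chunk of the problem independently."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1000000))
    n = cpu_count()
    # Divide the data into one chunk per core...
    chunks = [data[i::n] for i in range(n)]
    # ...operate on the chunks in parallel, then combine the results.
    with Pool(n) as pool:
        print(sum(pool.map(partial_sum, chunks)))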
In the late 1990's, thanks to the drop in the cost of computer hardware, the implementation of Beowulf clusters became a popular topic, with Wired magazine publishing a how-to guide in 2000: http://www.wired.com/wired/archive/8.12/beowulf.html The Beowulf cluster has its origin in NASA in the early 1990's, with Beowulf being the name given to the concept of a Network Of Workstations(NOW) for scientific computing devised by Donald J. Becker and Thomas Sterling. The implementation of commodity hardware clusters running technologies such as MPI lies behind the Raspberry Pi-based projects we will be building in this article. Cloud computing The next topic we will look at is cloud computing. You have probably heard the term before, as it is something of a buzzword at the moment. At the core of the term is a set of technologies that are distributed, scalable, metered (as with utilities), can be run in parallel, and often contain virtual hardware. Virtual hardware is software that mimics the role of a real hardware device and can be programmed as if it were in fact a physical machine. Examples of virtual machine software include VirtualBox, Red Hat Enterprise Virtualization, and parallel virtual machine(PVM). You can learn more about PVM here: http://www.csm.ornl.gov/pvm/ Over the past decade, many large Internet-based companies have invested in cloud technologies, the most famous perhaps being Amazon. Having realized they were under utilizing a large proportion of their data centers, Amazon implemented a cloud computing-based architecture which eventually resulted in a platform open to the public known as Amazon Web Services(AWS). Products such as Amazon's AWS Elastic Compute Cloud(EC2) have opened up cloud computing to small businesses and home consumers by allowing them to rent virtual computers to run their own applications and services. This is especially useful for those interested in building their own virtual computing clusters. Due to the elasticity of cloud computing services such as EC2, it is easy to spool up many server instances and link these together to experiment with technologies such as Hadoop. One area where cloud computing has become of particular use, especially when implementing Hadoop, is in the processing of big data. Big data The term big data has come to refer to data sets spanning terabytes or more. Often found in fields ranging from genomics to astrophysics, big data sets are difficult to work with and require huge amount of memory and computational power to query. These data sets obviously need to be mined for information. Using parallel technologies such as MapReduce, as realized in Apache Hadoop, have provided a tool for dividing a large task such as this amongst multiple machines. Once divided, tasks are run to locate and compile the needed data. Another Apache application is Hive, a data warehouse system for Hadoop that allows the use of a SQL-like language called HiveQL to query the stored data. As more data is produced year-on-year by more computational devices ranging from sensors to cameras, the ability to handle large datasets and process them in parallel to speed up queries for data will become ever more important. These big data problems have in-turn helped push the boundaries of parallel computing further as many companies have come into being with the purpose of helping to extract information from the sea of data that now exists. 
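Clusters of this kind are typically programmed with MPI, mentioned above. As a small, hedged taste of what that looks like from Python (assuming the mpi4py bindings, which are one of several MPI interfaces available, rather than anything used in the article), each process works on its own share of a problem and the results are combined on one node:

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the cluster
size = comm.Get_size()   # total number of processes started by mpirun

# Each node works on its own share of the problem...
local_result = sum(x * x for x in range(rank, 1000000, size))
# ...and the partial results are combined (reduced) on the rank 0 node.
total = comm.reduce(local_result, op=MPI.SUM, root=0)

if rank == 0:
    print("Sum of squares:", total)

Launched with something like mpirun -n 4 python3 sum_squares.py (the filename is illustrative), the same script runs once per process and only rank 0 prints the combined result.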
Raspberry Pi and parallel computing Having reviewed some of the key terms of High Performance Computing, it is now time to turn our attention to the Raspberry Pi and how and why we intend to implement many of the ideas explained so far. This article assumes that you are familiar with the basics of the Raspberry Pi and how it works, and have a basic understanding of programming. Throughout this article when using the term Raspberry Pi, it will be in reference to the Model B version. For those of you new to the device, we recommend reading a little more about it at the official Raspberry Pi home page: http://www.raspberrypi.org/ Other topics covered in this article, such as Apache Hadoop, will also be accompanied with links to information that provides a more in-depth guide to the topic at hand. Due to the Raspberry Pi's small size and low cost, it makes a good alternative to building a cluster in the cloud on Amazon, or similar providers which can be expensive or using desktop PC's. The Raspberry Pi comes with a built-in Ethernet port, which allows you to connect it to a switch, router, or similar device. Multiple Raspberry Pi devices connected to a switch can then be formed into a cluster; this model will form the basis of our hardware configuration in the article. Unlike your laptop or PC, which may contain more than one CPU, the Raspberry Pi contains just a single ARM processor; however, multiple Raspberry Pi's combined give us more CPU's to work with. One benefit of the Raspberry Pi is that it also uses SD cards as secondary storage, which can easily be copied, allowing you to create an image of the Raspberry Pi's operating system and then clone it for re-use on multiple machines. When starting out with the Raspberry Pi this is a useful feature. The Model B contains two USB ports allowing us to expand the device's storage capacity (and the speed of accessing the data) by using a USB hard drive instead of the SD card. From the perspective of writing software, the Raspberry Pi can run various versions of the Linux operating system as well as other operating systems, such as FreeBSD and the software and tools associated with development on it. This allows us to implement the types of technology found in Beowulf clusters and other parallel systems. We shall provide an overview of these development tools next. Programming languages and frameworks A number of programming languages including Fortran, C/C++, and Java are available on the Raspberry Pi, including via the standard repositories. These can be used for writing parallel applications using implementations of MPI, Hadoop, and the other frameworks we discussed earlier in this article. Fortran, C, and C++ have a long history with parallel computing and will all be examined to varying degrees throughout the article. We will also be installing Java in order to write Hadoop-based MapReduce applications. Fortran, due to its early implementation on supercomputing projects is still popular today for parallel computing application development, as a large body of code that performs specific scientific calculations exists. Apache Hadoop is an open source Java-based MapReduce framework designed for distributed parallel application development. A MapReduce framework allows an application to take, for example, a number of data sets, divide them up, and mine each data set independently. This can take place on separate devices and then the results are combined into a single data set from which we finally extract a meaningful value. 
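To make the map and reduce steps just described more tangible, here is a toy word count written in plain Python; it is purely illustrative, whereas a real Hadoop job would express the same phases as tasks distributed across many nodes:

from collections import defaultdict

documents = ["the quick brown fox", "the lazy dog", "the quick dog"]

# Map phase: each document is processed independently (possibly on a
# different node) and emits (key, value) pairs.
mapped = []
for doc in documents:
    for word in doc.split():
        mapped.append((word, 1))

# Shuffle phase: pairs are grouped by key.
grouped = defaultdict(list)
for word, count in mapped:
    grouped[word].append(count)

# Reduce phase: each key's values are combined into a final result.
result = {word: sum(counts) for word, counts in grouped.items()}
print(result)   # {'the': 3, 'quick': 2, ...}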
Summary This concludes our short introduction to parallel computing and the tools we will be using on Raspberry Pi. You should now have a basic idea of some of the terms related to parallel computing and why using the Raspberry Pi is a fun and cheap way to build your own computing cluster. Our next task will be to set up our first Raspberry Pi, including installing its operating system. Once set up is complete, we can then clone its SD card and re-use it for future machines. Resources for Article : Further resources on this subject: Installing MAME4All (Intermediate) [Article] Using PVR with Raspbmc [Article] Coding with Minecraft [Article]

Introducing BeagleBoard

Packt
29 Oct 2013
9 min read
(For more resources related to this topic, see here.) We'll first have a quick overview of the features of BeagleBoard (with focus on the latest xM version) —an open source hardware platform, borne for audio, video, and digital signal processing. Then we will introduce the concept of rapid prototyping and explain what we can do with the BeagleBoard support tools from MATLAB® and Simulink® by MathWorks®. Finally, this article ends with a summary. Different from most approaches that involve coding and compiling at a Linux PC and require intensive manual configuration in command-line manner, the rapid prototyping approach presented in this article is a Windows-based approach that features a Windows PC for embedded software development through user-friendly graphic interaction and relieves the developer from intensive coding so that you can concentrate on your application and algorithms and have the BeagleBoard run your inspiration. First of all, let's begin with a quick overview of this article. A quick overview of the BeagleBoard's functionality We can create a number of exciting projects to demonstrate how to build a prototype of an embedded audio, video, and digital signal processing system rapidly without intensive programming and coding. The main projects include: Installing Linux for BeagleBoard from a Windows PC Developing C/C++ with Eclipse on a Windows PC Automatic embedded code generation for BeagleBoard Serial communication and digital I/O application: Infrared motion detection Audio application: voice recognition Video application: motion detection These projects provide the workflow of building an embedded system. With the help of various online documents you can learn about setting up the development environment, writing software at a host PC running Microsoft Windows, and compiling the code for standalone ARM-executables at the BeagleBoard running Linux. Then you can learn the skills of rapid prototyping embedded audio and video systems via the BeagleBoard support tools from Simulink by MathWorks. The main features of these techniques include: Open source hardware A Windows-based friendly development environment Rapid prototyping and easy learning without intensive coding These features will save you from intensive coding and will also relieve the pressure on you to build an embedded audio/video processing system without learning the complicated embedded Linux. The rapid prototyping techniques presented allow you to concentrate on your brilliant concept and algorithm design, rather than being distracted by the complicated embedded system and low-level manual programming. This is beneficial for students and academics who are primarily interested in the development of audio/video processing algorithms, and want to build an embedded prototype for proof-of-concept quickly. BeagleBoard-xM BeagleBoard, the brainchild of a small group of Texas Instruments (TI) engineers and volunteers, is a pocket-sized, low-cost, fan-less, single-board computer containing TI Open Multimedia Application Platform 3 (OMAP3) System on a chip (SoC) processor, which integrates a 1 GHz ARM core and a TI's Digital Signal Processor (DSP) together. Since many consumer electronics devices nowadays run some form of embedded Linux-based environment and usually are on an ARM-based platform, the BeagleBoard was proposed as an inexpensive development kit for hobbyists, academics, and professionals for high-performance, ARM-based embedded system learning and evaluation. 
As an open hardware embedded computer designed with open source software development in mind, the BeagleBoard was created for audio, video, and digital signal processing, with the purpose of meeting the demands of those who want to get involved with embedded system development and build their own embedded devices or solutions. Furthermore, by utilizing standard interfaces, the BeagleBoard offers much of the expandability of today's desktop machines. Developers can easily attach their own peripherals and turn the pocket-sized BeagleBoard into a single-board computer with many additional features. The following figure shows the PCB layout and major components of the latest xM version of the BeagleBoard. The BeagleBoard-xM (referred to as BeagleBoard in this article unless specified otherwise) is an 8.25 x 8.25 cm (3.25" x 3.25") circuit board that includes the following components:

- CPU: TI's DM3730 processor, which houses a 1 GHz ARM Cortex-A8 superscalar core and a TI C64x+ DSP core. The power of the 32-bit ARM and the C64x+ DSP, plus a large amount of onboard DDR RAM, equips the BeagleBoard for computationally intensive tasks such as audio and video processing.
- Memory: 512 MB MDDR SDRAM running at 166 MHz. The processor and the 512 MB RAM come in a Package on Package (POP) arrangement, where the memory is mounted on top of the processor.
- microSD card slot: Provided as the main nonvolatile storage. The SD card is where we install our operating system, and it acts as a hard disk. The BeagleBoard is shipped with a 4 GB microSD card containing factory-validated software (an Angstrom distribution of embedded Linux tailored for the BeagleBoard). Of course, this storage can easily be expanded by using, for example, a USB portable hard drive.
- USB 2.0 On-The-Go (OTG) mini port: This port can be used as a communication link to a host PC and as the power source, drawing power from the PC over the USB cable.
- 4-port USB 2.0 hub: Four USB Type A connectors with full LS/FS/HS support. Each port provides power on/off control and up to 500 mA, as long as the DC input to the BeagleBoard is at least 3 A.
- RS232 port: A single RS232 port via UART3 of the DM3730 processor, provided through a DB9 connector on the BeagleBoard-xM. A USB-to-serial cable can be plugged directly into the DB9 connector. By default, when the BeagleBoard boots, system information is sent to the RS232 port, and you can log in to the BeagleBoard through it.
- 10/100 Mbps Ethernet: The Ethernet port features auto-MDIX, so it works with both crossover and straight-through cables.
- Stereo audio output and input: The BeagleBoard has a hardware-accelerated audio encoding and decoding (codec) chip and provides stereo in and out ports via two 3.5 mm jacks to support external audio devices, such as headphones, powered speakers, and microphones (either stereo or mono).
- Video interfaces: S-video and Digital Visual Interface (DVI-D) outputs, an LCD port, and a camera port.
- Joint Test Action Group (JTAG) connector, a reset button, a user button, and many developer-friendly expansion connectors. The user button can be used as an application button.

To get going, we need to power the BeagleBoard either from the USB OTG mini port, which provides up to 500 mA (enough to run the board alone), or from a 5 V power source when running with external peripherals. The BeagleBoard boots from the microSD card once the power is on. 
Various alternative software images are available on the BeagleBoard website, so we can replace the factory default image and have the BeagleBoard run many other popular embedded operating systems (such as Android and Windows CE). The off-the-shelf expandability via standard interfaces on the BeagleBoard allows developers to choose the components and operating systems they prefer to build their own embedded solutions, or even a desktop-like system, as shown below:

BeagleBoard for rapid prototyping

A rapid prototyping approach allows you to quickly create a working implementation of your proof-of-concept and verify your audio or video applications on hardware early, which removes barriers in the design-implementation-validation loop and helps you find the right solution for your applications. Rapid prototyping not only reduces the development time from concept to product, but also allows you to identify defects and mistakes in system and algorithm design at an early stage. Prototyping your concept and evaluating its performance on a target hardware platform gives you confidence in your design and promotes its success in applications. The powerful BeagleBoard, equipped with many standard interfaces, provides a good hardware platform for rapid embedded system prototyping. In turn, the BeagleBoard support package for Simulink, a rapid prototyping tool provided by MathWorks with a graphical user interface (GUI), allows developers to implement their concept and algorithm graphically in Simulink and then run the algorithms directly on the BeagleBoard. In short, you design algorithms in MATLAB/Simulink and see them perform as a standalone application on the BeagleBoard. In this way, you can concentrate on your concept and algorithm design rather than being distracted by the complexities of the embedded system and low-level manual programming. The prototyping tool flattens the steep learning curve of embedded systems and helps hobbyists, students, and academics who have a great idea but little background in embedded systems. This is particularly useful for those who want to build a prototype of their application in a short time. MathWorks introduced the BeagleBoard support package for rapid prototyping in 2010. Since the release of MATLAB 2012a, support for the BeagleBoard-xM has been integrated into Simulink and is also available in the student version of MATLAB and Simulink. Your rapid prototyping starts with modeling your system and implementing algorithms in MATLAB and Simulink. From your models, you can automatically generate algorithmic C code along with processor-specific, real-time scheduling code and peripheral drivers, and run them as standalone executables on embedded processors in real time. The following steps provide an overview of the workflow for BeagleBoard rapid prototyping in MATLAB/Simulink:

1. Create algorithms for various applications in Simulink and MATLAB with a user-friendly GUI. The applications can be audio processing (for example, digital amplifiers), computer vision (for example, object tracking), control systems (for example, flight control), and so on.
2. Verify and improve the algorithms by simulation. With intensive simulation, most defects, errors, and mistakes in the algorithms should be identified; the algorithms can then easily be modified and updated to fix the identified issues.
3. Run the algorithms as standalone applications on the BeagleBoard. 
4. Interactive parameter tuning, signal monitoring, and performance optimization of applications running on the BeagleBoard.

Summary

In this article, we have familiarized ourselves with the BeagleBoard and with rapid prototyping using MATLAB/Simulink. We have also looked at some of the features of rapid prototyping and the basic steps of rapid prototyping in MATLAB/Simulink. Resources for Article: Further resources on this subject: 2-Dimensional Image Filtering [Article] Creating Interactive Graphics and Animation [Article] Advanced Matplotlib: Part 1 [Article]

Installing MAME4All (Intermediate)

Packt
27 Sep 2013
3 min read
(For more resources related to this topic, see here.)

Getting ready

You will need:

- A Raspberry Pi
- An SD card with the official Raspberry Pi OS, Raspbian, properly loaded
- A USB keyboard
- A USB mouse
- A 5V 1A power supply with a Micro-USB connector
- A network connection
- A screen hooked up to your Raspberry Pi

How to do it...

Perform the following steps for installing MAME4All:

1. From the command line, enter startx to launch the desktop environment.
2. From the desktop, launch the Pi Store application by double-clicking on the Pi Store icon.
3. At the top-right of the application, there will be a Log In link. Click on this link and log in with your registered account.
4. Type MAME4All in the search bar, and press Enter.
5. Click on the MAME4All result.
6. At the application's information page, click on the Download button on the right-hand side of the screen.
7. MAME4All will automatically download, and a window will appear showing the installation process. Press any key to close the window once it has finished installing.

MAME4All will look for your game files in the /usr/local/bin/indiecity/InstalledApps/MAME4ALL-pi/Full/roms directory.

Perform the following steps for running MAME4All from the Pi Store:

1. From the desktop, launch the Pi Store application by double-clicking on the Pi Store icon.
2. At the top-right of the application, there will be a Log In link. Click on the link and log in with your registered account.
3. Click on the My Library tab.
4. Click on MAME4All, and then click on Launch.

For running MAME4All from the command line, perform the following steps:

1. Type cd /usr/local/bin/indiecity/InstalledApps/mame4all_pi/Full and press Enter.
2. Type ./mame and press Enter to launch MAME4All.

How it works...

MAME4All is a Multiple Arcade Machine Emulator that takes advantage of the Raspberry Pi's GPU to achieve very fast emulation of arcade machines. It achieves this speed by compiling with DispManX, which offloads SDL code to the graphics core via OpenGL ES. When you run MAME4All, it looks for any game files you have in the roms directory and displays them in a menu for you to select from. If it doesn't find any files, it exits after a few seconds. The default keys for MAME4All-Pi are:

- 5 for inserting coins
- 1 for player 1 to start
- Arrow keys for player 1 joystick controls
- Ctrl, Alt, space bar, Z, X, and C for the default action keys

You can modify the MAME4All configuration by editing the /usr/local/bin/indiecity/InstalledApps/mame4all_pi/Full/mame.cfg file.

There's more...

A few useful reference links:

- For information on the MAME project, go to http://mamedev.org/
- For information on the MAME4All project, go to http://code.google.com/p/mame4all-pi/

Summary

In this article, we saw how to install, launch, and play a specially created version of MAME for the Raspberry Pi from the Pi Store. Resources for Article: Further resources on this subject: Creating a file server (Samba) [Article] Webcam and Video Wizardry [Article] Coding with Minecraft [Article]

Proportional line follower (Advanced)

Packt
26 Sep 2013
6 min read
(For more resources related to this topic, see here.)

Getting ready

First, you will need to build an attachment to hold the color sensor onto the robot. Insert an axle that is five modules long into the color sensor. Place bushings onto the axle on either side of the sensor. This is illustrated in the following figure: Attach the two-pin one-axle cross blocks onto the axle outside the bushings. This is illustrated in the following figure: Insert 3-module pins into the cross blocks as shown in the following figure: The pins will attach to the robot just in front of the castor. The bottom of the color sensor should be approximately level with the plastic part of the castor holder. If you are on a flat, hard surface, your light sensor will be half a centimeter above the ground. If you are on a soft surface, you may need to add a spacer to raise the sensor. This is illustrated in the following figure:

How to do it...

We are going to write proportional line-following code similar to the code used for the ultrasonic motion sensor. We will write the following code: This program contains a loop so that the robot will track a line for 30 seconds. The base speed of the robot is controlled by a constant value, which in this case is 15. You will need to determine a desired light sensor value for the robot to track on. You can either read the light sensor value directly on your EV3 brick, or look at the panel in the lower right-hand corner of your screen, where you should see all of the motors and sensors that are currently plugged into your brick. In the following screenshot, the current light sensor reading is 16. When tracking a line, you actually want to track the edge of the line. Our code is designed to track the right edge of a black line on a white surface. The line doesn't have to be black (or the surface white), but the stronger the contrast, the better. One way to determine the desired light sensor value is to place the light sensor on the edge of the line. Alternatively, you could take two separate readings, one on the bright surface and one on the dark surface, and use their average. In the code we discussed, the average value is 40, but you will have to determine the values that work in your own environment. Not only do the surfaces affect the value, but ambient room light can alter it as well. The code next finds the difference between the desired value and the sensor reading. This difference is multiplied by a gain factor, which for the optical proportional line follower will probably be between 0 and 1. In this program, I chose a gain of 0.7. The result is added to the base speed of one motor and subtracted from the base speed of the other motor:

MotorBPower = Speed - Gain * (LightSensor - DesiredValue)
MotorCPower = Speed + Gain * (LightSensor - DesiredValue)

After taking the light sensor readings, experiment with several numbers to figure out the best speeds and proportionality constants to make your robot follow a line.

How it works...

This algorithm corrects the path of the robot based on how far off the line the robot is. It determines this by calculating the difference between the light sensor reading and the reading taken on the edge of the line. Each wheel of the robot rotates at a different speed, proportional to how far from the line it is. There is a base speed for each wheel, and each wheel then turns faster or slower than that, producing a smooth turn. 
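If you would like to see the same algorithm written as text-based code, the following is a minimal sketch using the python-ev3dev2 bindings. This environment is an assumption on my part; the recipe itself uses the EV3 graphical programming tools, and the motor ports (B and C), base speed of 15, gain of 0.7, desired value of 40, and 30-second run time are simply carried over from the example above.

#!/usr/bin/env python3
# Minimal proportional line-follower sketch (assumes ev3dev with the
# python-ev3dev2 bindings installed; the original recipe uses the EV3
# graphical environment, so treat this only as an illustration).
import time

from ev3dev2.motor import LargeMotor, OUTPUT_B, OUTPUT_C, SpeedPercent
from ev3dev2.sensor.lego import ColorSensor

BASE_SPEED = 15      # base wheel speed in percent, as in the article
GAIN = 0.7           # proportional gain, as in the article
DESIRED_VALUE = 40   # reflected-light reading on the line edge; measure your own
RUN_TIME = 30        # seconds to follow the line

sensor = ColorSensor()          # reflected-light readings in the range 0-100
motor_b = LargeMotor(OUTPUT_B)
motor_c = LargeMotor(OUTPUT_C)

end_time = time.time() + RUN_TIME
while time.time() < end_time:
    error = sensor.reflected_light_intensity - DESIRED_VALUE
    # One wheel slows down and the other speeds up in proportion to the error.
    power_b = max(-100, min(100, BASE_SPEED - GAIN * error))
    power_c = max(-100, min(100, BASE_SPEED + GAIN * error))
    motor_b.on(SpeedPercent(power_b))
    motor_c.on(SpeedPercent(power_c))

motor_b.off()
motor_c.off()

Tune DESIRED_VALUE, BASE_SPEED, and GAIN exactly as described above; the clamping to the -100 to 100 range simply keeps the computed powers within what the motors accept.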
You will find that a large gain value is needed for sharp turns, but the robot will tend to overcorrect and wobble when it is following a straight line. A smaller gain and a higher speed can work effectively when the line is relatively straight or follows a gradual curve. The most important factor to determine is the desired light sensor value. Although your color sensor can detect several colors, we will not be using that feature in this program. The color sensor included in your kit emits red light, and we are measuring the reflection of that light. The height of the sensor above the floor is critical, and there is a sweet spot for line tracking at about half a centimeter above the floor. The light comes out of the sensor in a cone. You want the light reflected into the sensor to be as bright as possible, so if your sensor is too high, the reflected intensity will be weaker. Assuming your color sensor is pointing straight down at the floor (as it is in our robot design), you will see a circular red spot on the floor. Because the distance between the detector and the light emitter is about 5 to 6 mm, this circle should be about 11 mm across. If the circle is larger, your color sensor is too high and the intensity will weaken. If the circle is smaller than this, the sensor will not pick up the emitted light. The color sensor in the LEGO MINDSTORMS EV3 kit is different from the optical sensors included in the earlier LEGO NXT kits. Depending on your application, you might want to pick up some of the older NXT light and color sensors. The light sensor in the NXT 1.0 kit could not detect color and only measured the reflected intensity of a red LED. What is good about this sensor is that it works flush against the surface, which removes the need to calibrate for changes in ambient lighting conditions. The color sensor in the NXT 2.0 kit emitted colored light and contained a general photodetector. It did not measure color directly, but measured the reflection of the colored light it emitted. This allowed you to track along lines of different colors, but it was also slower. The new EV3 sensor detects colors directly, works quickly, and emits only red light.

Summary

This article taught us to alter our robot so that it can track a line using an optical sensor. We used a proportional algorithm and adjusted the parameters for optimum tracking. Finally, we also wrote a program allowing the robot to be calibrated without the use of a computer. Resources for Article : Further resources on this subject: Playing with Max 6 Framework [Article] Panda3D Game Development: Scene Effects and Shaders [Article] Our First Project – A Basic Thermometer [Article]

Creating a file server (Samba)

Packt
12 Apr 2013
7 min read
(For more resources related to this topic, see here.) The Raspberry Pi with attached file storage functions well as a file server. Such a file server could be used as a central location for sharing files and documents, for storing backups of other computers, and for storing large media files such as photo, music, and video files. This recipe installs and configures samba and samba-common-bin. The Samba software distribution package, samba, contains a server for the SMB (CIFS) protocol used by Windows computers for setting up 'shared drives' or 'shared folders'. The samba-common-bin package contains a small collection of utilities for managing access to shared files. The recipe includes setting a file-sharing password for the user pi and providing read/write access to the files in the pi user's home directory. However, it does not set up a new file share or show how to share a USB disk. The next recipe shows how to do that. After completing this recipe, other computers can exchange files with the default user pi.

Getting ready

The following are the ingredients:

- A Raspberry Pi with a 5V power supply
- An installed and configured "official" Raspbian Linux SD card
- A network connection
- A client PC connected to the same network as the Raspberry Pi

This recipe does not require the desktop GUI and can be run either from the text-based console or from within an LXTerminal session. If the Raspberry Pi's Secure Shell server is running and it has a network connection, this recipe can be completed remotely using a Secure Shell client.

How to do it...

The following are the steps for creating a file server:

1. Log in to the Raspberry Pi either directly or remotely.
2. Execute the following command: apt-get install samba samba-common-bin
   This downloads and installs samba along with the other packages that it depends on. The command needs to be run as a privileged user (use sudo). The package management application aptitude could be used as an alternative to the apt-get command utility for downloading and installing samba and samba-common-bin. In the preceding screenshot, the apt-get command is used to install the samba and samba-common-bin software distribution packages.
3. Execute the following command: nano /etc/samba/smb.conf
   This edits the Samba configuration file. The smb.conf file is protected and needs to be accessed as a privileged user (use sudo). The preceding screenshot starts the nano editor to edit the /etc/samba/smb.conf file.
4. Change the security = user line. Uncomment the line (remove the hash, #, from the beginning of the line). The preceding screenshot of the Samba configuration file shows how to change Samba security to use the Raspberry Pi's user accounts.
5. Change the read only = yes line to read only = no, as shown in the following screenshot: The preceding screenshot shows how to change the Samba configuration file to permit adding new files to user shares (read only = no). Save your changes and exit the nano editor.
6. Execute the following command: /etc/init.d/samba reload
   This tells the Samba server to reload its configuration file. The command is privileged (use sudo). In the preceding screenshot, the Samba server's configuration file is reloaded with the /etc/init.d/samba command.
7. Execute the following command: smbpasswd -a pi
   This command needs to be run as a privileged user (use sudo). Enter the password (twice) that will be used for SMB (CIFS) file sharing. The preceding screenshot shows how to add an SMB password for the pi user.

The Raspberry Pi is now accessible as a Windows share! 
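Before moving on to the Windows client, it can be useful to confirm from another machine on the network that the Samba services are actually listening. The short Python script below is an extra sanity check of my own, not part of the original recipe; it simply attempts TCP connections to ports 139 (NetBIOS session service) and 445 (SMB/CIFS), the standard Windows file-sharing ports. Replace the hostname with your Raspberry Pi's name or IP address.

#!/usr/bin/env python3
# Quick reachability check for the Samba services on the Raspberry Pi.
import socket

HOST = "raspberrypi"   # or the Pi's IP address, for example "192.168.1.10"

for port in (139, 445):
    try:
        # Attempt a plain TCP connection; success means the service is listening.
        with socket.create_connection((HOST, port), timeout=3):
            print("Port %d on %s is open - Samba appears to be running" % (port, HOST))
    except OSError as err:
        print("Port %d on %s is not reachable: %s" % (port, HOST, err))

If both ports report open, carry on with mapping the share from Windows as described next.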
From a Windows computer, use Map network drive to mount the Raspberry Pi as a network disk, as follows: The preceding screenshot starts mapping a network drive to the Raspberry Pi on Windows 7. Enter the UNC address \\raspberrypi\pi as the network folder. Choose an appropriate drive letter. The example uses the Z: drive. Select Connect using different credentials and click on Finish, as shown in the following screenshot: The preceding screenshot finishes mapping a network drive to the Raspberry Pi. Log in using the newly configured SMB (CIFS) password (from step 7). In the screenshot, a dialog box is displayed for logging in to the Raspberry Pi with the SMB (CIFS) username and password. The Raspberry Pi is now accessible as a Windows share! Only the home directory of the pi user is accessible at this point. The next recipe configures a USB disk for use as a shared drive.

How it works...

This recipe begins by installing two software distribution packages, samba and samba-common-bin. This recipe uses the apt-get install command; however, the aptitude package management application could also be used to install software packages. The samba package contains an implementation of the Server Message Block (SMB) protocol (also known as the Common Internet File System, CIFS). The SMB protocol is used by Microsoft Windows computers for sharing files and printers. The samba-common-bin package contains the smbpasswd command. This command is used to set up user passwords that are used exclusively with the SMB protocol. After the packages are installed, the Samba configuration file /etc/samba/smb.conf is updated. The file is updated to turn on user security and to enable writing files to user home directories. The smbpasswd command is used to add (-a) the pi user to the list of users authorized to share files with the Raspberry Pi using the SMB protocol. The passwords for file sharing are managed separately from the passwords used to log in to the Raspberry Pi either directly or remotely; the smbpasswd command sets the password for Samba file sharing. After the password has been added for the pi user, the Raspberry Pi should be accessible from any machine on the local network that is configured for the SMB protocol. The last steps of the recipe configure access to the Raspberry Pi from a Windows 7 PC using a mapped network drive. The UNC name for the file share, \\raspberrypi\pi, could also be used to access the share directly from Windows Explorer.

There's more...

This is a very simple configuration for sharing files. It enables file sharing for users with a login to the Raspberry Pi. However, it only permits the files in the user home directories to be shared. The next recipe describes how to add a new file share. In addition to the SMB protocol server, smbd, the Samba software distribution package also contains a NetBIOS name server, nmbd. The NetBIOS name server provides naming services to computers using the SMB protocol. The nmbd server broadcasts the configured name of the Raspberry Pi, raspberrypi, to other computers on the local network. In addition to file sharing, a Samba server could also be used as a Primary Domain Controller (PDC), a central network server that provides logins and security for all computers on a LAN. More information on using the Samba package as a PDC can be found in the links given next.

See Also

- Samba (software) http://en.wikipedia.org/wiki/Samba_(software) - A Wikipedia article on the Samba software suite. 
- nmbd - NetBIOS over IP naming service http://manpages.debian.net/cgi-bin/man.cgi?query=nmbd - The Debian man page for nmbd.
- samba - a Windows SMB/CIFS file server for UNIX http://manpages.debian.net/cgi-bin/man.cgi?query=samba - The Debian man page for samba.
- smb.conf - the configuration file for the Samba suite http://manpages.debian.net/cgi-bin/man.cgi?query=smb.conf - The Debian man page for smb.conf.
- smbd - server to provide SMB/CIFS services to clients http://manpages.debian.net/cgi-bin/man.cgi?query=smbd - The Debian man page for smbd.
- smbpasswd - change a user's SMB password http://manpages.debian.net/cgi-bin/man.cgi?query=smbpasswd - The Debian man page for smbpasswd.
- System initialization http://www.debian.org/doc/manuals/debian-reference/ch04.en.html - The Debian Reference Manual article on system initialization.
- Samba.org http://www.samba.org - The Samba software website.

Resources for Article : Further resources on this subject: Using PVR with Raspbmc [Article] Our First Project – A Basic Thermometer [Article] Instant Minecraft Designs – Building a Tudor-style house [Article]

Using PVR with Raspbmc

Packt
18 Feb 2013
9 min read
(For more resources related to this topic, see here.)

What is PVR?

Personal Video Recording (PVR), with a TV tuner, allows you to record as well as watch Live TV. Recordings can be scheduled manually based on a time, or with the help of the TV guide, which can be downloaded from the TV provider (by satellite/aerial or cable) or from a content information provider, such as Radio Times, via the Internet. Not only does PVR allow you to watch Live TV, but on capable backends (we'll look at what a backend is in a moment), it allows you to rewind and pause live TV. A single tuner allows you to tune into one channel at a time, while two tuners would allow you to tune into two. It's important to note that the capabilities listed earlier are not mutually exclusive; that is, with enough tuners it is possible to record one channel while watching another. Depending on the software you use as a backend, this may even be possible with a single tuner, if the two channels are on the same multiplexer.

Raspbmc's role in PVR

Raspbmc can function as both a PVR backend and a frontend. For PVR support in XBMC, it is necessary to have both a backend and one or more frontends. Let's see what a backend and a frontend are:

- Backend: A backend is the part that tunes the channel, records your scheduled programs, and serves those channels and recorded television to the frontends. One backend can serve multiple frontends, if it is sufficiently powerful and there are enough tuners available.
- Frontend: A frontend is the part that receives content from the backend and plays back live television and recorded programs to the user. In the case of Raspbmc, XBMC serves as the frontend and allows us to play back the content. Multiple frontends can connect to one or more backends. This means that we can have several installations of Raspbmc play broadcast content from even a single tuner.

As we've now learned, Raspbmc has a built-in PVR frontend in the form of XBMC. However, it also has a built-in backend. This backend is TVHeadend, and we'll look at getting that up and running shortly.

Standalone backend versus built-in backend

There are cases when it is more favorable to use an external, or standalone, backend rather than the one that ships with Raspbmc itself. A standalone backend is preferable in the following cases:

- You do not find TVHeadend feature-rich enough, or you prefer another backend.
- You have a pre-existing backend; it is easier to configure Raspbmc to use that, rather than reconfiguring it completely.
- You are planning on having multiple frontends. A standalone backend ensures that the serving computer has enough horsepower, and you can serve TV from the same computer that already serves your files, so only one device needs to be on, rather than two (the streaming machine and the Pi).
- You need to use a PCI or PCI Express-based tuner, which requires an external backend due to the limitations of the Pi's connectivity.

The built-in Raspbmc backend (TVHeadend) is preferable in the following cases:

- You only intend to have one frontend, so it makes sense to run everything off the same device, rather than relying on an external system.
- You want a simpler setup; generally, you can just connect your tuner and it will be detected in TVHeadend.
- Raspbmc's auto-update system covers the bundled backend as well, so you will always have a reliable and stable version of TVHeadend and need not worry about updating it to get new features.
- You are building a wireless media center. 
- You have low bandwidth throughput; running the tuner locally on the Raspberry Pi makes more sense, as it does not rely on any transfers over the network (unless you are using an HDHomeRun).

Setting up PVR

We will now look at how to set up a PVR. This will include configuring the backend as well as getting it running in XBMC.

An external backend

The purpose of this title is to focus on the Raspberry Pi, and as there is a great variety of PVR software available, it would be impractical to cover the many options. If you are planning on using an external backend, it is recommended that you thoroughly search for information on the Internet. There are even books for popular and comprehensive PVR packages, such as MythTV. TVHeadend was chosen for Raspbmc because it is lightweight and easy to manage. Raspbmc's XBMC build supports the following backends at the time of writing:

- MythTV
- TVHeadend
- ForTheRecord/Argus TV
- MediaPortal
- Njoy N7
- NextPVR
- VU+/Enigma
- DVBViewer
- VDR

Setting up TVHeadend in Raspbmc

It should be noted that not all TV tuners will work on your device. Because the list changes frequently, it is not possible to list here the devices that work on the Raspberry Pi. However, the most popular tuners used with Raspbmc are AF9015-based. HDHomeRun tuners by SiliconDust are supported as well (note that these tuners do not connect to your Pi directly, but are accessed through the network). With the right kernel modules, TVHeadend can support DVB-T (digital terrestrial), DVB-S (satellite), and DVB-C (cable) tuners. By default, the TVHeadend service is disabled in Raspbmc. We'll need to enable it as follows:

1. Go to Raspbmc Settings (as before, by selecting it from the Programs menu).
2. Under the System Configuration tab, check the TVHeadend server radio button found under the Service Management category.
3. Click on OK to save your settings.

Now that TVHeadend is running, we can access its management page by going to http://192.168.1.5:9981. You should substitute the preceding IP address, 192.168.1.5, with the actual IP address of your Raspberry Pi. You will be greeted with an interface much like the following screenshot: In the preceding screenshot, we see that there are three main tabs available. They are as follows:

- Electronic Program Guide: This shows us what is being broadcast on each channel. It is empty in the preceding screenshot because we've not yet scanned for and added any channels.
- Digital Video Recorder: This allows you to schedule recordings of TV channels as well as use the Automatic recorder functionality, which lets you create powerful rules for automatic recording. You can also schedule recordings in XBMC; however, doing so via the web interface is probably more flexible.
- Configuration: This is where you can configure the EPG source, choose where recordings are saved, manage access to the backend, and manage tuners.

The Electronic Program Guide and Digital Video Recorder tabs are intuitive and simple, so we will instead look at the Configuration section. Our first step in configuring a tuner is to head over to TV Adapters: As shown in the preceding screenshot, TV tuners should automatically be detected and selectable in the drop-down menu (highlighted here). On the right, a box entitled Adapter Configuration can be used for adjusting the tuner's parameters. Now, we need to select the Add DVB Network by location option. 
The following dialog box will appear: Once we have defined the region we are in, TVHeadend will automatically begin scanning for new services on the correct frequencies. These services can be mapped to channels by selecting the Map DVB services to channels button as shown earlier. We are now ready to connect to the backend in XBMC.

Connecting to our backend in XBMC

Regardless of whether we have used Raspbmc's built-in backend or an external one, the process for connecting to it in XBMC is very much the same. We need to do the following:

1. In XBMC, go to System | Settings | Add-ons | Disabled Add-ons | PVR clients. You will now see the following screenshot:
2. Select the type of backend that you would like to connect to. You will then see a dialog allowing you to configure or enable the add-on.
3. Select Configure and fill out the necessary connection details. Note that if you are connecting to the Raspbmc built-in backend, select the TVHeadend client to configure. The default settings will suffice:
4. Click on OK to save these settings and select Enable. Note that the add-on is now located in System | Settings | Add-ons | Enabled add-ons | PVR clients rather than Disabled add-ons | PVR clients.
5. Now, we need to go into Settings | System | Live TV. This allows you to configure a host of options related to Live TV. The most important one is the Enable Live TV option; be sure to check this box!

Now, if we go back to the main menu, we'll see a Live TV option. Your channel information will be there already, although, like the instance shown as follows, it may need a bit of renaming: The following screenshot shows us a sample electronic program guide: Simply select a channel and press Play! The functionality that PVR offers is controlled in a similar manner to the rest of XBMC, so it won't be covered in this article. If you have got this far, you've done the hard part already.

Summary

We've now covered what PVR can do for us, the differences between a frontend and a backend, and where a remote backend may be more suitable than the one Raspbmc has built in. We then covered how to connect to that backend in XBMC and play back content from it. Resources for Article : Further resources on this subject: Adding Pages, Image Gallery, and Plugins to a WordPress Blog [Article] Building a CRUD Application with the ZK Framework [Article] Playback Audio with Video and Create a Media Playback Component Using JavaFX [Article]