
How-To Tutorials - IoT and Hardware

Instant Optimizing Embedded Systems Using BusyBox

Packt
25 Nov 2013
9 min read
(For more resources related to this topic, see here.)

BusyBox Compiling

BusyBox, the Swiss Army knife of embedded Linux, can be compiled into a single binary for different architectures. Before compiling any software, we need a compiler with the corresponding libraries, a build host, and a target platform: the build host is the machine that runs the compiler, and the target platform is the machine that runs the resulting binary. Here, a 64-bit x86 Ubuntu desktop system will be used as our build host, and an ARM Android system will be used as our target platform.

To compile BusyBox on an x86-64 Ubuntu system for ARM, we need a cross compiler. The gcc-arm-linux-gnueabi cross compiler can be installed directly on Ubuntu:

$ sudo apt-get install gcc-arm-linux-gnueabi

On other Linux distributions, Google's official NDK is a good choice if you want to link against Android's Bionic C library. However, since the Bionic C library lacks many POSIX C header files, the prebuilt Linaro GCC toolchain with glibc is preferable if you want most of the BusyBox applets to build. We can download it from http://www.linaro.org/downloads/, for example: http://releases.linaro.org/13.04/components/toolchain/binaries/gcc-linaro-aarch64-none-elf-4.7-2013.04-20130415_linux.tar.bz2.

Before compiling, to simplify configuration, enable the largest generic configuration with make defconfig, then set the cross compiler prefix (arm-linux-gnueabi-) with make menuconfig.
$ make defconfig
$ make menuconfig
Busybox Settings --->
  Build Options --->
    (arm-linux-gnueabi-) Cross Compiler prefix

After configuration, we can simply compile it with:

$ make

A BusyBox binary will then be compiled for ARM:

$ file busybox
busybox: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), stripped

To list the shared libraries required by the BusyBox binary for ARM, the arm-linux-gnueabi-readelf command should be used:

$ arm-linux-gnueabi-readelf -d ./busybox | grep "Shared library:" | cut -d'[' -f2 | tr -d ']'
libm.so.6
libc.so.6
ld-linux.so.3

To get their full paths, we first get the library search path:

$ arm-linux-gnueabi-ld --verbose | grep SEARCH | tr ';' '\n' | cut -d'"' -f2 | tr -d '"'
/lib/arm-linux-gnueabi
/usr/lib/arm-linux-gnueabi
/usr/arm-linux-gnueabi/lib

We can then find out that /usr/arm-linux-gnueabi/lib is the real search path on our platform, and obtain the full paths of the libraries as below:

$ ls /usr/arm-linux-gnueabi/lib/{libm.so.6,libc.so.6,ld-linux.so.3}
/usr/arm-linux-gnueabi/lib/ld-linux.so.3
/usr/arm-linux-gnueabi/lib/libc.so.6
/usr/arm-linux-gnueabi/lib/libm.so.6

By default, the binary is dynamically linked. To enable static linking, configure BusyBox as follows:

Busybox Settings --->
  Build Options --->
    [*] Build BusyBox as a static binary (no shared libs)

If using a new glibc to compile BusyBox with static linking, we need to disable CONFIG_FEATURE_INETD_RPC to avoid this error:

inetd.c:(.text.prepare_socket_fd+0x7e): undefined reference to `bindresvport'

Networking Utilities --->
  [*] inetd
    [ ] Support RPC services

Then recompile it with make.

BusyBox Installation

This section shows how to install the compiled BusyBox binaries on an ARM Android system. Installing BusyBox means creating links for all of its built-in applets. Using the wc applet as an example:

$ ln -s busybox wc
$ echo "Hello, Busybox." | ./wc -w
2
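Installation is essentially this link-per-applet pattern repeated for every applet BusyBox supports. As a rough sketch of the resulting layout (the applet list here is a small illustrative sample, not BusyBox's full set), we can simulate it in a temporary directory:

```python
import os
import tempfile

# A small sample of applet names; the real BusyBox binary reports
# hundreds via `busybox --list`.
APPLETS = ["wc", "ls", "md5sum"]

def install_applets(busybox_path, bindir):
    """Create one symbolic link per applet, each pointing at busybox."""
    for name in APPLETS:
        os.symlink(busybox_path, os.path.join(bindir, name))

root = tempfile.mkdtemp()
busybox = os.path.join(root, "busybox")
open(busybox, "w").close()          # stand-in for the real binary
bindir = os.path.join(root, "bin")
os.mkdir(bindir)

install_applets(busybox, bindir)
print(sorted(os.listdir(bindir)))   # ['ls', 'md5sum', 'wc']
print(os.readlink(os.path.join(bindir, "wc")) == busybox)  # True
```

When a linked applet is executed, BusyBox inspects the name it was invoked under (argv[0]) to decide which applet to behave as, which is why one binary can stand in for a whole set of utilities.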
BusyBox can be installed at the earlier compiling stage or at run time. To build a minimal embedded file system with BusyBox, it is best to install it at the compiling stage with make install, as this creates the basic directory structure of a standard Linux root file system and places soft links to BusyBox under the corresponding directories. With this installation method, we need to configure the target installation directory as follows, using the ~/busybox-ramdisk/ directory as an example:

Busybox Settings --->
  Installation Options ("make install" behavior) --->
    (~/busybox-ramdisk/) BusyBox installation prefix

After installation, we get a list of files and directories like this:

$ ls ~/busybox-ramdisk/
bin linuxrc sbin usr

To install BusyBox on an existing ARM Android system, however, it may be easier to do so at run time with its --install option. By default, --install creates hard links; to create soft (symbolic) links instead, the -s option should be appended. If links must be created across different file systems (for example, on an Android system, installing BusyBox to /system/bin while the BusyBox binary itself sits in the /data directory), -s must be used. To make the -s option available, BusyBox should be configured as below:

Busybox Settings --->
  General Configuration --->
    [*] Support --install [-s] to install applet links at runtime

Now, let's look at how to install the compiled BusyBox binaries on an existing ARM Android system. To do so, the Android system must be rooted so that the /data and / directories are writable. We will not show how to root an Android device; please get help from your product maker. If no rooted Android device is available, the Android emulator provided by the ADT (Android Development Toolkit, http://developer.android.com/sdk/index.html) can be used to start a rooted Android system on a virtual ARM Android device.
To create a virtual ARM Android device and use the Android emulator, please read the online documents provided by Google at http://developer.android.com/tools/help/android.html and http://developer.android.com/tools/help/emulator.html.

Now, let's assume a rooted Android system is already running with the USB debugging option enabled for Android Debug Bridge (adb, http://developer.android.com/tools/help/adb.html) support. To check that such a device has started, we can run:

$ adb devices
List of devices attached
emulator-5554 device

As we can see, a virtual Android device running on the Android emulator, emulator-5554, is available. We can now show how to install the BusyBox binaries on this existing Android system. Since the dynamically linked and statically linked BusyBox binaries differ, we will cover them separately.

Install the statically linked BusyBox binary

To install the statically linked BusyBox binary, we only need to upload the BusyBox binary itself:

$ adb push busybox /data/

Afterwards, install it with the --install option; for example, to install it to the /bin directory of the Android system:

$ adb shell
root@android:/ # mount -o remount,rw /
root@android:/ # mkdir /bin/
root@android:/ # /data/busybox --install -s /bin

To be able to create the /bin directory for installation, the / directory is remounted as writable. After installation, soft links are created under /bin for all of the built-in applets. If the -s option were not used, creating hard links across the /bin and /data directories would fail, which is why -s must be used; the failure log looks like this:

busybox: /bin/[: Invalid cross-device link
busybox: /bin/[[: Invalid cross-device link
busybox: /bin/acpid: Invalid cross-device link
busybox: /bin/add-shell: Invalid cross-device link
(...truncated...)
To execute the just-installed applets, take md5sum as an example:

$ /bin/md5sum /bin/ls
19994347b06d5ef7dbcbce0932960395 /bin/ls

To run the applets without the full path, the /bin directory should be appended to the PATH variable:

$ export PATH=$PATH:/bin

Then all of the BusyBox applets can be executed directly, which means we have installed BusyBox successfully. To make the settings permanent, the above commands can be added to a script, and such a script can be run as an Android service.
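The statically linked binary needs no shared libraries, but for a dynamically linked build we must know which libraries to push alongside it, which is what the readelf pipeline shown earlier extracts. As a hedged sketch (the sample text below is illustrative of readelf's NEEDED entries, not captured output), the same extraction can be scripted in Python:

```python
import re

# Illustrative sample of what `arm-linux-gnueabi-readelf -d ./busybox`
# prints for NEEDED entries; layout is an assumption, not captured output.
READELF_OUTPUT = """
 0x00000001 (NEEDED)                     Shared library: [libm.so.6]
 0x00000001 (NEEDED)                     Shared library: [libc.so.6]
 0x00000001 (NEEDED)                     Shared library: [ld-linux.so.3]
"""

def shared_libraries(readelf_text):
    """Return the names on 'Shared library: [...]' lines, mirroring:
    grep "Shared library:" | cut -d'[' -f2 | tr -d ']'"""
    return re.findall(r"Shared library: \[([^\]]+)\]", readelf_text)

print(shared_libraries(READELF_OUTPUT))
# ['libm.so.6', 'libc.so.6', 'ld-linux.so.3']
```

A helper like this could feed a deployment script that pushes exactly the libraries the binary depends on.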
Install the dynamically linked BusyBox binary

To install a dynamically linked BusyBox, besides the BusyBox binary itself, the required dynamic linker/loader (ld-linux.so) and the dependent shared libraries (libc.so and libm.so) must be installed too. Since the basic installation procedure is the same as for the statically linked BusyBox, here we only cover how to install the required ld-linux.so.3, libc.so.6, and libm.so.6. Without this dynamic linker/loader and these libraries, we get an error like the following while running the dynamically linked BusyBox:

$ /data/busybox --install -s /bin
/system/bin/sh: /data/busybox: No such file or directory

Before installation, create a /lib directory on the target Android system and upload the files to it:

$ adb shell mkdir /lib
$ adb push /usr/arm-linux-gnueabi/lib/ld-linux.so.3 /lib/
$ adb push /usr/arm-linux-gnueabi/lib/libc.so.6 /lib/
$ adb push /usr/arm-linux-gnueabi/lib/libm.so.6 /lib/

With the above in place, the dynamically linked BusyBox binary is also able to run, and we can install and execute its applets as before:

$ /data/busybox --install -s /bin

As we can see, installing the dynamically linked BusyBox binary requires the extra installation of the dependent linker/loader and libraries. For real embedded system development, if only the BusyBox binary itself uses the shared libraries, static linking should be used at compile time to avoid the extra installation and the cost of run-time linking. Conversely, if many binaries share the same libraries, dynamic linking may be preferable to reduce the total size on size-critical embedded systems.

Summary

In this article we learned about compiling and installing BusyBox. BusyBox is a free, GPL-licensed toolbox aimed at the embedded world: a collection of tiny versions of many common Unix utilities.
Resources for Article:

Further resources on this subject:

Embedding Doctests in Python Docstrings [Article]
The Business Layer (Java EE 7 First Look) [Article]
Joomla! 1.6: Organizing and Managing Content [Article]


3D Printing

Packt
18 Nov 2013
9 min read
(For more resources related to this topic, see here.)

Function

The software starts by taking a .stl or .obj file, along with all our settings, and converts it into GCode. Think of GCode as an instruction set for our printer: where to move, how fast to move, whether or not to extrude material, the extruder temperature, when to lower the platform, and so on. The following screenshot shows an example of the GCode produced by the ReplicatorG slicing engine, Skeinforge, at the start of a print:

Start of GCode

The slicing engine is what tells your printer what to make and exactly how to make it. The algorithm involved directly relates to print quality, and thus gets the most attention from developers. It's at this stage that we see the trade-off between software and hardware: maybe the printer can print at a greater resolution than the current software can break the model down to, or perhaps the opposite, where the software can break the model down finer than the hardware can reproduce. It turns out this is a major factor in what separates personal from commercial printers, and even a $200 from a $2,000 printer: precision. More precision costs more money in both hardware and software development. The hardware must be able to handle the precision calculated in software, and where it cannot, software workarounds must be implemented. This is why algorithms improve by leaps and bounds with every software update, and why newly released printers outperform their predecessors.

To make things easier on the microcontroller in the printer, the GCode is converted into .s3g or .x3g code, which is essentially just optimized GCode. From there it is used to generate motor step and direction pulses, which are sent to the motor controller and then to the motors.
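To make the "instruction set" idea concrete, here is a small sketch, not Skeinforge output, that emits GCode for one square perimeter of a layer. The feed rate and extrusion-per-millimetre values are illustrative assumptions; G1 is a coordinated move, E the cumulative filament extrusion, F the feed rate:

```python
def square_perimeter_gcode(size_mm, z_mm, feed_mm_min=1800, e_per_mm=0.05):
    """Emit G1 moves tracing a square of side `size_mm` at height `z_mm`.
    E grows with distance travelled so the extruder feeds filament."""
    corners = [(0, 0), (size_mm, 0), (size_mm, size_mm), (0, size_mm), (0, 0)]
    lines = ["G1 Z%.2f F%d" % (z_mm, feed_mm_min)]  # lift to the layer height
    e = 0.0
    for i in range(1, len(corners)):
        x, y = corners[i]
        e += size_mm * e_per_mm          # each edge is size_mm long
        lines.append("G1 X%.2f Y%.2f E%.4f" % (x, y, e))
    return lines

for line in square_perimeter_gcode(20, 0.2):
    print(line)
# G1 Z0.20 F1800
# G1 X20.00 Y0.00 E1.0000
# G1 X20.00 Y20.00 E2.0000
# G1 X0.00 Y20.00 E3.0000
# G1 X0.00 Y0.00 E4.0000
```

A real slicer emits thousands of such lines per layer, interleaved with temperature, fan, and retraction commands, but the shape of the output is the same.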
It's at this stage we realize that the process of 3D printing is just a handful of motors moving in a set pattern, combined with a heater to melt the plastic material. The magic of 3D printing happens behind the scenes, inside the slicing algorithm that creates those explicit patterns.

MakerWare

You may be thinking, "This software is made by MakerBot and is intended for use with MakerBot printers; that sounds like the best option"; well, you are mostly correct. MakerWare is currently still in beta, but is released for general use. The software is always in revision, and has made major improvements in a very short period of time. Logically, it would make sense to use the software explicitly designed for your printer, but up until MakerWare v2.2, the ReplicatorG software had been superior. With all the improvements made in MakerWare's latest release, v2.3.1, I would argue that MakerWare now surpasses ReplicatorG for use with a MakerBot 3D printer. This is one of the joys of being involved with MakerBot and the 3D printing environment: the products are always evolving and always improving. In a period of five years, MakerBot went from an idea to being acquired by Stratasys for $403 million. This speaks to how fast the industry is moving, and how fast the technology is advancing. For those interested, visit http://www.makerbot.com/blog/category/makerbot-software-updates/ to see a detailed description (with lots of pictures) of the improvements in each MakerWare update.

ReplicatorG

You might be wondering why we would even consider the ReplicatorG software. The simple answer is that, with the release of MakerWare v2.3.1 and the purchase of a MakerBot, we wouldn't. The ReplicatorG software undoubtedly served as a building block for many of the features in MakerWare, and was the leading software for personal/hobbyist 3D printing for many years.
The MakerWare software will meet all the needs of our designs, but if you are interested in learning more about open source 3D printing, I would suggest checking out this software. We have chosen to use MakerWare (v2.3.1) for the examples in this article, as it is the software most tailored to our needs. Visit http://www.makerbot.com/makerware/ to download your own beta copy.

MakerWare options and settings

The first step after opening MakerWare is to add a model to the build platform by clicking on the Add button. Let's add Mr Jaws (by navigating to File | Examples | Mr_Jaws.stl) to our build platform. Once the model has been added, it needs to be selected by left-clicking it. This should highlight the model in yellow, as shown in the following screenshot:

Mr Jaws is selected

Notice the buttons on the left-hand side, which are intuitively labeled Look, Move, Turn, and Scale. Clicking these buttons allows us to orient our model. Let's move Mr Jaws to the top-right corner, spin him 180 degrees in the Z-plane, and scale him to 110 percent. The result can be seen in the following screenshot:

Mr Jaws is moved, rotated, and scaled

Once we are satisfied with the orientation, click on Make, located at the top of the screen, to open the print options. Ensure that your model is sitting on the platform by hitting the On Platform button (by navigating to Move | On Platform); otherwise you will print many layers of supporting material before finally reaching your part (if your part is floating above the platform), or you will damage your nozzle and platform (if your part is floating below the platform).

Print options

The default options are fairly straightforward, and will modify all the advanced options automatically. The default options are as follows:

I want to: This gives the option of where to save the file, or of sending it directly to your MakerBot if it's plugged in via USB.

Export For: This helps you select your MakerBot.
Material: This helps you select the material; PLA is recommended.

Resolution: MakerWare has three quick-set profiles: Low, Standard, and High. These profiles reference the desired print resolution and directly control the Z-layer height. Remember that higher resolution requires longer print times.

Raft: A raft is a surface, slightly larger than the part, which is built between the bottom of the part and the build platform. Rafts help reduce warping by giving more surface area to adhere to the build platform. Once a part has been printed, the raft is easily broken away. We will always use rafts to help reduce errors in our models brought about by poor adhesion and warping.

Supports: Supports are used to hold up sharp overhangs. We will always leave the support box checked, as the software determines when and where to use supports and will not print them if they are not needed. This ensures we never run into errors from floating layers.

By evaluating the advanced options, we can observe the result of changing the default options and gain a little more insight into the settings that influence our print. The advanced options are described below:

Profile: Profiles handle all the settings of a print; they are groupings of preselected options. By default there are three profiles: Low, Standard, and High, but you can create your own custom profiles.

Slicer: The options are the MakerBot Slicer and the Skeinforge slicer, and the choice can be changed by creating a new profile. It is recommended to use the MakerBot slicing engine because, as mentioned earlier, it has been optimized for use with MakerBot printers.

Quality | Infill: Infill is the density of the object, measured as a percentage. By default, the amount of infill is low (less than 20 percent) to save both time and material. The slicing engine automatically creates a pattern for the infill (most commonly honeycomb).
Quality | Number of Shells: The number of shells represents the perimeter thickness of your model. One shell corresponds to one outline layer (approximately 0.4 mm, the width of the nozzle opening, in XY; the Z thickness depends on the next property, Quality | Layer Height). All the profiles default to two shells. If strength is a concern for your model, it is suggested to increase the number of shells rather than the infill. Adding shells will also increase print times.

Quality | Layer Height: Layer height is the height of each individual cross-sectional layer. The MakerBot Replicator 2 is capable of heights as low as 0.1 mm; this corresponds to 10 layers for a model 1 mm in height. Lowering the layer height will increase print times.

Temperature | Extruders: The extruder temperature defaults to 230 C. Higher temperatures can improve adhesion but may require slower printing. Every individual has their own "magic number" for temperature which they feel works best, but it's best to stay within +/- 10 C of the default. For our models we will use the default 230 C.

Temperature | Build Plate: A heated build plate is only required when printing in ABS, in order to reduce warping. The default is 110 C; slightly higher temperatures will improve adhesion but also risk greater warping upon cooling.

Speed | Speed while Extruding: Extruder speed and temperature are directly linked to one another; the speed needs to be slow enough to allow the layer currently being extruded to bond with the layer underneath. Greater speeds can reduce accuracy but will decrease print time. Be extremely cautious while modifying this parameter, as it takes experience to match increased speeds and temperatures properly.

Speed | Speed while Travelling: When not extruding, the extruder head is capable of faster travel. It is recommended to leave this parameter as set.
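The layer-height and shell arithmetic above is worth pinning down. A small sketch (function and constant names are my own, not MakerWare's) that reproduces the 1 mm / 0.1 mm → 10 layers figure and the wall thickness implied by the shell count:

```python
import math

NOZZLE_WIDTH_MM = 0.4   # approximate extrusion width in XY (one shell)

def layer_count(model_height_mm, layer_height_mm):
    """Number of cross-sectional layers the slicer will produce."""
    return math.ceil(model_height_mm / layer_height_mm)

def wall_thickness(shells):
    """Perimeter thickness in XY for a given number of shells."""
    return shells * NOZZLE_WIDTH_MM

print(layer_count(1.0, 0.1))   # 10 layers for a 1 mm tall model
print(wall_thickness(2))       # 0.8 mm with the default two shells
```

Halving the layer height doubles the layer count and therefore roughly doubles the print time, which is exactly the trade-off the Resolution profiles expose.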
Preview before printing: This is an amazing tool and you should check it every time. Previewing before printing lets you see the approximate material use and a time estimate for printing the given model.

Summary

This article explained the basic steps to prepare a 3D model in MakerWare and get it ready for 3D printing.

Resources for Article:

Further resources on this subject:

Design Tools and Basics [Article]
Materials with Ogre 3D [Article]
Importing 3D Formats into Away3D [Article]


Designing Objects for 3D Printing

Packt
15 Nov 2013
9 min read
(For more resources related to this topic, see here.)

How a 3D printer works

A 3D printer needs to take a description of a three-dimensional object and turn it into a physical object. Like Blender, a 3D printer uses values along the X, Y, and Z axes to determine the shape of an object. But where Blender sees an object as cylinders, spheres, cubes, or edges and faces, a 3D printer is all about layers and perimeters. First, a slicing program opens the object file that you made and slices the object into vertical layers, as seen in the following screenshot. Then, each layer is printed out one by one in a growing stack, as seen in the following screenshot.

You can get a better idea of how these layers stack up if you can see it interactively. I have provided an interactive illustration that allows you to see the dragon slice by slice. Scrolling through the frames, you can see how the walls of the dragon's body are built. Open up 4597OS_01_LayersDisplay.blend in your download packet. Examine the thickness of the body at each layer. Press Alt + A to play the animation, and press Esc to stop playing it. You can also drag the current-time indicator in the timeline back and forth to look at individual frames, or use the right and left arrow keys. Note how the dragon starts as a series of islands. Look at the dragon's hands: the fingers start off floating in space until they are joined to the arms.

The exact method a 3D printer uses to print a layer varies. Some printers work like a pencil, drawing an outline of the shape on that layer and then filling in the shape with cross-hatching. Look at the left side of the preceding screenshot again: the printer would first outline the tail, then fill it in. Next, it would move to one haunch, outline it, and fill it in, and then the other. Finally, it would outline and fill each foot. You can get a better idea of how this happens with this 3D printer's hot end simulator.
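The slicing step described above, cutting the model at a series of Z heights, can be sketched as follows. This is a toy stand-in for a real slicer (a real one also computes the 2D outline at each plane); the numbers are made up for illustration:

```python
def slice_heights(model_height_mm, layer_height_mm):
    """Z coordinate of each horizontal cutting plane, bottom to top."""
    z, heights = layer_height_mm, []
    while z <= model_height_mm + 1e-9:   # small epsilon for float drift
        heights.append(round(z, 6))
        z += layer_height_mm
    return heights

# A 2 mm tall model sliced at 0.5 mm gives four cutting planes:
print(slice_heights(2.0, 0.5))   # [0.5, 1.0, 1.5, 2.0]
```

At each of these planes the slicer intersects the mesh to get that layer's perimeters, which is also why a feature like the dragon's fingers can appear as floating "islands" on early layers: their cross-sections exist before the geometry joining them to the arms does.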
The hot end is the printer's nozzle, where the 3D printing material is extruded. Other printers may use a print head much like an inkjet printer's: the print head moves across the printing bed and deposits material where needed.

Types of 3D printers

So what kinds of printers are there? How do they print, and how are they different? The terminology is still a bit confusing. The American Society for Testing and Materials (ASTM International) recently came up with the following categories:

Material extrusion is also known as Molten Polymer Deposition (MPD), Fused Deposition Modeling (FDM), or Fused Filament Fabrication (FFF); these printers extrude a gooey material in layers to build up the proper shape. This is the class that includes most hobbyist 3D printers; they work like the simulator you just used. They can use plastic, metal wire, wax, sugar, frosting, chocolate, cookie dough, pasta, pizza, and even corn chips.

Material jetting is also known as photopolymer jetting. Like an inkjet, this printer squirts liquid photopolymers at the right moment, which are cured immediately with ultraviolet light, layer by layer. The object being built is supported by a layer of gel that is also applied by the print head, so overhang is not a problem.

Binder jetting uses a two-part system. A thin layer of composite material is spread across the print bed. Then an inkjet-like printing head sprays a binder fluid and possibly colored ink, which combine with the composite material to produce solid, colored, and sometimes textured objects. The material can be plastic, gypsum, or metals such as copper, tungsten, bronze, and stainless steel. For metals, a second step is needed to make them solid: the binder is removed and metal is infused where the binder used to be.

Sheet lamination printers may use materials such as paper or metal, and will color, cut out, and glue layers together into objects.

Vat photopolymerization is also called Stereolithography (SLA).
Photopolymerization printers use light to cure liquid material into the right shape. This process uses resins, wax, or liquid plastics for the material. It may use a laser, or a high-resolution DLP video projector similar to one you would hook up to your computer to give a PowerPoint presentation.

Powder bed fusion is also known as Granular Materials Binding. These printers use a laser or heat to fuse layers of powder into the right shape, and can use metal, ceramic, gypsum, or plastic powder. There are several subtypes of powder bed fusion printers. Selective Laser Sintering (SLS) is used with thermoplastics, wax, and ceramic powders: a thin coat of powder is spread across the printing bed, the printing head prints the layer by fusing selected areas with the laser, the printing bed drops down, another coat of powder is added, and the laser prints the next layer. Selective Heat Sintering (SHS) uses heat instead of a laser and can be used with thermoplastic powder. Direct Metal Laser Sintering (DMLS), or Selective Laser Melting (SLM), is a subcategory of selective laser sintering in which the laser beam melts the metal and makes solid parts from metal alloys such as aluminum, iron, stainless steel, maraging steel, nickel, chromium, cobalt, and titanium alloys; in theory, it can be used with most alloys.

Directed energy deposition, also known as Electron Beam Melting (EBM), is similar to SLS but uses an electron beam instead of a laser. The high heat generated by the electron beam allows the use of pure metal powder, such as titanium alloys, and can make high-detail, high-strength objects that do not need any post-manufacturing heat treatment.

Question: Earlier, I mentioned a company named Made In Space, which is making a 3D printer to be used in zero gravity. What kind of printer is it making?

1. Directed energy deposition
2. Vat photopolymerization
3. Material extrusion
4. Powder bed fusion

Answer: Option 3, material extrusion, is correct.
Extruding a material avoids liquid or powder floating around in zero gravity.

Basic parts of a 3D printer

As you have observed, there is a wide variety of 3D printers, but there are some parts they all have in common. The printing bed is what the 3D object is built upon. The printing head holds the laser, the printing jet, or the hot end of the extruder. And then there are controls to position the printing bed and the printing head in relation to each other: one control for the X dimension, one for the Y dimension, and one for the Z dimension. There are no hard and fast rules for which of these control the printing bed and which the printing head. The Cube's printing head is controlled in the X dimension only, and its printing bed is controlled in the Y and Z dimensions, whereas the MendelMaxPro puts the X and Z controls on the printer head and controls the printing bed only in the Y dimension.

How is a 3D printer controlled? Generally, the answer is stepper motors. Stepper motors are motors that move in small, discrete angles of rotation instead of spinning like most regular motors. This allows definite, easily repeatable motions. It is also one reason why there are minimum sizes on the detail that you can make: a 3D printer can't make detail smaller than one step of the stepper motor. Then, through wires, drums, gears, and threaded rods, the motion of the stepper motor is scaled to fit the medium that the printer uses. A hobbyist printer that uses a filament of ABS or PLA plastic feeding off a reel will provide the kind of detail that those plastics can support; a high-end stereolithography printer may achieve much finer detail.

The next graphic is a diagram of the inside of a stepper motor. The rotor is in the center; it rotates and is attached to a shaft that pokes out of the motor. The stators are attached to the outer shell of the motor.
They are wrapped with copper wire, and an electrical current is run through the wire to give each stator a negative charge, a positive charge, or no charge, as indicated in the next graphic, where red represents a positive charge, blue a negative charge, and grey no charge. The rotor in the center has 50 teeth; the stators around the outside have a total of 48 teeth. It's this imbalance in the number of teeth that allows the stepper motor's rotor to walk around step by step. The positive charge of the rotor is attracted to the stator teeth that are negatively charged. In the following screenshot, you can see that the rotor teeth aren't well aligned with the uncharged stator that is counter-clockwise from the blue stator. To perform a single step, the stepper motor controller moves the negative charge from the blue stator to the stator just counter-clockwise of it. The teeth in the rotor then try to align with that stator, so the rotor moves just a little: one step. To continue moving more steps, the negative charge keeps moving to the next stator, as follows.

The stepper motor is then attached to a control belt, or to a shaft with a screw thread, to give the printer precise control of the print head and the printing bed. There may be one or more stepper motors controlling a single axis.

Summary

In this article, we covered the fundamentals of how a 3D printer works and the different kinds of printers there are. You discovered that 3D printers can handle a wide variety of materials, from wood, to plastic, to titanium. We also covered how a 3D printer is controlled by stepper motors.

Resources for Article:

Further resources on this subject:

Introduction to the Editing Operators in Blender: A Sequel [Article]
Getting Started with Blender's Particle System [Article]
Blender 3D: Interview with Allan Brito [Article]

Clusters, Parallel Computing, and Raspberry Pi – A Brief Background

(For more resources related to this topic, see here.)

So what is a cluster? In essence, a cluster is a group of computers linked over a network so that they can work together as a single system. Each device on this network is often referred to as a node.

Thanks to the Raspberry Pi's low cost and small physical footprint, building a cluster to explore parallel computing has become far cheaper and easier for users at home to implement. Not only does it allow you to explore the software side, but the hardware as well. While Raspberry Pis wouldn't be suitable for a fully-fledged production system, they provide a great tool for learning the technologies that professional clusters are built upon. For example, they allow you to work with industry standards such as MPI, and cutting-edge open source projects such as Hadoop.

This article will provide you with a basic background to parallel computing and the technologies associated with it. It will also provide you with an introduction to using the Raspberry Pi.

A very short history of parallel computing

The basic assumption behind parallel computing is that a larger problem can be divided into smaller chunks, which can then be operated on separately at the same time. Related to parallelism is the concept of concurrency, but the two terms should not be confused. Parallelism can be thought of as simultaneous execution, and concurrency as the composition of independent processes. You will encounter both of these approaches in this article. You can find out more about the differences between the two at the following site:

http://blog.golang.org/concurrency-is-not-parallelism

Parallel computing and related concepts have been in use by capital-intensive industries, such as aircraft design and defense, since the late 1950s and early 1960s. With the cost of hardware having dropped rapidly over the past five decades, and with the birth of open source operating systems and applications, home enthusiasts, students, and small companies now have the ability to leverage these technologies for their own uses.
Traditionally, parallel computing was found within High Performance Computing (HPC) architectures: systems characterized by high speed and density of calculations. The term you will probably be most familiar with in this context is, of course, supercomputers, which we shall look at next.

Supercomputers

The genesis of supercomputing can be found in the 1960s with a company called Control Data Corporation (CDC). Seymour Cray was an electrical engineer working for CDC who became known as the father of supercomputing due to his work on the CDC 6600, generally considered to be the first supercomputer. The CDC 6600 was the fastest computer in operation between 1964 and 1969.

In 1972, Cray left CDC and formed his own company, Cray Research. In 1975, Cray Research announced the Cray-1 supercomputer. The Cray-1 would go on to be one of the most successful supercomputers in history and was still in use among some institutions until the late 1980s.

The 1980s also saw a number of other players enter the market, including Intel via the Caltech Concurrent Computation project, which contained 64 Intel 8086/8087 CPUs, and Thinking Machines Corporation's CM-1 Connection Machine. This preceded an explosion in the 1990s with regards to the number of processors being included in supercomputing machines. It was in this decade, thanks to brute-force computing power, that IBM famously beat world chess master Garry Kasparov with the Deep Blue supercomputer. The Deep Blue machine contained some 30 nodes, each including IBM RS6000/SP parallel processors and numerous "chess chips".

By the 2000s, the number of processors had blossomed to tens of thousands working in parallel. As of June 2013, the fastest supercomputer title was held by the Tianhe-2, which contains 3,120,000 cores and is capable of running at 33.86 petaflops.

Parallel computing is not just limited to the realm of supercomputing.
Today we see these concepts present in multi-core and multiprocessor desktop machines. As well as single devices, we also have clusters of independent devices, often containing a single core each, that can be connected up to work together over a network. Since multi-core machines can be found in consumer electronics shops all across the world, we will look at these next.

Multi-core and multiprocessor machines

Machines packing multiple cores and processors are no longer just the domain of supercomputing. There is a good chance that your laptop or mobile phone contains more than one processing core, so how did we reach this point?

The mainstream adoption of parallel computing can be seen as a result of the cost of components dropping due to Moore's law. The essence of Moore's law is that the number of transistors in integrated circuits doubles roughly every 18 to 24 months. This in turn has consistently pushed down the cost of hardware such as CPUs. As a result, manufacturers such as Dell and Apple have produced ever faster machines for the home market that easily outperform the supercomputers of old that once took a room to house. Computers such as the 2013 Mac Pro can contain up to twelve cores, that is, a CPU that duplicates some of its key computational components twelve times. These cost a fraction of the price that the Cray-1 did at its launch.

Devices that contain multiple cores allow us to explore parallel-based programming on a single machine. One method that allows us to leverage multiple cores is threads. A thread can be thought of as a sequence of instructions, usually contained within a single lightweight process, that the operating system can then schedule to run. From a programming perspective, this could be a separate function that runs independently from the main core of the program.
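The thread model described above can be sketched in Python, which is also readily available on the Raspberry Pi (this is an illustrative toy, not code from the article; the function names are our own). It splits a summation across several worker threads and then combines their partial results:

```python
import threading

def partial_sum(numbers, results, index):
    # Each worker computes the sum of its own slice independently.
    results[index] = sum(numbers)

def threaded_sum(numbers, n_threads=4):
    # Divide the data into one chunk per thread.
    chunk = (len(numbers) + n_threads - 1) // n_threads
    results = [0] * n_threads
    threads = []
    for i in range(n_threads):
        part = numbers[i * chunk:(i + 1) * chunk]
        t = threading.Thread(target=partial_sum, args=(part, results, i))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()  # wait for every worker before combining
    return sum(results)

print(threaded_sum(list(range(1, 101))))  # prints 5050
```

Note that CPython's global interpreter lock limits true simultaneous execution of CPU-bound threads; the example illustrates the programming model (fork, join, combine) rather than a guaranteed speedup.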
Thanks to the ability to use threads in application development, by the 1990s a set of standards had come to dominate the area of shared memory multiprocessor devices: POSIX Threads (Pthreads) and OpenMP.

POSIX Threads is a standardized C language interface, specified in the IEEE POSIX 1003.1c standard, for programming threads that can be used to implement parallelism. The other standard is OpenMP. To quote the OpenMP website, it can be described as follows:

OpenMP is a specification for a set of compiler directives, library routines, and environment variables that can be used to specify shared memory parallelism in Fortran and C/C++ programs.

http://openmp.org/

What this means in practice is that OpenMP is a standard providing an API that helps to deal with problems such as multi-threading and memory sharing. By including OpenMP in your project, you can write multithreaded applications without having to take care of many of the low-level implementation details, as you would when writing an application purely using Pthreads.

Commodity hardware clusters

As with single devices containing many CPUs, we also have groups of commodity off-the-shelf (COTS) computers, which can be networked together into a Local Area Network (LAN). These used to be commonly referred to as Beowulf clusters. In the late 1990s, thanks to the drop in the cost of computer hardware, the implementation of Beowulf clusters became a popular topic, with Wired magazine publishing a how-to guide in 2000:

http://www.wired.com/wired/archive/8.12/beowulf.html

The Beowulf cluster has its origin at NASA in the early 1990s, with Beowulf being the name given to the concept of a Network Of Workstations (NOW) for scientific computing devised by Donald J. Becker and Thomas Sterling. The implementation of commodity hardware clusters running technologies such as MPI lies behind the Raspberry Pi-based projects we will be building in this article.

Cloud computing

The next topic we will look at is cloud computing.
You have probably heard the term before, as it is something of a buzzword at the moment. At the core of the term is a set of technologies that are distributed, scalable, metered (as with utilities), can be run in parallel, and often contain virtual hardware. Virtual hardware is software that mimics the role of a real hardware device and can be programmed as if it were in fact a physical machine. Examples of virtual machine software include VirtualBox, Red Hat Enterprise Virtualization, and Parallel Virtual Machine (PVM). You can learn more about PVM here:

http://www.csm.ornl.gov/pvm/

Over the past decade, many large Internet-based companies have invested in cloud technologies, the most famous perhaps being Amazon. Having realized they were under-utilizing a large proportion of their data centers, Amazon implemented a cloud computing-based architecture, which eventually resulted in a platform open to the public known as Amazon Web Services (AWS). Products such as Amazon's AWS Elastic Compute Cloud (EC2) have opened up cloud computing to small businesses and home consumers by allowing them to rent virtual computers to run their own applications and services. This is especially useful for those interested in building their own virtual computing clusters. Due to the elasticity of cloud computing services such as EC2, it is easy to spool up many server instances and link them together to experiment with technologies such as Hadoop.

One area where cloud computing has become of particular use, especially when implementing Hadoop, is in the processing of big data.

Big data

The term big data has come to refer to data sets spanning terabytes or more. Often found in fields ranging from genomics to astrophysics, big data sets are difficult to work with and require huge amounts of memory and computational power to query. These data sets obviously need to be mined for information.
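The divide-process-combine pattern used to mine such data sets can be illustrated with a toy in-process word count in Python. This is only a sketch of the model that frameworks such as Hadoop implement; a real system would run the map and reduce phases on separate machines, and the function names here are our own:

```python
from collections import defaultdict

def map_phase(document):
    # Emit a (word, 1) pair for every word, as a mapper would.
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs):
    # Group the pairs by key and sum the counts, as a reducer would.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Each "node" maps its own chunk of the data set independently...
chunks = ["the quick brown fox", "the lazy dog", "the fox"]
mapped = [pair for chunk in chunks for pair in map_phase(chunk)]
# ...and the partial results are combined in the reduce step.
print(reduce_phase(mapped)["the"])  # prints 3
```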
Parallel technologies such as MapReduce, as realized in Apache Hadoop, provide a tool for dividing a large task such as this amongst multiple machines. Once divided, tasks are run to locate and compile the needed data. Another Apache application is Hive, a data warehouse system for Hadoop that allows the use of a SQL-like language called HiveQL to query the stored data.

As more data is produced year on year by more computational devices, ranging from sensors to cameras, the ability to handle large datasets and process them in parallel to speed up queries will become ever more important. These big data problems have in turn helped push the boundaries of parallel computing further, as many companies have come into being with the purpose of helping to extract information from the sea of data that now exists.

Raspberry Pi and parallel computing

Having reviewed some of the key terms of High Performance Computing, it is now time to turn our attention to the Raspberry Pi, and how and why we intend to implement many of the ideas explained so far.

This article assumes that you are familiar with the basics of the Raspberry Pi and how it works, and that you have a basic understanding of programming. Throughout this article, the term Raspberry Pi refers to the Model B version. For those of you new to the device, we recommend reading a little more about it at the official Raspberry Pi home page:

http://www.raspberrypi.org/

Other topics covered in this article, such as Apache Hadoop, will also be accompanied by links to information that provides a more in-depth guide to the topic at hand.

Due to the Raspberry Pi's small size and low cost, it makes a good alternative to building a cluster in the cloud on Amazon or similar providers, which can be expensive, or using desktop PCs. The Raspberry Pi comes with a built-in Ethernet port, which allows you to connect it to a switch, router, or similar device.
Multiple Raspberry Pi devices connected to a switch can then be formed into a cluster; this model will form the basis of our hardware configuration in the article. Unlike your laptop or PC, which may contain more than one CPU, the Raspberry Pi contains just a single ARM processor; however, multiple Raspberry Pis combined give us more CPUs to work with.

One benefit of the Raspberry Pi is that it uses SD cards as secondary storage, which can easily be copied, allowing you to create an image of the Raspberry Pi's operating system and then clone it for re-use on multiple machines. When starting out with the Raspberry Pi, this is a useful feature. The Model B contains two USB ports, allowing us to expand the device's storage capacity (and the speed of accessing the data) by using a USB hard drive instead of the SD card.

From the perspective of writing software, the Raspberry Pi can run various versions of the Linux operating system, as well as other operating systems such as FreeBSD, along with the software and tools associated with development on them. This allows us to implement the types of technology found in Beowulf clusters and other parallel systems. We shall provide an overview of these development tools next.

Programming languages and frameworks

A number of programming languages, including Fortran, C/C++, and Java, are available for the Raspberry Pi, including via the standard repositories. These can be used for writing parallel applications using implementations of MPI, Hadoop, and the other frameworks we discussed earlier in this article. Fortran, C, and C++ have a long history with parallel computing and will all be examined to varying degrees throughout the article. We will also be installing Java in order to write Hadoop-based MapReduce applications. Fortran, due to its early use on supercomputing projects, is still popular today for parallel computing application development, as a large body of code that performs specific scientific calculations exists in it.
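Alongside the languages above, Python (also shipped in the standard Raspberry Pi repositories) can demonstrate process-level parallelism, the same idea that MPI scales up across cluster nodes. A minimal sketch using the standard library's multiprocessing module; the helper names are our own:

```python
from multiprocessing import Pool

def square(n):
    # A CPU-bound task; each worker process runs it in a separate process,
    # so the work is not serialized by the interpreter lock.
    return n * n

def parallel_squares(values, workers=4):
    # Fan the work out across worker processes, then gather the results.
    with Pool(processes=workers) as pool:
        return pool.map(square, list(values))

if __name__ == "__main__":
    print(parallel_squares(range(10)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

On a single-core Raspberry Pi this brings no speedup, but the same program structure carries over to multi-core machines and, conceptually, to distributing work across cluster nodes.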
Apache Hadoop is an open source Java-based MapReduce framework designed for distributed parallel application development. A MapReduce framework allows an application to take, for example, a number of data sets, divide them up, and mine each data set independently. This can take place on separate devices, and then the results are combined into a single data set from which we finally extract a meaningful value.

Summary

This concludes our short introduction to parallel computing and the tools we will be using on the Raspberry Pi. You should now have a basic idea of some of the terms related to parallel computing and why the Raspberry Pi is a fun and cheap way to build your own computing cluster. Our next task will be to set up our first Raspberry Pi, including installing its operating system. Once setup is complete, we can then clone its SD card and re-use it for future machines.

Resources for Article:

Further resources on this subject:

Installing MAME4All (Intermediate) [Article]
Using PVR with Raspbmc [Article]
Coding with Minecraft [Article]

Packt
29 Oct 2013
9 min read

Introducing BeagleBoard

(For more resources related to this topic, see here.)

We'll first have a quick overview of the features of the BeagleBoard (with focus on the latest xM version), an open source hardware platform designed for audio, video, and digital signal processing. Then we will introduce the concept of rapid prototyping and explain what we can do with the BeagleBoard support tools from MATLAB® and Simulink® by MathWorks®. Finally, this article ends with a summary.

Unlike most approaches, which involve coding and compiling on a Linux PC and require intensive manual configuration at the command line, the rapid prototyping approach presented in this article is Windows-based. It features a Windows PC for embedded software development through user-friendly graphical interaction, and relieves you from intensive coding so that you can concentrate on your application and algorithms and have the BeagleBoard run your inspiration. First of all, let's begin with a quick overview of this article.

A quick overview of the BeagleBoard's functionality

We can create a number of exciting projects to demonstrate how to build a prototype of an embedded audio, video, and digital signal processing system rapidly, without intensive programming and coding. The main projects include:

Installing Linux for BeagleBoard from a Windows PC
Developing C/C++ with Eclipse on a Windows PC
Automatic embedded code generation for BeagleBoard
Serial communication and digital I/O application: infrared motion detection
Audio application: voice recognition
Video application: motion detection

These projects provide the workflow of building an embedded system. With the help of various online documents, you can learn about setting up the development environment, writing software on a host PC running Microsoft Windows, and compiling the code into standalone ARM executables for the BeagleBoard running Linux.
Then you can learn the skills of rapidly prototyping embedded audio and video systems via the BeagleBoard support tools from Simulink by MathWorks. The main features of these techniques include:

Open source hardware
A Windows-based, friendly development environment
Rapid prototyping and easy learning without intensive coding

These features will save you from intensive coding, and relieve you of the pressure of having to learn the complicated world of embedded Linux in order to build an embedded audio/video processing system. The rapid prototyping techniques presented allow you to concentrate on your concept and algorithm design, rather than being distracted by the complicated embedded system and low-level manual programming. This is beneficial for students and academics who are primarily interested in the development of audio/video processing algorithms and want to build an embedded prototype for proof of concept quickly.

BeagleBoard-xM

The BeagleBoard, the brainchild of a small group of Texas Instruments (TI) engineers and volunteers, is a pocket-sized, low-cost, fan-less, single-board computer containing TI's Open Multimedia Application Platform 3 (OMAP3) System on a Chip (SoC) processor, which integrates a 1 GHz ARM core and a TI Digital Signal Processor (DSP). Since many consumer electronics devices nowadays run some form of embedded Linux-based environment, usually on an ARM-based platform, the BeagleBoard was proposed as an inexpensive development kit for hobbyists, academics, and professionals for high-performance, ARM-based embedded system learning and evaluation. As an open hardware embedded computer with open source software development in mind, the BeagleBoard was created for audio, video, and digital signal processing, with the purpose of meeting the demands of those who want to get involved with embedded system development and build their own embedded devices or solutions.
Furthermore, by utilizing standard interfaces, the BeagleBoard comes with all of the expandability of today's desktop machines. Developers can easily bring their own peripherals and turn the pocket-sized BeagleBoard into a single-board computer with many additional features. The following figure shows the PCB layout and major components of the latest xM version of the BeagleBoard.

The BeagleBoard-xM (referred to as BeagleBoard in this article unless specified otherwise) is an 8.25 x 8.25 cm (3.25" x 3.25") circuit board that includes the following components:

CPU: TI's DM3730 processor, which houses a 1 GHz ARM Cortex-A8 superscalar core and a TI C64x+ DSP core. The power of the 32-bit ARM and C64x+ DSP, plus a large amount of onboard DDR RAM, arms the BeagleBoard with the capacity to deal with computationally intensive tasks such as audio and video processing.

Memory: 512 MB MDDR SDRAM running at 166 MHz. The processor and the 512 MB RAM come in a Package on Package (POP) arrangement, where the memory is mounted on top of the processor.

microSD card slot: Provided as the main nonvolatile storage. The SD card is where we install our operating system and it acts as a hard disk. The BeagleBoard is shipped with a 4 GB microSD card containing factory-validated software (an Angstrom distribution of embedded Linux tailored for the BeagleBoard). Of course, this storage can easily be expanded by using, for example, a USB portable hard drive.

USB 2.0 On-The-Go (OTG) mini port: This port can be used as a communication link to a host PC and as the power source, deriving power from the PC over the USB cable.

4-port USB 2.0 hub: Four USB Type A connectors with full LS/FS/HS support. Each port provides power on/off control and up to 500 mA, as long as the input DC to the BeagleBoard is at least 3 A.

RS232 port: A single RS232 port, via UART3 of the DM3730 processor, is provided by a DB9 connector on the BeagleBoard-xM.
A USB-to-serial cable can be plugged directly into the DB9 connector. By default, when the BeagleBoard boots, system information is sent to the RS232 port, and you can log in to the BeagleBoard through it.

10/100M Ethernet: The Ethernet port features auto-MDIX, which works with both crossover and straight-through cables.

Stereo audio output and input: The BeagleBoard has a hardware-accelerated audio encoding and decoding (codec) chip, and provides stereo in and out ports via two 3.5 mm jacks to support external audio devices such as headphones, powered speakers, and microphones (either stereo or mono).

Video interfaces: These include S-Video and Digital Visual Interface (DVI-D) output, an LCD port, and a camera port.

Other: A Joint Test Action Group (JTAG) connector, a reset button, a user button, and many developer-friendly expansion connectors. The user button can be used as an application button.

To get going, we need to power the BeagleBoard by either the USB OTG mini port, which provides current of up to 500 mA to run the board alone, or a 5V power source to run with external peripherals. The BeagleBoard boots from the microSD card once the power is on. Various alternative software images are available on the BeagleBoard website, so we can replace the factory default image and have the BeagleBoard run many other popular embedded operating systems (such as Android and Windows CE). The off-the-shelf expansion via standard interfaces on the BeagleBoard allows developers to choose whichever components and operating systems they prefer to build their own embedded solutions, or a desktop-like system as shown below:

BeagleBoard for rapid prototyping

A rapid prototyping approach allows you to quickly create a working implementation of your proof of concept and verify your audio or video applications on hardware early. This overcomes barriers in the design-implementation-validation loop and helps you find the right solution for your applications.
Rapid prototyping not only reduces the development time from concept to product, but also allows you to identify defects and mistakes in system and algorithm design at an early stage. Prototyping your concept and evaluating its performance on a target hardware platform gives you confidence in your design, and promotes its success in applications.

The powerful BeagleBoard, equipped with many standard interfaces, provides a good hardware platform for rapid embedded system prototyping. In turn, the rapid prototyping tool, the BeagleBoard Support from Simulink package provided by MathWorks, with its graphical user interface (GUI), allows developers to easily implement their concept and algorithm graphically in Simulink, and then run the algorithms directly on the BeagleBoard. In short, you design algorithms in MATLAB/Simulink and see them perform as a standalone application on the BeagleBoard. In this way, you can concentrate on your concept and algorithm design, rather than being distracted by the complicated embedded system and low-level manual programming. The prototyping tool reduces the steep learning curve of embedded systems and helps hobbyists, students, and academics who have a great idea but little background knowledge of embedded systems. This feature is particularly useful to those who want to build a prototype of their applications in a short time.

MathWorks introduced the BeagleBoard support package for rapid prototyping in 2010. Since the release of MATLAB 2012a, support for the BeagleBoard-xM has been integrated into Simulink and is also available in the student version of MATLAB and Simulink.

Your rapid prototyping starts with modeling your systems and implementing algorithms in MATLAB and Simulink. From your models, you can automatically generate algorithmic C code along with processor-specific, real-time scheduling code and peripheral drivers, and run them as standalone executables on embedded processors in real time.
The following steps provide an overview of the workflow for BeagleBoard rapid prototyping in MATLAB/Simulink:

1. Create algorithms for various applications in Simulink and MATLAB with a user-friendly GUI. The applications can be audio processing (for example, digital amplifiers), computer vision applications (for example, object tracking), control systems (for example, flight control), and so on.
2. Verify and improve the algorithms by simulation. With intensive simulation, it is expected that most defects, errors, and mistakes in the algorithms will be identified. The algorithms can then easily be modified and updated to fix the identified issues.
3. Run the algorithms as standalone applications on the BeagleBoard.
4. Perform interactive parameter tuning, signal monitoring, and performance optimization of the applications running on the BeagleBoard.

Summary

In this article, we have familiarized ourselves with the BeagleBoard and rapid prototyping using MATLAB/Simulink. We have also looked at some of the features of rapid prototyping and the basic steps of rapid prototyping in MATLAB/Simulink.

Resources for Article:

Further resources on this subject:

2-Dimensional Image Filtering [Article]
Creating Interactive Graphics and Animation [Article]
Advanced Matplotlib: Part 1 [Article]

Packt
08 Oct 2013
7 min read

Layer height, fill settings, and perimeters in our objects

(For more resources related to this topic, see here.)

Getting ready

Open up Slic3r and go to the Print Settings tab. We're staying in Simple mode for now, because it's easier to track how the changes we make affect our final print. A good practice when changing settings is to make only one change at a time. That way, if something goes wrong, or right, we know exactly which change did it.

How to do it...

The Print Settings section is where a lot of changes will happen as we print. Let's go down the list of options in this section so we know what they are and why we might want to change them, sometimes from print to print.

First up is the Layer height option. The default layer height of 0.4 mm is OK, assuming that we have a 0.5 mm nozzle, so we can leave that for now. If our nozzle is larger than 0.5 mm, though, we will get a lot of filament squeeze-out, so we should increase the layer height; 80 percent of the nozzle diameter is a good rule of thumb. This also means that if we have a nozzle smaller than 0.5 mm, we should make our default layer height smaller. Again, 80 percent of the nozzle diameter is a good starting place.

Depending on the object we are printing, the Perimeters (minimum) setting of 3 is good. If there are gaps in the walls, especially on sloped surfaces, increasing the number of perimeters is something to try.

Next, Solid layers is the setting for how many layers Slic3r will tell the printer to fill completely at the top and the bottom of the print. For the bottom layers, this gives the object a stable base that is less prone to warping. For the top, the default of 3 layers is based on the extruded filament width and how much coverage the filament will give as it gets to the top of the object.

For the Infill settings, a value of 0.15 for Fill density should stay, but change Fill pattern to honeycomb. It's a bit slower, but more stable. This is how the inner part of the object is filled with plastic.
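Before moving on to infill, the 80 percent rule of thumb for layer height mentioned above is easy to compute; a tiny Python helper (illustrative only, not part of Slic3r) makes the point:

```python
def suggested_layer_height(nozzle_diameter_mm, ratio=0.8):
    # Rule of thumb from the text: layer height is roughly
    # 80 percent of the nozzle diameter.
    return round(nozzle_diameter_mm * ratio, 2)

print(suggested_layer_height(0.5))   # 0.4, the default discussed above
print(suggested_layer_height(0.35))  # 0.28 for a finer nozzle
```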
Since filling the entire object would use a lot of plastic, and isn't needed, we set the infill settings instead.

How it works...

Layer height, infill settings, perimeters: what does it all mean? Let's look at those settings and what they stand for in more detail.

Layer height

The layer height of the print is how thick each layer of plastic is deposited on the model. The thinner the layer, the smoother and more detailed the print can be. We don't always want thin layers, though. Some prints, such as mechanical parts or parts that will not be seen, can be done with thicker layer heights. If we're printing parts for a RepRap or another printer, the layer height can be thicker for the structural elements. Layer height doesn't directly relate to structural strength, however; that is covered in a moment when discussing the Perimeters and Infill settings.

For thicker layer heights, it's usually a good idea to keep layers at or under your nozzle size. This is so that the extruder will press the plastic into the layer below. If the layer height is greater, the plastic will have a chance to cool before touching the foundation layer, and will have only gravity to help weld the two layers together.

If we're printing objects for viewing, such as a statue or a decorative item, we'll usually want to go with thinner layer heights. This comes at the expense of printing speed, because the printer will need to lay down more layers to complete the model. So finding a balance between looks and speed is something we will constantly juggle. For very detailed objects, resolutions as low as 0.1 mm have been achieved by some printers.

Perimeters

These make up the walls of the object. They are also important for adding stability to the object being printed. The Slic3r developers recommend a minimum of two perimeters for printing. Having at least two will help both the structure of the outside and cover up imperfections in the print. There is also a setting for solid layers.
The Solid layers setting is related to Perimeters in that it determines the number of solid layers at the top and the bottom of the print. The default setting is three layers. For models that are not solid (set with the Infill settings), having more than one top layer will help bridge any gaps in the model and will result in a better fill for the top of the model. The default setting in Slic3r is for the three top and bottom layers to be solid. Depending on our model, and what we want to do with it, we can change this. Coming up is a hack for making hollow objects, such as vases, from normally solid objects.

Infill settings

Infill in our objects gives them stability. Too much infill, however, such as making our object solid, can not only cause printing issues, but is also a waste of plastic. The Fill Density setting ranges from 0 to 1, with 0 being 0 percent and 1 being 100 percent. The default setting for Fill Density is 40 percent, or 0.4 in the preferences. This is a decent setting to start with, but for structural components, or ones that need to stay rigid under stress, raising it would be a good idea. The developers suggest a minimum of 0.2 to support flat ceilings; any lower, and the top of your model is likely to sag inward.

The Fill Pattern setting is interesting. This is how Slic3r tells our printer to fill in the inside of our model. The honeycomb option is good for structure, but takes longer to print. The developers also recommend rectilinear and line for infill, but there are several others to choose from. A bit of experimentation will reveal what is best for the models we want to print.

There's more...

Settings can be more than just a tool for making nicer quality prints. We can use some settings to alter the objects themselves, and have them come out as what we want, without having to touch the modeling software.
Vases and other hollow objects

There are some interesting things you can do with these settings while printing models. For instance, if you set the Infill Fill Density to 0, and the Top setting of Solid layers to 0, you can make any object hollow, with the top open. We'll need to make sure the model can actually print this way, structurally. If it can, it is an interesting way to make custom vases or other open, cupped objects. A higher setting for Perimeters (minimum) will help some prints of this kind.

Summary

This article talked about some of the most important settings for printing your objects. It delved into how each setting works, and how changing it affects your final printed object.

Resources for Article:

Further resources on this subject:

Learn Baking in Blender [Article]
Getting Started with Blender's Particle System: A Sequel [Article]
The Trivadis Integration Architecture Blueprint: Implementation scenarios [Article]
Packt
27 Sep 2013
3 min read

Installing MAME4All (Intermediate)

(For more resources related to this topic, see here.)

Getting ready

You will need:

A Raspberry Pi
An SD card with the official Raspberry Pi OS, Raspbian, properly loaded
A USB keyboard
A USB mouse
A 5V 1A power supply with Micro-USB connector
A network connection
A screen hooked up to your Raspberry Pi

How to do it...

Perform the following steps for installing MAME4All:

1. From the command line, enter startx to launch the desktop environment.
2. From the desktop, launch the Pi Store application by double-clicking on the Pi Store icon.
3. At the top-right of the application, there will be a Log In link. Click on this link and log in with your registered account.
4. Type MAME4All in the search bar, and press Enter.
5. Click on the MAME4All result.
6. At the application's information page, click on the Download button on the right-hand side of the screen.
7. MAME4All will automatically download, and a window will appear showing the installation process. Press any button to close the window once it has finished installing.

MAME4All will look for your game files in the /usr/local/bin/indiecity/InstalledApps/MAME4ALL-pi/Full/roms directory.

Perform the following steps for running MAME4All from the Pi Store:

1. From the desktop, launch the Pi Store application by double-clicking on the Pi Store icon.
2. At the top-right of the application, there will be a Log In link. Click on the link and log in with your registered account.
3. Click on the My Library tab.
4. Click on MAME4All, and then click on Launch.

For running MAME4All from the command line, perform the following steps:

1. Type cd /usr/local/bin/indiecity/InstalledApps/mame4all_pi/Full and press Enter.
2. Type ./mame and press Enter for launching MAME4All.

How it works...

MAME4All is a Multi Arcade Machine Emulator that takes advantage of the Raspberry Pi's GPU to achieve very fast emulation of arcade machines. It is able to achieve this speed by compiling with DispManX, which offloads SDL code to the graphics core via OpenGL ES.
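At startup, MAME4All scans the roms directory and builds its selection menu from what it finds. That behavior can be pictured with a short sketch. This Python is illustrative only (MAME4All itself is written in C/C++ and also validates each archive against its internal driver list); playable_roms and scan_roms_dir are invented names:

```python
import os

def playable_roms(filenames):
    """Return the ROM archives an emulator menu would list.

    Illustrative sketch only: real MAME builds check each .zip
    against the emulator's driver list as well.
    """
    return sorted(f for f in filenames if f.lower().endswith(".zip"))

def scan_roms_dir(roms_dir="/usr/local/bin/indiecity/InstalledApps/MAME4ALL-pi/Full/roms"):
    """Scan the roms directory the way MAME4All does at startup."""
    try:
        return playable_roms(os.listdir(roms_dir))
    except FileNotFoundError:
        # Nothing to show: MAME4All exits after a few seconds
        return []

print(playable_roms(["pacman.zip", "README.txt", "GALAGA.ZIP"]))
# ['GALAGA.ZIP', 'pacman.zip']
```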
When you run MAME4All, it looks for any game files you have in the roms directory and displays them in a menu for you to select from. If it doesn't find any files, it exits after a few seconds.

The default keys for MAME4All-Pi are:

5 for inserting coins
1 for player 1 to start
Arrow keys for player 1 joystick controls
Ctrl, Alt, space bar, Z, X, and C for the default action keys

You can modify the MAME4All configuration by editing the /usr/local/bin/indiecity/InstalledApps/mame4all_pi/Full/mame.cfg file.

There's more...

A few useful reference links:

For information on the MAME project, go to http://mamedev.org/
For information on the MAME4All project, go to http://code.google.com/p/mame4all-pi/

Summary

In this article we saw how to install, launch, and play with a specially created version of MAME for the Raspberry Pi from the Pi Store.

Resources for Article:

Further resources on this subject:
Creating a file server (Samba) [Article]
Webcam and Video Wizardry [Article]
Coding with Minecraft [Article]

Packt
26 Sep 2013
6 min read

Proportional line follower (Advanced)

(For more resources related to this topic, see here.)

Getting ready

First, you will need to build an attachment to hold the color sensor onto the robot.

Insert an axle that is five modules long into the color sensor. Place bushings onto the axle on either side of the sensor. This is illustrated in the following figure:

Attach the two-pin one-axle cross blocks onto the axle outside the bushings. This is illustrated in the following figure:

Insert 3-module pins into the cross blocks as shown in the following figure:

The pins will attach to the robot just in front of the castor. The bottom of the color sensor should be approximately level with the plastic part of the castor holder. If you are on a flat, hard surface, your light sensor will be half a centimeter above the ground. If you are on a soft surface, you may need to add a spacer to raise the sensor. This is illustrated in the following figure:

How to do it...

We are going to write proportional line-following code similar to the code used for the ultrasonic motion sensor. We will write the following code:

This program contains a loop so the robot will track a line for 30 seconds. The base speed of the robot is controlled by a constant value, which in this case is 15. You will need to determine a desired light sensor value for the robot to track on. You can either read the light sensor value directly on your EV3 brick, or look at the panel in the lower-right corner of your screen, which shows all of the motors and sensors currently plugged into your brick. In the following screenshot, the current light sensor reading is 16.

When tracking a line, you actually want to track the edge of the line. Our code is designed to track the right edge of a black line on a white surface. The line doesn't have to be black (or the surface white), but the stronger the contrast the better. One way to determine the desired light sensor value is to place the light sensor on the edge of the line.
Alternatively, you could take two separate readings, one on the bright surface and one on the dark surface, and take the average value. In the code we discussed, the average value is 40, but you will have to determine the values that work in your own environment. Not only will the surfaces affect the value, but ambient room light can alter it too.

The code next finds the difference between the desired value and the sensor reading. This difference is multiplied by a gain factor, which for the optical proportional line follower will probably be between 0 and 1. In this program, I chose a gain of 0.7. The result is added to the base speed of one motor and subtracted from the base speed of the other motor:

MotorBPower = Speed - Gain * (LightSensor - DesiredValue)
MotorCPower = Speed + Gain * (LightSensor - DesiredValue)

After taking the light sensor readings, practice with several numbers to figure out the best speeds and proportionality constants to make your robot follow a line.

How it works...

This algorithm makes corrections to the path of the robot based on how far off the line the robot is. It determines this by calculating the difference between the light sensor reading and the value of the light sensor reading on the edge. Each wheel of the robot rotates at a different speed proportional to how far from the line it is: there is a base speed for each wheel, and each will go either slower or faster for smooth turning. You will find that a large gain value is needed for sharp turns, but the robot will tend to overcorrect and wobble when it is following a straight line. A smaller gain and higher speed can work effectively when the line is relatively straight or follows a gradual curve. The most important factor to determine is the desired light sensor value. Although your color sensor can detect several colors, we will not be using that feature in this program.
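The two power equations can be tried out off-robot before loading anything onto the brick. Here is a minimal Python sketch (illustrative only; the real program runs on the EV3, and the desired value, base speed, and gain below are just the article's example numbers):

```python
def motor_powers(light_reading, desired=40, speed=15, gain=0.7):
    """Proportional line-follower update from the article.

    desired, speed, and gain are the article's example values;
    tune them for your own surface and lighting.
    """
    error = light_reading - desired
    motor_b = speed - gain * error  # one wheel slows down...
    motor_c = speed + gain * error  # ...while the other speeds up
    return motor_b, motor_c

# On the line edge (reading equals desired) both motors run at base speed:
print(motor_powers(40))  # (15.0, 15.0)
# Drifting onto the brighter white surface steers the robot back:
print(motor_powers(50))  # (8.0, 22.0)
```

Plugging in a few readings this way makes it easy to see why a large gain overcorrects: at a reading of 60 the same formula would drive one wheel backwards.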
The color sensor included in your kit emits red light, and we are measuring the reflection of that light. The height of the sensor above the floor is critical; there is a sweet spot for line tracking at about half a centimetre above the floor. The light comes out of the sensor in a cone. You want the light reflected into the sensor to be as bright as possible, so if your sensor is too high, the reflected intensity will be weaker. Assuming your color sensor is pointing straight down at the floor (as it is in our robot design), you will see a circular red spot on the floor. Because the distance between the detector and the light emitter is about 5 to 6 mm, the diameter of this circle should be about 11 mm across. If the circle is larger than this, then your color sensor is too high and the intensity will be weak. If the circle is smaller than this, then the sensor will not pick up the emitted light.

The color sensor in the LEGO MINDSTORMS EV3 kit is different from the optical sensors included in the earlier LEGO NXT kits. Depending on your application, you might want to pick up some of the older NXT light and color sensors. The light sensor in the NXT 1.0 kit could not detect color and only measured the reflected intensity of a red LED. What is good about this sensor is that it will actually work flush against the surface, which saves the need to calibrate for changes in ambient lighting conditions. The color sensor in the NXT 2.0 kit emitted colored light and contained a general photo detector. It did not directly measure color, but measured the reflection of the colored light it emitted. This actually allowed you to track along different colored lines, but it was also slower. The new EV3 sensor detects colors directly, works quickly, and emits only red light.

Summary

This article taught us to alter our robot so it can track a line using an optical sensor.
We used a proportional algorithm and adjusted the parameters for optimum tracking. Finally, we also wrote a program allowing the robot to be calibrated without the use of a computer.

Resources for Article:

Further resources on this subject:
Playing with Max 6 Framework [Article]
Panda3D Game Development: Scene Effects and Shaders [Article]
Our First Project – A Basic Thermometer [Article]

Packt
11 Sep 2013
15 min read

Get Connected – Bluetooth Basics

(For more resources related to this topic, see here.)

Why Bluetooth?

There are other forms of wireless communication that we could use, like infrared and Wi-Fi, but Bluetooth is perfect for many household projects. It is cheap, very easy to set up, typically uses less power than Wi-Fi because of the shorter range, and is very responsive. It's important to keep in mind that there isn't a single "best" form of communication; each type will suit each project (or perhaps budget) in different ways. In terms of performance, I have found that a short message will be transmitted in under 20 milliseconds from one device to another, and the signal will work for just under 10 meters (30 feet). These numbers, however, will vary based on your environment.

Things you need

The things required for this project are as follows:

Netduino
Breadboard
Bluetooth module
Windows Phone 8

Lots of different Bluetooth modules exist, but I have found that the JY-MCU is very cheap (around $10) and reliable. Any Windows Phone 8 device can be used, as they all have Bluetooth.

The project setup

The setup for this project is extremely basic because we are just connecting the Bluetooth module and nothing else. Once our phone is connected we will use it to control the onboard LED; however, you can expand this to control anything else too. The Bluetooth module you buy may look slightly different from the diagram, but not to worry; just make sure you match up the labels on the Bluetooth module (GND, 3.3V or VCC, TX, and RX) to the diagram.

If you encounter a situation where everything is hooked up right but no data is flowing, examine the minimum baud rate in your Bluetooth module's manual or specifications sheet. It has been reported that some Bluetooth modules do not work well communicating at 9600 baud. This can be easily remedied by setting the baud rate in your SerialPort's constructor to 115200, for example, new SerialPort(SerialPorts.COM1, 115200, Parity.None, 8, StopBits.One).
Once it is wired up, we can get onto the coding. First we will do the Netduino part. The Netduino will listen for messages over Bluetooth, and will set the brightness of the onboard LED based on the percentage it receives. The Netduino will also listen for "ping", and if it receives this then it will send the same text back to the other device. We do this as an initial message to make sure that it gets from the phone to the Netduino, and then back to the phone, successfully. After that we will code the phone application. The phone will connect, send a "ping", and then wait until it receives the "ping" back. When the phone receives the "ping" back then it can start sending messages. In this article only Windows Phone 8 will be covered; however, the same concepts apply, and it won't be too hard to code the equivalent app for another platform. The Netduino code will remain the same no matter what device you connect to.

Coding

Because we will be using a phone to connect to the Netduino, there are two distinct parts which need to be coded.

The Netduino code

1. Open up Visual Studio and create a new Netduino Plus 2 Application.
2. Add a reference to SecretLabs.NETMF.Hardware.PWM.
3. Open Program.cs from the Solution Explorer. You need to add the following using statements at the top:

    using System.IO.Ports;
    using System.Text;
    using NPWM = SecretLabs.NETMF.Hardware.PWM;

4. You need to get the phone paired with the Bluetooth module on the Netduino. So in Program.cs, replace the Main method with this:

    private static SerialPort _bt;

    public static void Main()
    {
        _bt = new SerialPort(SerialPorts.COM1, 9600, Parity.None, 8, StopBits.One);
        _bt.Open();

        while (true)
        {
            Thread.Sleep(Timeout.Infinite);
        }
    }

This code creates a new instance of a SerialPort (the Bluetooth module), then opens it, and finally has a loop (which will just pause forever). Plug in your Netduino and run the code. Give it a few seconds until the blue light goes off; at this point the Bluetooth module should have a flashing red LED.
5. On your Windows Phone, go to Settings | Bluetooth and make sure that it is turned on. In the list of devices there should be one which is the Bluetooth module (mine is called "linvor"), so tap it to connect. If it asks for a pin, try the default of "1234", or check the data sheet. As it connects, the red LED on the Bluetooth module will go solid, meaning that it is connected. It will automatically disconnect in 10 seconds; that's fine.
6. Now that you've checked that it connects correctly, start adding in the real code:

    private static SerialPort _bt;
    private static NPWM _led;
    private static string _buffer;

    public static void Main()
    {
        _led = new NPWM(Pins.ONBOARD_LED);

        _bt = new SerialPort(SerialPorts.COM1, 9600, Parity.None, 8, StopBits.One);
        _bt.DataReceived += new SerialDataReceivedEventHandler(rec_DataReceived);
        _bt.Open();

        while (true)
        {
            Thread.Sleep(Timeout.Infinite);
        }
    }

This is close to the code you replaced, but it also creates an instance of the onboard LED and declares a string to use as a buffer for the received data.

7. Next you need to create the event handler that will be fired when data is received. Something that can easily trip you up is thinking that each message will come through as a whole. That's incorrect. If you send a "ping" from your phone, it will usually come through in two separate messages of "p" and "ing". The simplest way to work around that is to have a delimiter that marks the end of a message (in the same way as military personnel end radio communications by saying "10-4"). So send the message as "ping|", with a pipe at the end.
This code for the DataReceived event handler builds up a buffer until it finds a pipe (|), then processes the message, then resets the buffer (or sets it to whatever is after the pipe, which will be the first part of the next message):

    private static void rec_DataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        byte[] bytes = new byte[_bt.BytesToRead];
        _bt.Read(bytes, 0, bytes.Length);

        char[] converted = new char[bytes.Length];
        for (int b = 0; b < bytes.Length; b++)
        {
            converted[b] = (char)bytes[b];
        }

        string str = new String(converted);
        if (str != null && str.Length > 0)
        {
            if (str.IndexOf("|") > -1)
            {
                _buffer += str.Substring(0, str.IndexOf("|"));
                ProcessReceivedString(_buffer);
                _buffer = str.Substring(str.LastIndexOf("|") + 1);
            }
            else
            {
                _buffer += str;
            }
        }
    }

At the start of the event handler, you create a byte array to hold the received data, then loop through that array and convert each byte to a char, putting those chars into a char array. Once you have a char array, you create a new string using the char array as a parameter, which gives the string representation of the array. After checking that it is not null or empty, you check whether it contains a pipe (meaning it contains the end of a message). If so, add all the characters up to the pipe onto the buffer and then process the buffer. If there is no pipe, then just add to the buffer. The only thing that remains is the method to process the received string (the buffer) and a method to send messages back to the phone.
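The pipe-delimited framing used here is language-agnostic. As an illustration only, here is a hypothetical Python sketch of the same reassembly logic (MessageFramer is an invented name; on the Netduino and the phone the equivalent is the C# shown in this article):

```python
class MessageFramer:
    """Reassemble pipe-delimited messages from arbitrary serial chunks.

    Mirrors the buffering idea in the DataReceived handler: chunks may
    split a message anywhere, so we accumulate text until a '|'
    terminator appears. Unlike the C# above, this sketch also returns
    every message when several arrive in one chunk.
    """

    def __init__(self):
        self._buffer = ""

    def feed(self, chunk):
        """Feed one received chunk; return the list of complete messages."""
        self._buffer += chunk
        # Everything before the last '|' is complete; the rest is kept.
        *complete, self._buffer = self._buffer.split("|")
        return complete

framer = MessageFramer()
print(framer.feed("p"))      # [] - message not complete yet
print(framer.feed("ing|4"))  # ['ping']
print(framer.feed("0|"))     # ['40']
```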
So put these methods below the event handler that you just added:

    private static void ProcessReceivedString(string _buffer)
    {
        if (_buffer == "ping")
        {
            Write(_buffer);
        }
        else
        {
            uint val = UInt32.Parse(_buffer);
            _led.SetDutyCycle(val);
        }
    }

    private static void Write(string message)
    {
        byte[] bytes = Encoding.UTF8.GetBytes(message + "|");
        _bt.Write(bytes, 0, bytes.Length);
    }

As mentioned before, if you receive a "ping" then just send it back; otherwise, convert the string into an unsigned integer and set the brightness of the onboard LED. The last method simply adds a pipe to the end of the string, converts it to a byte array, then writes it to the Bluetooth SerialPort to send to the phone. At this point, you should run the code on the Netduino, but keep in mind that the same thing as before will happen because we are not sending it any data yet. So next up, we need to make the phone application that helps us send messages to the Netduino.

The phone code

As mentioned, we will be using a Windows Phone 8 device to connect to the Netduino. The same principles demonstrated in this section will apply to any platform; it all revolves around knowing how to read and write the Bluetooth data. You may notice that much of the phone code resembles the Netduino code. This is because both are merely sending and receiving messages. Before moving on, you will need the Windows Phone 8 SDK installed. Download and install it from here: http://developer.windowsphone.com (you may need to close any copies of Visual Studio that are open). Once it is installed you can go ahead and open the Netduino project (from the previous section) again, then follow these steps:

1. We could create the phone project in the same solution as the Netduino project, but in terms of debugging, it's easier to have them in separate instances of Visual Studio. So open up another copy of Visual Studio and click on File | New | Project.
2. Find the Windows Phone App template by navigating to Installed | Templates | Visual C# | Windows Phone. Name the project and then click on OK to create it.
3. A dialog may appear asking you to choose which version of the OS you would like to target. Make sure that Windows Phone OS 8.0 is selected (Windows Phone 7.1 does not have the required APIs for third-party developers).
4. When creating a new Windows Phone application, MainPage.xaml will automatically be displayed. This is the first page of the app that the user will see when they run your app. XAML is the layout language used on Windows Phone, and if you've ever used HTML then you will be quite at home. In the XAML window, scroll down until you find the grid named ContentPanel. Replace it with:

    <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
        <Slider IsEnabled="False" Minimum="0" Maximum="100"
                x:Name="slider" ValueChanged="slider_ValueChanged"/>
    </Grid>

This will add a Slider control to the page, with the value at the far left being 0 and the far right being 100; essentially a percent. Whenever the user drags the slider, it will fire the ValueChanged event handler, which you will add soon.

5. That is the only UI change you need to make. So in the Solution Explorer, right-click on MainPage.xaml | View Code.
6. Add these using statements to the top:

    using Windows.Storage.Streams;
    using System.Text;
    using Windows.Networking.Sockets;
    using Windows.Networking.Proximity;
    using System.Runtime.InteropServices.WindowsRuntime;

7. We need to declare some variables, so replace the MainPage constructor with this:

    StreamSocket _socket;
    string _receivedBuffer = "";
    bool _isConnected = false;

    public MainPage()
    {
        InitializeComponent();
        TryConnect();
    }

    private void slider_ValueChanged(object sender, RoutedPropertyChangedEventArgs<double> e)
    {
        if (_isConnected)
        {
            Write(((int)slider.Value).ToString());
        }
    }

    async private void Write(string str)
    {
        var dataBuffer = GetBufferFromByteArray(Encoding.UTF8.GetBytes(str + "|"));
        await _socket.OutputStream.WriteAsync(dataBuffer);
    }

    private IBuffer GetBufferFromByteArray(byte[] package)
    {
        using (DataWriter dw = new DataWriter())
        {
            dw.WriteBytes(package);
            return dw.DetachBuffer();
        }
    }

The StreamSocket is essentially a way to interact with the phone's Bluetooth chip, and it will be used in multiple methods in the app. When the slider's value changes, we check that the phone is connected to the Netduino, and then use the Write method to send the value. The Write method is similar to the one we made on the Netduino, except it requires a few extra lines to convert the byte array into an IBuffer.

8. In the previous step, you might have noticed that we ran a method called TryConnect in the MainPage constructor. As you may have guessed, in this method we will try to connect to the Netduino.
Add this method below the ones you added previously:

    private async void TryConnect()
    {
        PeerFinder.AlternateIdentities["Bluetooth:Paired"] = "";
        var pairedDevices = await PeerFinder.FindAllPeersAsync();

        if (pairedDevices.Count == 0)
        {
            MessageBox.Show("Make sure you pair the device first.");
        }
        else
        {
            SystemTray.SetProgressIndicator(this, new ProgressIndicator
            {
                IsIndeterminate = true,
                Text = "Connecting",
                IsVisible = true
            });

            PeerInformation selectedDevice = pairedDevices[0];
            _socket = new StreamSocket();
            await _socket.ConnectAsync(selectedDevice.HostName, "1");

            WaitForData(_socket);
            Write("ping");
        }
    }

We first get a list of all devices that have been paired with the phone (even if they are not currently connected), and display an error message if there are no devices. If it does find one or more devices, we display a progress bar at the top of the screen (in the SystemTray) and proceed to connect to the first Bluetooth device in the list. It is important to note that in the example code we are connecting to the first device in the list; in a real-world app, you would display the list to the user and let them decide which is the right device. After connecting, we call a method to wait for data to be received (this will happen in the background and will not block the rest of the code), and then write the initial ping message. Don't worry, we are almost there! The second-to-last method you need to add is the one that will wait for the data to be received. It is an asynchronous method, which means that it can have a line within it that blocks execution (for instance, in the following code the line that waits for data will block the thread), but the rest of the app will carry on fine.
Add in this method:

    async private void WaitForData(StreamSocket socket)
    {
        try
        {
            byte[] bytes = new byte[5];
            await socket.InputStream.ReadAsync(bytes.AsBuffer(), 5, InputStreamOptions.Partial);

            bytes = bytes.TakeWhile((v, index) => bytes.Skip(index).Any(w => w != 0x00)).ToArray();
            string str = Encoding.UTF8.GetString(bytes, 0, bytes.Length);

            if (str.Contains("|"))
            {
                _receivedBuffer += str.Substring(0, str.IndexOf("|"));
                DoSomethingWithReceivedString(_receivedBuffer);
                _receivedBuffer = str.Substring(str.LastIndexOf("|") + 1);
            }
            else
            {
                _receivedBuffer += str;
            }
        }
        catch
        {
            MessageBox.Show("There was a problem");
        }
        finally
        {
            WaitForData(socket);
        }
    }

Yes, this code looks complicated, but it is simple enough to understand. First we create a new byte array (the size of the array isn't too important, and you can change it to suit your application), then wait for data to come from the Netduino. Once it does, we copy all non-null bytes to our array, then convert the array to a string. From here, it is exactly like the Netduino code.

The final code left to write is the part that handles the received messages. In this simple app, we don't need to check for anything except the return of the "ping". Once we receive that ping, we know it has connected successfully, and we enable the slider control to let the user start using it:

    private void DoSomethingWithReceivedString(string buffer)
    {
        if (buffer == "ping")
        {
            _isConnected = true;
            slider.IsEnabled = true;
            SystemTray.SetProgressIndicator(this, null);
        }
    }

We also set the progress bar to null to hide it.

Windows Phone (and other platforms) needs to explicitly define what capabilities an app requires, for security reasons. Using Bluetooth is one such capability, so to define that we are using it, in the Solution Explorer find the Properties item below the project name. Left-click on the little arrow on its left to expand its children. Now double-click on WMAppManifest.xml to open it up, then click the Capabilities tab near the top.
The list on the left defines each specific capability. Ensure that both ID_CAP_PROXIMITY and ID_CAP_NETWORKING are checked.

And that's it! Make sure your Netduino is plugged in (and running the program you made in this article), then plug your Windows Phone 8 in, and run the code. The run button may say Emulator X; you will need to change it to Device by clicking on the little down arrow on the right of the button. Once the two devices are connected, slide the slider on the phone forwards and backwards to see the onboard LED on the Netduino go brighter and dimmer.

Not working?

If the phone does not connect after a few seconds then something has probably gone wrong. After double-checking your wiring, the best thing to try is to unplug both the Netduino and the phone, then plug them back in. If you are using a different Bluetooth board, you may have to pair it again to the phone; repeat step 5 of The Netduino code section of this article. With both plugged back in, run the Netduino code (and give it a few seconds to boot up), then run the phone code. If that still doesn't work, unplug both again, and only plug back in the Netduino. When it is powered up, it will run the last application deployed to it. Then, with your phone unplugged, go to the app list, find the phone app you made, and tap on it to run it.

Summary

You've managed to control your Netduino from afar! This article had a lot more code than most because of needing to code both the Netduino and the phone. However, the knowledge you've gained here will help you in many other projects, and we will be using this article as a base for some of the others.

Resources for Article:

Further resources on this subject:
Automating the Audio Parameters – How it Works [Article]
Ease the Chaos with Automated Patching [Article]
Skype automation [Article]

Packt
29 Aug 2013
20 min read

Design Tools and Basics

(For more resources related to this topic, see here.)

Owning a Makerbot 3D printer means being able to make anything you want at the push of a button, right? 3D printer owners quickly find that while 3D printers have no end of things they can produce, they are not without their limitations. Designing an object without 3D printing in mind will result in a failed print that more resembles a bird's nest or a bowl of spaghetti. Making a 3D printable object requires learning a few rules, some careful planning, and design. But once you know the rules, the results can be astounding. 3D printers can even produce things with ease that traditional manufacturing cannot, for example, objects with complex internal geometry that machining cannot touch.

There are many places online, such as Makerbot's own Thingiverse, that host a daily growing library of printable objects. Printing out other people's designs is all well and good for a while, but the most exciting part about 3D printing is that it can produce your designs and models. Eventually, learning how to model for 3D printing is a must. Can you learn 3D modeling? If you've ever won a round of Pictionary, you've got all the artistic skill it takes to get started. If you've ever gotten past level 1 on Tetris, then you've got spatial reasoning. If you've ever played with modeling clay, then you know all about designing in three dimensions.

Design basics

There are some design rules and basic ideas that will hold true regardless of the modeling software used.

The working of 3D printing

3D printing has come a long way in terms of technology and cost, allowing home 3D printers to become a reality. Along the way, design choices have been made that limit what can be printed. Seeing a 3D printer in action is the best way to learn about the process; fortunately, there are many time-lapse videos of printers in action online that can be found with a simple search. 3D printers build an object layer by layer, from the bottom to the top.
Plastic filament is heated and extruded, and each layer is built upon the last one. Usually the outside of the object is drawn first, and sometimes additional shells are added for strength. Then the inside is usually filled with a lattice to save plastic and provide some support for higher layers; the inside is mostly air. This continues until the object is complete, as shown in the following screenshot:

Because of this layer-by-layer process, if a design is made so that any part has nothing underneath it, dangling in the air, then the printer will still extrude some plastic to try to print that part. The plastic will just dangle from the nozzle and be dragged into the next area, where it will build up into an ugly mess and ruin the print:

Building for supportless prints

One way of fixing the dangling-object problem is to configure the preparing software to build the model "with supports". This means the slicer will automatically build a support lattice of plastic up to the dangling part so that it has something to print on. Higher-end printers can actually print supports with a different material that can be dissolved away, but so far most home printers only use break-away supports. Either way, after the print is complete it is left to the user to clean up this support material to extract the desired part. While supports do allow the creation of objects that would be impossible any other way, the supports themselves are a waste of material and often don't remove cleanly, leading to a messy bottom surface where they contact the print. If a part is designed needing supports that are hard to remove, such as supports that are internal and partially obscured, it can be difficult and frustrating to completely remove the support material (this can be true even for higher-end 3D printers), and the process of removing it may actually damage the print. It is possible, and very easy with just the slightest application of cleverness, to make designs that are printable without the need for any supports.
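A quick way to sanity-check a design for support-free printing is a little trigonometry: compare how far each layer's outline steps outward against the layer height. The following Python sketch is purely illustrative (overhang_ok is a hypothetical helper, not part of any slicer), assuming the widely used guideline that overhangs up to about 45 degrees print cleanly:

```python
import math

def overhang_ok(horizontal_step_mm, layer_height_mm, max_angle_deg=45.0):
    """Check whether a layer-to-layer step can print without supports.

    Hypothetical helper, not a slicer API. The overhang angle is
    measured from vertical: a horizontal step equal to the layer
    height is exactly 45 degrees.
    """
    angle = math.degrees(math.atan2(horizontal_step_mm, layer_height_mm))
    return angle <= max_angle_deg

# A gentle slope, like the arms of a 'Y', stays inside the cone:
print(overhang_ok(0.1, 0.2))  # True
# A near-horizontal step, like the top of a 'T', does not:
print(overhang_ok(0.5, 0.2))  # False
```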
So the blueprints in this article focus on making designs that print without supports. The limitations this imposes demand just a little more effort, but allow for the teaching of principles that are generally good to know.

Designing for dual extruders

Some models of Makerbot and other 3D printers have the ability to print in multiple colors at once using two different extruder heads feeding plastic from two different spools. There are some fun prints that come from this process. But as most Makerbots and other brands of home 3D printers do not have dual extruders at this time, this article will not explore the process in detail. The basic idea is to create two files that are aligned to print in the same space and combine them in the slicer.

Designing supportless – overhangs and bridges

When designing for supportless printing the rules are simple: Y prints, H prints okay, T does not print well.

Branching out with overhangs

It is possible to have the current layer slightly larger than the previous layer, provided the overhang is not more than 45 degrees, because the current layer will have enough of the previous layer to stick to. Hence a shape like the capital letter Y will successfully print standing up. However, if the overhang is too great or too abrupt, the new layer will droop, causing the print to fail; hence a shape like the capital letter T does not print. (If the T is serif and thus has downward-dangling bits, it will fail even worse, as illustrated previously.) So it is important to keep overhangs within a 45-degree cone as they go upwards.

Building bridges

If a part of the print has nothing above it, but has something on either side that it can attach to, then it may be able to bridge the gap. But use caution. The printer makes no special effort in making bridges; they are drawn like any other layer: outline first, then infill.
But if that outline is too complex, or contains parts that would print in mid-air, it may not succeed. Being aware of bridges in the design and keeping them simple is the key to successful bridging. Even with a simple bridge, some 3D printers need a little more calibration to print it well. Hence a shape like the capital letter H will successfully print most of the time.

Of course, this discussion is purely illustrative of the way overhangs work or fail. In real life, if a Y, H, or T needed to be printed, the best way to do it would be to lay them down. But for purposes of illustration it still stands that Y prints, H prints okay, T does not.

Choosing a modeling tool

There are many modeling programs that can be used to produce 3D printable objects, and many factors to take into account, including versatility, simplicity, and cost. A tool with too steep a learning curve can turn off new users to the idea altogether. A tool with too limited a set of features can frustrate a user when they hit its limits. Investing a lot of money into something that doesn't end up going anywhere can be extremely disappointing. So it is important to explore the options.

SolidWorks (www.solidworks.com) and other drafting-oriented programs can do technical shapes with extreme precision. They include the tools necessary to accurately describe a shape that can be brought into the real world with high fidelity. However, these tools tend to be costly and don't do artistic or organic shapes very well. Their highly technical nature also gives them a steep learning curve.

OpenSCAD (www.openscad.org) is free, famous among the people who make 3D printers, and can make technically accurate models as well. OpenSCAD also allows models to be parametric, meaning that by changing a few variables and recalculating, a new shape is generated.
But OpenSCAD is difficult to use unless the user has a very technical and programmatic mind, since the shapes are literally built from lines of code.

Zbrush (pixologic.com/zbrush), Sculptris (pixologic.com/sculptris), and Wings3D (www.wings3d.com) are great tools for modeling organic shapes like the kind used in video games or animation. Sculptris and Wings3D are free and are very easy to pick up and use. But these tools fall short when precision is necessary.

Sketchup (www.sketchup.com) is a great free program with a library of shapes built in, ready to import and play with. Its modeling tools are great for precise or architectural models. But Sketchup doesn't do organic shapes well either, and it can be tricky loading the plug-ins necessary for Sketchup to export models to something printable. Even then, models from Sketchup often have to go through an extensive clean-up phase before they'll be ready to print.

Autodesk 123D (www.123dapp.com) is not one program but a whole suite of free programs designed around 3D modeling, with a specific focus on 3D printing. There are programs to design creatures or precise shapes. There is even an app for converting pictures of real-life objects into 3D models. Some are programs that run in the browser, some are downloads, and some are apps for Apple devices. It's an eclectic and powerful group of programs. The Autodesk 123D suite's weakness is its general immaturity. Autodesk is making great efforts to make modeling for 3D printing accessible to everyone, but its tools still need to mature somewhat before they'll be worth exploring in depth.

Blender (blender.org) is a 3D animation program that features a robust set of modeling tools. Good for artistic and organic shapes, it can also be used when precision is needed. On top of all that, it is free and open source, so it is in constant development. If Blender doesn't have a particular feature, it is only a matter of time until it is added.
In fact, by the time this article is published, chances are the version of Blender used to write it will already be out of date, but most of Blender's functionality remains unchanged from version to version. Blender is also completely customizable, so that every feature, from the keystrokes used to the overall look, can be changed. The downside of Blender is that its user interface is somewhat unintuitive, which gives Blender a famously difficult learning curve. Because of Blender's versatility and availability, it is the tool of choice for the beginning 3D designer and for this work.

Installing Blender

This will be the first project in the article. Fortunately, downloading and installing Blender is as easy as 1-2-3-4:

1. Go to www.blender.org.
2. Click on the Download link.
3. Choose and download the installer for your system: Windows, Mac, or Linux, 32 bit or 64 bit.
4. Run the installer.

The installer will guide the process of loading Blender and adding icons to the system.

Windows

Blender.org offers installer executables and ZIP files. The ZIP files are for advanced users who want a portable version of Blender. When in doubt, choose the executable, since it will set up icons, making for easy access. If in doubt whether to use the 32 or 64-bit version, picking the 32-bit will ensure compatibility, but it is a good idea to find out what type of system Blender is being installed on, as the 64-bit version offers significant performance improvements.

Windows 7 or greater will ask to confirm that the installer should be run. Click on Yes to assure Windows that it's okay to install Blender. Then the installer will run. The install wizard's defaults are fine for most users. Simply put the mouse over the Next button and click on every button that appears under it. On the second screen, read over the Blender Terms of Service and click on I Agree to proceed. Unless you manage your installed program directories yourself, it is best to leave the defaults on the third screen as they are.
Then click on Next and the install process will start. When the install process finishes, leave the checkbox checked and click on Finish to exit the installer and run Blender.

Getting acquainted with Blender

When Blender starts up, the Splash Screen can be dismissed by left-clicking outside it. The screenshots in this article use a custom color palette for print and a smaller window to minimize wasted screen space. Customizing the color palette will be discussed briefly later, but these cosmetic changes will not affect the instructions presented at all.

The Blender interface is broken into several different customizable panels to keep things organized. Each panel has resizing widgets in the upper-right and lower-left corners. By clicking on these widgets, the panels can be split to add more panels, or expanded into the territory of another to collapse panels, at the user's preference. For now, however, the default panels will be discussed, as they provide the most common functionality for beginners.

The 3D View panel

The main window where things will be happening is the 3D View panel. The largest portion of the 3D View consists of the viewport, where most of the work will take place. On the left-hand side of the 3D View panel is a toolbar that consists of tools relevant to the selection and mode. If the toolbar is ever not visible, it can be revealed (or hidden again) by pressing T, or by clicking on the plus icon that will appear when the toolbar is hidden. The specific tools in this bar will be explored as they are needed in the projects.

There is another plus icon on the right that will bring up the viewport properties, with properties relevant to the current selection or the viewport. This can also be revealed or hidden by pressing the N key. Again, the specifics will be explored further as needed. At the bottom of the 3D View panel is the 3D View menu bar, with additional options followed by menus and icons related to editing and views.
Hovering the mouse over each button will show what it is for.

The Outliner panel

The Outliner panel contains a hierarchical view of all the objects in the scene. Each object can be selected by clicking on its name, or the object can be hidden, locked, or excluded from rendering. Rendering means making a high-quality picture from a scene for things such as animation or presentations. Doing a proper render includes setting up scene lights, cameras, textures, material properties, and many other functions that will not be explored in this article, as rendering does not do anything that helps produce models for printing. However, exploring this functionality outside of this article can be worthwhile for showing off models if printing them is not an option.

The Properties panel

The Properties panel is broken up into many tabs indicated by small icons. Hovering over the icons will show the name of each tab. For modeling, the two tabs that will be used the most are the object and modifier tabs. Specific exploration of the tools contained therein will be done as needed.

The Info panel

At the top of the Blender window is the Info panel. On the left-hand side of the Info panel is an easy-to-navigate menu similar to the menu in most applications. This menu can be collapsed by clicking on the + button next to it, and expanded again by clicking on the same button. On the far right of the Info panel there is data about the current scene. If the data cannot be seen, hover the mouse over the panel and use the middle scroll wheel, or click-and-hold the middle mouse button (pressing the wheel like a button) and move the mouse to pan the panel until the desired data is visible.

The Timeline panel

The Timeline panel is only relevant to animation and can be effectively ignored or collapsed for the purposes of this article. Because this article uses only a limited subset of Blender's functionality, some things such as the Timeline panel could be customized away.
However, since it is not the focus of this article to tell the reader how to customize their version of Blender, and because Blender has a much broader application, the screenshots that follow will have the Timeline panel visible. The reader is encouraged to explore Blender's other functionality, such as rendering and animation, at their leisure.

Proper stance

While all of Blender's functions are available from buttons and menus on the screen, typical Blender users rely heavily on hotkeys and shortcuts. Already the T and N keys have been discussed for bringing up or collapsing the Tools and Properties tabs in the 3D View. For this reason it is recommended to use Blender with one hand on the mouse and the other hand on the keyboard at all times. This tends to be a common stance for many people, but it is mentioned for the few for whom learning it will be of great help.

Blender customization

One of Blender's strengths is its customizability. Almost everything, from the look and color right down to the keystrokes and hotkeys used for every action in Blender, can be changed. Customization is accomplished in the User Preferences window, accessed from the File | User Preferences menu. The buttons across the top switch between the various categories, and each category is packed with options. A full exploration of these options is beyond the scope of this article, but the reader is encouraged to explore them and make Blender their own. For instance, if the reader is using a setup where a middle mouse button is unavailable, Blender contains an option to emulate the middle mouse button by pressing Alt along with left-click. Other systems may require other accommodations, many of which are available in this window.

Setting up for Mac OSX

Mac OSX users require special consideration. Blender is made for a three-button mouse.
If a single-button mouse is all that is available, click when the instructions say Left-Mouse Button, use Alt with click for Middle-Mouse Button, and press command with click for Right-Mouse Button.

General Blender tips

Blender employs some conventions that are unique to its environment, and getting acquainted with its most common quirks early can avoid frustration.

First, and perhaps most importantly, Ctrl + Z for undo works in Blender and will undo a multitude of mistakes. Undo in Blender remembers many past steps, allowing backing up to a point before a grievous error was made. Remembering this when following along with the blueprints that follow will save the reader much frustration.

Next, the location of the mouse pointer is important when using hotkeys. For example, the T and N keys for the Tools and Properties tabs do not bring up those tabs if the mouse pointer is not hovering over the 3D View. If the mouse is hovering over a different panel, the reaction could be unpredictable. Pressing the A key with the mouse over the 3D View will toggle selection of all objects in the scene; pressing the A key with the mouse over the Object Tools tab will collapse the expandable menu, hiding all the options of that menu.

Blender uses right-click to select objects by default. This is perhaps the most counterintuitive thing for first-time users, particularly because it will be encountered so frequently. But not everything has been swapped, just object selection. This behavior can, of course, be customized. If the reader would like to remap selection to the left mouse button, it is left to them to adjust the instructions accordingly.

Finally, the relation of Blender units to real-life units is not defined in Blender by default. Generally, it is easiest to remember that 1 grid point will translate to 1 millimeter in the printed object.
As the scale is increased, Blender inserts darker grid lines every 10 grid lines by default, which then correlate to centimeters. So the default cube in the default scene would measure 2 mm on each side, which is less than 1/10 of an inch; in other words, very small.

Suggested shortcuts

In the projects in this article, when a new idea is introduced, it will first be presented with detailed steps. Once a process has been taught, the next time only the name of the operation and its shortcut will be given. This does not mean that keyboard shortcuts are the only way to perform an operation, but they are often the preferred method for experienced Blender users. The reader is free to accomplish each operation in any way that is comfortable for them.

The blueprints

This article has been designed to teach 3D printing design in a hands-on way. A series of projects, or blueprints, will be presented, and each one will introduce new tools and techniques, building on the last. Despite being a "virtual" process, 3D modeling has a surprisingly strong muscle memory aspect to it. The movements and processes need to be more than a mechanical process being executed; they need to be practiced so they become fluid and eventually seamless. To that end, the reader is advised not to skip any of the blueprints and to actually follow along with each one. The objects designed in this article are mostly very small, so that they can be printed without taking too much time or producing too much waste. The reader can make larger versions if they like, but that is left as their own challenge activity.

Summary

3D printing is cool. Learning to design your own models is the best way to take full advantage of 3D printing today. This article teaches 3D modeling through a series of hands-on activities, so it's a good idea not to skip ahead and to actually follow along with each blueprint. While home 3D printers have the capability to print break-away supports, these are messy and wasteful.
It is possible to design things so that they print without the need for any supports. When designing for supportless 3D prints, remember: Y prints, H prints okay, T does not print. Keep outward inclines gradual, no more than 45 degrees, to be safe.

There are many 3D modeling programs to choose from. Some are expensive, some are free. Some are better for technical work; others do artistic or organic shapes better. Some are easy to learn; some take more practice. This article uses Blender, since it is free and open source, has tools for modeling both technical and organic shapes, and is not too difficult to learn by doing.

Blender can be a bit tricky to get started with, since it employs some conventions unique to its environment. Blender can be customized, but this article sticks with the defaults so everyone is on the same page. Remember that Ctrl + Z undoes multiple mistakes and can get Blender back to the state it was in before, which is useful in tutorials for getting back on track. The location of the mouse pointer is important when using Blender's hotkeys, which are the best way to learn to use Blender. Blender uses right-click for selection by default. Finally, Blender's units translate to real life as 1 Blender grid space = 1 millimeter.
Packt
26 Jul 2013
13 min read

Webcam and Video Wizardry

Setting up your camera

Go ahead, plug in your webcam and boot up the Pi; we'll take a closer look at what makes it tick. If you experimented with the dwc_otg.speed parameter to improve the audio quality during the previous article, you should change it back now by changing its value from 1 to 0, as chances are that your webcam will perform worse, or will not perform at all, because of the reduced speed of the USB ports.

Meet the USB Video Class drivers and Video4Linux

Just as the Advanced Linux Sound Architecture (ALSA) system provides kernel drivers and a programming framework for your audio gadgets, there are two important components involved in getting your webcam to work under Linux:

The Linux USB Video Class (UVC) drivers provide the low-level functions for your webcam, in accordance with a specification followed by most webcams produced today.

Video4Linux (V4L) is a video capture framework used by applications that record video from webcams, TV tuners, and other video-producing devices. There's an updated version of V4L called V4L2, which we'll want to use whenever possible.

Let's see what we can find out about the detection of your webcam, using the following command:

pi@raspberrypi ~ $ dmesg

The dmesg command is used to get a list of all the kernel information messages. What we're looking for, among the ocean of messages, is a notice from uvcvideo.

Kernel messages indicating a found webcam

In the previous screenshot, a Logitech C110 webcam was detected and registered with the uvcvideo module. Note the cryptic sequence of characters, 046d:0829, next to the model name. This is the device ID of the webcam, and it can be a big help if you need to search for information related to your specific model.

Finding out your webcam's capabilities

Before we start grabbing videos with our webcam, it's very important that we find out exactly what it is capable of in terms of video formats and resolutions.
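If the uvcvideo notice is buried deep in the kernel log, the device ID can be picked out with a grep pattern that matches its vendor:product shape. The sample message below is an assumed stand-in for real dmesg output; on a live Pi you would pipe dmesg into the same grep instead.

```shell
# Assumed sample of a uvcvideo kernel message; real dmesg output will differ.
sample='uvcvideo: Found UVC 1.00 device UVC Camera (046d:0829)'

# The device ID is four hex digits, a colon, then four more hex digits.
device_id=$(printf '%s\n' "$sample" | grep -oE '[0-9a-f]{4}:[0-9a-f]{4}')
echo "$device_id"    # prints: 046d:0829

# On the Pi itself, you would run:
#   dmesg | grep -oE '[0-9a-f]{4}:[0-9a-f]{4}'
```

The extracted ID is exactly the string to paste into a web search when hunting for model-specific quirks or driver reports.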
To help us with this, we'll add the uvcdynctrl utility to our arsenal, using the following command:

pi@raspberrypi ~ $ sudo apt-get install uvcdynctrl

Let's start with the most important part: the list of supported frame formats. To see this list, type in the following command:

pi@raspberrypi ~ $ uvcdynctrl -f

List of frame formats supported by our webcam

According to the output of this particular webcam, two main pixel formats are supported. The first format, called YUYV or YUV 4:2:2, is a raw, uncompressed video format, while the second format, called MJPG or MJPEG, provides a video stream of compressed JPEG images. Below each pixel format, we find the supported frame sizes and frame rates for each size. The frame size, or image resolution, determines the amount of detail visible in the video. Three common resolutions for webcams are 320 x 240, 640 x 480 (also called VGA), and 1024 x 768 (also called XGA). The frame rate is measured in Frames Per Second (FPS) and determines how "fluid" the video will appear. Only two different frame rates, 15 and 30 FPS, are available for each frame size on this particular webcam.

Now that you know a bit more about your webcam, if you happen to be the unlucky owner of a camera that doesn't support the MJPEG pixel format, you can still follow along, but don't expect more than a slideshow of 320 x 240 images from your webcam. Video processing is one of the most CPU-intensive activities you can do with the Pi, so you need your webcam to help in this matter by compressing the frames first.

Capturing your target on film

All right, let's see what your sneaky glass eye can do! We'll be using an excellent piece of software called MJPG-streamer for all our webcam capturing needs. Unfortunately, it's not available as an easy-to-install package for Raspbian, so we will have to download and build this software ourselves.
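Before building anything, a little arithmetic shows why an MJPEG-capable camera matters so much on the Pi. Uncompressed YUYV stores 2 bytes per pixel, so the raw byte rate is width x height x 2 x FPS. The sketch below works this out for the resolutions mentioned above; these are theoretical figures, not measurements from a real webcam.

```shell
# Raw YUYV bandwidth at common webcam modes: width x height x 2 bytes x FPS.
for mode in "320 240 30" "640 480 30" "1024 768 15"; do
  set -- $mode   # split "width height fps" into $1 $2 $3
  bytes_per_second=$(( $1 * $2 * 2 * $3 ))
  echo "$1x$2 @ $3 FPS: $(( bytes_per_second / 1024 / 1024 )) MiB/s raw"
done
```

VGA at 30 FPS alone comes to roughly 17 MiB/s of raw pixels before the CPU even touches a frame; letting the camera hand over compressed JPEG frames instead is what makes streaming on the Pi feasible.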
Often when we compile software from source code, the application we're building will want to make use of code libraries and development headers. Our MJPG-streamer application, for example, would like to include functionality for dealing with JPEG images and Video4Linux devices. Install the libraries and headers for JPEG and V4L by typing in the following command:

pi@raspberrypi ~ $ sudo apt-get install libjpeg8-dev libv4l-dev

Next, we're going to download the MJPG-streamer source code using the following command:

pi@raspberrypi ~ $ wget http://mjpg-streamer.svn.sourceforge.net/viewvc/mjpg-streamer/mjpg-streamer/?view=tar -O mjpg-streamer.tar.gz

The wget utility is an extraordinarily handy web download tool with many uses. Here we use it to grab a compressed TAR archive from a source code repository, and we supply the extra -O mjpg-streamer.tar.gz option to give the downloaded tarball a proper filename.

Now we need to extract our mjpg-streamer.tar.gz file, using the following command:

pi@raspberrypi ~ $ tar xvf mjpg-streamer.tar.gz

The tar command can both create and extract archives, so we supply three flags here: x for extract, v for verbose (so that we can see where the files are being extracted to), and f to tell tar to use the file we specify as input, instead of reading from the standard input.

Once you've extracted it, enter the directory containing the sources:

pi@raspberrypi ~ $ cd mjpg-streamer

Now type in the following command to build MJPG-streamer with support for V4L2 devices:

pi@raspberrypi ~/mjpg-streamer $ make USE_LIBV4L2=true

Once the build process has finished, we need to install the resulting binaries and other application data somewhere more permanent, using the following command:

pi@raspberrypi ~/mjpg-streamer $ sudo make DESTDIR=/usr install

You can now exit the directory containing the sources and delete it, as we won't need it anymore:

pi@raspberrypi ~/mjpg-streamer $ cd .. && rm -r mjpg-streamer

Let's fire up our newly-built MJPG-streamer!
Type in the following command, but adjust the values for resolution and frame rate to a moderate setting that you know (from the previous section) your webcam will be able to handle:

pi@raspberrypi ~ $ mjpg_streamer -i "input_uvc.so -r 640x480 -f 30" -o "output_http.so -w /usr/www"

MJPG-streamer starting up

You may have received a few error messages saying Inappropriate ioctl for device; these can be safely ignored. Other than that, you might have noticed the LED on your webcam (if it has one) light up, as MJPG-streamer is now serving your webcam feed over the HTTP protocol on port 8080. Press Ctrl + C at any time to quit MJPG-streamer.

To tune into the feed, open up a web browser (preferably Chrome or Firefox) on a computer connected to the same network as the Pi and enter the following line into the address field of your browser, changing [IP address] to the IP address of your Pi. That is, the address in your browser should look like this: http://[IP address]:8080. You should now be looking at the MJPG-streamer demo pages, containing a snapshot from your webcam.

MJPG-streamer demo pages in Chrome

The following pages demonstrate the different methods of obtaining image data from your webcam:

The Static page shows the simplest way of obtaining a single snapshot frame from your webcam. The examples use the URL http://[IP address]:8080/?action=snapshot to grab a single frame. Just refresh your browser window to obtain a new snapshot. You could easily embed this image into your website or blog by using the <img src="http://[IP address]:8080/?action=snapshot"/> HTML tag, but you'd have to make the IP address of your Pi reachable on the Internet for anyone outside your local network to see it.

The Stream page shows the best way of obtaining a video stream from your webcam. This technique relies on your browser's native support for decoding MJPEG streams and should work fine in most browsers except for Internet Explorer.
The direct URL for the stream is http://[IP address]:8080/?action=stream.

The Java page tries to load a Java applet called Cambozola, which can be used as a stream viewer. If you haven't got the Java browser plugin already installed, you'll probably want to steer clear of this page. While the Cambozola viewer certainly has some neat features, the security risks associated with the plugin outweigh the benefits of the viewer.

The JavaScript page demonstrates an alternative way of displaying a video stream in your browser. This method also works in Internet Explorer. It relies on JavaScript code to continuously fetch new snapshot frames from the webcam, in a loop. Note that this technique puts more strain on your browser than the preferred native stream method. You can study the JavaScript code by viewing the page source of the following page: http://[IP address]:8080/javascript_simple.html

The VideoLAN page contains shortcuts and instructions for opening the webcam video stream in the VLC media player. We will get to know VLC quite well during this article; leave it alone for now.

The Control page provides a convenient interface for tweaking the picture settings of your webcam. The page should pop up in its own browser window so that you can view the webcam stream live, side by side, as you change the controls.

Viewing your webcam in VLC media player

You might be perfectly content with your current webcam setup and viewing the stream in your browser, but for those of you who prefer to watch all videos inside your favorite media player, this section is for you. Also note that we'll be using VLC for other purposes later in this article, so we'll go through the installation here.

Viewing in Windows

Let's install VLC and open up the webcam stream. Visit http://www.videolan.org/vlc/download-windows.html and download the latest version of the VLC installer package (vlc-2.0.5-win32.exe, at the time of writing). Install VLC media player using the installer.
Launch VLC using the shortcut on the desktop or from the Start menu. From the Media drop-down menu, select Open Network Stream…. Enter the direct stream URL we learned from the MJPG-streamer demo pages (http://[IP address]:8080/?action=stream), and click on the Play button.

(Optional) You can add live audio monitoring from the webcam by opening up a command prompt window and typing in the following command:

"C:\Program Files (x86)\PuTTY\plink" pi@[IP address] -pw [password] sox -t alsa plughw:1 -t sox - | "C:\Program Files (x86)\sox-14-4-1\sox" -q -t sox - -d

Viewing in Mac OS X

Let's install VLC and open up the webcam stream. Visit http://www.videolan.org/vlc/download-macosx.html and download the latest version of the VLC dmg package for your Mac model. The one at the top, vlc-2.0.5.dmg (at the time of writing), should be fine for most Macs. Double-click on the VLC disk image and drag the VLC icon to the Applications folder. Launch VLC from the Applications folder. From the File drop-down menu, select Open Network…. Enter the direct stream URL we learned from the MJPG-streamer demo pages (http://[IP address]:8080/?action=stream) and click on the Open button.

(Optional) You can add live audio monitoring from the webcam by opening up a Terminal window (located in /Applications/Utilities) and typing in the following command:

ssh pi@[IP address] sox -t alsa plughw:1 -t sox - | sox -q -t sox - -d

Viewing on Linux

Let's install VLC or MPlayer and open up the webcam stream. Use your distribution's package manager to add the vlc or mplayer package.
For VLC, either use the GUI to open a network stream, or launch it from the command line with:

vlc http://[IP address]:8080/?action=stream

For MPlayer, you need to tag an MJPG file extension onto the stream, using the following command:

mplayer "http://[IP address]:8080/?action=stream&stream.mjpg"

(Optional) You can add live audio monitoring from the webcam by opening up a Terminal and typing in the following command:

ssh pi@[IP address] sox -t alsa plughw:1 -t sox - | sox -q -t sox - -d

Recording the video stream

The best way to save a video clip from the stream is to record it with VLC and save it into an AVI file container. With this method, we get to keep the MJPEG compression while retaining the frame rate information. Unfortunately, you won't be able to record the webcam video with sound, as there's no way to automatically synchronize audio with the MJPEG stream. The only way to produce a video file with sound would be to grab the video and audio streams separately and edit them together manually in a video editing application such as VirtualDub.

Recording in Windows

We're going to launch VLC from the command line to record our video. Open up a command prompt window from the Start menu by clicking on the shortcut or by typing cmd in the Run or Search field. Then type in the following command to start recording the video stream to a file called myvideo.avi, located on the desktop:

C:\> "C:\Program Files (x86)\VideoLAN\VLC\vlc.exe" http://[IP address]:8080/?action=stream --sout="#standard{mux=avi,dst=%UserProfile%\Desktop\myvideo.avi,access=file}"

As we've mentioned before, if your particular Windows version doesn't have a C:\Program Files (x86) folder, just erase the (x86) part from the path on the command line. It may seem like nothing much is happening, but there should now be a growing myvideo.avi recording on your desktop.
To confirm that VLC is indeed recording, we can select Media Information from the Tools drop-down menu and then select the Statistics tab. Simply close VLC to stop the recording.

Recording in Mac OS X

We're going to launch VLC from the command line to record our video. Open up a Terminal window (located in /Applications/Utilities) and type in the following command to start recording the video stream to a file called myvideo.avi, located on the desktop:

$ /Applications/VLC.app/Contents/MacOS/VLC http://[IP address]:8080/?action=stream --sout='#standard{mux=avi,dst=/Users/[username]/Desktop/myvideo.avi,access=file}'

Replace [username] with the name of the account you used to log in to your Mac, or remove the directory path to write the video to the current directory. It may seem like nothing much is happening, but there should now be a growing myvideo.avi recording on your desktop. To confirm that VLC is indeed recording, we can select Media Information from the Window drop-down menu and then select the Statistics tab. Simply close VLC to stop the recording.

Recording in Linux

We're going to launch VLC from the command line to record our video. Open up a Terminal window and type in the following command to start recording the video stream to a file called myvideo.avi, located on the desktop:

$ vlc http://[IP address]:8080/?action=stream --sout='#standard{mux=avi,dst=/home/[username]/Desktop/myvideo.avi,access=file}'

Replace [username] with your login name, or remove the directory path to write the video to the current directory.
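The three platform-specific recording commands above differ only in the path to the VLC binary and the destination file; the stream URL and the --sout argument are the same everywhere. The sketch below assembles those two shared arguments from variables. The build_vlc_record_args helper name is invented for this example and is not part of VLC.

```shell
# Assemble VLC's stream URL and record-to-AVI option from an IP and a path.
# build_vlc_record_args is a helper invented for this sketch, not a VLC tool.
build_vlc_record_args() {
  ip="$1"
  dest="$2"
  printf '%s\n' "http://${ip}:8080/?action=stream"
  printf '%s\n' "--sout=#standard{mux=avi,dst=${dest},access=file}"
}

# Example: print the two arguments you would pass to vlc on any platform.
build_vlc_record_args 192.168.1.20 /tmp/myvideo.avi
```

The two printed lines are the arguments each platform's command passes to its VLC binary; on a real command line the --sout argument must be quoted, as in the examples above, to stop the shell from interpreting the braces.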

Packt
12 Apr 2013
7 min read

Creating a file server (Samba)

Packt
12 Apr 2013
(For more resources related to this topic, see here.)

The Raspberry Pi with attached file storage functions well as a file server. Such a file server could be used as a central location for sharing files and documents, for storing backups of other computers, and for storing large media files such as photo, music, and video files.

This recipe installs and configures samba and samba-common-bin. The Samba software distribution package, samba, contains a server for the SMB (CIFS) protocol used by Windows computers for setting up "shared drives" or "shared folders". The samba-common-bin package contains a small collection of utilities for managing access to shared files. The recipe includes setting a file-sharing password for the user pi and providing read/write access to the files in the pi user's home directory. However, it does not set up a new file share or show how to share a USB disk; the next recipe shows how to do that. After completing this recipe, other computers can exchange files with the default user pi.

Getting ready

The following are the ingredients:

A Raspberry Pi with a 5V power supply
An installed and configured "official" Raspbian Linux SD card
A network connection
A client PC connected to the same network as the Raspberry Pi

This recipe does not require the desktop GUI and can be run either from the text-based console or from within an LXTerminal session. If the Raspberry Pi's Secure Shell server is running and it has a network connection, this recipe can be completed remotely using a Secure Shell client.

How to do it...

The following are the steps for creating a file server:

Log in to the Raspberry Pi either directly or remotely. Execute the following command:

apt-get install samba samba-common-bin

This downloads and installs samba along with the other packages that it depends on. The command needs to be run as a privileged user (use sudo).
The package management application aptitude could be used as an alternative to the apt-get command-line utility for downloading and installing samba and samba-common-bin. In the preceding screenshot, the apt-get command is used to install the samba and samba-common-bin software distribution packages.

Execute the following command:

nano /etc/samba/smb.conf

This edits the Samba configuration file. The smb.conf file is protected and needs to be accessed as a privileged user (use sudo). The preceding screenshot starts the nano editor to edit the /etc/samba/smb.conf file.

Change the security = user line. Uncomment the line (remove the hash, #, from the beginning of the line). The preceding screenshot of the Samba configuration file shows how to change Samba security to use the Raspberry Pi's user accounts.

Change the read only = yes line to read only = no, as shown in the following screenshot: The preceding screenshot shows how to change the Samba configuration file to permit adding new files to user shares (read only = no).

Save and exit the nano editor. Execute the following command:

/etc/init.d/samba reload

This tells the Samba server to reload its configuration file. The command is privileged (use sudo). In the preceding screenshot, the Samba server's configuration file is reloaded with the /etc/init.d/samba command.

Execute the following command:

smbpasswd -a pi

This command needs to be run as a privileged user (use sudo). Enter the password (twice) that will be used for SMB (CIFS) file sharing. The preceding screenshot shows how to add an SMB password for the pi user.

The Raspberry Pi is now accessible as a Windows share! From a Windows computer, use Map network drive to mount the Raspberry Pi as a network disk, as follows: The preceding screenshot starts mapping a network drive to the Raspberry Pi on Windows 7. Enter the UNC address \\raspberrypi\pi as the network folder. Choose an appropriate drive letter. The example uses the Z: drive.
Select Connect using different credentials and click on Finish, as shown in the following screenshot: The preceding screenshot finishes mapping a network drive to the Raspberry Pi. Log in using the newly configured SMB (CIFS) password (from step 7). In the screenshot, a dialog box is displayed for logging in to the Raspberry Pi with the SMB (CIFS) username and password.

The Raspberry Pi is now accessible as a Windows share! Only the home directory of the pi user is accessible at this point. The next recipe configures a USB disk for use as a shared drive.

How it works...

This recipe begins by installing two software distribution packages: samba and samba-common-bin. This recipe uses the apt-get install command; however, the aptitude package management application could also be used to install software packages.

The samba package contains an implementation of the Server Message Block (SMB) protocol, also known as the Common Internet File System (CIFS). The SMB protocol is used by Microsoft Windows computers for sharing files and printers. The samba-common-bin package contains the smbpasswd command, which is used to set up user passwords exclusively for use with the SMB protocol.

After the packages are installed, the Samba configuration file /etc/samba/smb.conf is updated to turn on user security and to enable writing files to user home directories.

The smbpasswd command is used to add (-a) the pi user to the list of users authorized to share files with the Raspberry Pi using the SMB protocol. The passwords for file sharing are managed separately from the passwords used to log in to the Raspberry Pi either directly or remotely; the smbpasswd command sets the password for Samba file sharing. After the password has been added for the pi user, the Raspberry Pi should be accessible from any machine on the local network that is configured for the SMB protocol.
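The two manual edits made to /etc/samba/smb.conf in this recipe (uncommenting security = user and switching read only = yes to read only = no) amount to simple line substitutions. The following Python sketch is purely illustrative, ours rather than any Samba tooling, and assumes the stock Raspbian smb.conf wording:

```python
def apply_share_settings(conf_text):
    """Mirror the two nano edits from the recipe: uncomment
    'security = user' and make user shares writable."""
    out = []
    for line in conf_text.splitlines():
        stripped = line.strip()
        if stripped == "# security = user":
            line = "security = user"          # uncomment user security
        elif stripped == "read only = yes":
            line = line.replace("read only = yes", "read only = no")
        out.append(line)
    return "\n".join(out)

sample = "# security = user\nread only = yes\nworkgroup = WORKGROUP"
print(apply_share_settings(sample))
```

In practice you would still want to edit the real file with nano (under sudo) and reload Samba afterwards, exactly as the steps above describe; the sketch only shows what the edits change.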
The last steps of the recipe configure access to the Raspberry Pi from a Windows 7 PC using a mapped network drive. The UNC name for the file share, \\raspberrypi\pi, could also be used to access the share directly from Windows Explorer.

There's more...

This is a very simple configuration for sharing files. It enables file sharing for users with a login to the Raspberry Pi; however, it only permits the files in the user home directories to be shared. The next recipe describes how to add a new file share.

In addition to the SMB protocol server, smbd, the Samba software distribution package also contains a NetBIOS name server, nmbd. The NetBIOS name server provides naming services to computers using the SMB protocol. The nmbd server broadcasts the configured name of the Raspberry Pi, raspberrypi, to other computers on the local network.

In addition to file sharing, a Samba server could also be used as a Primary Domain Controller (PDC): a central network server that is used to provide logins and security for all computers on a LAN. More information on using the Samba package as a PDC can be found on the links given next.

See Also

Samba (software) (http://en.wikipedia.org/wiki/Samba_(software)): a Wikipedia article on the Samba software suite.
nmbd - NetBIOS over IP naming service (http://manpages.debian.net/cgi-bin/man.cgi?query=nmbd): the Debian man page for nmbd.
samba - a Windows SMB/CIFS file server for UNIX (http://manpages.debian.net/cgi-bin/man.cgi?query=samba): the Debian man page for samba.
smb.conf - the configuration file for the Samba suite (http://manpages.debian.net/cgi-bin/man.cgi?query=smb.conf): the Debian man page for smb.conf.
smbd - server to provide SMB/CIFS services to clients (http://manpages.debian.net/cgi-bin/man.cgi?query=smbd): the Debian man page for smbd.
smbpasswd - change a user's SMB password (http://manpages.debian.net/cgi-bin/man.cgi?query=smbpasswd): the Debian man page for smbpasswd.
System initialization (http://www.debian.org/doc/manuals/debian-reference/ch04.en.html): the Debian Reference Manual article on system initialization.
Samba.org (http://www.samba.org): the Samba software website.
Our First Project – A Basic Thermometer

Packt
26 Feb 2013
(For more resources related to this topic, see here.)

Building a thermometer

A thermometer is a device used for recording temperatures and changes in temperature. The origins of the thermometer go back several centuries, and the device has evolved over the years. Traditional thermometers are usually glass devices that measure these changes via a substance such as mercury, which rises in the glass tube and indicates a number in Fahrenheit or Celsius.

The introduction of microelectronics has allowed us to build our own digital thermometers. This can be useful for checking the temperature in parts of your house such as the garage, or for monitoring the temperature in rooms where it can affect the contents, for example, a wine cellar. Our thermometer will return its readings to the Raspberry Pi and display them in the terminal window. Let's start by setting up the hardware for our thermometer.

Setting up our hardware

There are several components that you will need to use in this article. You can solder the items to your shield if you wish, or use the breadboard if you plan to use the same components for the projects in the articles that follow. Alternatively, you may have decided to purchase an all-in-one unit that combines some of the following components into a single electronic unit. We will assume that you have purchased separate electronic components and will discuss the process of setting these up.

We recommend that you switch the Raspberry Pi off while connecting the components, especially if you plan on soldering any of the items. If your device is switched on and you accidentally spill hot solder onto an unintended area of the circuit board, this can short your device, damaging it. Soldering while switched off allows you to clean up any mistakes using the de-soldering tool.

An introduction to resistors

Let's quickly take a look at resistors and what exactly these are.
A resistor is an electronic component with two connection points (known as terminals) that can be used to reduce the amount of electrical energy passing through a point in a circuit. This reduction in energy is known as resistance. Resistance is measured in ohms (Ω). You can read more about how this is calculated at the Wikipedia link http://en.wikipedia.org/wiki/Ohm's_law.

Resistors are usually classified into two groups: fixed resistors and variable resistors. The typical types of fixed resistor you will encounter are made of carbon film, with the resistance property marked in colored bands giving you the value in ohms. Components falling into the variable resistance group are those whose resistance changes when some other ambient property in their environment changes. Let's now examine the two types of resistor we will be using in our circuit: a thermistor and a 10K Ohm resistor.

Thermistor

A thermistor is an electronic component which, when included in a circuit, can be used to measure temperature. The device is a type of resistor whose resistance varies as the temperature changes. It can be found in a variety of devices, including thermostats and electronic thermometers. There are two categories of thermistor available: Negative Temperature Coefficient (NTC) and Positive Temperature Coefficient (PTC). The difference between them is that as the temperature increases, the resistance decreases in the case of an NTC, or increases in the case of a PTC.

There are two numerical properties that we are interested in with regard to using this device in our project: the resistance of the thermistor at room temperature (25 degrees Celsius) and the beta coefficient of the thermistor. The coefficient can be thought of as the amount by which the resistance changes as the ambient temperature around the thermistor changes.
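The role of the beta coefficient can be made concrete with a few lines of Python. The B-parameter model (the simplified Steinhart-Hart form used later in this article) relates resistance R to absolute temperature T via 1/T = 1/T0 + (1/B) * ln(R/R0). This is a standalone numeric sketch, not part of the project code, using the 10K, beta = 4000 part recommended below:

```python
import math

B = 4000.0    # beta coefficient from the thermistor's datasheet
R0 = 10000.0  # resistance at room temperature, in ohms
T0 = 298.15   # room temperature in Kelvin (25 degrees Celsius)

def temperature_k(resistance):
    """B-parameter model: 1/T = 1/T0 + (1/B) * ln(R/R0)."""
    inv_t = 1.0 / T0 + math.log(resistance / R0) / B
    return 1.0 / inv_t

# At R = R0 the model returns room temperature exactly; a lower
# resistance means a higher temperature (NTC behaviour).
print(temperature_k(10000.0))
print(temperature_k(5000.0))
```

Halving the resistance of this particular part raises the modelled temperature by roughly 16 degrees, which gives a feel for how steeply an NTC thermistor responds.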
When you purchase a thermistor, you should have been provided with a datasheet that lists these two values. If you are unsure of the resistance of your thermistor, you can always check it by hooking it up to a multimeter and taking a reading at room temperature. For example, if you bought a 10K thermistor, you should expect a reading of around 10K Ohms. For this project, we recommend purchasing a 10K thermistor.

10K Ohm resistor

A 10K Ohm resistor, unlike a thermistor, is designed to have a constant resistance regardless of temperature change. This type of device falls into the fixed resistor category. You can tell the value of a resistor by the colored bands located on its body. When you purchase resistors, you may find they come with a color-coding guide; otherwise, you can check the chart on Wikipedia (http://en.wikipedia.org/wiki/Electronic_color_code#Resistor_color_coding) in order to ascertain the value. As part of the circuit we are building, you will need the 10K resistor in order to convert the changing resistance into a voltage that the analog pin on your Raspberry Pi to Arduino shield can understand.

Wires

For this project, you will require three wires. One will attach to the 5V pin on your shield, one to ground, and finally, one to the analog 0 pin. In the wiring guide, we will be using red, black, and yellow wires: the red will connect to the 5V pin, the black to ground, and the yellow to the analog 0 pin.

Breadboard

Finally, in order to connect our components, we will use the breadboard as we did when connecting up the LED.

Connecting our components

Setting up our components for the thermometer is a fairly easy task. Once again, at this point, there is no need to attempt any soldering if you plan on re-using the components. Follow these steps in order to connect everything in the correct layout:

Take the red wire and connect it from the 5V pin on the shield to the connection point on the bus strip corresponding to the supply voltage.
There are often two bus strips on a breadboard. These can be found on either of the long sides of the board and often have a blue or red strip indicating supply voltage and ground.

Next, take the black wire and connect it from the ground pin to the ground on the breadboard.

We are now going to hook up the resistor. Connect one pin of your 10K resistor to the supply voltage strip that your red wire is connected to, and connect the other end to a terminal strip. Terminal strips are the name given to the area located in the middle of the breadboard where you connect your electronic components.

Now that the resistor is in place, our next task will be to connect the thermistor. Insert one leg/wire of the thermistor into the ground on the bus strip, and place the second leg into the same row as you placed the resistor. The thermistor and resistor are daisy-chained together with the supply voltage. This leaves us with the final task, which is connecting the analog pin to our daisy chain.

Finally, connect one end of your yellow wire from the analog 0 (A0) pin on your shield to the terminal strip you selected for the preceding components.

Sanity check

The setup of your circuit is now complete. However, before switching on your Raspberry Pi, check that you have connected everything correctly. You can compare your setup to the following diagram:

Our thermometer circuit is now complete, and you can now boot up your Raspberry Pi. Of course, without any software to return readings to the screen, the circuit is little more than a combination of electronic components! So let's get started on the software portion of our project.

Software for our thermometer

Now that we have the hardware for our thermometer, we will need to write some code that is capable of converting the values returned from the thermistor into a readable temperature in Celsius and Fahrenheit. First up, we are going to look at a new code editing application.
This IDE allows you to develop code in the Raspberry Pi X Window System environment and compile it via a Makefile. We will start by looking at the Geany IDE.

Geany IDE

Geany is a lightweight Linux integrated development environment. It can be installed onto Raspbian and then used for writing code in the Arduino/C++ programming language. An added benefit of using this IDE is that we can set up a custom Makefile with the commands we have been using to compile arduPi-based projects. By combining the Makefile and Geany, we have an IDE that mimics the functionality we would use in the Arduino IDE, but with the added benefit that we can save files without renaming them and compile our applications with one click.

Installing the IDE

We are going to use the apt-get tool to install Geany on your Raspberry Pi. Start off by loading up your Terminal window. From the prompt, run the following command:

sudo apt-get install geany

You'll get a prompt alerting you to the fact that Geany will take up a certain amount of disk space. You can accept the prompt by selecting Y. Once complete, you will see Geany located under the Programming menu option. Select the Geany icon from this menu to load the application.

Once loaded, you will be presented with a code-editing interface. Along the top of the screen is a standard toolbar. This includes the File menu, where you can open and save files you are working on, and menus for Edit, Search, View, Document, Project, Build, Tools, and Help. The left-hand side of the screen contains a window with a number of features, including allowing you to jump to a function when you are editing your code. The bottom panel on the screen contains information about your application when you compile it. This is useful when debugging your code, as error messages will be displayed here.

Geany has an extensive number of features. You can find a comprehensive guide to the IDE at the Geany website: http://www.geany.org/.
For our application development at this stage, we are only interested in creating a new file, opening a file, saving a file, and compiling a file. The options we need are located under the File menu item and the Build menu item. Feel free, though, to explore the IDE and get comfortable with it. In order to use the make option for compiling our application under the Build menu, we need to create a Makefile; we will now take a look at how to achieve this.

An introduction to Makefiles

The next tool we are going to use is the Makefile. A Makefile is executed by the Linux command make. Make is a command-line utility that allows you to compile executable files by storing the parameters in a Makefile and then calling it as needed. This method allows us to store common compilation directives and re-use them without having to type out the command each time. As you are familiar with, we have used the following command in order to compile our LED example:

g++ -lrt -lpthread blink_test.cpp arduPi.o -o blink_test

Using a Makefile, we could store this and then execute it, when located in the same directory as the files, using a simpler command:

make

We can try out creating a Makefile using this code. Load up Geany from the Programming menu if you don't currently have it open. If you don't have a new document open, create a new one from the File menu. Now add the following lines (Blink_test/Makefile), making sure to tab-indent the second line once:

Blink: arduPi.o
	g++ -lrt -lpthread blink_test.cpp arduPi.o -o blink_test

If you don't tab-indent the second line containing the compilation instructions, then the Makefile won't run. Now that you have created the Makefile, we can save and run it with the following steps:

From the File menu, select Save. From the Save dialog, navigate to the directory where you saved your blink_test.cpp and save the file with the title Makefile. Now open the blink_test.cpp file from the directory where you saved your Makefile.
We can test our Makefile by selecting the Build menu and choosing Make. In the panel at the bottom of the IDE, you will see a message indicating that the Makefile was executed successfully. Now, from the Terminal window, navigate to the directory containing your blink_test project. In this directory, you will find your freshly compiled blink_test file. If you still have your LED example at hand, hook it up to the shield, and from the command line, you can run the application by typing the following command:

./blink_test

The LED should start blinking. Hopefully, you can see from this example that integrating the Makefile into the IDE allows us to write code and compile it as we go in order to debug it. This will be very useful when you start to work on projects of greater complexity. Once we have written the code to record our temperature readings, we will re-visit the Makefile and create a custom one to build our thermometer application via Geany. Now that you have set up Geany and briefly looked at Makefiles, let's get started with writing our application.

Thermometer code

We will be using the arduPi library for writing our code, as we did for the LED test. As well as using standard Arduino and C++ syntax, we are going to explore some calculations that are used to return the results we need. In order to convert the values we are collecting from our circuit into a readable temperature, we are going to need an equation that converts resistance into temperature. This equation is known as the Steinhart-Hart equation. The Steinhart-Hart equation models the resistance of our thermistor at different temperatures and can be coded into an application in order to display the temperature in Kelvin, Fahrenheit, and Celsius.
We will use a simpler version of this equation in our program (called the B parameter equation) and can use the values from the datasheet provided with our thermistor in order to populate the constants that are needed to perform the calculations. For the simpler version of the equation, we only need to know the following values:

The room temperature in Kelvin
The coefficient of our thermistor (should be on the datasheet)
The thermistor resistance at room temperature

We will use Geany to write our application, so if you don't have it open, start it up.

Writing our application

From the File menu in Geany, create a new blank file; this is where we are going to add our Arduino code. If you save the file now, Geany's syntax highlighting will be triggered, making the code easier to read. Open the File menu in Geany and select Save. In the Save dialog box, navigate to the arduPi directory and save your file with the name thermometer.cpp. We will use arduPi_template.cpp as the base for our project and add our code into it.

To start, we will add the include statements for the libraries and headers we need, as well as define some constants that will be used in our application for storing key values. Add the following block of code to your empty file, thermometer.cpp, in Geany:

//Include ArduPi library
#include "arduPi.h"

//Include the Math library
#include <math.h>

//Needed for Serial communication
SerialPi Serial;

//Needed for accessing GPIO (pinMode, digitalWrite, digitalRead,
//I2C functions)
WirePi Wire;

//Needed for SPI
SPIPi SPI;

// Values needed for the Steinhart-Hart equation
// and for calculating resistance.
#define TENKRESISTOR 10000 // our 10K resistor
#define BETA 4000          // the beta coefficient of your thermistor
#define THERMISTOR 10000   // the resistance of your thermistor at
                           // room temperature
#define ROOMTEMPK 298.15   // standard room temperature in Kelvin
                           // (25 Celsius)
// Number of readings to take;
// these will be averaged out to
// get a more accurate reading.
// You can increase/decrease this as needed
#define READINGS 7

You will recognize some of the preceding code from the arduPi template, as well as some custom code we have added. This custom code includes a reference to the Math library. The Math library in C++ contains a number of reusable complex mathematical functions that can be called, which helps us avoid writing these from scratch. As you will see later in the program, we have used the logarithm function log() when calculating the temperature in Kelvin.

Following are a number of constants; we use the #define statement here to initialize them:

TENKRESISTOR: The 10K Ohm resistor you added to the circuit board. As you can see, we have assigned the value 10,000 to it.
BETA: The beta coefficient of your thermistor.
THERMISTOR: The resistance of your thermistor at room temperature.
ROOMTEMPK: The room temperature in Kelvin; this translates to 25 degrees Celsius.
READINGS: We will take seven readings from the analog pin and average these out to try to get a more accurate reading.

The values used previously are for a 10K thermistor with a coefficient of 4000. These should be updated as necessary to reflect the thermistor you are using in your project. Now that we have defined our constants and included some libraries, we need to add the body of the program. From the arduPi_template.cpp file, we now include the main function that kicks our application off:

/*********************************************************
 * IF YOUR ARDUINO CODE HAS OTHER FUNCTIONS APART FROM   *
 * setup() AND loop() YOU MUST DECLARE THEM HERE         *
 *********************************************************/

/**************************
 * YOUR ARDUINO CODE HERE *
 **************************/

int main (){
    setup();
    while(1){
        loop();
    }
    return (0);
}

Remember that you can use both // and /* */ for commenting your code.
We have our references to the setup() and loop() functions, so we can now declare these and include the necessary code. Below the main() function, add the following:

void setup(void) {
    printf("Starting up thermometer \n");
    Wire.begin();
}

The setup() function prints a message to the screen indicating that the program is starting and then calls Wire.begin(). This will allow us to interact with the analog pins. Next, we are going to declare the loop function and define some variables that will be used within it:

void loop(void) {
    float avResistance;
    float resistance;
    int combinedReadings[READINGS];
    byte val0;
    byte val1;

    // Our temperature variables
    float kelvin;
    float fahrenheit;
    float celsius;

    int channelReading;
    float analogReadingArduino;

As you can see in the preceding code snippet, we have declared a number of variables. These can be broken down into:

Resistance readings: float avResistance, float resistance, and byte val0 and byte val1. The variables avResistance and resistance will be used during the program's execution for recording resistance calculations. The other two variables, val0 and val1, are used to store the readings from analog 0 on the shield.
Temperature calculations: The variables float kelvin, float fahrenheit, and float celsius, as their names suggest, are used for recording temperature in three common formats.

After declaring these variables, we need to access our analog pin and start to read data from it. Copy the following code into your loop function:

    /*******************
     ADC mappings
     Pin    Address
     0      0xDC
     1      0x9C
     2      0xCC
     3      0x8C
     4      0xAC
     5      0xEC
     6      0xBC
     7      0xFC
    *******************/

    // 0xDC is our analog 0 pin
    Wire.beginTransmission(8);
    Wire.write(byte(0xDC));
    Wire.endTransmission();

Here we have code that initializes analog pin 0. The code comment contains the mappings between the pins and addresses, so if you wish, you can run the thermistor off a different analog pin.
We are using pin 0, so we can now start to take readings from it. To get the correct data, we need to take two readings of a byte each from the pin. We will do this using a for loop.

The Raspberry Pi to Arduino shield does not support the analogRead() and analogWrite() functions from the Arduino programming language. Instead, we need to use the Wire commands and the addresses from the table provided in the comments for this code.

Add the following for loop below your previous block of code:

    /* Grab the two bytes returned from the analog 0 pin,
       combine them and write the value to the
       combinedReadings array */
    for(int r=0; r<READINGS; r++){
        Wire.requestFrom(8,2);
        val0 = Wire.read();
        val1 = Wire.read();
        channelReading = int(val0)*16 + int(val1>>4);
        analogReadingArduino = channelReading * 1023 / 4095;
        combinedReadings[r] = analogReadingArduino;
        delay(100);
    }

Here we have a loop that grabs the data from the analog pin so we can process it. In the requestFrom() function, the second argument is the number of bytes we wish to have returned from the pin; here it is two. We will combine these values and then write them to an array; in total, we will do this seven times and then average out the values.

You will notice we are applying a calculation to the two combined bytes. This calculation converts the value into the 10-bit Arduino resolution. The value you will see returned after this equation is the same as you would expect to get from the analogRead() function on an Arduino Uno if you had hooked up your circuit to it. After this calculation, we assign the value to the array that stores each of the seven readings.

Now that we have this value, we can calculate the average resistance. For this, we will use another for loop that iterates through our array of readings, combines them, and then divides the total by the value we set in our READINGS constant.
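Before moving on, the byte-combination arithmetic in the loop above is worth sanity-checking. The shield's ADC is 12-bit: the first byte holds the high eight bits and the top nibble of the second byte holds the low four, so the maximum combined value is 4095, which the code then scales down to the Arduino's 10-bit range of 0 to 1023. A small Python sketch of the same integer arithmetic (ours, mirroring the two C++ expressions):

```python
def combine_bytes(val0, val1):
    """Rebuild the 12-bit ADC value from the two bytes returned by
    the shield: high byte, then the top nibble of the second byte."""
    return val0 * 16 + (val1 >> 4)

def to_arduino_scale(reading12):
    """Scale a 12-bit reading (0-4095) down to the 10-bit range
    (0-1023) that analogRead() would return on an Arduino Uno."""
    return reading12 * 1023 // 4095  # integer division, as in the C++

full_scale = combine_bytes(0xFF, 0xF0)  # the 12-bit maximum, 4095
print(full_scale)
print(to_arduino_scale(full_scale))
```

Running this confirms that a full-scale pair of bytes maps to exactly 1023, matching the analogRead() behaviour the text describes.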
Here is the next for loop you will need to accomplish this:

    // Grab the average of our 7 readings
    // in order to get a more accurate value
    avResistance = 0;
    for (int r=0; r<READINGS; r++) {
        avResistance += combinedReadings[r];
    }
    avResistance /= READINGS;

So far, we have grabbed our readings and can now use a calculation to work out the resistance. For this, we will need our avResistance reading, the resistance value of our 10K resistor, and our thermistor's resistance at room temperature. Add the following code, which performs this calculation:

    /* We can now calculate the resistance of
       the readings that have come back from analog 0 */
    avResistance = (1023 / avResistance) - 1;
    avResistance = TENKRESISTOR / avResistance;
    resistance = avResistance / THERMISTOR;

The next part of the program uses the resistance to calculate the temperature. This is the portion of code utilizing the simpler version of the Steinhart-Hart equation. The result of this equation will be the ambient temperature in degrees Kelvin. Next, add the following block of code:

    // Calculate the temperature in Kelvin
    kelvin = log(resistance);
    kelvin /= BETA;
    kelvin += 1.0 / ROOMTEMPK;
    kelvin = 1.0 / kelvin;

    printf("Temperature in K ");
    printf("%f \n",kelvin);

So we have our temperature in degrees Kelvin, and a printf statement that outputs this value to the screen. It would be nice to also have the temperature in two more common temperature formats, Celsius and Fahrenheit. These are simple calculations to perform. Let's start by adding the Celsius code:

    // Convert from Kelvin to Celsius
    celsius = kelvin -= 273.15;
    printf("Temperature in C ");
    printf("%f \n",celsius);

Now that we have the temperature in degrees Celsius, we can print this to the screen. Using this value, we can convert Celsius into Fahrenheit:

    // Convert from Celsius to Fahrenheit
    fahrenheit = (celsius * 1.8) + 32;
    printf("Temperature in F ");
    printf("%f \n",fahrenheit);

Great!
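The whole chain of calculations above, from averaged 10-bit reading through the voltage divider to the B-parameter equation, can be cross-checked in a few lines of Python. This is a standalone numeric sketch (not part of the project code) using the same constants as the #define statements:

```python
import math

TENKRESISTOR = 10000.0  # fixed divider resistor, in ohms
THERMISTOR = 10000.0    # thermistor resistance at 25 C, in ohms
BETA = 4000.0           # thermistor beta coefficient
ROOMTEMPK = 298.15      # 25 C expressed in Kelvin

def reading_to_celsius(av_reading):
    """Mirror the C++ pipeline: 10-bit reading -> thermistor
    resistance via the divider, then B-parameter -> Celsius."""
    r = TENKRESISTOR / (1023.0 / av_reading - 1.0)  # divider -> ohms
    inv_k = math.log(r / THERMISTOR) / BETA + 1.0 / ROOMTEMPK
    return 1.0 / inv_k - 273.15

# A mid-scale reading of 511.5 means the thermistor's resistance
# equals the 10K resistor's, i.e. the thermistor sits at exactly
# its rated room temperature.
print(reading_to_celsius(511.5))
print(reading_to_celsius(511.5) * 1.8 + 32)
```

A mid-scale reading comes out at 25.0 degrees Celsius (77.0 Fahrenheit), which is a useful sanity check: because the thermistor sits on the ground side of the divider, higher readings mean higher resistance and therefore, for an NTC part, lower temperatures.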
So now we have the temperature being returned in three formats. Let's finish up the application by adding a delay of 3 seconds before the application takes another temperature reading, and close off our loop() function:

    // Three second delay before taking our next
    // reading
    delay(3000);
}

So there we have it. This small application will use our circuit and return the temperature. We now need to compile the code so we can give it a test. Remember to save your code now so that the changes you have added are included in the thermometer.cpp file.

Our next step is to create a Makefile for our thermometer application. If you saved the blink_test Makefile into the arduPi directory, you can re-use this, or you can create a new file using the previous steps. Place the following code into your Makefile:

Thermo: arduPi.o
	g++ -lrt -lpthread thermometer.cpp arduPi.o -o thermometer

Save the file with the name Makefile. We can now compile and test our application.

Compiling and testing

When discussing Geany earlier, we demonstrated how to run the make command from inside the IDE. Now that we have our Makefile in place, we can test this out. From the Build menu, select Make. You should see the compilation pane at the bottom of the screen spring to life and, provided there are no typos or errors in your code, a file called thermometer will be successfully output. The thermometer file is the executable that we will run to view the temperature. From the terminal window, navigate to the arduPi directory and locate your thermometer file. This can be launched using the following command:

sudo ./thermometer

The application will now be executed, and text similar to that in the following screenshot should be visible:

Try changing the temperature by blowing on the thermometer, placing it in some cold water if safe to do so, or applying a hair dryer to it. You should see the temperature on the screen change.
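Because the program prints lines in a fixed format (for example, Temperature in C 21.625000), its output is easy to capture and log from another script. The following Python sketch is our own illustration, not part of the project, and assumes the exact printf format shown above:

```python
def parse_temperatures(output):
    """Extract {unit: value} pairs from the thermometer's output,
    where each line looks like 'Temperature in C 21.625000'."""
    readings = {}
    for line in output.splitlines():
        parts = line.split()
        if len(parts) == 4 and parts[:2] == ["Temperature", "in"]:
            readings[parts[2]] = float(parts[3])
    return readings

sample = ("Temperature in K 294.775000 \n"
          "Temperature in C 21.625000 \n"
          "Temperature in F 70.925000 \n")
print(parse_temperatures(sample))
```

In practice you would feed this the live output of sudo ./thermometer, for example via subprocess.Popen, and append each batch of readings to a CSV file with a timestamp.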
If you have a thermostat or similar in the room that logs the temperature, try comparing its value to that of your thermometer to see how accurate it is.

You can run an application in the background by adding an & after the command, for example, sudo ./thermometer &. In the case of our application, it outputs to the screen, so if you attempt to use the same terminal window, your typing will be interrupted! To kill an application running in the background, you can type fg to bring it to the foreground and then press Ctrl + C to cancel it.

What if it doesn't work?

Provided you had no errors when compiling your code, the chances are that one of your components is not connected properly, is connected to the wrong pin, or may be defective. Try double-checking your circuit to make sure everything is attached and hasn't become accidentally dislodged. Also ensure that the components are wired up as suggested at the beginning of this article. If everything seems to be correct, you may have a faulty component. Try substituting each item in the circuit one at a time to see if it is a bad wire or a faulty resistor.

Up and running

If you see your temperature being output successfully, then you are up and running! Congratulations, you now have a basic thermometer. This will form the basis for our next project, which is a thermostat. As you can see, this application is useful; however, returning the output to the screen isn't the best method. It would be better, for example, if we could see the results via our web browser or on an LCD screen. Now that we have a circuit and an application recording temperature, this opens up a wide variety of things we can do with the data, including logging it or using it to change the heat settings in our homes. This article should have whetted your appetite for bigger projects.

Summary

In this article, we learned how to wire up two new components: a thermistor and a resistor.
Our application taught us how to use these components to log a temperature reading, and we also became familiar with Makefiles and the Geany IDE.

Resources for Article:

Further resources on this subject:
- Folding @ Home on Ubuntu: Cancer Research Made Easy [Article]
- Using PVR with Raspbmc [Article]
- Using ChronoForms to add More Features to your Joomla! Form [Article]
Packt
18 Feb 2013
9 min read

Using PVR with Raspbmc

(For more resources related to this topic, see here.)

What is PVR?

Personal Video Recording (PVR), with a TV tuner, allows you to record as well as watch live TV. Recordings can be scheduled manually, based on a time, or with the help of the TV guide, which can be downloaded from the TV provider (by satellite, aerial, or cable) or from a content information provider, such as Radio Times, via the Internet. Not only does PVR allow you to watch live TV, but on capable backends (we'll look at what a backend is in a moment), it also allows you to rewind and pause live TV. A single tuner allows you to tune into one channel at a time, while two tuners would allow you to tune into two. As such, it's important to note that the capabilities listed earlier are not mutually exclusive; that is, with enough tuners it is possible to record one channel while watching another. Depending on the backend software you use, this may even be possible on a single tuner, if the two channels are on the same multiplexer.

Raspbmc's role in PVR

Raspbmc can function as both a PVR backend and a frontend. For PVR support in XBMC, it is necessary to have both a backend and one or more frontends. Let's see what a backend and a frontend are:

- Backend: A backend is the part that tunes the channel, records your scheduled programs, and serves those channels and recorded television to the frontends. One backend can serve multiple frontends if it is sufficiently powerful and there are enough tuners available.
- Frontend: A frontend is the part that receives content from the backend and plays back live television and recorded programs to the user. In the case of Raspbmc, XBMC serves as the frontend and allows us to play back the content. Multiple frontends can connect to one or more backends. This means that we can have several installations of Raspbmc play broadcast content from even a single tuner.

As we've now learned, Raspbmc has a built-in PVR frontend in the form of XBMC.
However, it also has a built-in backend. This backend is TVHeadend, and we'll look at getting that up and running shortly.

Standalone backend versus built-in backend

There are cases when it is more favorable to use an external, or standalone, backend rather than the one that ships with Raspbmc itself. The following is a comparison:

Standalone backend:
- A better choice if you do not find TVHeadend feature-rich enough or prefer another backend.
- If you have a pre-existing backend, it is easier to configure Raspbmc to use that, rather than reconfiguring it completely.
- If you are planning on having multiple frontends, it is more sensible to have a standalone backend. This ensures that the computer has enough horsepower, and you can also serve broadcasts from the same computer you are serving files from, and thus only need one device on rather than two (the streaming machine and the Pi).
- If you need to use a PCI or PCI Express based tuner, you will need to use an external backend due to the limitations of the Pi's connectivity.

Raspbmc backend (TVHeadend):
- If you only intend to have one frontend, it makes sense to run everything off the same device, rather than relying on an external system.
- The process may be simplified for you as, generally, one can just connect their device and it will be detected in TVHeadend.
- Raspbmc's auto-update system covers the backend that is included as well. This means you will always have a reliable and stable version of TVHeadend bundled with Raspbmc, and you need not worry about having to update it to get new features.
- Better for wireless media centers: if you have low bandwidth throughput, then running the tuner locally on the Raspberry Pi makes more sense, as it does not rely on any transfers over the network (unless using HDHomeRun).

Setting up PVR

We will now look at how to set up a PVR. This will include configuring the backend as well as getting it running in XBMC.
An external backend

The purpose of this title is to focus on the Raspberry Pi, and as there is a great variety of PVR software available, it would be implausible to cover the many options here. If you are planning on using an external backend, it is recommended that you search thoroughly for information on the Internet. There are even books for popular and comprehensive PVR packages, such as MythTV. TVHeadend was chosen for Raspbmc because it is lightweight and easy to manage. Raspbmc's XBMC build will support the following backends at the time of writing:

- MythTV
- TVHeadend
- ForTheRecord/Argus TV
- MediaPortal
- Njoy N7
- NextPVR
- VU+/Enigma
- DVBViewer
- VDR

Setting up TVHeadend in Raspbmc

It should be noted that not all TV tuners will work on your device. Because the list changes frequently, it is not possible to list here the devices that work on the Raspberry Pi. However, the most popular tuners used with Raspbmc are AF9015 based. HDHomeRun tuners by SiliconDust are supported as well (note that these tuners do not connect to your Pi directly, but are accessed through the network). With the right kernel modules, TVHeadend can support DVB-T (digital terrestrial), DVB-S (satellite), and DVB-C (cable) based tuners. By default, the TVHeadend service is disabled in Raspbmc. We'll need to enable it as follows:

1. Go to Raspbmc Settings (we did this before by selecting it from the Programs menu).
2. Under the System Configuration tab, check the TVHeadend server radio button found under the Service Management category.
3. Click on OK to save your settings.

Now that TVHeadend is running, we can access its management page by going to http://192.168.1.5:9981. You should substitute the preceding IP address, 192.168.1.5, with the actual IP address of your Raspberry Pi. You will be greeted with an interface much akin to the following screenshot:

In the preceding screenshot, we see that there are three main tabs available.
They are as follows:

- Electronic Program Guide: This shows us what is being broadcast on each channel. It is empty in the preceding screenshot because we've not yet scanned for and added any channels.
- Digital Video Recorder: This will allow you to schedule recordings of TV channels as well as use the Automatic recorder functionality, which allows you to create powerful rules for automatic recording. You can also schedule recordings in XBMC; however, doing so via the web interface is probably more flexible.
- Configuration: This is where you can configure the EPG source, choose where recordings are saved, manage access to the backend, and manage tuners.

The Electronic Program Guide and Digital Video Recorder tabs are intuitive and simple, so we will instead look at the Configuration section. Our first step in configuring a tuner is to head over to TV Adapters. As shown in the preceding screenshot, TV tuners should be automatically detected and selectable in the drop-down menu (highlighted here). On the right, a box entitled Adapter Configuration can be used for adjusting the tuner's parameters. Next, we need to select the Add DVB Network by location option. The following dialog box will appear: Once we have defined the region we are in, TVHeadend will automatically begin scanning for new services on the correct frequencies. These services can be mapped to channels by selecting the Map DVB services to channels button as shown earlier. We are now ready to connect to the backend in XBMC.

Connecting to our backend in XBMC

Regardless of whether we have used Raspbmc's built-in backend or an external one, the process for connecting to it in XBMC is very much the same. We need to do the following: In XBMC, go to System | Settings | Add-ons | Disabled Add-ons | PVR clients. You will now see the following screenshot: Select the type of backend that you would like to connect to. You will then see a dialog allowing you to configure or enable the add-on.
Select Configure and fill out the necessary connection details. Note that if you are connecting to Raspbmc's built-in backend, select the TVHeadend client to configure; the default settings will suffice. Click on OK to save these settings and select Enable. Note that the add-on is now located under System | Settings | Add-ons | Enabled add-ons | PVR clients rather than Disabled add-ons | PVR clients. Now, we need to go into Settings | System | Live TV. This allows you to configure a host of options related to Live TV. The most important one is the Enable Live TV option: be sure to check this box! Now, if we go back to the main menu, we'll see a Live TV option. Your channel information will be there already, although, like the instance shown as follows, it may need a bit of renaming: The following screenshot shows us a sample electronic program guide: Simply select a channel and press Play! The functionality that PVR offers is controlled in a similar manner to the rest of XBMC, so it won't be covered in this article. If you have got this far, you've done the hard part already.

Summary

We've now covered what PVR can do for us, the differences between a frontend and a backend, and where a remote backend may be more suitable than the one Raspbmc has built in. We then covered how to connect to that backend in XBMC and play back content from it.

Resources for Article:

Further resources on this subject:
- Adding Pages, Image Gallery, and Plugins to a WordPress Blog [Article]
- Building a CRUD Application with the ZK Framework [Article]
- Playback Audio with Video and Create a Media Playback Component Using JavaFX [Article]

Packt
23 Oct 2009
7 min read

CUPS: How to Manage Multiple Printers

Configuring Printer Classes

By default, there are no printer classes set up; you will need to define them. The following are some of the criteria you can use to define printer classes:

- Printer type: The printer type can be PostScript or non-PostScript.
- Location: The location can describe where the printer is placed; for example, on the third floor of the building.
- Department: Printer classes can also be defined on the basis of the department to which the printer belongs.

A printer class might contain several printers that are used in a particular order. CUPS always checks for an available printer in the order in which printers were added to a class. Therefore, if you want a high-speed printer to be accessed first, you would add the high-speed printer to the class before you add a low-speed printer. This way, the high-speed printer can handle as many print requests as possible, and the low-speed printer is reserved as a backup for when the high-speed printer is in use. It is not compulsory to add printers to classes. There are a few important tasks that you need to do to manage and configure printer classes. Printer classes can themselves be members of other classes, so it is possible for you to define printer classes for high-availability printing. Once you configure a printer class, you can print to it in the same way that you print to a single printer.

Features and Advantages

Here are some of the features and advantages of printer classes in CUPS:

- Even if a printer is a member of a class, it can still be accessed directly by users if you allow it. However, you can make individual printers reject jobs while their classes accept them. As the system administrator, you have control over how printers in classes can be used.
- The replacement of printers within a class can be done easily. Let's understand this with the help of an example.
You have a network consisting of seven computers running Linux, all having CUPS installed. You want to change the printers assigned to a class. You can remove a printer and add a new one to the class in less than a minute. The entire configuration required is done as all other computers get their default printing routes updated in another 30 seconds. It takes less than one minute for the whole change: less time than a laser printer takes to warm up.

A company has the following types of printers, with its policy being:

- A class for B/W laser printers that anybody can print on
- A class for draft color printers that anybody can print on, but with restrictions on volume
- A class for precision color printers that is unblocked only under the administrator's supervision

All of these printers hang off Windows machines, and would be available directly for other computers running under Windows. However, we get the following advantages by providing them through CUPS on a central router:

- CUPS provides the means for centralizing printers, and users will only have to look for a printer in a single place
- It provides the means for printing on another Ethernet segment without allowing normal Windows broadcast traffic to get across and clutter up the network bandwidth
- It makes sure that the person printing from his desk on the second floor of the other building doesn't get stuck because the departmental printer on the ground floor of this building has run out of paper, as his print job gets redirected to the standby printer

Implicit Class

CUPS also supports a special type of printer class called an implicit class. Implicit classes work just like printer classes, but they are created automatically based on the available printers and printer classes on the network. CUPS identifies printers with identical configurations intelligently, and has the client machines send their print jobs to the first available printer.
If one or more printers go down, the jobs are automatically redirected to the servers that are running, providing fail-safe printing.

Managing Printer Classes Through Command-Line

You can perform this task only by using the lpadmin -c command. Jobs sent to a printer class are forwarded to the first available printer in that class.

Adding a Printer to a Class

You can run the following command with the -p and -c options to add a printer to a class:

$ sudo lpadmin -p cupsprinter -c cupsclass

The above example shows that the printer cupsprinter has been added to the printer class cupsclass. You can verify whether the printers are in a printer class:

$ lpstat -c cupsclass

Removing a Printer from a Class

You need to run the lpadmin command with the -p and -r options to remove a printer from a class. If all the printers in a class are removed, the class itself is deleted automatically.

$ sudo lpadmin -p cupsprinter -r cupsclass

The above example shows that the printer cupsprinter has been removed from the printer class cupsclass.

Removing a Class

To remove a class, you can run the lpadmin command with the -x option:

$ sudo lpadmin -x cupsclass

The above command will remove cupsclass.

Managing Printer Classes Through CUPS Web Interface

Like printers and groups of printers, printer classes can also be managed via the CUPS web interface. In the web interface, CUPS displays a tab called Classes, which has all the options needed to manage printer classes. You can reach this tab directly by visiting the following URL:

http://localhost:631/classes

If no classes are defined, the screen will show only the search and sorting options.

Adding a New Printer Class

A printer class can be added using the Add Class option in the Administration tab. It is useful to have a helpful description in the Name field to identify your class.
You can add additional information regarding the printer class in the Description field; this will be seen by users when they select the printer class for a job. The Location field can be used to help you group a set of printers logically and thus help you identify different classes. In the following figure, we are adding all black-and-white printers into one printer class. The Members box will be pre-populated with a list of all printers that have been added to CUPS. Select the appropriate printers for your class and it will be ready for use. Once your class is added, you can manage it using the Classes tab. Most of the options here are quite similar to the ones for managing individual printers, as CUPS treats each class as a single entity. In the Classes tab, we can see the following options for each printer class:

Stop Class

Clicking on Stop Class changes the status of all the printers in that class to "stopped". When a class is stopped, this option changes to Start Class, which changes the status of all of the printers back to "idle" so that they are once again ready to receive print jobs.

Reject Jobs

Clicking on Reject Jobs changes the status of all the printers in that class to "reject jobs". When a class is in this state, this option changes to Accept Jobs, which changes the status of all of the printers to "accept jobs" so that they are once again ready to accept print jobs.