
How-To Tutorials - Embedded Systems


AMD’s $293 million JV with Chinese chipmaker Hygon starts production of x86 CPUs

Natasha Mathur
10 Jul 2018
3 min read
Chinese chip producer Hygon has begun production of China-made x86 processors named “Dhyana”. These processors use AMD’s Zen microarchitecture and are the result of the licensing deal between AMD and its Chinese partners. Hygon has started shipping the new “Dhyana” x86 CPUs.

According to official statements from AMD, it does not sell final chip designs to its partners in China; instead, it encourages them to design their own processors that suit the needs of the Chinese server market. This is an effort to break China’s dependency on the foreign technology market.

In 2016, AMD announced that it was working on a joint project in China to develop processors. The deal provided AMD with $293 million in cash from Tianjin Haiguang Advanced Technology Investment Co. (THATIC), a joint venture that includes AMD as well as the Chinese Academy of Sciences. What’s interesting is that AMD managed to establish a license allowing Chinese processor manufacturers to develop and sell x86 processors, despite the fact that Intel was blocked from selling Xeon processors to China in 2015 by the Obama administration over concerns that the chips would help China’s nuclear weapons programs.

Dhyana processors are currently focused on embedded applications. They are systems-on-chip (SoC) rather than socketed chips, but this design doesn’t prevent the Dhyana processors from being used in high-performance or data center applications, which usually leverage Intel Xeon and other server processors. Linux kernel developers have also stated that the processors are very close in design to AMD’s EPYC. In fact, porting the Linux kernel code for EPYC processors to the Hygon chips required fewer than 200 new lines of code, according to a report from Michael Larabel of Phoronix. The only differences between the two are the vendor IDs and family series.

Apart from the AMD deal, there are other chip-producing ventures that China is heavily invested in. One such venture is Zhaoxin Semiconductor, which is working to manufacture x86 chips through a partnership with VIA. China is making continuous efforts to reduce its dependence on US technology and to reshape its long-term processor market. Reports suggest that most of the x86 processors are 14 nm chips, but there isn’t much information available on the capabilities of the Dhyana family, and other details of their manufacturing characteristics are still to be revealed.

Read next:
Baidu releases Kunlun AI chip, China’s first cloud-to-edge AI chip
Qualcomm announces a new chipset for standalone AR/VR headsets at Augmented World Expo


Learning BeagleBone Python Programming

Packt
10 Jul 2015
15 min read
In this article by Alexander Hiam, author of the book Learning BeagleBone Python Programming, we will go through the initial steps to get your BeagleBone Black set up. By the end of it, you should be ready to write your first Python program. We will cover the following topics:

Logging in to your BeagleBone
Connecting to the Internet
Updating and installing software
The basics of the PyBBIO and Adafruit_BBIO libraries

(For more resources related to this topic, see here.)

Initial setup

If you've never turned on your BeagleBone Black, there will be a bit of initial setup required. You should follow the most up-to-date official instructions found at http://beagleboard.org/getting-started, but to summarize, here are the steps:

1. Install the network-over-USB drivers for your PC's operating system.
2. Plug in the USB cable between your PC and BeagleBone Black.
3. Open Chrome or Firefox and navigate to http://192.168.7.2 (Internet Explorer is not fully supported and might not work properly).

If all goes well, you should see a message on the web page served up by the BeagleBone indicating that it has successfully connected to the USB network. If you scroll down a little, you'll see a runnable Bonescript example. If you press the run button, you should see the four LEDs next to the Ethernet connector on your BeagleBone light up for 2 seconds and then return to their normal function of indicating system and network activity. What's happening here is that the JavaScript running in your browser is using the Socket.IO (http://socket.io) library to issue remote procedure calls to the Node.js server that's serving up the web page. The server then calls the Bonescript API (http://beagleboard.org/Support/BoneScript), which controls the GPIO pins connected to the LEDs.

Updating your Debian image

The GNU/Linux distributions for platforms such as the BeagleBone are typically provided as ISO images, which are single-file copies of the flash memory with the distribution installed. BeagleBone images are flashed onto a microSD card that the BeagleBone can then boot from. It is important to update the Debian image on your BeagleBone to ensure that it has all the most up-to-date software and drivers, which can range from important security fixes to the latest and greatest features.

First, grab the latest BeagleBone Black Debian image from http://beagleboard.org/latest-images. You should now have a .img.xz file, which is an ISO image with XZ compression. Before the image can be flashed from a Windows PC, you'll have to decompress it. Install 7-Zip (http://www.7-zip.org/), which will let you decompress the file from the context menu by right-clicking on it. You can install Win32 Disk Imager (http://sourceforge.net/projects/win32diskimager/) to flash the decompressed .img file to your microSD card.

Plug the microSD card you want your BeagleBone Black to boot from into your PC and launch Win32 Disk Imager. Select the drive letter associated with your microSD card; this process will erase the target device, so make sure the correct device is selected. Next, press the browse button, select the decompressed .img file, and then press Write. The image burning process will take a few minutes. Once it is complete, you can eject the microSD card, insert it into the BeagleBone Black, and boot it up. You can then return to http://192.168.7.2 to make sure the new image was flashed successfully and the BeagleBone is able to boot.
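By the way, if you would rather script the decompression step described above (or you are not working from Windows), here is a minimal sketch that does the same job as 7-Zip using only Python 3's standard library. The file name is just an example and should be replaced with the image you actually downloaded; flashing the resulting .img file still happens with Win32 Disk Imager as described above.

import lzma
import shutil

# Example file name only; substitute the image you actually downloaded.
compressed = "bone-debian-image.img.xz"
decompressed = compressed[:-len(".xz")]  # strip the .xz extension

# Stream the decompression so the whole image is never held in memory.
with lzma.open(compressed, "rb") as source, open(decompressed, "wb") as target:
    shutil.copyfileobj(source, target)

print("Wrote", decompressed)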
Connecting to your BeagleBone

If you're running your BeagleBone with a monitor, keyboard, and mouse connected, you can use it like a standard desktop install of Debian. This book assumes you are running your BeagleBone headless (without a monitor). In that case, we will need a way to remotely connect to it.

The Cloud9 IDE

The BeagleBone Debian images include an instance of the Cloud9 IDE (https://c9.io) running on port 3000. To access it, simply navigate to your BeagleBone Black's IP address with the port appended after a colon, that is, http://192.168.7.2:3000. If it's your first time using Cloud9, you'll see the welcome screen, which lets you customize the look and feel.

The left panel lets you organize, create, and delete files in your Cloud9 workspace. When you open a file for editing, it is shown in the center panel, and the lower panel holds a Bash shell and a JavaScript REPL. Files and terminal instances can be opened in both the center and bottom panels. Bash instances start in the Cloud9 workspace, but you can use them to navigate anywhere on the BeagleBone's filesystem. If you've never used the Bash shell, I'd encourage you to take a look at the Bash manual (https://www.gnu.org/software/bash/manual/), as well as walk through a tutorial or two. It can be very helpful, and even essential at times, to be able to use Bash, especially with a platform such as the BeagleBone without a monitor connected. Another great use for the Bash terminal in Cloud9 is running the Python interactive interpreter, which you can launch in the terminal by running python without any arguments.

SSH

If you're a Linux user, or if you would prefer not to be doing your development through a web browser, you may want to use SSH to access your BeagleBone instead. SSH, or Secure Shell, is a protocol for securely gaining terminal access to a remote computer over a network. On Windows, you can download PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html, which can act as an SSH client. Run PuTTY, make sure SSH is selected, and enter your BeagleBone's IP address and the default SSH port of 22. When you press Open, PuTTY will open an SSH connection to your BeagleBone and give you a terminal window (the first time you connect to your BeagleBone, it will ask you whether you trust the SSH key; press Yes). Enter root as the username and press Enter to log in; you will be dropped into a Bash terminal. As in the Cloud9 IDE's terminals, from here you can use the Linux tools to move around the filesystem, create and edit files, and so on, and you can run the Python interactive interpreter to try out and debug Python code.

Connecting to the Internet

Your BeagleBone Black won't be able to access the Internet with the default network-over-USB configuration, but there are a couple of ways that you can connect your BeagleBone to the Internet.

Ethernet

The simplest option is to connect the BeagleBone to your network using an Ethernet cable between your BeagleBone and your router or a network switch. When the BeagleBone Black boots with an Ethernet connection, it will use DHCP to automatically request an IP address and register on your network. Once you have your BeagleBone registered on your network, you'll be able to log in to your router's interface from your web browser (usually found at http://192.168.1.1 or http://192.168.2.1) and find out the IP address that was assigned to your BeagleBone. Refer to your router's manual for more information.
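Whichever of the methods described here and below you use to find the address, a quick way to confirm from your PC that the board is up is to probe the two ports mentioned earlier: SSH on 22 and Cloud9 on 3000. The following is a small sketch using only the Python standard library; the address is the USB-network default and is only an assumption, so replace it with the one you actually found.

import socket

BEAGLEBONE_IP = "192.168.7.2"  # replace with the DHCP address if using Ethernet

def port_open(host, port, timeout=2.0):
    # Return True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in (("SSH", 22), ("Cloud9 IDE", 3000)):
    state = "reachable" if port_open(BEAGLEBONE_IP, port) else "not reachable"
    print("%s (port %d): %s" % (name, port, state))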
The current BeagleBone Black Debian images are configured to use the hostname beaglebone, so it should be pretty easy to find in your router's client list. If you are using a network on which you have no way of accessing this information through the router, you could use a tool such as Fing (http://www.overlooksoft.com) for Android or iPhone to scan the network and list the IP addresses of every device on it. Since this method results in your BeagleBone being assigned a new IP address, you'll need to use the new address to access the Getting Started pages and the Cloud9 IDE.

Network forwarding

If you don't have access to an Ethernet connection, or it's just more convenient to have your BeagleBone connected to your computer instead of your router, it is possible to forward your Internet connection to your BeagleBone over the USB network. On Windows, open your Network Connections window by navigating to it from the Control Panel or by opening the Start menu, typing ncpa.cpl, and pressing Enter. Locate the Linux USB Ethernet network interface and take note of the name; in my case, it's Local Area Network 4. This is the network interface used to connect to your BeagleBone.

First, right-click on the network interface that you are accessing the Internet through, in my case Wireless Network Connection, and select Properties. On the Sharing tab, check Allow other network users to connect through this computer's Internet connection, and select your BeagleBone's network interface from the dropdown. After pressing OK, Windows will assign the BeagleBone interface a static IP address, which will conflict with the static IP address of 192.168.7.2 that the BeagleBone is configured to request on the USB network interface. To fix this, right-click the Linux USB Ethernet interface and select Properties, then highlight Internet Protocol Version 4 (TCP/IPv4) and click on Properties. Select Obtain IP address automatically and click on OK.

Your Windows PC is now forwarding its Internet connection to the BeagleBone, but the BeagleBone is still not configured properly to access the Internet. The problem is that the BeagleBone's IP routing table doesn't include 192.168.7.1 as a gateway, so it doesn't know the network path to the Internet. Access a Cloud9 or SSH terminal, and use the route tool to add the gateway, as shown in the following command:

# route add default gw 192.168.7.1

Your BeagleBone should now have Internet access, which you can test by pinging a website:

root@beaglebone:/var/lib/cloud9# ping -c 3 graycat.io
PING graycat.io (198.100.47.208) 56(84) bytes of data.
64 bytes from 198.100.47.208.static.a2webhosting.com (198.100.47.208): icmp_req=1 ttl=55 time=45.6 ms
64 bytes from 198.100.47.208.static.a2webhosting.com (198.100.47.208): icmp_req=2 ttl=55 time=45.6 ms
64 bytes from 198.100.47.208.static.a2webhosting.com (198.100.47.208): icmp_req=3 ttl=55 time=46.0 ms

--- graycat.io ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 45.641/45.785/46.035/0.248 ms

The IP routing will be reset at boot up, so if you reboot your BeagleBone, the Internet connection will stop working. This can be easily solved by using cron, a Linux tool for scheduling the automatic running of commands. To add the correct gateway at boot, you'll need to edit the crontab file with the following command:

# crontab -e

This will open the crontab file in nano, which is a command-line text editor.
We can use the @reboot keyword to schedule the command to run after each reboot:

@reboot /sbin/route add default gw 192.168.7.1

Press Ctrl + X to exit nano, then press Y, and then Enter to save the file. Your forwarded Internet connection should now remain after rebooting.

Using the serial console

If you are unable to use a network connection to your BeagleBone Black, for instance, if your network is too slow for Cloud9 or you can't find the BeagleBone's IP address, there is still hope! The BeagleBone Black includes a 6-pin male connector, labeled J1, right next to the P9 expansion header (we'll learn more about the P8 and P9 expansion headers soon!). You'll need a USB to 3.3 V TTL serial converter, for example, from Adafruit (http://www.adafruit.com/products/70) or Logic Supply (http://www.logicsupply.com/components/beaglebone/accessories/ls-ttl3vt). You'll need to download and install the FTDI virtual COM port driver for your operating system from http://www.ftdichip.com/Drivers/VCP.htm, then plug the connector into the J1 header such that the black wire lines up with the header's pin 1 indicator. You can then use your favorite serial port terminal emulator, such as PuTTY or CoolTerm (http://freeware.the-meiers.org), and configure the serial port for a baud rate of 115200 with 1 stop bit and no parity. Once connected, press Enter and you should see a login prompt. Enter the user name root and you'll drop into a Bash shell. If you only need the console connection to find your IP address, you can do so using the following command:

# ip addr

Updating your software

If this is the first time you've booted your BeagleBone Black, or if you've just flashed a new image, it's best to start by ensuring your installed software packages are all up to date. You can do so using Debian's apt package manager:

# apt-get update && apt-get upgrade

This process might take a few minutes. Next, use the pip Python package manager to update to the latest versions of the PyBBIO and Adafruit_BBIO libraries:

# pip install --upgrade PyBBIO Adafruit_BBIO

As both libraries are currently in active development, it's worth running this command from time to time to make sure you have all the latest features.

The PyBBIO library

The PyBBIO library was developed with Arduino users in mind. It emulates the structure of an Arduino (http://arduino.cc) program, as well as the Arduino API where appropriate. If you've never seen an Arduino program, it consists of a setup() function, which is called once when the program starts, and a loop() function, which is called repeatedly until the end of time (or until you turn off the Arduino). PyBBIO accomplishes a similar structure by defining a run() function that is passed two callable objects: one that is called once when the program starts, and another that is called repeatedly until the program stops. So the basic PyBBIO template looks like this:

from bbio import *

def setup():
    pinMode(GPIO1_16, OUTPUT)

def loop():
    digitalWrite(GPIO1_16, HIGH)
    delay(500)
    digitalWrite(GPIO1_16, LOW)
    delay(500)

run(setup, loop)

The first line imports everything from the PyBBIO library (the Python package is installed with the name bbio). Then, two functions are defined, and they are passed to run(), which tells the PyBBIO loop to begin. In this example, setup() will be called once, which configures the GPIO pin GPIO1_16 as a digital output with the pinMode() function.
Then, loop() will be called until the PyBBIO loop is stopped, with each digitalWrite() call setting the GPIO1_16 pin to either a high (on) or low (off) state, and each delay() call causing the program to sleep for 500 milliseconds. The loop can be stopped by either pressing Ctrl + C or calling the stop() function. Any other error raised in your program will be caught, allowing PyBBIO to run any necessary cleanup, and then it will be reraised. Don't worry if the program doesn't make sense yet; we'll learn about all that soon!

Not everyone wants to use the Arduino-style loop, and it's not always suitable depending on the program you're writing. PyBBIO can also be used in a more Pythonic way; for example, the above program can be rewritten as follows:

import bbio

bbio.pinMode(bbio.GPIO1_16, bbio.OUTPUT)
while True:
    bbio.digitalWrite(bbio.GPIO1_16, bbio.HIGH)
    bbio.delay(500)
    bbio.digitalWrite(bbio.GPIO1_16, bbio.LOW)
    bbio.delay(500)

This still allows the bbio API to be used, but it is kept out of the global namespace.

The Adafruit_BBIO library

The Adafruit_BBIO library is structured differently than PyBBIO. While PyBBIO is structured such that, essentially, the entire API is accessed directly from the first level of the bbio package, Adafruit_BBIO instead has the package tree broken up by peripheral subsystem. For instance, to use the GPIO API you have to import the GPIO package:

from Adafruit_BBIO import GPIO

Similarly, to use the PWM API you would import the PWM package:

from Adafruit_BBIO import PWM

This structure follows a more standard Python library model, and can also save some space in your program's memory because you're only importing the parts you need (the difference is pretty minimal, but it is worth thinking about). The same program shown above using PyBBIO could be rewritten to use Adafruit_BBIO:

from Adafruit_BBIO import GPIO
import time

GPIO.setup("GPIO1_16", GPIO.OUT)
try:
    while True:
        GPIO.output("GPIO1_16", GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output("GPIO1_16", GPIO.LOW)
        time.sleep(0.5)
except KeyboardInterrupt:
    GPIO.cleanup()

Here the GPIO.setup() function is configuring the pin, and GPIO.output() is setting the state. Notice that we needed to import Python's built-in time library to sleep, whereas in PyBBIO we used the built-in delay() function. We also needed to explicitly catch KeyboardInterrupt (the Ctrl + C signal) to make sure all the cleanup is run before the program exits, whereas this is done automatically by PyBBIO. Of course, this means that you have much more control over when things such as initialization and cleanup happen using Adafruit_BBIO, which can be very beneficial depending on your program. There are some trade-offs, and the library you use should be chosen based on which model is better suited for your application.

Summary

In this article, you learned how to log in to the BeagleBone Black, get it connected to the Internet, and update and install the software we need. We also looked at the basic structure of programs using the PyBBIO and Adafruit_BBIO libraries, and talked about some of the advantages of each.

Resources for Article:

Further resources on this subject:
Overview of Chips [article]
Getting Started with Electronic Projects [article]
Beagle Boards [article]


Learning Embedded Linux Using Yocto: Virtualization

Packt
07 Jul 2015
37 min read
In this article by Alexandru Vaduva, author of the book Learning Embedded Linux Using the Yocto Project, you will be presented with information about various concepts related to Linux virtualization. As some of you might know, this subject is quite vast, and selecting only a few components to explain is also a challenge. I hope my decision will please most of you interested in this area. The information available in this article might not fit everyone's needs. For this purpose, I have attached multiple links for more detailed descriptions and documentation. As always, I encourage you to start reading and find out more, if necessary. I am aware that I cannot put all the necessary information in only a few words.

In any Linux environment today, Linux virtualization is not a new thing. It has been available for more than ten years and has advanced in a really quick and interesting manner. The question now does not revolve around whether virtualization is a solution for you, but rather around which virtualization solutions to deploy and what to virtualize.

(For more resources related to this topic, see here.)

Linux virtualization

The first benefit everyone sees when looking at virtualization is the increase in server utilization and the decrease in energy costs. Using virtualization, the workloads available on a server are maximized, which is very different from scenarios where the hardware uses only a fraction of its computing power. Virtualization can reduce the complexity of interaction with various environments and it also offers an easier-to-use management system. Today, working with a large number of virtual machines is not as complicated as interacting with only a few of them, because of the scalability most tools offer. Also, deployment time has really decreased: in a matter of minutes, you can configure and deploy an operating system template or create a virtual environment for a virtual appliance deployment.

One other benefit virtualization brings is flexibility. When a workload is just too big for the allocated resources, it can easily be duplicated or moved to another environment that suits its needs better, on the same hardware or on a more potent server. For a cloud-based solution to this problem, the sky is the limit. The limit may be imposed by the cloud type, on the basis of whether there are tools available for the host operating system.

Over time, Linux has been able to provide a number of great choices for every need and organization. Whether your task involves server consolidation in an enterprise data centre or improving a small nonprofit infrastructure, Linux should have a virtualization platform for your needs. You simply need to figure out where and which project you should choose.

Virtualization is extensive, mainly because it contains a broad range of technologies, and also because large portions of the terminology are not well defined. In this article, you will be presented only with components related to the Yocto Project and also to a new initiative that I personally am interested in. This initiative tries to make Network Function Virtualization (NFV) and Software Defined Networks (SDN) a reality and is called the Open Platform for NFV (OPNFV). It will be explained here briefly.

SDN and NFV

I have decided to start with this topic because I believe it is really important that all the research done in this area is starting to get traction with a number of open source initiatives from all sorts of areas and industries. Those two concepts are not new.
They have been around for 20 years since they were first described, but the last few years have made it possible for them to resurface as real and very feasible implementations. The focus of this article will be on NFV, since it has received the greatest amount of attention and also contains various implementation proposals.

NFV

NFV is a network architecture concept used to virtualize entire categories of network node functions into blocks that can be interconnected to create communication services. It is different from known virtualization techniques. It uses Virtual Network Functions (VNF) that can be contained in one or more virtual machines, which execute different processes and software components available on servers, switches, or even a cloud infrastructure. A couple of examples include virtualized load balancers, intrusion detection devices, firewalls, and so on.

Product development cycles in the telecommunications industry used to be very rigorous and long, because the various standards and protocols took a long time to reach adherence and quality targets. This made it possible for fast-moving organizations to become competitors and forced the established players to change their approach.

In 2013, an industry specification group published a white paper on software-defined networks and OpenFlow. The group was part of the European Telecommunications Standards Institute (ETSI) and was called Network Functions Virtualisation. After this white paper was published, more in-depth research papers were published, explaining things ranging from terminology definitions to various use cases, with references to vendors that could consider using NFV implementations.

ETSI NFV

The ETSI NFV workgroup has proved useful for the telecommunications industry, helping it create more agile development cycles and respond in time to demands from dynamic and fast-changing environments. SDN and NFV are two complementary concepts that are key enabling technologies in this regard and contain the main ingredients of the technology that is being developed by both the telecom and IT industries.

The NFV framework consists of six components:

NFV Infrastructure (NFVI): It is required to offer support to a variety of use cases and applications. It comprises the totality of software and hardware components that create the environment in which VNFs are deployed. It is a multitenant infrastructure that is responsible for leveraging multiple standard virtualization technologies and use cases at the same time. It is described in the following NFV Industry Specification Groups (NFV ISG) documents: NFV Infrastructure Overview, NFV Compute, NFV Hypervisor Domain, and NFV Infrastructure Network Domain.

NFV Management and Orchestration (MANO): It is the component responsible for decoupling the compute, networking, and storage components from the software implementation with the help of a virtualization layer. It requires the management of new elements and the orchestration of new dependencies between them, which in turn require certain standards of interoperability and a certain mapping.

NFV Software Architecture: It is related to the virtualization of already implemented network functions, such as proprietary hardware appliances. It implies the understanding of, and the transition from, a hardware implementation to a software one. The transition is based on various defined patterns that can be used in the process.
NFV Reliability and Availability: These are real challenges, and the work involved in this component started with the definition of various problems, use cases, requirements, and principles; it has set out to offer the same level of availability as legacy systems. The documentation for the reliability component only sets the stage for future work: it identifies various problems and indicates the best practices used in designing resilient NFV systems.

NFV Performance and Portability: The purpose of NFV, in general, is to transform the way the industry works with the networks of the future. For this purpose, it needs to prove itself a worthy solution for industry standards. This component explains how to apply the best practices related to performance and portability in a general VNF deployment.

NFV Security: Since it is a large component of the industry, NFV is concerned with, and also dependent on, the security of networking and cloud computing, which makes it critical for NFV to assure security. The Security Expert Group focuses on these concerns.

After all the documentation was in place, a number of proofs of concept needed to be executed in order to test the limitations of these components and adjust the theoretical components accordingly. They have also served to encourage the development of the NFV ecosystem.

SDN

Software-Defined Networking (SDN) is an approach to networking that offers administrators the possibility to manage various services through the abstraction of the available functionality. This is realized by decoupling the system into a control plane and a data plane: making decisions about where network traffic is sent belongs to the control plane, while forwarding the traffic is the job of the data plane. Of course, some method of communication between the control and data planes is required, so the OpenFlow mechanism entered into the equation first; however, other components could take its place as well.

The intention of SDN was to offer an architecture that is manageable, cost-effective, adaptable, and dynamic, as well as suitable for the dynamic and high-bandwidth scenarios that are common today. The OpenFlow component was the foundation of the SDN solution. The SDN architecture permits the following:

Direct programming: The control plane is directly programmable because it is completely decoupled from the data plane.

Programmatic configuration: SDN permits the management, configuration, and optimization of resources through programs. These programs can also be written by anyone, because they are not dependent on any proprietary components.

Agility: The abstraction between the two planes permits the adjustment of network flows according to the needs of a developer.

Central management: Logical components can be centered on the control plane, which offers a viewpoint of the network to other applications, engines, and so on.

Open standards and vendor neutrality: It is implemented using open standards that simplify SDN design and operation, because the number of instructions provided to controllers is smaller than in scenarios where multiple vendor-specific protocols and devices must be handled.
Also, meeting market requirements with traditional solutions would have been impossible: the newly emerging markets of mobile device communication, the Internet of Things (IoT), Machine to Machine (M2M), Industry 4.0, and others all require networking support. Taking into consideration the budgets available for further development, IT departments were all faced with a decision. It seems that the mobile device communication market decided to move toward open source, in the hope that this investment would prove its real capabilities and also lead to a brighter future.

OPNFV

The Open Platform for NFV project tries to offer an open source reference platform that is carrier-grade and tightly integrated, in order to help industry peers improve and move the NFV concept forward. Its purpose is to offer consistency, interoperability, and performance among the numerous blocks and projects that already exist. This platform will also try to work closely with a variety of open source projects and continuously help with integration, while at the same time filling the development gaps left by any of them.

This project is expected to lead to an increase in performance, reliability, serviceability, availability, and power efficiency, but at the same time also deliver an extensive platform for instrumentation. It will start with the development of an NFV infrastructure and a virtualized infrastructure management system, where it will combine a number of already available projects. Its reference system architecture is the x86 architecture. Although the project is very young (it was started in November 2014), it has had an accelerated start and already has a few implementation proposals. A number of large companies and organizations have already started working on their specific demos. OPNFV has not waited for them to finish and is already discussing a number of proposed projects and initiatives. These are intended both to meet the needs of its members and to assure them of the reliability of various components, such as continuous integration, fault management, test-bed infrastructure, and others.

The project has been leveraging as many open source projects as possible. All the adaptations made to these projects can be done in two places. Firstly, they can be made inside the project, if they do not require substantial functionality changes that could cause divergence from its purpose and roadmap. The second option complements the first and is necessary for changes that do not fall into the first category; these should be included somewhere in the OPNFV project's codebase. None of the changes should be upstreamed without proper testing within the development cycle of OPNFV.

Another important element that needs to be mentioned is that OPNFV does not use any specific or additional hardware. It only uses available hardware resources as long as the VI-Ha reference point is supported. This is already the case thanks to providers such as Intel for computing hardware, NetApp for storage hardware, and Mellanox for network hardware components.

The OPNFV board and technical steering committee have quite a large palette of open source projects.
They vary from Infrastructure as a Service (IaaS) and hypervisors to SDN controllers, and the list continues. This offers a large number of contributors the possibility to try out skills that they may not have had the time to work on, or that they wanted to learn but did not have the opportunity to. Also, a more diversified community offers a broader view of the same subject.

There is a large variety of appliances for the OPNFV project. The virtual network functions are diverse: for mobile deployments, mobile gateways (such as the Serving Gateway (SGW), the Packet Data Network Gateway (PGW), and so on) and related functions (such as the Mobility Management Entity (MME) and gateways); firewalls or application-level gateways and filters (web and e-mail traffic filters); and test and diagnostic equipment (Service-Level Agreement (SLA) monitoring). These VNF deployments need to be easy to operate, scale, and evolve, independently of the type of VNF that is deployed. OPNFV sets out to create a platform that has to support a set of qualities and use cases, as follows:

A common mechanism for the life-cycle management of VNFs, which includes deployment, instantiation, configuration, start and stop, upgrade/downgrade, and final decommissioning

A consistent mechanism to specify and interconnect VNFs, VNFCs, and PNFs that is independent of the physical network infrastructure, network overlays, and so on, that is, a virtual link

A common mechanism to dynamically instantiate new VNF instances or decommission sufficient ones to meet the current performance, scale, and network bandwidth needs

A mechanism to detect faults and failures in the NFVI, VIM, and other components of the infrastructure, as well as to recover from these failures

A mechanism to source/sink traffic from/to a physical network function to/from a virtual network function

NFVI as a Service to host different VNF instances from different vendors on the same infrastructure

There are some notable and easy-to-grasp use case examples that should be mentioned here. They are organized into four categories. Let's start with the first one, the Residential/Access category: it can be used to virtualize the home environment and it also provides fixed access to NFV. The next one is the data center category: it covers the virtualization of CDNs and the use cases that deal with it. The mobile category consists of the virtualization of mobile core networks and IMS, as well as the virtualization of mobile base stations. Lastly, the cloud category includes NFVIaaS, VNFaaS, the VNF forwarding graph (service chains), and the use cases of VNPaaS.

More information about this project and its various implementation components is available at https://www.opnfv.org/. For the definitions of missing terminology, please consult http://www.etsi.org/deliver/etsi_gs/NFV/001_099/003/01.02.01_60/gs_NFV003v010201p.pdf.

Virtualization support for the Yocto Project

The meta-virtualization layer tries to create a long- and medium-term production-ready layer specifically for embedded virtualization. The roles that this layer has are:

Simplifying the way collaborative benchmarking and research is done with tools such as KVM/LxC virtualization, combined with advanced core isolation and other techniques

Integrating and contributing to projects such as OpenFlow, Open vSwitch, LxC, dmtcp, CRIU, and others, which can be used with other components, such as OpenStack or Carrier Grade Linux
To summarize this in one sentence, this layer tries to provide support for constructing OpenEmbedded and Yocto Project-based virtualized solutions. The packages that are available in this layer, which I will briefly talk about, are as follows:

CRIU
Docker
LXC
irqbalance
libvirt
Xen
Open vSwitch

This layer can be used in conjunction with the meta-cloud-services layer, which offers cloud agents and API support for various cloud-based solutions. In this article, I am referring to both of these layers because I think it is fitting to present these two components together. Inside the meta-cloud-services layer, there are also a couple of packages that will be discussed and briefly presented, as follows:

openLDAP
SPICE
Qpid
RabbitMQ
Tempest
Cyrus-SASL
Puppet
oVirt
OpenStack

Having mentioned these components, I will now move on to the explanation of each of these tools. Let's start with the content of the meta-virtualization layer, more exactly with the CRIU package, a project that implements Checkpoint/Restore In Userspace for Linux. It can be used to freeze an already running application and checkpoint it to a hard drive as a collection of files. These checkpoints can be used to restore and execute the application from that point. It can be used as part of a number of use cases, as follows:

Live migration of containers: It is the primary use case for the project. The container is checkpointed and the resulting image is moved onto another box and restored there, making the whole experience almost unnoticeable to the user.

Seamless kernel upgrades: The kernel replacement activity can be done without stopping activities. The system can be checkpointed, the kernel replaced by calling kexec, and all the services restored afterwards.

Speeding up slow boot services: A service that has a slow boot procedure can be checkpointed after the first start-up is finished, and for consecutive starts it can be restored from that point.

Load balancing of networks: It is part of the TCP_REPAIR socket option and switches the socket into a special state. The socket is actually put into the state expected of it at the end of the operation. For example, if connect() is called, the socket will be put into an ESTABLISHED state as requested, without checking for acknowledgment of communication from the other end, so offloading can happen at the application level.

Desktop environment suspend/resume: It is based on the fact that the suspend/restore action for a screen session or an X application is by far faster than the close/open operation.

High-performance computing issues: It can be used both for load balancing of tasks over a cluster and for saving cluster node states in case a crash occurs. Having a number of snapshots for applications doesn't hurt anybody.

Duplication of processes: It is similar to a remote fork() operation.

Snapshots for applications: A series of application states can be saved and reverted to if necessary. It can be used both as a redo of a desired application state and for debugging purposes.

Save ability in applications that do not have this option: An example of such an application could be a game in which, after reaching a certain level, establishing a checkpoint is exactly what you need.

Migrate a forgotten application onto the screen: If you have forgotten to include an application in your screen session and you are already there, CRIU can help with the migration process.
Debugging of applications that have hung: For services that are stuck and need a quick restart, a copy of the service can be used for restoration. A dump of the process can also be taken and, through debugging, the cause of the problem can be found.

Application behavior analysis on a different machine: For applications that could behave differently from one machine to another, a snapshot of the application in question can be taken and transferred to the other machine. Here, the debugging process can also be an option.

Dry running updates: Before a system or kernel update is done on a system, its services and critical applications can be duplicated onto a virtual machine, and after the system update and all the test cases pass, the real update can be done.

Fault-tolerant systems: It can be used successfully for process duplication on other machines.

The next element is irqbalance, a distributed hardware interrupt system that is available across multiple processors and multiprocessor systems. It is, in fact, a daemon used to balance interrupts across multiple CPUs, and its purpose is to offer better performance as well as better I/O operation balance on SMP systems. It has alternatives, such as smp_affinity, which could achieve maximum performance in theory, but lacks the flexibility that irqbalance provides.

The libvirt toolkit can be used to connect with the virtualization capabilities available in recent Linux kernel versions; it is licensed under the GNU Lesser General Public License. It offers support for a large number of packages, as follows:

The KVM/QEMU Linux hypervisor
The Xen hypervisor
The LXC Linux container system
The OpenVZ Linux container system
User Mode Linux, a paravirtualized kernel
Other hypervisors, including VirtualBox, VMware ESX, GSX, Workstation and Player, IBM PowerVM, Microsoft Hyper-V, Parallels, and Bhyve

Besides these packages, it also offers support for storage on a large variety of filesystems, such as IDE, SCSI, or USB disks, Fibre Channel, LVM, and iSCSI or NFS, as well as support for virtual networks. It is the building block for other higher-level applications and tools that focus on the virtualization of a node, and it does this in a secure way. It also offers the possibility of a remote connection. For more information about libvirt, take a look at its project goals and terminology at http://libvirt.org/goals.html.

The next one is Open vSwitch, a production-quality implementation of a multilayer virtual switch. This software component is licensed under Apache 2.0 and is designed to enable massive network automation through programmatic extensions. The Open vSwitch package, also abbreviated as OVS, provides a two-stack layer for hardware virtualization and also supports a large number of the standards and protocols available in a computer network, such as sFlow, NetFlow, SPAN, RSPAN, CLI, LACP, 802.1ag, and so on.

Xen is a hypervisor with a microkernel design that provides services allowing multiple computer operating systems to execute on the same architecture. It was first developed at the University of Cambridge in 2003, and was released under the GNU General Public License version 2. This piece of software runs in a more privileged state and is available for the ARM, IA-32, and x86-64 instruction sets. A hypervisor is a piece of software that is concerned with the CPU scheduling and memory management of various domains.
It does this from domain 0 (dom0), which controls all the other unprivileged domains, called domU; Xen boots from a bootloader and usually loads a paravirtualized operating system into the dom0 host domain.

Linux Containers (LXC) is the next element available in the meta-virtualization layer. It is a well-known set of tools and libraries that offer virtualization at the operating system level by providing isolated containers on a Linux control host machine. It combines the functionality of kernel control groups (cgroups) with support for isolated namespaces to provide an isolated environment. It has received a fair amount of attention, mostly due to Docker, which will be briefly mentioned a bit later. It is also considered a lightweight alternative to full machine virtualization. Both of these options, containers and machine virtualization, have a fair number of advantages and disadvantages. With the first option, containers offer low overhead by sharing certain components, but they may turn out not to provide good isolation. Machine virtualization is exactly the opposite of this and offers a great solution for isolation, at the cost of a bigger overhead. These two solutions could also be seen as complementary, but this is only my personal view of the two. In reality, each of them has its particular set of advantages and disadvantages that could sometimes make them uncomplementary as well. More information about Linux containers is available at https://linuxcontainers.org/.

The last component of the meta-virtualization layer that will be discussed is Docker, an open source piece of software that tries to automate the method of deploying applications inside Linux containers. It does this by offering an abstraction layer over LXC. This software package is able to use the resources of the operating system, by which I mean the functionality of the Linux kernel, while keeping applications isolated from the operating system. It can do this either through LXC or other alternatives, such as libvirt and systemd-nspawn, which are seen as indirect implementations, or directly through the libcontainer library, which has been around since version 0.9 of Docker. Docker is a great component if you want to obtain automation for distributed systems, such as large-scale web deployments, service-oriented architectures, continuous deployment systems, database clusters, private PaaS, and so on. More information about its use cases is available at https://www.docker.com/resources/usecases/. Make sure you take a look at this website; interesting information can often be found there.

After finishing with the meta-virtualization layer, I will move on to the meta-cloud-services layer, which contains various elements. I will start with the Simple Protocol for Independent Computing Environments (SPICE). This can be described as a remote-display system for virtualized desktop devices. It initially started as closed source software, and after two years it was decided to make it open source. It then became an open standard for interaction with devices, regardless of whether they are virtualized or not. It is built on a client-server architecture, making it able to deal with both physical and virtualized devices.
The interaction between the backend and the frontend is realized through VD-Interfaces (VDI), and its current focus is remote access to QEMU/KVM virtual machines.

Next on the list is oVirt, a virtualization platform that offers a web interface. It is easy to use and helps in the management of virtual machines, virtualized networks, and storage. Its architecture consists of an oVirt Engine and multiple nodes. The engine is the component that comes equipped with a user-friendly interface to manage logical and physical resources. It also runs the virtual machines, which could be either oVirt nodes, Fedora, or CentOS hosts. The only downside of using oVirt is that it only offers support for a limited number of hosts, as follows:

Fedora 20
CentOS 6.6, 7.0
Red Hat Enterprise Linux 6.6, 7.0
Scientific Linux 6.6, 7.0

As a tool, it is really powerful. It offers integration with libvirt for Virtual Desktops and Servers Manager (VDSM) communication with virtual machines, and it also supports the SPICE communication protocol, which enables remote desktop sharing. It is a solution that was started and is mainly maintained by Red Hat. It is the base element of Red Hat Enterprise Virtualization (RHEV), but one interesting thing that should be noted is that Red Hat is now not only a supporter of projects such as oVirt and Aeolus, but has also been a platinum member of the OpenStack Foundation since 2012. For more information on projects such as oVirt, Aeolus, and RHEV, the following links may be useful to you: http://www.redhat.com/promo/rhev3/?sc_cid=70160000000Ty5wAAC&offer_id=70160000000Ty5NAAS, http://www.aeolusproject.org/, and http://www.ovirt.org/Home.

I will move on to a different component now. Here, I am referring to the open source implementation of the Lightweight Directory Access Protocol, simply called openLDAP. Although it has a somewhat debated license, called the OpenLDAP Public License, which is similar in essence to the BSD license, it is not recorded at opensource.org, making it uncertified by the Open Source Initiative (OSI). This software component comes as a suite of elements, as follows:

A standalone LDAP daemon that plays the role of a server, called slapd
A number of libraries that implement the LDAP protocol
Last but not least, a series of tools and utilities that also include a couple of client samples

There are also a number of additions that should be mentioned, such as ldapc++, a set of libraries written in C++; JLDAP, libraries written in Java; LMDB, a memory-mapped database library; Fortress, a role-based identity management SDK, also written in Java; and JDBC-LDAP, a bridge driver written in Java.

Cyrus-SASL is a generic client-server library implementation for Simple Authentication and Security Layer (SASL) authentication. SASL is a method used for adding authentication support to connection-based protocols. A connection-based protocol adds a command that identifies and authenticates a user to the requested server and, if negotiation is required, an additional security layer is added between the protocol and the connection for security purposes. More information about SASL is available in RFC 2222, at http://www.ietf.org/rfc/rfc2222.txt. For a more detailed description of Cyrus SASL, refer to http://www.sendmail.org/~ca/email/cyrus/sysadmin.html.
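To make the directory-service side of this slightly more concrete, here is a hedged sketch of querying a slapd server from Python. It assumes the third-party ldap3 package, which is not part of the layers discussed in this article, and the server address, bind DN, and base DN are placeholders for illustration only:

from ldap3 import ALL, Connection, Server

# Placeholder connection details; replace them with your own slapd instance.
server = Server("ldap://ldap.example.org", get_info=ALL)
conn = Connection(server, user="cn=admin,dc=example,dc=org",
                  password="secret", auto_bind=True)

# List the common name of every person entry below the example base DN.
conn.search("dc=example,dc=org", "(objectClass=person)", attributes=["cn"])
for entry in conn.entries:
    print(entry.cn)

conn.unbind()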
Qpid is a messaging tool developed by Apache that understands the Advanced Message Queuing Protocol (AMQP) and has support for various languages and platforms. AMQP is an open source protocol designed for high-performance, reliable messaging over a network. More information about AMQP is available at http://www.amqp.org/specification/1.0/amqp-org-download, where you can find details about the protocol specification as well as about the project in general.

The Qpid project pushes the development of the AMQP ecosystem, and it does this by offering message brokers and APIs that can be used in any developer application that intends to use AMQP messaging as part of its product. To do this, the following is done:

Keeping the source code open source
Making AMQP available for a large variety of computing environments and programming languages
Offering the necessary tools to simplify the development process of an application
Creating a messaging infrastructure to make sure that other services can integrate well with the AMQP network
Creating a messaging product that makes integration with AMQP trivial for any programming language or computing environment

Make sure that you take a look at Qpid Proton at http://qpid.apache.org/proton/overview.html for this. More information about the preceding functionalities can be found at http://qpid.apache.org/components/index.html#messaging-apis.

RabbitMQ is another message broker software component that implements AMQP and is also available as open source. It has a number of components, as follows:

The RabbitMQ exchange server
Gateways for HTTP, the Streaming Text Oriented Message Protocol (STOMP), and the Message Queue Telemetry Transport (MQTT) protocol
AMQP client libraries for a variety of programming languages, most notably Java, Erlang, and the .NET Framework
A plugin platform for a number of custom components, which also offers a collection of predefined ones:
Shovel: A plugin that executes the copy/move operation for messages between brokers
Management: It enables the control and monitoring of brokers and clusters of brokers
Federation: It enables sharing of messages between brokers at the exchange level

You can find more information regarding RabbitMQ in the RabbitMQ documentation at http://www.rabbitmq.com/documentation.html.

Comparing the two, Qpid and RabbitMQ, it can be concluded that RabbitMQ is the stronger option and that it has fantastic documentation. This makes it the first choice for the OpenStack Foundation, as well as for readers interested in benchmarking information covering more than these two frameworks, which is available at http://blog.x-aeon.com/2013/04/10/a-quick-message-queue-benchmark-activemq-rabbitmq-hornetq-qpid-apollo/.

The next element is Puppet, an open source configuration management system that allows IT infrastructure to have certain states defined and then enforces those states. By doing this, it offers a great automation system for system administrators. This project is developed by Puppet Labs and was released under the GNU General Public License until version 2.7.0. After this, it moved to the Apache License 2.0 and is now available in two flavors:

The open source Puppet version: It is mostly similar to what was described above and is capable of configuration management solutions that permit the definition and automation of states. It is available for Linux and UNIX as well as Mac OS X and Windows.
The Puppet Enterprise edition: It is a commercial version that goes beyond the capabilities of the open source Puppet and permits the automation of the configuration and management process.

Puppet defines a declarative language that is later used for system configuration. The configuration can be applied directly to the system or compiled as a catalogue and deployed to a target using a client-server paradigm, usually the REST API. Another component is the agent that enforces the resources available in the manifest. The resource abstraction is, of course, done through an abstraction layer that defines the configuration in higher-level terms that are very different from operating system-specific commands. If you visit http://docs.puppetlabs.com/, you will find more documentation related to Puppet and other Puppet Labs tools.

With all this in place, I believe it is time to present the main component of the meta-cloud-services layer, called OpenStack. It is a cloud operating system that is based on controlling a large number of components which, together, offer pools of compute, storage, and networking resources. All of them are managed through a dashboard that is, of course, offered by another component and gives administrators control. It also offers users the possibility of provisioning resources from the same web interface.

OpenStack is primarily used as an IaaS solution, its components are maintained by the OpenStack Foundation, and it is available under the Apache License version 2. In the Foundation today, there are more than 200 companies that contribute to the source code and to the general development and maintenance of the software. At the heart of it all are its components; each component also has a Python module used for simple interaction and automation:

Compute (Nova): It is used for the hosting and management of cloud computing systems. It manages the life cycles of the compute instances of an environment. It is responsible for the spawning, decommissioning, and scheduling of various virtual machines on demand. With regard to hypervisors, KVM is the preferred option, but other options, such as Xen and VMware, are also viable.

Object Storage (Swift): It is used for storage and data structure retrieval via a RESTful HTTP API. It is a scalable and fault-tolerant system that permits data replication, with objects and files available on multiple disk drives. It is developed mainly by an object storage software company called SwiftStack.

Block Storage (Cinder): It provides persistent block storage for OpenStack instances. It manages the creation and the attach and detach actions for block devices. In a cloud, a user manages their own devices, so a vast majority of storage platforms and scenarios should be supported. For this purpose, it offers a pluggable architecture that facilitates the process.

Networking (Neutron): It is the component responsible for network-related services, also known as Network Connectivity as a Service. It provides an API for network management and also makes sure that certain limitations are prevented. It has an architecture based on pluggable modules to ensure that as many networking vendors and technologies as possible are supported.

Dashboard (Horizon): It provides web-based administrator and user graphical interfaces for interacting with the resources made available by all the other components.
It is also designed with extensibility in mind, because it is able to interact with other components responsible for monitoring and billing, as well as with additional management tools. It also offers the possibility of rebranding according to the needs of commercial vendors.
- Identity Service (Keystone): It is an authentication and authorization service. It offers support for multiple forms of authentication and also for existing backend directory services such as LDAP. It provides a catalogue of users and the resources they can access.
- Image Service (Glance): It is used for the discovery, storage, registration, and retrieval of virtual machine images. A number of already stored images can be used as templates. OpenStack also provides an operating system image for testing purposes. Glance is the only module capable of adding, deleting, duplicating, and sharing OpenStack images between various servers and virtual machines. All the other modules interact with the images using the available Glance APIs.
- Telemetry (Ceilometer): It is a module that provides billing, benchmarking, and statistical results across all current and future components of OpenStack with the help of numerous counters that permit extensibility. This makes it a very scalable module.
- Orchestrator (Heat): It is a service that manages multiple composite cloud applications with the help of various template formats, such as Heat Orchestration Templates (HOT) or AWS CloudFormation. The communication is done both over a CloudFormation-compatible Query API and an OpenStack REST API.
- Database (Trove): It provides cloud Database as a Service functionality that is both reliable and scalable. It uses relational and nonrelational database engines.
- Bare Metal Provisioning (Ironic): It is a component that provisions bare metal machines instead of virtual machines. It started as a fork of the Nova Baremetal driver and grew to become the best solution for bare-metal provisioning. It also offers a set of plugins for interaction with various bare-metal hardware. It is used by default with PXE and IPMI, but of course, with the help of the available plugins, it can offer extended support for various vendor-specific functionalities.
- Multiple Tenant Cloud Messaging (Zaqar): It is, as the name suggests, a multitenant cloud messaging service for web developers who are interested in Software as a Service (SaaS). They can use it to send messages between various components by using a number of communication patterns. However, it can also be used with other components for surfacing events to end users as well as for communication in the over-cloud layer. Its former name was Marconi and it also provides the possibility of scalable and secure messaging.
- Elastic Map Reduce (Sahara): It is a module that tries to automate the provisioning of Hadoop clusters. It only requires the definition of various fields, such as the Hadoop version, the various topology nodes, hardware details, and so on. After this, in a few minutes, a Hadoop cluster is deployed and ready for interaction. It also offers the possibility of various configuration changes after deployment.
Having mentioned all this, a conceptual architecture is presented in the following image to show the ways in which the preceding components interact. To automate the deployment of such an environment in production, automation tools, such as the previously mentioned Puppet, can be used.
Take a look at this diagram:
Now, let's move on and see how such a system can be deployed using the functionalities of the Yocto Project. For this activity to start, all the required metadata layers should be put together. Besides the already available Poky repository, other layers are also required, and they are defined in the layer index on OpenEmbedded's website because, this time, the README file is incomplete:
git clone -b dizzy git://git.openembedded.org/meta-openembedded
git clone -b dizzy git://git.yoctoproject.org/meta-virtualization
git clone -b icehouse git://git.yoctoproject.org/meta-cloud-services
source oe-init-build-env ../build-controller
After the controller build directory is created, it needs to be configured. Inside the conf/local.conf file, add the corresponding machine configuration, such as qemux86-64, and inside the conf/bblayers.conf file, the BBLAYERS variable should be defined accordingly. There are extra metadata layers, besides the ones that are already available. The ones that should be defined in this variable are:
meta-cloud-services
meta-cloud-services/meta-openstack-controller-deploy
meta-cloud-services/meta-openstack
meta-cloud-services/meta-openstack-qemu
meta-openembedded/meta-oe
meta-openembedded/meta-networking
meta-openembedded/meta-python
meta-openembedded/meta-filesystem
meta-openembedded/meta-webserver
meta-openembedded/meta-ruby
After the configuration is done, the controller image is built with the bitbake openstack-image-controller command. The controller can be started using the runqemu qemux86-64 openstack-image-controller kvm nographic qemuparams="-m 4096" command. After finishing this activity, the deployment of the compute node can be started in this way:
source oe-init-build-env ../build-compute
With the new build directory created, and since most of the work of the build process has already been done for the controller, build directories such as downloads and sstate-cache can be shared between them. This is indicated through the DL_DIR and SSTATE_DIR variables. The difference between the two conf/bblayers.conf files is that the one for the build-compute build directory replaces meta-cloud-services/meta-openstack-controller-deploy with meta-cloud-services/meta-openstack-compute-deploy. This time, the build is done with bitbake openstack-image-compute and should finish faster. Having completed the build, the compute node can also be booted, using the runqemu qemux86-64 openstack-image-compute kvm nographic qemuparams="-m 4096 -smp 4" command. The next step is to load the CirrOS test image into OpenStack, as follows:
wget download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
scp cirros-0.3.2-x86_64-disk.img root@<compute_ip_address>:~
ssh root@<compute_ip_address>
. /etc/nova/openrc
glance image-create --name "TestImage" --is-public true --container-format bare --disk-format qcow2 --file /home/root/cirros-0.3.2-x86_64-disk.img
Having done all of this, the user is free to access the Horizon web interface at http://<compute_ip_address>:8080/. The login user is admin and the password is password. Here, you can play and create new instances, interact with them, and, in general, do whatever crosses your mind. Do not worry if you've done something wrong to an instance; you can delete it and start again.
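Because each OpenStack component also ships a Python client module, the same test image can be booted from a script instead of through Horizon. The following is only a rough sketch and is not part of the original walkthrough: it assumes that the python-novaclient package is available, that the admin/password credentials used above apply, that the Identity endpoint listens on port 5000 of your controller, and that the default m1.tiny flavor exists. Adjust these values to match your own deployment:

#!/usr/bin/env python
# Sketch: boot the CirrOS image registered earlier with glance image-create.
# The endpoint and credential values below are assumptions; change them to
# match your controller before running this.
from novaclient import client

nova = client.Client("2", "admin", "password", "admin",
                     "http://<controller_ip_address>:5000/v2.0/")

image = nova.images.find(name="TestImage")    # the image uploaded above
flavor = nova.flavors.find(name="m1.tiny")    # smallest default flavor

server = nova.servers.create(name="cirros-test",
                             image=image.id,
                             flavor=flavor.id)
print("Created instance %s, current status: %s" % (server.id, server.status))

The other services can be scripted in the same way through their respective python-*client modules.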
The last element from the meta-cloud-services layer is the Tempest integration test suite for OpenStack. It is represented through a set of tests that are executed on the OpenStack trunk to make sure everything is working as it should. It is very useful for any OpenStack deployment. More information about Tempest is available at https://github.com/openstack/tempest.
Summary
In this article, you were not only presented with information about a number of virtualization concepts, such as NFV, SDN, VNF, and so on, but also with a number of open source components that contribute to everyday virtualization solutions. I offered you examples and even a small exercise to make sure that the information remains with you even after reading this book. I hope I made some of you curious about certain things. I also hope that some of you will look into projects that were not presented here, such as the OpenDaylight (ODL) initiative, which has only been mentioned in an image as an implementation suggestion. If this is the case, I can say I have fulfilled my goal.
Resources for Article:
Further resources on this subject: Veil-Evasion [article] Baking Bits with Yocto Project [article] An Introduction to the Terminal [article]

Installing and Configuring Network Monitoring Software

Packt
02 Jun 2015
9 min read
This article, written by Bill Pretty and Glenn Vander Veer, authors of the book Building Networks and Servers Using BeagleBone, will serve as an installation guide for the software that will be used to monitor the traffic on your local network. These utilities can help determine which devices on your network are hogging the bandwidth and slowing down the network for the other devices. Here are the topics that we are going to cover:
- Installing traceroute and My Traceroute (MTR, or Matt's Traceroute): These utilities will give you a real-time view of the connection between one node and another
- Installing Nmap: This utility is a network scanner that can list all the hosts on your network and all the services available on those hosts
- Installing iptraf-ng: This utility gathers various network traffic information and statistics
(For more resources related to this topic, see here.)
Installing Traceroute
Traceroute is a tool that can show the path from one node on a network to another. This can help determine the ideal placement of a router to maximize wireless bandwidth in order to stream music and videos from the BeagleBone server to remote devices. Traceroute can be installed with the following command:
sudo apt-get install traceroute
Once traceroute is installed, it can be run to find the path from the BeagleBone to any server anywhere in the world. For example, here's the route from my BeagleBone to the Canadian Google servers:
Now, it is time to decipher all the information that is presented. The first line of output shows the parameters that traceroute will use:
traceroute to google.ca (74.125.225.23), 30 hops max, 60 byte packets
This gives the hostname, the IP address returned by the DNS server, the maximum number of hops to be taken, and the size of the data packet to be sent. The maximum number of hops can be changed with the -m flag and can be up to 255. In the context of this book, this will not have to be changed. After the first line, the next few lines show the trip from the BeagleBone, through the intermediate hosts (or hops), to the Google.ca server. Each line follows the following format:
hop_number host_name (host IP_address) packet_round_trip_times
From the command that was run previously (specifically hop number 2):
2 10.149.206.1 (10.149.206.1) 15.335 ms 17.319 ms 17.232 ms
Here's a breakdown of the output:
- The hop number 2: This is a count of the number of hosts between this host and the originating host. The higher the number, the greater the number of computers that the traffic has to go through to reach its destination.
- 10.149.206.1: This denotes the hostname. This is the result of a reverse DNS lookup on the IP address. If no information is returned from the DNS query (as in this case), the IP address of the host is given instead.
- (10.149.206.1): This is the actual host IP address.
- Various numbers: These are the round-trip times for a packet to go from the BeagleBone to the server and back again. These numbers will vary depending on network traffic, and lower is better.
Sometimes, traceroute will return some asterisks (*). This indicates that the packet has not been acknowledged by the host. If there are consecutive asterisks and the final destination is not reached, then there may be a routing problem. In a local network trace, it is most likely a firewall that is blocking the data packet.
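The hop lines described above are also easy to consume from a script, which can be handy if you want the BeagleBone to log route timings over time. The following short Python sketch is not part of the original article; it simply assumes that traceroute is already installed, shells out to it, and prints the hop number, host, and first round-trip time of each hop:

#!/usr/bin/env python
# Run traceroute and parse lines of the form:
#   hop_number host_name (IP_address) rtt ms rtt ms rtt ms
import re
import subprocess

HOP_LINE = re.compile(r"^\s*(\d+)\s+(\S+)\s+\(([\d.]+)\)\s+([\d.]+) ms")

def trace(host):
    output = subprocess.check_output(["traceroute", host]).decode()
    for line in output.splitlines():
        match = HOP_LINE.match(line)
        if match:
            hop, name, ip, rtt = match.groups()
            print("hop %s: %s (%s) %s ms" % (hop, name, ip, rtt))
        elif "*" in line:
            print("unanswered hop: %s" % line.strip())

if __name__ == "__main__":
    trace("google.ca")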
Installing My Traceroute
My Traceroute (MTR) is an extension of traceroute that probes the routers on the path from the packet source to the destination and keeps track of the response times of the hops. It does this repeatedly so that the response times can be averaged.
Now, install mtr with the following command:
sudo apt-get install mtr
After it is run, mtr will provide quite a bit more information to look at, which will look like the following:
While the output may look similar, the big advantage over traceroute is that the output is constantly updated. This allows you to accumulate trends and averages and also see how network performance varies over time. When using traceroute, there is a possibility that the packets that were sent to each hop happened to make the trip without incident, even in a situation where the route is suffering from intermittent packet loss. The mtr utility allows you to monitor this by gathering data over a wider range of time.
Here's an mtr trace from my BeagleBone to my Android smartphone:
Here's another trace, after I changed the orientation of the antennae of my router:
As you can see, the original orientation was almost 100 milliseconds faster for ping traffic.
Installing Nmap
Nmap is designed to allow the scanning of networks in order to determine which hosts are up and what services they are offering. Nmap supports a large number of scanning options, which are overkill for what will be done in this book. Nmap is installed with the following command:
sudo apt-get install nmap
Answer Yes to install nmap and its dependent packages.
Using Nmap
After it is installed, run the following command to see all the hosts that are currently on the network:
nmap -T4 -F <your_local_ip_range>
The -T4 option sets the timing template to be used, and the -F option is for fast scanning. There are other options that can be used and found via the nmap manpage. Here, your_local_ip_range is within the range of addresses assigned by your router.
Here's a node scan of my local network. If you have a lot of devices on your local network, this command may take a long time to complete.
Now, I know that I have more nodes on my network, but they don't show up. This is because the command we ran didn't tell nmap to explicitly query each IP address to see whether the host responds; it only queried common ports that may be open to traffic. Instead, use the -Pn option in the command, which tells nmap to skip host discovery and probe every address in the range, even hosts that do not answer a ping. This will scan more ports on each address to determine whether the host is active or not.
Here, we can see that there are definitely more hosts registered in the router device table. This scan will attempt to scan a host IP address even if the device is powered off. Resetting the router and running the same scan will scan the same address range, but it will not return any device names for devices that are not powered at the time of the scan.
You will notice that after scanning, nmap reports that some IP addresses' ports are closed and some are filtered. Closed ports are usually maintained on the addresses of devices that are locked down by their firewall. Filtered ports are on the addresses that will be handled by the router because there actually isn't a node assigned to these addresses.
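If you would rather drive these scans from a script, for example to log the hosts on your network at regular intervals, the scan can be wrapped in a few lines of Python. The sketch below is not part of the original article and assumes you have installed the third-party python-nmap module (for instance, with pip install python-nmap) in addition to the nmap package itself:

#!/usr/bin/env python
# Run the same fast scan as above through the python-nmap wrapper and
# print the state of every host that answers. Adjust the address range
# to match the one assigned by your router.
import nmap

def list_hosts(ip_range="192.168.1.0/24"):
    scanner = nmap.PortScanner()
    scanner.scan(hosts=ip_range, arguments="-T4 -F")
    for host in scanner.all_hosts():
        name = scanner[host].hostname() or "unknown"
        print("%s (%s) is %s" % (host, name, scanner[host].state()))

if __name__ == "__main__":
    list_hosts()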
Here's a part of the output from an nmap scan of my Windows machine:
Here's a part of the output of a scan of the BeagleBone:
Installing iptraf-ng
Iptraf-ng is a utility that monitors traffic on any of the interfaces or IP addresses on your network via custom filters. Because iptraf-ng is based on the ncurses libraries, we will have to install them first, before downloading and compiling the actual iptraf-ng package. To install ncurses, run the following command:
sudo apt-get install libncurses5-dev
Here's how you will install ncurses and its dependent packages:
Once ncurses is installed, download and extract the iptraf-ng tarball so that it can be built. At the time of writing this book, iptraf-ng version 1.1.4 was available. This will change over time, and a quick search on Google will give you the latest and greatest version to download. You can download this version with the following command:
wget https://fedorahosted.org/releases/i/p/iptraf-ng/iptraf-ng-<current_version_number>.tar.gz
The following screenshot shows how to download the iptraf-ng tarball:
After the download has completed, extract the tarball using the following command:
tar -xzf iptraf-ng-<current_version_number>.tar.gz
Navigate to the iptraf-ng directory created by the tar command and issue the following commands:
./configure
make
sudo make install
After these commands are complete, iptraf-ng is ready to run, using the following command:
sudo iptraf-ng
When the program starts, you will be presented with the following screen:
Configuring iptraf-ng
As an example, we are going to monitor all incoming traffic to the BeagleBone. In order to do this, iptraf-ng should be configured. Selecting the Configure... menu item will show you the following screen:
Here, settings can be changed by highlighting an option in the left-hand side window and pressing Enter to select a new value, which will be shown in the Current Settings window. In this case, I have enabled all the options except Logging. Exit the configuration screen and enter the Filter Status screen. This is where we will set up the filter to only monitor traffic coming to the BeagleBone and from it. Then, the following screen will be presented:
Selecting IP... will create an IP filter, and the following subscreen will pop up:
Selecting Define new filter... will allow the creation and saving of a filter that will only display traffic for the IP address and the IP protocols that are selected, as shown in the following screenshot:
Here, I have put in the BeagleBone's IP address and set it to match all IP protocols. Once saved, return to the main menu and select IP traffic monitor. Here, you will be able to select the network interfaces to be monitored. Because my BeagleBone is connected to my wired network, I have selected eth0. The following screenshot shows us the options:
If all went well with your filter, you should see traffic to your BeagleBone and from it. Here are the entries for my PuTTY session; 192.168.17.2 is my Windows 8 machine, and 192.168.17.15 is my BeagleBone:
Here's an image of the traffic generated by browsing the DLNA server from the Windows Explorer:
Moreover, here's the traffic from my Android smartphone running a DLNA player, browsing the shared directories that were set up:
Summary
In this article, you saw how to install and configure the software that will be used to monitor the traffic on your local network.
With these programs and a bit of experience, you can determine which devices on your network are hogging the bandwidth and find out whether you have any unauthorized users. Resources for Article: Further resources on this subject: Learning BeagleBone [article] Protecting GPG Keys in BeagleBone [article] Home Security by BeagleBone [article]

The BSP Layer

Packt
02 Apr 2015
14 min read
In this article by Alex González, author of the book Embedded Linux Projects Using Yocto Project Cookbook, we will see how embedded Linux projects require both custom hardware and software. An early task in the development process is to test different hardware reference boards and to select one to base our design on. We have chosen the Wandboard, a Freescale i.MX6-based platform, as it is an affordable and open board, which makes it perfect for our needs. On an embedded project, it is usually a good idea to start working on the software as soon as possible, probably before the hardware prototypes are ready, so that it is possible to start working directly with the reference design. But at some point, the hardware prototypes will be ready and changes will need to be introduced into Yocto to support the new hardware. This article will explain how to create a BSP layer to contain those hardware-specific changes, as well as show how to work with the U-Boot bootloader and the Linux kernel, the components which are likely to take most of the customization work.
(For more resources related to this topic, see here.)
Creating a custom BSP layer
These custom changes are kept on a separate Yocto layer, called a Board Support Package (BSP) layer. This separation is best for future updates and patches to the system. A BSP layer can support any number of new machines and any new software feature that is linked to the hardware itself.
How to do it...
By convention, Yocto layer names start with meta, short for metadata. A BSP layer may then add a bsp keyword, and finally a unique name. We will call our layer meta-bsp-custom. There are several ways to create a new layer:
- Manually, once you know what is required
- By copying the meta-skeleton layer included in Poky
- By using the yocto-layer command-line tool
You can have a look at the meta-skeleton layer in Poky and see that it includes the following elements:
- A layer.conf file, where the layer configuration variables are set
- A COPYING.MIT license file
- Several directories named with the recipes prefix, with example recipes for BusyBox, the Linux kernel and an example module, an example service recipe, an example user management recipe, and a multilib example
How it works...
We will cover some of the use cases that appear in the available examples, so for our needs, we will use the yocto-layer tool, which allows us to create a minimal layer. Open a new terminal and change to the fsl-community-bsp directory. Then set up the environment as follows:
$ source setup-environment wandboard-quad
Note that once the build directory has been created, the MACHINE variable has already been configured in the conf/local.conf file and can be omitted from the command line. Change to the sources directory and run:
$ yocto-layer create bsp-custom
Note that the yocto-layer tool will add the meta prefix to your layer, so you don't need to. It will prompt a few questions:
- The layer priority, which is used to decide the layer precedence in cases where the same recipe (with the same name) exists in several layers simultaneously. It is also used to decide in what order bbappends are applied if several layers append the same recipe. Leave the default value of 6. This will be stored in the layer's conf/layer.conf file as BBFILE_PRIORITY.
- Whether to create example recipes and append files. Let's leave the default no for the time being.
Our new layer has the following structure:
meta-bsp-custom/
   conf/layer.conf
   COPYING.MIT
   README
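For reference, the conf/layer.conf file that the tool generates will look roughly like the following. This is only a sketch based on the standard template, so the exact contents may differ slightly between Yocto releases:

# Add the layer to BBPATH so its conf and classes directories are found
BBPATH .= ":${LAYERDIR}"

# Tell BitBake where to look for recipes and append files
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
            ${LAYERDIR}/recipes-*/*/*.bbappend"

BBFILE_COLLECTIONS += "bsp-custom"
BBFILE_PATTERN_bsp-custom = "^${LAYERDIR}/"
BBFILE_PRIORITY_bsp-custom = "6"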
There's more...
The first thing to do is to add this new layer to your project's conf/bblayers.conf file. It is a good idea to add it to your template conf directory's bblayers.conf.sample file too, so that it is correctly appended when creating new projects. The highlighted line in the following code shows the addition of the layer to the conf/bblayers.conf file:
LCONF_VERSION = "6"

BBPATH = "${TOPDIR}"
BSPDIR := "${@os.path.abspath(os.path.dirname(d.getVar('FILE', True)) + '/../..')}"

BBFILES ?= ""
BBLAYERS = " \
  ${BSPDIR}/sources/poky/meta \
  ${BSPDIR}/sources/poky/meta-yocto \
  ${BSPDIR}/sources/meta-openembedded/meta-oe \
  ${BSPDIR}/sources/meta-openembedded/meta-multimedia \
  ${BSPDIR}/sources/meta-fsl-arm \
  ${BSPDIR}/sources/meta-fsl-arm-extra \
  ${BSPDIR}/sources/meta-fsl-demos \
  ${BSPDIR}/sources/meta-bsp-custom \
"
Now, BitBake will parse the bblayers.conf file and find the conf/layer.conf file from your layer. In it, we find the following line:
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
        ${LAYERDIR}/recipes-*/*/*.bbappend"
It tells BitBake which directories to parse for recipes and append files. You need to make sure your directory and file hierarchy in this new layer matches the given pattern, or you will need to modify it.
BitBake will also find the following:
BBPATH .= ":${LAYERDIR}"
The BBPATH variable is used to locate the bbclass files and the configuration and files included with the include and require directives. The search finishes with the first match, so it is best to keep filenames unique.
Some other variables we might consider defining in our conf/layer.conf file are:
LAYERDEPENDS_bsp-custom = "fsl-arm"
LAYERVERSION_bsp-custom = "1"
The LAYERDEPENDS variable is a space-separated list of other layers your layer depends on, and the LAYERVERSION variable specifies the version of your layer in case other layers want to add a dependency to a specific version. The COPYING.MIT file specifies the license for the metadata contained in the layer. The Yocto Project is licensed under the MIT license, which is also compatible with the General Public License (GPL). This license applies only to the metadata, as every package included in your build will have its own license. The README file will need to be modified for your specific layer. It is usual to describe the layer and provide any other layer dependencies and usage instructions.
Adding a new machine
When customizing your BSP, it is usually a good idea to introduce a new machine for your hardware. These are kept under the conf/machine directory in your BSP layer. The usual thing to do is to base it on the reference design. For example, wandboard-quad has the following machine configuration file:
include include/wandboard.inc

SOC_FAMILY = "mx6:mx6q:wandboard"

UBOOT_MACHINE = "wandboard_quad_config"

KERNEL_DEVICETREE = "imx6q-wandboard.dtb"

MACHINE_FEATURES += "bluetooth wifi"

MACHINE_EXTRA_RRECOMMENDS += " bcm4329-nvram-config bcm4330-nvram-config "
A machine based on the Wandboard design could define its own machine configuration file, wandboard-quad-custom.conf, as follows:
include conf/machine/include/wandboard.inc

SOC_FAMILY = "mx6:mx6q:wandboard"

UBOOT_MACHINE = "wandboard_quad_custom_config"

KERNEL_DEVICETREE = "imx6q-wandboard-custom.dtb"

MACHINE_FEATURES += "wifi"
The wandboard.inc file now resides on a different layer, so in order for BitBake to find it, we need to specify the full path from the BBPATH variable in the corresponding layer.
This machine defines its own U-Boot configuration file and Linux kernel device tree in addition to defining its own set of machine features.
Adding a custom device tree to the Linux kernel
To add this device tree file to the Linux kernel, we need to add the device tree file to the arch/arm/boot/dts directory under the Linux kernel source and also modify the Linux build system's arch/arm/boot/dts/Makefile file to build it as follows:
+dtb-$(CONFIG_ARCH_MXC) += imx6q-wandboard-custom.dtb
This code uses diff formatting, where the lines with a minus prefix are removed, the ones with a plus sign are added, and the ones without a prefix are left as reference. Once the patch is prepared, it can be added to the meta-bsp-custom/recipes-kernel/linux/linux-wandboard-3.10.17/ directory and the Linux kernel recipe appended by adding a meta-bsp-custom/recipes-kernel/linux/linux-wandboard_3.10.17.bbappend file with the following content:
SRC_URI_append = " file://0001-ARM-dts-Add-wandboard-custom-dts-file.patch"
Adding a custom U-Boot machine
In the same way, the U-Boot source may be patched to add a new custom machine. Bootloader modifications are not as likely to be needed as kernel modifications though, and most custom platforms will leave the bootloader unchanged. The patch would be added to the meta-bsp-custom/recipes-bsp/u-boot/u-boot-fslc-v2014.10/ directory and the U-Boot recipe appended with a meta-bsp-custom/recipes-bsp/u-boot/u-boot-fslc_2014.10.bbappend file with the following content:
SRC_URI_append = " file://0001-boards-Add-wandboard-custom.patch"
Adding a custom formfactor file
Custom platforms can also define their own formfactor file with information that the build system cannot obtain from other sources, such as defining whether a touchscreen is available or defining the screen orientation. These are defined in the recipes-bsp/formfactor/ directory in our meta-bsp-custom layer. For our new machine, we could define a meta-bsp-custom/recipes-bsp/formfactor/formfactor_0.0.bbappend file to include a formfactor file as follows:
FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
And the machine-specific meta-bsp-custom/recipes-bsp/formfactor/formfactor/wandboard-quad-custom/machconfig file would be as follows:
HAVE_TOUCHSCREEN=1
Debugging the Linux kernel booting process
We have seen the most general techniques for debugging the Linux kernel. However, some special scenarios require the use of different methods. One of the most common scenarios in embedded Linux development is the debugging of the booting process. This section will explain some of the techniques used to debug the kernel's booting process.
How to do it...
A kernel crashing on boot usually provides no output whatsoever on the console. As daunting as that may seem, there are techniques we can use to extract debug information. Early crashes usually happen before the serial console has been initialized, so even if there were log messages, we would not see them. The first thing we will show is how to enable early log messages that do not need the serial driver. In case that is not enough, we will also show techniques to access the log buffer in memory.
How it works...
Debugging booting problems has two distinct phases: before and after the serial console is initialized. After the serial is initialized and we can see serial output from the kernel, debugging can use the techniques described earlier.
Before the serial is initialized, however, there is basic UART support in ARM kernels that allows you to use the serial port from early boot. This support is compiled in with the CONFIG_DEBUG_LL configuration variable. This adds support for a debug-only series of assembly functions that allow you to output data to a UART. The low-level support is platform specific, and for the i.MX6, it can be found under arch/arm/include/debug/imx.S. The code allows this low-level UART to be configured through the CONFIG_DEBUG_IMX_UART_PORT configuration variable. We can use this support directly by using the printascii function as follows:
extern void printascii(const char *);
printascii("Literal string\n");
However, much more preferred would be to use the early_print function, which makes use of the function explained previously and accepts formatted input in printf style; for example:
early_print("%08x\t%s\n", p->nr, p->name);
Dumping the kernel's printk buffer from the bootloader
Another useful technique to debug Linux kernel crashes at boot is to analyze the kernel log after the crash. This is only possible if the RAM memory is persistent across reboots and does not get initialized by the bootloader. As U-Boot keeps the memory intact, we can use this method to peek at the kernel log in memory in search of clues. Looking at the kernel source, we can see how the log ring buffer is set up in kernel/printk/printk.c and also note that it is stored in __log_buf. To find the location of the kernel buffer, we will use the System.map file created by the Linux build process, which maps symbols to virtual addresses, using the following command:
$ grep __log_buf System.map
80f450c0 b __log_buf
To convert the virtual address to a physical address, we look at how __virt_to_phys() is defined for ARM:
x - PAGE_OFFSET + PHYS_OFFSET
The PAGE_OFFSET variable is defined in the kernel configuration as:
config PAGE_OFFSET
       hex
       default 0x40000000 if VMSPLIT_1G
       default 0x80000000 if VMSPLIT_2G
       default 0xC0000000
Some of the ARM platforms, like the i.MX6, will dynamically patch the __virt_to_phys() translation at runtime, so PHYS_OFFSET will depend on where the kernel is loaded into memory. As this can vary, the calculation we just saw is platform specific. For the Wandboard, the physical address for 0x80f450c0 is 0x10f450c0. We can then force a reboot using a magic SysRq key, which needs to be enabled in the kernel configuration with CONFIG_MAGIC_SYSRQ, but is enabled in the Wandboard by default:
$ echo b > /proc/sysrq-trigger
We then dump that memory address from U-Boot as follows:
> md.l 0x10f450c0
10f450c0: 00000000 00000000 00210038 c6000000   ........8.!.....
10f450d0: 746f6f42 20676e69 756e694c 6e6f2078   Booting Linux on
10f450e0: 79687020 61636973 5043206c 78302055    physical CPU 0x
10f450f0: 00000030 00000000 00000000 00000000   0...............
10f45100: 009600a8 a6000000 756e694c 65762078   ........Linux ve
10f45110: 6f697372 2e33206e 312e3031 2e312d37   rsion 3.10.17-1.
10f45120: 2d322e30 646e6177 72616f62 62672b64   0.2-wandboard+gb
10f45130: 36643865 62323738 20626535 656c6128   e8d6872b5eb (ale
10f45140: 6f6c4078 696c2d67 2d78756e 612d7068   x@log-linux-hp-a
10f45150: 7a6e6f67 20296c61 63636728 72657620   gonzal) (gcc ver
10f45160: 6e6f6973 392e3420 2820312e 29434347   sion 4.9.1 (GCC)
10f45170: 23202920 4d532031 52502050 504d4545    ) #1 SMP PREEMP
10f45180: 75532054 6546206e 35312062 3a323120   T Sun Feb 15 12:
10f45190: 333a3733 45432037 30322054 00003531   37:37 CET 2015..
10f451a0: 00000000 00000000 00400050 82000000   ........P.@.....
10f451b0: 3a555043 4d524120 50203776 65636f72   CPU: ARMv7 Proce
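If you want to double-check the virtual-to-physical translation used above, the arithmetic is easy to reproduce with a few lines of Python. This is only a sanity-check sketch; the 0x80000000 PAGE_OFFSET and the 0x10000000 physical offset are the values implied by this Wandboard example and will differ on other platforms:

#!/usr/bin/env python
# Reproduce the __virt_to_phys() calculation: phys = virt - PAGE_OFFSET + PHYS_OFFSET
PAGE_OFFSET = 0x80000000   # VMSPLIT_2G default from the kernel configuration above
PHYS_OFFSET = 0x10000000   # start of RAM where the kernel is loaded on this board

def virt_to_phys(virt):
    return virt - PAGE_OFFSET + PHYS_OFFSET

print(hex(virt_to_phys(0x80f450c0)))   # prints 0x10f450c0, the address passed to md.l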
There's more...
Another method is to store the kernel log messages and kernel panics or oopses into persistent storage. The Linux kernel's persistent store support (CONFIG_PSTORE) allows you to log to persistent memory that is kept across reboots. To log panic and oops messages into persistent memory, we need to configure the kernel with the CONFIG_PSTORE_RAM configuration variable, and to log kernel messages, we need to configure the kernel with CONFIG_PSTORE_CONSOLE. We then need to configure the location of the persistent storage on an unused memory location, but keep the last 1 MB of memory free. For example, we could pass the following kernel command-line arguments to reserve a 2 MB region starting at 0x30000000:
ramoops.mem_address=0x30000000 ramoops.mem_size=0x200000
We would then mount the persistent storage by adding it to /etc/fstab so that it is available on the next boot as well:
/etc/fstab:
pstore /pstore pstore defaults 0 0
We then mount it as follows:
# mkdir /pstore
# mount /pstore
Next, we force a reboot with the magic SysRq key:
# echo b > /proc/sysrq-trigger
On reboot, we will see a file inside /pstore:
-r--r--r-- 1 root root 4084 Sep 16 16:24 console-ramoops
This will have contents such as the following:
SysRq : Resetting
CPU3: stopping
CPU: 3 PID: 0 Comm: swapper/3 Not tainted 3.14.0-rc4-1.0.0-wandboard-37774-g1eae
[<80014a30>] (unwind_backtrace) from [<800116cc>] (show_stack+0x10/0x14)
[<800116cc>] (show_stack) from [<806091f4>] (dump_stack+0x7c/0xbc)
[<806091f4>] (dump_stack) from [<80013990>] (handle_IPI+0x144/0x158)
[<80013990>] (handle_IPI) from [<800085c4>] (gic_handle_irq+0x58/0x5c)
[<800085c4>] (gic_handle_irq) from [<80012200>] (__irq_svc+0x40/0x70)
Exception stack(0xee4c1f50 to 0xee4c1f98)
We should move it out of /pstore or remove it completely so that it doesn't occupy memory.
Summary
This article guides you through the customization of the BSP for your own product. It then explains how to debug the Linux kernel booting process.
Resources for Article:
Further resources on this subject: Baking Bits with Yocto Project [article] An Introduction to the Terminal [article] Linux Shell Scripting – various recipes to help you [article]

Dealing with Interrupts

Packt
02 Mar 2015
19 min read
This article is written by Francis Perea, the author of the book Arduino Essentials. In all our previous projects, we have been constantly looking for events to occur. We have been polling, but looking for events to occur requires a relatively big effort and wastes CPU cycles only to notice that nothing happened. In this article, we will learn about interrupts as a totally new way to deal with events, being notified about them instead of looking for them constantly. Interrupts may be really helpful when developing projects in which fast or unknown events may occur, and thus we will see a very interesting project which will lead us to develop a digital tachograph for a computer-controlled motor. Are you ready? Here we go!
(For more resources related to this topic, see here.)
The concept of an interruption
As you may have intuited, an interrupt is a special mechanism the CPU incorporates to have a direct channel to be noticed when some event occurs. Most Arduino microcontrollers have two of these:
- Interrupt 0 on digital pin 2
- Interrupt 1 on digital pin 3
But some models, such as the Mega2560, come with up to five interrupt pins. Once an interrupt has been notified, the CPU completely stops what it was doing and goes on to look at it, by running a special dedicated function in our code called an Interrupt Service Routine (ISR). When I say that the CPU completely stops, I mean that even functions such as delay() or millis() won't be updated while the ISR is being executed. Interrupts can be programmed to respond to different changes of the signal connected to the corresponding pin and thus the Arduino language has four predefined constants to represent each of these four modes:
- LOW: It will trigger the interrupt whenever the pin gets a LOW value
- CHANGE: The interrupt will be triggered when the pin changes its value from HIGH to LOW or vice versa
- RISING: It will trigger the interrupt when the signal goes from LOW to HIGH
- FALLING: It is just the opposite of RISING; the interrupt will be triggered when the signal goes from HIGH to LOW
The ISR
The function that the CPU will call whenever an interrupt occurs is so important to the micro that it has to follow a few rules:
- It can't have any parameters
- It can't return anything
- Interrupts can be executed only one at a time
Regarding the first two points, they mean that we can neither pass nor receive any data from the ISR directly, but we have other means to achieve this communication with the function. We will use global variables for it. We can set and read from a global variable inside an ISR, but even so, these variables have to be declared in a special way. We have to declare them as volatile, as we will see later on in the code. The third point, which specifies that only one ISR can be attended to at a time, is what makes the millis() function unable to be updated. The millis() function relies on an interrupt to be updated, and this doesn't happen if another interrupt is already being served. As you may understand, the ISR is critical to the correct code execution in a microcontroller. As a rule of thumb, we will try to keep our ISRs as simple as possible and leave all the heavyweight processing to occur outside of them, in the main loop of our code.
The tachograph project To understand and manage interrupts in our projects, I would like to offer you a very particular one, a tachograph, a device that is present in all our cars and whose mission is to account for revolutions, normally the engine revolutions, but also in brake systems such as Anti-lock Brake System (ABS) and others. Mechanical considerations Well, calling it mechanical perhaps is too much, but let's make some considerations regarding how we are going to make our project account for revolutions. For this example project, I have used a small DC motor driven through a small transistor and, like in lots of industrial applications, an encoded wheel is a perfect mechanism to read the number of revolutions. By simply attaching a small disc of cardboard perpendicularly to your motor shaft, it is very easy to achieve it. By using our old friend, the optocoupler, we can sense something between its two parts, even with just a piece of cardboard with a small slot in just one side of its surface. Here, you can see the template I elaborated for such a disc, the cross in the middle will help you position the disc as perfectly as possible, that is, the cross may be as close as possible to the motor shaft. The slot has to be cut off of the black rectangle as shown in the following image: The template for the motor encoder Once I printed it, I glued it to another piece of cardboard to make it more resistant and glued it all to the crown already attached to my motor shaft. If yours doesn't have a surface big enough to glue the encoder disc to its shaft, then perhaps you can find a solution by using just a small piece of dough or similar to it. Once the encoder disc is fixed to the motor and spins attached to the motor shaft, we have to find a way to place the optocoupler in a way that makes it able to read through the encoder disc slot. In my case, just a pair of drops of glue did the trick, but if your optocoupler or motor doesn't allow you to apply this solution, I'm sure that a pair of zip ties or a small piece of dough can give you another way to fix it to the motor too. In the following image, you can see my final assembled motor with its encoder disc and optocoupler ready to be connected to the breadboard through alligator clips: The complete assembly for the motor encoder Once we have prepared our motor encoder, let's perform some tests to see it working and begin to write code to deal with interruptions. A simple interrupt tester Before going deep inside the whole code project, let's perform some tests to confirm that our encoder assembly is working fine and that we can correctly trigger an interrupt whenever the motor spins and the cardboard slot passes just through the optocoupler. The only thing you have to connect to your Arduino at the moment is the optocoupler; we will now operate our motor by hand and in a later section, we will control its speed from the computer. The test's circuit schematic is as follows: A simple circuit to test the encoder Nothing new in this circuit, it is almost the same as the one used in the optical coin detector, with the only important and necessary difference of connecting the wire coming from the detector side of the optocoupler to pin 2 of our Arduino board, because, as said in the preceding text, the interrupt 0 is available only through that pin. For this first test, we will make the encoder disc spin by hand, which allows us to clearly perceive when the interrupt triggers. 
For the rest of this example, we will use the LED included with the Arduino board, connected to pin 13, as a way to visually indicate that the interrupts have been triggered.
Our first interrupt and its ISR
Once we have connected the optocoupler to the Arduino and prepared things to trigger some interrupts, let's see the code that we will use to test our assembly. The objective of this simple sketch is to toggle the status of an LED every time an interrupt occurs. In the proposed tester circuit, the LED status variable will be changed every time the slot passes through the optocoupler:
/*
 Chapter 09 - Dealing with interrupts
 A simple tester
 By Francis Perea for Packt Publishing
*/

// A LED will be used to notify the change
#define ledPin 13

// Global variables we will use
// A variable to be used inside ISR
volatile int status = LOW;

// A function to be called when the interrupt occurs
void revolution(){
  // Invert LED status
  status=!status;
}

// Configuration of the board: just one output
void setup() {
  pinMode(ledPin, OUTPUT);
  // Assign the revolution() function as an ISR of interrupt 0
  // Interrupt will be triggered when the signal goes from
  // LOW to HIGH
  attachInterrupt(0, revolution, RISING);
}

// Sketch execution loop
void loop(){
  // Set LED status
  digitalWrite(ledPin, status);
}
Let's take a look at its most important aspects. Apart from the LED pin, we declare a variable to account for the changes that occur. It will be updated in the ISR of our interrupt; so, as I told you earlier, we declare it as follows:
volatile int status = LOW;
Following this, we declare the ISR function, revolution(), which as we already know doesn't receive any parameters nor return any value. And as we said earlier, it must be as simple as possible. In our test case, the ISR simply inverts the value of the global volatile variable to its opposite value, that is, from LOW to HIGH and from HIGH to LOW. To allow our ISR to be called whenever an interrupt 0 occurs, in the setup() function, we make a call to the attachInterrupt() function, passing three parameters to it:
- Interrupt: The interrupt number to assign the ISR to
- ISR: The name, without the parentheses, of the function that will act as the ISR for this interrupt
- Mode: One of the already explained modes that define when exactly the interrupt will be triggered
In our case, the concrete statement is as follows:
attachInterrupt(0, revolution, RISING);
This makes the function revolution() the ISR of interrupt 0, which will be triggered when the signal goes from LOW to HIGH. Finally, in our main loop there is little to do: simply update the LED based on the current value of the status variable that is going to be updated inside the ISR. If everything went right, you should see the LED toggle every time the slot passes through the optocoupler, as a consequence of the interrupt being triggered and the revolution() function inverting the value of the status variable that is used in the main loop to set the LED accordingly.
A dial tachograph
For a more complete example in this section, we will build a tachograph, a device that will present the current revolutions per minute of the motor in a visual manner by using a dial. The motor speed will be commanded serially from our computer by reusing some of the code from our previous projects. It is not going to be very complicated if we include some way to warn about an excessive number of revolutions and even cut the engine in an extreme case to protect it, is it?
The complete schematic of such a big circuit is shown in the following image. Don't get scared by the number of components, as we have already seen them all in action before:
The tachograph circuit
As you may see, we will use a total of five pins of our Arduino board to sense and command such a set of peripherals:
- Pin 2: This is the interrupt 0 pin and thus it will be used to connect the output of the optocoupler.
- Pin 3: It will be used to deal with the servo to move the dial.
- Pin 4: We will use this pin to activate the sound alarm once the engine current has been cut off to prevent overcharge.
- Pin 6: This pin will be used to deal with the motor transistor that allows us to vary the motor speed based on the commands we receive serially. Remember to use a PWM pin if you choose to use another one.
- Pin 13: Used to indicate with an LED an excessive number of revolutions per minute prior to cutting the engine off.
There are also two more pins which, although not physically connected, will be used: pins 0 and 1, given that we are going to talk to the device serially from the computer.
Breadboard connections diagram
There are some wires crossed in the previous schematic, and perhaps you can see the connections better in the following breadboard connection image:
Breadboard connection diagram for the tachograph
The complete tachograph code
This is going to be a project full of features and that is why it has such a number of devices to interact with. Let's summarize the features of the dial tachograph:
- The motor speed is commanded from the computer via serial communication with up to five commands:
  - Increase motor speed (+)
  - Decrease motor speed (-)
  - Totally stop the motor (0)
  - Put the motor at full throttle (*)
  - Reset the motor after a stall (R)
- Motor revolutions will be detected and counted by using an encoder and an optocoupler
- Current revolutions per minute will be visually presented with a dial operated by a servomotor
- It gives a visual indication via an LED of a high number of revolutions
- In case a maximum number of revolutions is reached, the motor current will be cut off and an acoustic alarm will sound
With such a number of features, it is normal that the code for this project is going to be a bit longer than our previous sketches.
Here is the code: /*  Chapter 09 - Dealing with interrupt  Complete tachograph system  By Francis Perea for Packt Publishing */   #include <Servo.h>   //The pins that will be used #define ledPin 13 #define motorPin 6 #define buzzerPin 4 #define servoPin 3   #define NOTE_A4 440 // Milliseconds between every sample #define sampleTime 500 // Motor speed increment #define motorIncrement 10 // Range of valir RPMs, alarm and stop #define minRPM  0 #define maxRPM 10000 #define alarmRPM 8000 #define stopRPM 9000   // Global variables we will use // A variable to be used inside ISR volatile unsigned long revolutions = 0; // Total number of revolutions in every sample long lastSampleRevolutions = 0; // A variable to convert revolutions per sample to RPM int rpm = 0; // LED Status int ledStatus = LOW; // An instace on the Servo class Servo myServo; // A flag to know if the motor has been stalled boolean motorStalled = false; // Thr current dial angle int dialAngle = 0; // A variable to store serial data int dataReceived; // The current motor speed int speed = 0; // A time variable to compare in every sample unsigned long lastCheckTime;   // A function to be called when the interrupt occurs void revolution(){   // Increment the total number of   // revolutions in the current sample   revolutions++; }   // Configuration of the board void setup() {   // Set output pins   pinMode(motorPin, OUTPUT);   pinMode(ledPin, OUTPUT);   pinMode(buzzerPin, OUTPUT);   // Set revolution() as ISR of interrupt 0   attachInterrupt(0, revolution, CHANGE);   // Init serial communication   Serial.begin(9600);   // Initialize the servo   myServo.attach(servoPin);   //Set the dial   myServo.write(dialAngle);   // Initialize the counter for sample time   lastCheckTime = millis(); }   // Sketch execution loop void loop(){    // If we have received serial data   if (Serial.available()) {     // read the next char      dataReceived = Serial.read();      // Act depending on it      switch (dataReceived){        // Increment speed        case '+':          if (speed<250) {            speed += motorIncrement;          }          break;        // Decrement speed        case '-':          if (speed>5) {            speed -= motorIncrement;          }          break;                // Stop motor        case '0':          speed = 0;          break;            // Full throttle           case '*':          speed = 255;          break;        // Reactivate motor after stall        case 'R':          speed = 0;          motorStalled = false;          break;      }     //Only if motor is active set new motor speed     if (motorStalled == false){       // Set the speed motor speed       analogWrite(motorPin, speed);     }   }   // If a sample time has passed   // We have to take another sample   if (millis() - lastCheckTime > sampleTime){     // Store current revolutions     lastSampleRevolutions = revolutions;     // Reset the global variable     // So the ISR can begin to count again     revolutions = 0;     // Calculate revolution per minute     rpm = lastSampleRevolutions * (1000 / sampleTime) * 60;     // Update last sample time     lastCheckTime = millis();     // Set the dial according new reading     dialAngle = map(rpm,minRPM,maxRPM,180,0);     myServo.write(dialAngle);   }   // If the motor is running in the red zone   if (rpm > alarmRPM){     // Turn on LED     digitalWrite(ledPin, HIGH);   }   else{     // Otherwise turn it off     digitalWrite(ledPin, LOW);   }   // If the motor has exceed maximum RPM   if (rpm > stopRPM){     // 
Stop the motor     speed = 0;     analogWrite(motorPin, speed);     // Disable it until a 'R' command is received     motorStalled = true;     // Make alarm sound     tone(buzzerPin, NOTE_A4, 1000);   }   // Send data back to the computer   Serial.print("RPM: ");   Serial.print(rpm);   Serial.print(" SPEED: ");   Serial.print(speed);   Serial.print(" STALL: ");   Serial.println(motorStalled); }
It is the first time in this article that I have nothing to explain regarding the code that hasn't already been explained before. I have commented everything so that the code can be easily read and understood. In general lines, the code declares both the constants and global variables that will be used, and the ISR for the interrupt. In the setup section, all the initializations of the different subsystems that need to be set up before use are made: pins, interrupts, serial, and servo. The main loop begins by looking for serial commands and basically updates the speed value and the stall flag if command R is received. The final motor speed setting only occurs in case the stall flag is not on, which will occur in case the motor reaches the stopRPM value. Continuing with the main loop, the code looks to see whether a sample time has passed, in which case the revolutions are stored to compute the real revolutions per minute (rpm), and the global revolutions counter incremented inside the ISR is set to 0 to begin again. The current rpm value is mapped to an angle to be presented by the dial and thus the servo is set accordingly. Next, a pair of controls are made:
- One to see if the motor is getting into the red zone by exceeding the alarmRPM value, and thus turning the alarm LED on
- And another to check if the stopRPM value has been reached, in which case the motor will be automatically cut off, the motorStalled flag is set to true, and the acoustic alarm is triggered
When the motor has been stalled, it won't accept changes in its speed until it has been reset by issuing an R command via serial communication. As a last action, the code sends some information back to the Serial Monitor as another form of feedback for the operator at the computer, and this should look something like the following screenshot:
Serial Monitor showing the tachograph in action
Modular development
It has been quite a complex project in that it incorporates up to six different subsystems: optocoupler, motor, LED, buzzer, servo, and serial, but it has also helped us to understand that projects need to be developed by using a modular approach. We have worked with and tested every one of these subsystems before, and that is the way it should usually be done. By developing your projects in such a modular way, it will be easy to assemble and program the whole system. As you may see in the following screenshot, only by using such a modular way of working will you be able to connect and understand such a mess of wires:
A working desktop may get a bit messy
Summary
I'm sure you have got the point regarding interrupts with all the things we have seen in this article. We have met and understood what an interrupt is and how the CPU attends to it by running an ISR, and we have even learned about their special characteristics and restrictions, and that we should keep our ISRs as short as possible. On the programming side, the only thing necessary to work with interrupts is to correctly attach the ISR with a call to the attachInterrupt() function.
From the point of view of hardware, we have assembled an encoder that has been attached to a spinning motor to account for its revolutions. Finally, we have the code. We have seen a relatively long sketch, which is a sign that we are beginning to master the platform, are able to deal with a bigger number of peripherals, and that our projects require more complex software each time, both to deal with these peripherals and to accomplish all the other tasks necessary to meet the project specifications.
Resources for Article:
Further resources on this subject: The Arduino Mobile Robot [article] Using the Leap Motion Controller with Arduino [article] Android and Udoo Home Automation [article]

Android and UDOO for Home Automation

Packt
20 Feb 2015
6 min read
This article, written by Emanuele Palazzetti, the author of Getting Started with UDOO, will teach us about home automation and using sensors to monitor the CO2 emissions of the wood heater in our home.
(For more resources related to this topic, see here.)
During the last few years, the maker culture has greatly improved the way hobbyists, students, and, more generally, technology enthusiasts create new hardware devices. The advent of prototyping boards such as Arduino, together with a widespread open source philosophy, has changed how and where ideas are realized. If you're a maker, or you have a friend who really likes building prototypes and devices, probably both of you have already transformed the garage or the personal studio into a home lab. This process is so spontaneous that nowadays newcomers to the makers community begin by creating their first device at home.
UDOO and Home Automation
Like other communities, the makers family has sustained its ideas by joining its passion with the Do It Yourself (DIY) philosophy, which led makers to build and use creative devices in their everyday life. The DIY movement was the key factor that triggered home automation built with open source platforms, making this process more fun and maker-friendly. Indeed, if we take a look at some projects released on the Internet, we can find thousands of prototypes, usually composed of an Arduino together with a Raspberry Pi, or any other combination of similar prototyping boards. However, in this scenario, we have a new alternative proposed by the UDOO board: a standalone computer that provides a multidevelopment-platform solution for Linux and Android operating systems with an integrated Arduino Due. Thanks to its flexibility, UDOO combines the best from two different worlds: the hardware makers and the software programmers. Applying the UDOO board to home automation is a natural process because of its power and versatility. Furthermore, the capability to install the Android operating system increases the capabilities of this board, because it offers an out-of-the-box mobile applications ecosystem that can be easily reused in our appliances, enhancing the user experience of our prototype.
Since 2011, Android has supported interaction with an Arduino-compatible device through the Android Accessory Development Kit (ADK) reference. Designed to implement compelling accessories that can be connected to Android-powered devices, this reference defines the Android Open Accessory (AOA) protocol used to communicate with an Arduino device through a USB On-The-Go (OTG) port or via Bluetooth. UDOO makes use of this protocol and opens a serial communication channel between the two processors soldered on the same board, which run the Android operating system and the microcontroller code respectively.
Mastering the ADK is not so easy, especially at the first approach. For this reason, the community has developed an Android library that greatly simplifies the use of the ADK: the Android ADKToolkit (http://adktoolkit.org). To put our hands on some code, we can imagine a scenario in which we want to monitor the carbon dioxide (CO2) emissions produced by the wood heater in our home. The first step is to connect the CO2 sensor to our board and use the manufacturer's library, if available, in the Arduino sketch to retrieve the detected CO2 value.
Using the ADK implementation, we can send this value back to UDOO's main processor, which runs the Android operating system, so that it can use its powerful APIs, and a more powerful processor, to visualize or process the collected data. The following example is part of an Arduino sketch used to send sensor data back to the Android application using the ADK implementation:

uint8_t buffer[128];
int co2Sensor = 0;

void loop() {
  Usb.Task();
  if (adk.isReady()) {
    // Hypothetical function that
    // gets the value from the sensor
    co2Sensor = readFromCO2Sensor();
    buffer[0] = co2Sensor;
    // Sends the value to Android
    adk.write(1, buffer);
    delay(1000);
  }
}

On the Android side, using a traditional AsyncTask method or a more powerful ExecutorService, we should read this value from the buffer. This task is greatly simplified by the ADKToolkit library, which requires the initialization of the AdkManager class that holds the connection and exposes a handler to open or close the communication with Arduino. This instance can be initialized during the onCreate() activity callback with the following code:

private AdkManager mAdkManager;

@Override
protected void onCreate(Bundle savedInstanceState) {
  super.onCreate(savedInstanceState);
  setContentView(R.layout.hello_world);
  mAdkManager = new AdkManager(this);
  mAdkManager.open();
}

In a separate thread, we can use the mAdkManager instance to read data from the ADK buffer. The following is an example that reads the message from the buffer and uses the getInt() method to parse the value:

private class SensorThread implements Runnable {
  @Override
  public void run() {
    // Reads detected CO2 from ADK
    AdkMessage response = mAdkManager.read();
    int co2 = response.getInt();
    // Continues the execution
    doSomething(co2);
  }
}

Once we retrieve the detected CO2 value from the sensor, we can use it in our application. For instance, we may plot this value using an Android open source chart library or send the collected data to an external web service.

The preceding snippets are just examples to show how easy it is to implement communication between the Android operating system and the onboard Arduino in UDOO. With a powerful operating system such as Android and one of the most widespread prototyping platforms provided on a single board, UDOO can play a key role in our home automation projects.

Summary

As makers, if we gain enough experience in the home automation field, chances are that we will be able to develop and build a high-end system for our own house, flexible enough to be easily extended without any further knowledge. When I wrote Getting Started with UDOO, the idea was to make a comprehensive guide and a collection of examples to help developers quickly grasp the key concepts of this prototyping board, focusing on Android development to bring back to light the advantages provided by this widespread platform when used in our prototypes and not only in our mobile devices.

Resources for Article:
Further resources on this subject:
Android Virtual Device Manager [Article]
Writing Tag Content [Article]
The Arduino Mobile Robot [Article]

Getting Started with Electronic Projects

Packt
13 Jan 2015
7 min read
Welcome to my second book produced by the good folks at Packt Publishing LLC. This book is somewhat different from my other book in that, instead of one large project, it is a collection of several small, medium, and large projects. While the book is called Getting Started with Electronic Projects, I convinced the folks at Packt to let me write a book with several projects for the electronics hacker and experimenter groups. The first few projects do not even involve a BeagleBone, something which had my reviewers shaking their heads at first. So what follows is a brief taste of what you can look forward to in this book.

Before we go any further, I should explain who this book is for. If you are a software person who has never heated up a soldering iron before, you might want to practice a bit before attempting the more difficult assembly (electronics assembly, not assembly language programming) projects. If you are a hardware guy who just wants it to work out of the box, then I suggest you download the image and burn yourself a microSD card. If you feel adventurous, you can always play with the code sections. If you succeed in giving the kernel a heart attack (also known as a kernel panic), no worries. Just burn the image again.

The book is divided into eight chapters and seven different projects. The first four don't involve a BeagleBone at all. (For more resources related to this topic, see here.)

Chapter 1 – Introduction – Our First Project

This chapter is for the hardware guys and the adventurous programmers. In this chapter, you will build your own infrared flashlight. If you can use a soldering iron and a solder sucker, you can build this project.

IR flashlight

Chapter 2 – Infrared Beacon

In this chapter, we continue with the theme of infrared devices by building a somewhat more challenging project from a construction perspective. Files for the PCB are available for download from the Packt site, if you bought the book of course. What this beacon does is flash two infrared LEDs on and off at a rate that can be selected by the builder. The beacon is only visible when viewed through night-vision goggles or on a black-and-white video camera.

IR beacon

While it may not be obvious from the preceding image, the case is actually made from ABS water pipe I purchased from a local hardware store. I like ABS pipe because it is so easy to work with.

Chapter 3 – Motion Alarm

Once again we will be using ABS pipe to construct a cool project. This time we will be building a motion sensor. Most alarm sensors use some sort of Passive Infrared (PIR) sensor or a millimetre wave radar to detect motion. This project uses a simple (cheap) mercury switch to detect motion. How you reset the alarm is a carefully guarded secret, so you will have to buy the book to learn it!

Motion sensor

Notice the ring at the right end of the tube? That is so you can hang it up like a Christmas ornament!

Chapter 4 – Sound Card-based Oscilloscope

This chapter uses a USB sound card connected to a PC, because the software I found appears to run only on a PC. If you can find a Mac version of the software, go for it. This project will work for Mac or Linux users too. By the way, I tested all of the software in this chapter on a Pentium 4 class machine running Windows XP, so here is an opportunity to recycle or repurpose that old PC you were going to junk!
Soundblaster oscilloscope

The title of the chapter is somewhat misleading, because the project also includes plans for building a sound card-based audio signal generator. There are a number of commercial and freeware versions of software that take advantage of this hardware.

Soundblaster software on PC

There are a number of commercial software packages that have a freeware version available for download. The preceding screenshot shows one of the better ones I found running under Windows XP.

Chapter 5 – Calibrated RF Source

In this chapter we will be building a clean, calibrated RF signal source. In addition to being of use to ham radio enthusiasts, it will also be used in the chapters that follow.

Clean 50 MHz signal

This is the first project that actually makes use of the BeagleBone Black. The BeagleBone is used to control a digitally controlled step attenuator. This allows us to output a calibrated signal level from our 50 MHz source. In addition to its use in upcoming chapters, ham radio enthusiasts will no doubt find a clean RF source with a calibrated output, selectable in 0.5 dB steps, useful.

GUI running on BeagleBone Black

Chapter 6 – RF Power Meter – Hardware

In this chapter we will be building an RF power meter capable of measuring RF power from 40 MHz to over 6 GHz. The circuit is based on the Linear Technology LTC5582 RMS power detector. The beauty of this device is that it outputs a DC voltage proportional to the RMS power it detects. There is no need for conversion as there is with other detectors. RMS power is the AC power measured by your digital voltmeter when you have it set to AC.

RF detector mounted on protoboard

The connector near the notch in the protoboard allows the BeagleBone to both read the RF power and control the step attenuator mentioned earlier.

Chapter 7 – RF Power Meter – Software

In this chapter we will be building a development system based on Ubuntu, using a docking station available from https://specialcomp.com/beaglebone/index.htm. This could be considered the "deluxe" version. It is also possible to complete the next two chapters using the debug port on the BeagleBone and a communications program like PuTTY.

BeagleBone development system

This configuration also contains the hardware to build a combination wired and wireless alarm system. More on that in the following chapter.

Chapter 8 – Creating a ZigBee Network of Sensors

This is the longest and by far the most complex chapter in the book. In this chapter we will learn how to configure XBee modules from Digi International Inc. using the XCTU Windows application. We will then build a standalone wireless alarm system. This alarm system will be based on hardware developed and presented in my previous book: http://www.packtpub.com/building-a-home-security-system-with-beaglebone/book. If you purchased my previous book and built any of the alarm system hardware, you can also use it in this chapter to convert your wired alarm system to wireless! The following image is of the XBee module mounted on top of the alarm boards. Each wireless remote module has two alarm zone inputs and four isolated alarm outputs.

Completed wireless alarm remote module

Summary

This book will hopefully have something of interest to a large variety of electronics enthusiasts, from hams to hackers. I would say that, as long as you have at least intermediate programming and construction skills, you should have no problem completing the projects in this book. All the projects use through-hole parts to make assembly easier.
Resources for Article:
Further resources on this subject:
Building robots that can walk [article]
Beagle Boards [article]
Protecting GPG Keys in BeagleBone [article]

Protecting GPG Keys in BeagleBone

Packt
24 Sep 2014
23 min read
In this article by Josh Datko, author of BeagleBone for Secret Agents, you will learn how to use the BeagleBone Black to safeguard e-mail encryption keys. (For more resources related to this topic, see here.)

After our investigation into BBB hardware security, we'll now use that technology to protect your personal encryption keys for the popular GPG software. GPG is a free implementation of the OpenPGP standard. This standard was developed based on the work of Philip Zimmerman and his Pretty Good Privacy (PGP) software. PGP has a complex socio-political backstory, which we'll briefly cover before getting into the project. For the project, we'll treat the BBB as a separate cryptographic co-processor and use the CryptoCape, with a keypad code entry device, to protect our GPG keys when they are not in use. Specifically, we will do the following:

Tell you a little about the history and importance of the PGP software
Perform basic threat modeling to analyze your project
Create a strong PGP key using the free GPG software
Teach you to use the TPM to protect encryption keys

History of PGP

The software used in this article would once have been considered a munition by the U.S. Government. Exporting it without a license from the government would have violated the International Traffic in Arms Regulations (ITAR). As late as the early 1990s, cryptography was heavily controlled and restricted. While the early 90s are filled with numerous accounts by crypto-activists, all of which are well documented in Steven Levy's Crypto, there is one man in particular who was the driving force behind the software in this project: Philip Zimmerman.

Philip Zimmerman had a small pet project around the year 1990, which he called Pretty Good Privacy. Motivated by a strong childhood passion for codes and ciphers, combined with a sense of political activism against a government capable of strong electronic surveillance, he set out to create a strong encryption program for the people (Levy 2001). One incident in particular helped to motivate Zimmerman to finish PGP and publish his work. This was the language that the then U.S. Senator Joseph Biden added to Senate Bill #266, which would mandate that:

"Providers of electronic communication services and manufacturers of electronic communications service equipment shall ensure that communication systems permit the government to obtain the plaintext contents of voice, data, and other communications when appropriately authorized by law."

In 1991, in a rush to release PGP 1.0 before it became illegal, Zimmerman released his software as freeware to the Internet. Subsequently, after PGP spread, the U.S. Government opened a criminal investigation into Zimmerman for violation of U.S. export laws. Zimmerman, in what is best described as a legal hack, published the entire source code of PGP, including instructions on how to scan it back into digital form, as a book. As Zimmerman describes:

"It would be politically difficult for the Government to prohibit the export of a book that anyone may find in a public library or a bookstore."
(Zimmerman, 1995)

A book published in the public domain would no longer fall under ITAR export controls. The genie was out of the bottle; the government dropped its case against Zimmerman in 1996.

Reflecting on the Crypto Wars

Zimmerman's battle is considered a resounding victory.
Many other outspoken supporters of strong cryptography, known as cypherpunks, also won battles popularizing and spreading encryption technology. But if the Crypto Wars were won in the early nineties, why hasn't cryptography become ubiquitous? Well, to a degree, it has. When you make purchases online, they should be protected by strong cryptography. Almost nobody would insist that their bank or online store not use cryptography, and most probably feel more secure that they do. But what about personal privacy-protecting software? For these tools, habits must change, as the normal e-mail, chat, and web browsing tools are insecure by default. This change causes tension and resistance towards adoption. Also, security tools are notoriously hard to use. In the seminal paper on security usability, researchers concluded that the then-current PGP version 5.0, complete with a Graphical User Interface (GUI), was not able to prevent users, who were inexperienced with cryptography but all of whom had at least some college education, from making catastrophic security errors (Whitten 1999). Glenn Greenwald delayed his initial contact with Edward Snowden for roughly two months because he thought GPG was too complicated to use (Greenwald, 2014). Snowden absolutely refused to share anything with Greenwald until he installed GPG.

GPG and PGP enable an individual to protect their own communications. Implicitly, you must also trust the receiving party not to forward your plaintext communication. GPG expects you to protect your private key and does not rely on a third party. While this adds some complexity and maintenance overhead, trusting a third party with your private key can be disastrous. In August of 2013, Ladar Levison decided to shut down his own company, Lavabit, an e-mail provider, rather than turn over his users' data to the authorities. Levison courageously pulled the plug on his company rather than turn over the data. The Lavabit service generated and stored your private key. While this key was encrypted to the user's password, it still gave the server access to the raw key. Even though the Lavabit service relieved users of managing their private key themselves, it put Levison in this awkward position. To use GPG properly, you should never turn over your private key. For a complete analysis of Lavabit, see Moxie Marlinspike's blog post at http://www.thoughtcrime.org/blog/lavabit-critique/.

Given the breadth and depth of state surveillance capabilities, there is a rekindled interest in protecting one's privacy. Researchers are now designing secure protocols with these threats in mind (Borisov, 2014). Philip Zimmerman ended the chapter on Why Do You Need PGP? in the Official PGP User's Guide with the following statement, which is as true today as it was when first inked:

"PGP empowers people to take their privacy into their own hands. There's a growing social need for it."

Developing a threat model

We introduced the concept of a threat model. A threat model is an analysis of the security of a system that identifies assets, threats, vulnerabilities, and risks. Like any model, the depth of the analysis can vary. In the upcoming section, we'll present a cursory analysis so that you can start thinking about this process. This analysis will also help us understand the capabilities and limitations of our project.

Outlining the key protection system

The first step of our analysis is to clearly describe the system we are trying to protect.
In this project, we'll build a logical GPG co-processor using the BBB and the CryptoCape. We'll store the GPG keys on the BBB and then connect to the BBB over Secure Shell (SSH) to use the keys and to run GPG. The CryptoCape will be used to encrypt your GPG key when it is not in use, known as at rest. We'll add a keypad to collect a numeric code, which will be provided to the TPM. This will allow the TPM to unwrap your GPG key. The idea for this project was inspired by Peter Gutmann's work on open source cryptographic co-processors (Gutmann, 2000). The BBB, when acting as a co-processor to a host, is extremely flexible and, considering its power usage, relatively high in performance. By running sensitive code that has access to cleartext encryption keys on separate hardware, we gain an extra layer of protection (or at the minimum, a layer of indirection).

Identifying the assets we need to protect

Before we can protect anything, we must know what to protect. The most important assets are the GPG private keys. With these keys, an attacker can decrypt past encrypted messages, recover future messages, and use the keys to impersonate you. By protecting your private key, we are also protecting your reputation, which is another asset. Our decrypted messages are also an asset. An attacker may not care about your key if he or she can easily access your decrypted messages. The BBB itself is an asset that needs protecting. If the BBB is rendered inoperable, then an attacker has successfully prevented you from accessing your private keys, which is known as a Denial-of-Service (DoS).

Threat identification

To identify the threats against our system, we need to classify the capabilities of our adversaries. This is a highly personal analysis, but we can generalize our adversaries into three archetypes: a well-funded state actor, a skilled cracker, and a jealous ex-lover. The state actor has nearly limitless resources, both from a financial and a personnel point of view. The cracker is a skilled operator, but lacks the funding and resources of the state actor. The jealous ex-lover is not a sophisticated computer attacker, but is very motivated to do you harm. Unfortunately, if you are the target of directed surveillance from a state actor, you probably have much bigger problems than your GPG keys. This actor can put your entire life under monitoring, and why go through the trouble of stealing your GPG keys when the hidden video camera in the wall records everything on your screen?

Also, it's reasonable to assume that everyone you are communicating with is also under surveillance, and it only takes one mistake from one person to reveal your plans for world domination. The adage by Benjamin Franklin is apropos here: three may keep a secret if two of them are dead. However, properly using GPG will protect you from global passive surveillance. When used correctly, neither your Internet Service Provider, nor your e-mail provider, nor any passive attacker would learn the contents of your messages. The passive adversary is not going to engage your system, but they could monitor a significant amount of Internet traffic in an attempt to collect it all. Therefore, the confidentiality of your messages should remain protected. We'll assume the cracker trying to harm you is remote and does not have physical access to your BBB. We'll also assume the worst case, that the cracker has compromised your host machine. In this scenario there is, unfortunately, a lot that the cracker can do.
He can install a key logger and capture everything, including the password that is typed on your computer. He will not be able to get the code that we'll enter on the BBB; however, he would be able to log in to the BBB when the key is available. The jealous ex-lover doesn't understand computers very well, but he doesn't need to, because he knows how to use a golf club. He knows that this BBB connected to your computer is somehow important to you because you've talked his ear off about this really cool project that you read about in a book. He can physically destroy the BBB and, with it, your private key (and probably the relationship as well!).

Identifying the risks

How likely are the previous risks? The risk of active government surveillance in most countries is fortunately low. However, the consequences of this attack are very damaging. The risk of being caught up in passive surveillance by a state actor, as we have learned from Edward Snowden, is very likely. However, by using GPG, we add protection against this threat. An active cracker seeking to harm you is probably unlikely. Contracting keystroke-capturing malware, however, is probably not an unreasonable event. A 2013 study by Microsoft concluded that 8 out of every 1,000 computers were infected with malware. You may be tempted to play these odds, but let's rephrase this statement: in a group of 125 computers, one is infected with malware. A school or university easily has more computers than this. Lastly, only you can assess the risk of a jealous ex-lover. For the full Microsoft report, refer to http://blogs.technet.com/b/security/archive/2014/03/31/united-states-malware-infection-rate-more-than-doubles-in-the-first-half-of-2013.aspx.

Mitigating the identified risks

If you find yourself the target of a state, this project alone is not going to help much. We can protect ourselves somewhat from the cracker with two strategies. The first is, instead of connecting the BBB to your laptop or computer, to use the BBB as a standalone machine and transfer files via a microSD card. This is known as an air-gap. With a dedicated monitor and keyboard, it is much less likely for software vulnerabilities to break the gap and infect the BBB. However, this comes at a high level of personal inconvenience, depending on how often you encrypt files. If you consider the risk of running the BBB attached to your computer too high, create an air-gapped BBB for maximum protection. If you deem the risk low, because you've hardened your computer and have other protection mechanisms, then keep the BBB attached to the computer. An air-gapped computer can still be compromised. In 2010, a highly specialized worm known as Stuxnet was able to spread to non-networked, isolated machines through USB flash drives. The second strategy is to somehow enter the GPG passphrase directly into the BBB without using the host's keyboard. After we complete the project, we'll suggest a mechanism to do this, but it is slightly more complicated. This would eliminate the threat of the key logger, since the PIN is entered directly. The mitigation against the ex-lover is to treat your BBB as you would your own wallet, and don't leave it out of your sight. It's slightly larger than you would want, but it's certainly small enough to fit in a small backpack or briefcase.

Summarizing our threat model

Our threat model, while cursory, illustrates the thought process one should go through before using or developing security technologies.
The term threat model is specific to the security industry, but it's really just proper planning. The purpose of this analysis is to find logic bugs and prevent you from spending thousands of dollars on high-tech locks for your front door while you keep your back door unlocked. Now that we understand what we are trying to protect and why it is important to use GPG, let's build the project.

Generating GPG keys

First, we need to install GPG on the BBB. It is most likely already installed, but you can check and install it with the following command:

sudo apt-get install gnupg gnupg-curl

Next, we need to add a secret key. For those who already have a secret key, you can import your secret key ring, secring.gpg, into your ~/.gnupg folder. For those who want to create a new key on the BBB, proceed to the upcoming section. This project assumes some familiarity with GPG. If GPG is new to you, the Free Software Foundation maintains the Email Self-Defense guide, which is a very approachable introduction to the software and can be found at https://emailselfdefense.fsf.org/en/index.html.

Generating entropy

If you decided to create a new key on the BBB, there are a few technicalities we must consider. First of all, GPG will need a lot of random data to generate the keys. The amount of random data available in the kernel is proportional to the amount of entropy that is available. You can check the available entropy with the following command:

cat /proc/sys/kernel/random/entropy_avail

If this command returns a relatively low number, under 200, then GPG will not have enough entropy to generate a key. On a PC, one can increase the amount of entropy by interacting with the computer, such as typing on the keyboard or moving the mouse. However, such sources of entropy are difficult for embedded systems, and in our current setup we don't have the luxury of moving a mouse. Fortunately, there are a few tools to help us. If your BBB is running kernel version 3.13 or later, we can use the hardware random number generator on the AM3358 to help us out. You'll need to install the rng-tools package. Once installed, you can edit /etc/default/rng-tools and add the following line to register the hardware random number generator for rng-tools:

HRNGDEVICE=/dev/hwrng

After this, you should start the rng-tools daemon with:

/etc/init.d/rng-tools start

If you don't have /dev/hwrng—and currently, the chips on the CryptoCape do not yet have character device support and aren't available as /dev/hwrng—then you can install haveged. This daemon implements the Hardware Volatile Entropy Gathering and Expansion (HAVEGE) algorithm, the details of which are available at http://www.irisa.fr/caps/projects/hipsor/. This daemon will ensure that the BBB maintains a pool of entropy, which will be sufficient for generating a GPG key on the BBB.

Creating a good gpg.conf file

Before you generate your key, we need to establish some more secure defaults for GPG. As we discussed earlier, it is still not as easy as it should be to use e-mail encryption. Riseup.net, an e-mail provider with a strong social cause, maintains an OpenPGP best practices guide at https://help.riseup.net/en/security/message-security/openpgp/best-practices. This guide details how to harden your GPG configuration and provides the motivation behind each option. It is well worth a read to understand the intricacies of GPG key management.
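To give a flavor of the kind of settings that guide recommends, the following is a short, hypothetical excerpt of a hardened ~/.gnupg/gpg.conf. The option names are standard GnuPG options, but the exact set and values you choose should come from the guide itself:

# Show long key IDs and full fingerprints so spoofed keys are easier to spot
keyid-format 0xlong
with-fingerprint
# Prefer strong digests when signing and certifying
personal-digest-preferences SHA512 SHA384 SHA256
cert-digest-algo SHA512
# Do not advertise the GnuPG version in exported material
no-emit-version

The downloaded configuration discussed next already contains these and many more options, so treat this excerpt as an illustration rather than a replacement for the full file.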
Jacob Applebaum maintains an implementation of these best practices, which you should download from https://github.com/ioerror/duraconf/raw/master/configs/gnupg/gpg.conf and save as your ~/.gnupg/gpg.conf file. The configuration is well commented and you can refer to the best practices guide available at Riseup.net for more information. There are three entries, however, that you should modify. The first is default-key, which is the fingerprint of your primary GPG key. Later in this article, we'll show you how to retrieve that fingerprint. We can't perform this action now because we don't have a key yet. The second is keyserver-options ca-cert-file, which is the certificate authority for the keyserver pool. Keyservers host your public keys, and a keyserver pool is a redundant collection of keyservers. The instructions on Riseup.net give the details on how to download and install that certificate. Lastly, you can use Tor to fetch updates on your keys. The act of requesting a public key from a keyserver signals that you have a potential interest in communicating with the owner of that key. This metadata might be more interesting to a passive adversary than the contents of your message, since it reveals your social network. Tor is adept at protecting against traffic analysis. You probably don't want to store your GPG keys on the same BBB as your bridge, so a second BBB would help here. On your GPG BBB, you need only run Tor as a client, which is its default configuration. Then you can update keyserver-options http-proxy to point to your Tor SOCKS proxy running on localhost.

The Electronic Frontier Foundation (EFF) provides some hypothetical examples on the telling nature of metadata, for example: "They (the government) know you called the suicide prevention hotline from the Golden Gate Bridge. But the topic of the call remains a secret." Refer to the EFF blog post at https://www.eff.org/deeplinks/2013/06/why-metadata-matters for more details.

Generating the key

Now you can generate your GPG key. Follow the on-screen instructions and don't include a comment. Depending on your entropy source, this could take a while. This example took 10 minutes using haveged as the entropy collector. There are various opinions on what to set as the expiration date. If this is your first GPG key, try one year at first. You can always make a new key or extend the same one. If you set the key to never expire and you lose the key, by forgetting the passphrase, people will still think it's valid unless you revoke it. Also, be sure to set the user ID to a name that matches some sort of identification, which will make it easier for people to verify that the holder of the private key is the same person as on a certified piece of paper. The command to create a new key is gpg --gen-key:

Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 1y
Key expires at Sat 06 Jun 2015 10:07:07 PM UTC
Is this correct? (y/N) y

You need a user ID to identify your key; the software constructs the user ID
from the Real Name, Comment and Email Address in this form:
   "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"

Real name: Tyrone Slothrop
Email address: tyrone.slothrop@yoyodyne.com
Comment:
You selected this USER-ID:
   "Tyrone Slothrop <tyrone.slothrop@yoyodyne.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
......+++++
..+++++

gpg: key 0xABD9088171345468 marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2015-06-06
pub   4096R/0xABD9088171345468 2014-06-06 [expires: 2015-06-06]
      Key fingerprint = CBF9 1404 7214 55C5 C477 B688 ABD9 0881 7134 5468
uid                 [ultimate] Tyrone Slothrop <tyrone.slothrop@yoyodyne.com>
sub   4096R/0x9DB8B6ACC7949DD1 2014-06-06 [expires: 2015-06-06]

gpg --gen-key  320.62s user 0.32s system 51% cpu 10:23.26 total

From this example, we know that our secret key is 0xABD9088171345468. If you end up creating multiple keys, but use just one of them more regularly, you can edit your gpg.conf file and add the following line:

default-key 0xABD9088171345468

Postgeneration maintenance

In order for people to send you encrypted messages, they need to know your public key. Having your public key on a keyserver can help distribute it. You can post your key as follows, replacing the fingerprint with your primary key ID:

gpg --send-keys 0xABD9088171345468

GPG does not rely on third parties and expects you to perform key management. To ease this burden, the OpenPGP standards define the Web-of-Trust as a mechanism to verify other users' keys. Details on how to participate in the Web-of-Trust can be found in the GPG Privacy Handbook at https://www.gnupg.org/gph/en/manual/x334.html. You are also going to want to create a revocation certificate. A revocation certificate is needed when you want to revoke your key. You would do this when the key has been compromised, say if it was stolen, or, more likely, if the BBB fails and you can no longer access your key. Generate the certificate and follow the ensuing prompts, replacing the ID with your key ID:

gpg --output revocation-certificate.asc --gen-revoke 0xABD9088171345468

sec  4096R/0xABD9088171345468 2014-06-06 Tyrone Slothrop <tyrone.slothrop@yoyodyne.com>

Create a revocation certificate for this key? (y/N) y
Please select the reason for the revocation:
  0 = No reason specified
  1 = Key has been compromised
  2 = Key is superseded
  3 = Key is no longer used
  Q = Cancel
(Probably you want to select 1 here)
Your decision? 0
Enter an optional description; end it with an empty line:
>
Reason for revocation: No reason specified
(No description given)
Is this okay? (y/N) y

You need a passphrase to unlock the secret key for
user: "Tyrone Slothrop <tyrone.slothrop@yoyodyne.com>"
4096-bit RSA key, ID 0xABD9088171345468, created 2014-06-06

ASCII armored output forced.
Revocation certificate created.
Please move it to a medium which you can hide away; if Mallory gets access to this certificate he can use it to make your key unusable. It is smart to print this certificate and store it away, just in case your media become unreadable. But have some caution: the print system of your machine might store the data and make it available to others!

Do take that advice and move this file off the BeagleBone. Printing it out and storing it somewhere safe is a good option, or burn it to a CD. The lifespan of a CD or DVD may not be as long as you think. The United States National Archives Frequently Asked Questions (FAQ) page on optical storage media states that:

"CD/DVD experiential life expectancy is 2 to 5 years even though published life expectancies are often cited as 10 years, 25 years, or longer."

Refer to their website http://www.archives.gov/records-mgmt/initiatives/temp-opmedia-faq.html for more details. Lastly, create an encrypted backup of your encryption key and consider storing it in a safe location on durable media.

Using GPG

With your GPG private key created or imported, you can now use GPG on the BBB as you would on any other computer. You may have already installed Emacs on your host computer. If you follow the GNU/Linux instructions, you can also install Emacs on the BBB. If you do, you'll enjoy automatic GPG encryption and decryption for files that end with the .gpg extension. For example, suppose you want to send a message to your good friend, Pirate Prentice, whose GPG key you already have. Compose your message in Emacs, and then save it with a .gpg extension. Emacs will prompt you to select the public keys for encryption and will automatically encrypt the buffer. If a GPG-encrypted message is encrypted to a public key for which you have the corresponding private key, Emacs will automatically decrypt the message if it ends with .gpg. When using Emacs from the terminal, the prompt for encryption should look like the following screenshot:

Summary

This article covered how GPG can protect e-mail confidentiality.

Resources for Article:
Further resources on this subject:
Making the Unit Very Mobile - Controlling Legged Movement [Article]
Pulse width modulator [Article]
Home Security by BeagleBone [Article]

Baking Bits with Yocto Project

Packt
07 Jul 2014
3 min read
(For more resources related to this topic, see here.)

Understanding the BitBake tool

The BitBake task scheduler started as a fork of Portage, which is the package management system used in the Gentoo distribution. However, nowadays the two projects have diverged a lot due to their different usage focuses. The Yocto Project and the OpenEmbedded Project are the best known and most intensive users of BitBake, which remains a separate and independent project with its own development cycle and mailing list (bitbake-devel@lists.openembedded.org).

BitBake is a task scheduler that parses mixed Python and shell script code. The parsed code generates and runs tasks, which may have a complex dependency chain and which are scheduled to allow parallel execution and maximize the use of computational resources. BitBake can be understood as a tool similar to GNU Make in some aspects.

Exploring metadata

The metadata used by BitBake can be classified into three major areas:

Configuration (the .conf files)
Classes (the .bbclass files)
Recipes (the .bb and .bbappend files)

The configuration files define the global content, which is used to provide information and configure how the recipes will work. One common example of a configuration file is the machine file, which has a list of settings describing the board's hardware.

The classes are used by the whole system and can be inherited by recipes, according to their needs or by default, as classes are used to define the system behavior and provide the base methods. For example, kernel.bbclass helps with the process of building, installing, and packaging the Linux kernel, independent of version and vendor.

The recipes and classes are written in a mix of Python and shell scripting code. The classes and recipes describe the tasks to be run and provide the information needed to allow BitBake to generate the required task chain. The inheritance mechanism that permits a recipe to inherit one or more classes is useful to reduce code duplication and ease maintenance. A Linux kernel recipe example is linux-yocto_3.14.bb, which inherits a set of classes, including kernel.bbclass (a minimal recipe sketch is shown at the end of this article). These are BitBake's most commonly used aspects across all types of metadata (.conf, .bb, and .bbclass).

Summary

In this article, we studied BitBake and the classification of its metadata.

Resources for Article:
Further resources on this subject:
Installation and Introduction of K2 Content Construction Kit [article]
Linux Shell Script: Tips and Tricks [article]
Customizing a Linux kernel [article]
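To make the inheritance mechanism described above a little more concrete, here is a minimal, hypothetical recipe sketch. The recipe name, download URL, and checksum are placeholders rather than a real package, but the structure (variable assignments plus an inherit line pulling in a class) is what a simple BitBake .bb recipe looks like:

# hello_1.0.bb -- hypothetical example recipe
SUMMARY = "Hello world example recipe"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://LICENSE;md5=<md5-of-the-license-file>"

# Where BitBake fetches the sources from; ${PV} expands to the
# version taken from the file name (1.0)
SRC_URI = "http://example.com/downloads/hello-${PV}.tar.gz"

# Reuse the configure/compile/install tasks provided by a class,
# here the autotools class that ships with OpenEmbedded/Yocto
inherit autotools

Building such a recipe would then be a matter of running bitbake hello from a configured build directory, with BitBake scheduling the task chain contributed by the inherited class.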

Bitcoins – Pools and Mining

Packt
06 Feb 2014
6 min read
(For more resources related to this topic, see here.)

Installing Bitcoind

Bitcoind is the software that connects you to other nodes in the network. There is no single point of failure, and the more nodes there are, the faster and more secure the network becomes. Peers are rewarded with a transaction fee for the first validation of a transaction and are assigned randomly. After installing Bitcoind with the following command, we have to synchronize it with the network, which means downloading millions of transactions exceeding a gigabyte in size to your Pi:

sudo apt-get install bitcoind

Before running the background daemon, you need to consider changing the data directory, as the database files can take up many gigabytes of space. Mount a new storage location and always start Bitcoind with the following line:

bitcoind --datadir /mnt/HDD/bitcoin --daemon

After a few minutes, you can type in the following basic commands. You will need to wait until the entire block chain data is downloaded before you can do anything useful. During this time, the Pi might become less responsive, use a lot of bandwidth, and create a large amount of data in the data directory. This can take up to 12 hours.

bitcoind getinfo
bitcoind getblockcount

If you already have most of the block chain data downloaded on another computer, you can just copy all the data to the Pi. You can even have the same wallet and addresses in both locations. It is not recommended though, as that makes it twice as easy for hackers to get your wallet. Always encrypt your wallet with a very strong password and keep the password and wallet backup in a safe, offline place—like CDs or USB drives. If you lose the password, you lose access to the wallet, forever.

Bitcoin wallet

You need to generate addresses where other people can send you funds. Theoretically, you can generate an unlimited number of addresses, but practically managing all those addresses would become difficult. Bitcoin, by default, allows you to have a hundred addresses. You always get a default wallet address, which counts as the first address, and this address will receive network fees if you process any transactions for the first time. Cryptocurrency transactions cannot be reversed. Sending coins to an incorrect address means your coins will be lost forever. On the other hand, it also protects you if you are receiving coins from somebody else, as they cannot reverse the transaction. These addresses are globally unique, and can never be generated by anybody else in the universe or beyond it. Your wallet can still get stolen by hackers or viruses, though. Imagine losing your wallet in real life. Unless a kind person returns it to you, the contents of that wallet will be lost forever. You can back up your wallet file, but you should store it on two USB flash drives or CDs, and keep them in a safety deposit box at home or in the bank. You should never be tempted to copy your wallet to any kind of online storage or backup service, especially if it's unencrypted.

Creating a Bitcoin address

These examples will work with the command line, as you can reuse them to display data on your web server. All online wallets use the Bitcoind API to generate addresses and manage your wallet. As a word of warning, you should never use online wallet services, as almost all of them have been hacked, resulting in massive numbers of coins getting stolen. You can download the Bitcoin-QT client and run it in X if you prefer an easy way to manage your wallet.
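Before creating further addresses, it is worth locking down the wallet itself from the command line. The following is a minimal sketch using standard Bitcoind RPC calls; the passphrase and backup path are placeholders you would replace with your own:

# Encrypt the wallet with a strong passphrase (bitcoind shuts down
# after encrypting and must be restarted)
bitcoind encryptwallet "correct horse battery staple"

# Later, unlock the wallet for 60 seconds whenever you need to send coins
bitcoind walletpassphrase "correct horse battery staple" 60

# Copy wallet.dat to offline media, for example a mounted USB drive
bitcoind backupwallet /mnt/usb/wallet-backup.dat

Keep the backup and the passphrase in separate places; together they are as good as the wallet itself.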
The following command lines will create a new receiving address and then list all your addresses:

bitcoind getnewaddress
bitcoind listaccounts

Receiving Bitcoins

As long as you know your address, people can send you Bitcoins. You do not need to have Bitcoind running all the time, as the transactions are processed and stored in the network. It is just very important to keep a backup of your wallet.dat file if you want to send the Bitcoins somewhere else, like to an online exchange.

Sending Bitcoins

As long as you have coins in your wallet, you can easily send coins to another address by using the sendtoaddress command followed by the address and the amount. Here is an example sending 0.01 Bitcoins to the author's tip jar:

bitcoind sendtoaddress 126HA8L1dP7fr7ECu2oEMtz6JpDRnzqp9N 0.01

You will get a response, which is a transaction ID. If you do not, the transaction has failed. You need to have at least 0.0001 Bitcoins reserved for the transaction fees. If you are sending large amounts of coins, this transaction fee will also become more expensive.

The value of Bitcoins

In July 2010, the value of Bitcoin was below 1 USD. Investing in Bitcoins was very controversial and a huge risk, since it was difficult to predict whether the currency would be accepted or rejected by the community. As of writing this article, the Bitcoin value has exceeded 1100 USD. You can search the Internet and find interesting articles on early adopters mining Bitcoins, or buying a few hundred dollars' worth and leaving their wallets lying around. Just like one person in the UK who threw away his hard drive with 7500 BTC stored on it. After the story went public, thousands of people flocked to the public dump ground to search for the hard drive. Other stories include a student who invested a few hundred dollars in 2010 and sold them for hundreds of thousands of dollars. With such large amounts, it is important to consult your local tax authority or council. The trend seems to be that the more people find out about Bitcoins and the more media publicity it gets, the higher the value of the currency rises. This also looks like it applies to alternative currencies such as Litecoin.

Summary

We have seen what Bitcoin is, what Bitcoins are used for, and what their value is. We have also seen how to send and receive Bitcoins. I hope this has given you a better idea about what Bitcoins really are.

Resources for Article:
Further resources on this subject:
Clusters, Parallel Computing, and Raspberry Pi – A Brief Background [Article]
Creating a file server (Samba) [Article]
Our First Project – A Basic Thermometer [Article]

Instant Optimizing Embedded Systems Using BusyBox

Packt
25 Nov 2013
9 min read
(For more resources related to this topic, see here.)

BusyBox Compiling

BusyBox, the Swiss Army Knife of Embedded Linux, can be compiled into a single binary for different architectures. Before compiling software, we must get a compiler and the corresponding libraries, a build host, and a target platform; the build host is the machine running the compiler, and the target platform is the one running the target binary. Herein, the desktop development system is a 64-bit x86 Ubuntu system, which will be used as our build host, and an ARM Android system will be used as our target platform.

To compile BusyBox on the x86-64 Ubuntu system for ARM, we need a cross compiler. The gcc-arm-linux-gnueabi cross compiler can be installed directly on Ubuntu:

$ sudo apt-get install gcc-arm-linux-gnueabi

On other Linux distributions, Google's official NDK is a good choice if you want to share Android's Bionic C library, but since the Bionic C library lacks lots of POSIX C header files, if you want to get most of the BusyBox applets building, the prebuilt version of Linaro GCC with glibc is preferable. We can download it from http://www.linaro.org/downloads/, for example: http://releases.linaro.org/13.04/components/toolchain/binaries/gcc-linaro-aarch64-none-elf-4.7-2013.04-20130415_linux.tar.bz2.

Before compiling, to simplify the configuration, enable the largest generic configuration with make defconfig and configure the cross compiler, arm-linux-gnueabi-gcc, with make menuconfig:

$ make defconfig
$ make menuconfig

Busybox Settings --->
  Build Options --->
    (arm-linux-gnueabi-) Cross Compiler prefix

After configuration, we can simply compile it with:

$ make

Then, a BusyBox binary will be compiled for ARM:

$ file busybox
busybox: ELF 32-bit LSB executable, ARM, version 1 (SYSV), dynamically linked (uses shared libs), stripped

To list the shared libraries required by the BusyBox binary for ARM, the arm-linux-gnueabi-readelf command should be used:

$ arm-linux-gnueabi-readelf -d ./busybox | grep "Shared library:" | cut -d'[' -f2 | tr -d ']'
libm.so.6
libc.so.6
ld-linux.so.3

To get the full paths, we can get the library search path first:

$ arm-linux-gnueabi-ld --verbose | grep SEARCH | tr ';' '\n' | cut -d'"' -f2 | tr -d '"'
/lib/arm-linux-gnueabi
/usr/lib/arm-linux-gnueabi
/usr/arm-linux-gnueabi/lib

Then, we can find out that /usr/arm-linux-gnueabi/lib is the real search path on our platform, and we can get the full paths of the libraries as below:

$ ls /usr/arm-linux-gnueabi/lib/{libm.so.6,libc.so.6,ld-linux.so.3}
/usr/arm-linux-gnueabi/lib/ld-linux.so.3
/usr/arm-linux-gnueabi/lib/libc.so.6
/usr/arm-linux-gnueabi/lib/libm.so.6

By default, the binary is dynamically linked. To enable static linking, configure BusyBox as follows:

Busybox Settings --->
  Build Options --->
    [*] Build BusyBox as a static binary (no shared libs)

If using a new glibc to compile BusyBox with static linking, to avoid an error such as:

inetd.c:(.text.prepare_socket_fd+0x7e): undefined reference to `bindresvport'

we need to disable CONFIG_FEATURE_INETD_RPC:

Networking Utilities --->
  [*] inetd
  [ ]   Support RPC services

Then, recompile it with make.

BusyBox Installation

This section shows how to install the above compiled BusyBox binaries on an ARM Android system. Installing BusyBox means creating soft links for all of its built-in applets; using the wc applet as an example:

$ ln -s busybox wc
$ echo "Hello, Busybox." | ./wc -w
2

BusyBox can be installed at the earlier compiling stage or at run-time.
To build a minimal embedded file system with BusyBox, it is better to install it at the compiling stage with 'make install', as this helps to create the basic directory structure of a standard Linux root file system and creates soft links to BusyBox under the corresponding directories. With this installation method, we need to configure the target installation directory as follows, using the ~/busybox-ramdisk/ directory as an example:

Busybox Settings --->
  Installation Options ("make install" behavior) --->
    (~/busybox-ramdisk/) BusyBox installation prefix

After installation, we may get such a list of files and directories:

$ ls ~/busybox-ramdisk/
bin linuxrc sbin usr

But to install it on an existing ARM Android system, it may be easier to install BusyBox at run-time with its --install option. With --install, by default, hard links will be created; to create soft links (symbolic links), the -s option should be appended. If you want to create links across different file systems (for example, in an Android system, to install BusyBox to /system/bin while BusyBox itself is put in the /data directory), -s must be used. To use the -s option, BusyBox should be configured as below:

Busybox Settings --->
  General Configuration --->
    [*] Support --install [-s] to install applet links at runtime

Now, let's introduce how to install the above compiled BusyBox binaries on an existing ARM Android system. To do so, the Android system must be rooted to make sure the /data and / directories are writable. We will not show how to root an Android device; please get help from your product maker. Or, if no real rooted Android device is available, the Android emulator (emulator) provided by ADT (Android Development Toolkit, http://developer.android.com/sdk/index.html) can be used to start a rooted Android system on a virtual ARM Android device. To create a virtual ARM Android device and to use the Android emulator, please read the online documents provided by Google at http://developer.android.com/tools/help/android.html and http://developer.android.com/tools/help/emulator.html.

Now, let's assume a rooted Android system is already running with the USB debugging option enabled for Android Debug Bridge (adb, http://developer.android.com/tools/help/adb.html) support. To check if such a device is started, we can run:

$ adb devices
List of devices attached
emulator-5554 device

As we can see, a virtual Android device running on the Android emulator, emulator-5554, is there. Now we are able to show how to install the BusyBox binaries on the existing Android system. Since the dynamically linked and statically linked BusyBox binaries are different, we will introduce how to install each of them.

Install the statically linked BusyBox binary

To install the statically linked BusyBox binary, we only need to upload the BusyBox binary itself:

$ adb push busybox /data/

Afterwards, install it with the --install option, for example, to the /bin directory of the Android system:

$ adb shell
root@android:/ # mount -o remount,rw /
root@android:/ # mkdir /bin/
root@android:/ # /data/busybox --install -s /bin

To be able to create the /bin directory for installation, the / directory is remounted to be writable. After installation, soft links are created under /bin for all of the built-in applets.
If the -s option is not used, it will fail to create hard links across the /bin and /data directories; that's why we must use the -s option. The failure log looks like:

busybox: /bin/[: Invalid cross-device link
busybox: /bin/[[: Invalid cross-device link
busybox: /bin/acpid: Invalid cross-device link
busybox: /bin/add-shell: Invalid cross-device link
(...truncated...)

To execute the just installed applets, using md5sum as an example:

$ /bin/md5sum /bin/ls
19994347b06d5ef7dbcbce0932960395 /bin/ls

To run the applets without the full path, the /bin directory should be appended to the PATH variable:

$ export PATH=$PATH:/bin

Then, all of the BusyBox applets can be executed directly, which means we have installed BusyBox successfully. To make the settings permanent, the above commands can be added to a script, and such a script can be run as an Android service.

Install the dynamically linked BusyBox binary

To install a dynamically linked BusyBox, besides the installation of the BusyBox binary itself, the required dynamic linker/loader (ld-linux.so) and the dependent shared libraries (libc.so and libm.so) should be installed too. As the basic installation procedure is the same as the one for the statically linked BusyBox, herein we only introduce how to install the required ld-linux.so.3, libc.so.6, and libm.so.6.
Without the above dynamic linker/loader and libraries, we may get an error like this while running the dynamically linked BusyBox:

$ /data/busybox --install -s /bin
/system/bin/sh: /data/busybox: No such file or directory

Before installation, create another /lib directory on the target Android system and then upload the above files to it:

$ adb shell mkdir /lib
$ adb push /usr/arm-linux-gnueabi/lib/ld-linux.so.3 /lib/
$ adb push /usr/arm-linux-gnueabi/lib/libc.so.6 /lib/
$ adb push /usr/arm-linux-gnueabi/lib/libm.so.6 /lib/

With the above installation, the dynamically linked BusyBox binary should also be able to run, and we are able to install and execute its applets as before:

$ /data/busybox --install -s /bin

As we can see, the installation of the dynamically linked BusyBox binary requires the extra installation of the dependent linker/loader and libraries. For real embedded system development, if only the BusyBox binary itself uses the shared libraries, static linking should be used at compile time to avoid the extra installation and the cost of run-time linking; otherwise, if many binaries share the same libraries, dynamic linking may be preferable to reduce the total size on size-critical embedded systems.

Summary

In this article we learned about compiling and installing BusyBox. BusyBox is a free, GPL-licensed toolbox aimed at the embedded world. It is a collection of tiny versions of many common Unix utilities.

Resources for Article:
Further resources on this subject:
Embedding Doctests in Python Docstrings [Article]
The Business Layer (Java EE 7 First Look) [Article]
Joomla! 1.6: Organizing and Managing Content [Article]