
Clusters, Parallel Computing, and Raspberry Pi – A Brief Background

  • 12 min read
  • 14 Nov 2013


So what is a cluster? In essence, it is a group of computers connected together over a network and able to work on a shared task. Each device on this network is often referred to as a node.

Thanks to the Raspberry Pi's low cost and small physical footprint, building a cluster to explore parallel computing has become far cheaper and easier to implement at home. It allows you to explore not only the software side, but the hardware as well.

While Raspberry Pis wouldn't be suitable for a fully fledged production system, they provide a great tool for learning the technologies that professional clusters are built upon. For example, they allow you to work with industry standards such as MPI, and with cutting-edge open source projects such as Hadoop.

This article will provide you with a basic background to parallel computing and the technologies associated with it. It will also provide you with an introduction to using the Raspberry Pi.

A very short history of parallel computing

The basic assumption behind parallel computing is that a larger problem can be divided into smaller chunks, which can then be operated on separately at the same time.

Related to parallelism is the concept of concurrency, but the two terms should not be confused.

Parallelism can be thought of as simultaneous execution and concurrency as the composition of independent processes. You will encounter both of these approaches in this article.

You can find out more about the differences between the two at the following site:

http://blog.golang.org/concurrency-is-not-parallelism

Parallel computing and related concepts have been in use by capital-intensive industries, such as aircraft design and defense, since the late 1950s and early 1960s. With the cost of hardware having dropped rapidly over the past five decades, and with the birth of open source operating systems and applications, home enthusiasts, students, and small companies now have the ability to leverage these technologies for their own uses.

Traditionally, parallel computing was found within High Performance Computing (HPC) architectures: systems characterized by the high speed and density of their calculations. The term you are probably most familiar with in this context is, of course, the supercomputer, which we shall look at next.

Supercomputers

The genesis of supercomputing can be found in the 1960s with a company called Control Data Corporation (CDC). Seymour Cray, an electrical engineer working for CDC, became known as the father of supercomputing due to his work on the CDC 6600, generally considered to be the first supercomputer. The CDC 6600 was the fastest computer in operation from 1964 to 1969.

In 1972, Cray left CDC and formed his own company, Cray Research. In 1975, Cray Research announced the Cray-1 supercomputer, which would go on to be one of the most successful supercomputers in history and remained in use at some institutions until the late 1980s.

The 1980s also saw a number of other players enter the market, including Intel, via the Caltech Concurrent Computation project whose machine contained 64 Intel 8086/8087 CPUs, and Thinking Machines Corporation with its CM-1 Connection Machine.

This preceded an explosion in the 1990s in the number of processors included in supercomputing machines. It was in this decade, thanks to brute-force computing power, that IBM famously beat world chess champion Garry Kasparov with the Deep Blue supercomputer.

The Deep Blue machine contained some 30 nodes based on IBM's RS/6000 SP parallel architecture, along with numerous special-purpose "chess chips".

By the 2000s, the number of processors had blossomed to tens of thousands working in parallel. As of June 2013, the title of fastest supercomputer was held by Tianhe-2, which contains 3,120,000 cores and is capable of 33.86 petaflops.

Parallel computing is not limited to the realm of supercomputing. Today we see these concepts present in multi-core and multiprocessor desktop machines. As well as single devices, we also have clusters of independent devices, each often containing a single core, that can be connected to work together over a network.

Since multi-core machines can be found in consumer electronics shops all across the world, we will look at these next.

Multi-core and multiprocessor machines

Machines packing multiple cores and processors are no longer just the domain of supercomputing. There is a good chance that your laptop or mobile phone contains more than one processing core, so how did we reach this point?

The mainstream adoption of parallel computing can be seen as a result of the cost of components dropping due to Moore's law. The essence of Moore's law is that the number of transistors in integrated circuits doubles roughly every 18 to 24 months.

This in turn has consistently pushed down the cost of hardware such as CPUs. As a result, manufacturers such as Dell and Apple have produced ever faster machines for the home market that easily outperform the supercomputers of old, which once took entire rooms to house.

Computers such as the 2013 Mac Pro can contain up to twelve cores: that is, a CPU that duplicates some of its key computational components twelve times. These machines cost a fraction of the price that the Cray-1 did at its launch.

Devices that contain multiple cores allow us to explore parallel programming on a single machine. One method that allows us to leverage multiple cores is the use of threads.

A thread can be thought of as a sequence of instructions, usually contained within a single lightweight process, that the operating system can schedule to run. From a programming perspective, this could be a separate function that runs independently of the main body of the program.

Thanks to the ability to use threads in application development, by the 1990s two standards had come to dominate the area of shared memory multiprocessor devices: POSIX Threads (Pthreads) and OpenMP.

POSIX Threads is a standardized C language interface, specified in the IEEE POSIX 1003.1c standard, for programming threads that can be used to implement parallelism.
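
As a brief illustration (a minimal sketch rather than code from this article), the following C program uses the standard Pthreads calls pthread_create and pthread_join to run a worker function alongside the main program; the filename used to compile it below is purely hypothetical:

    #include <pthread.h>
    #include <stdio.h>

    /* Worker function executed by the new thread. */
    static void *worker(void *arg)
    {
        int id = *(int *)arg;
        printf("Hello from thread %d\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t thread;
        int id = 1;

        /* Create a thread running worker(), then wait for it to finish. */
        if (pthread_create(&thread, NULL, worker, &id) != 0) {
            perror("pthread_create");
            return 1;
        }
        pthread_join(thread, NULL);

        printf("Main program finished\n");
        return 0;
    }

On a Linux system such as the Raspberry Pi's, this would typically be compiled with something like gcc -pthread hello_thread.c -o hello_thread.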

The other standard specified is OpenMP. To quote the OpenMP website, it can be described as:

OpenMP is a specification for a set of compiler directives, library routines, and environment variables that can be used to specify shared memory parallelism in Fortran and C/C++ programs.

http://openmp.org/

What this means in practice is that OpenMP is a standard that provides an API for dealing with problems such as multithreading and memory sharing. By including OpenMP in your project, you can write multithreaded applications without having to take care of many of the low-level implementation details required when writing an application purely with Pthreads.
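
To show what this looks like in practice, here is a small sketch (not code from this article) that uses the standard OpenMP directive #pragma omp parallel for to split a loop across the available cores, combining the per-thread results with a reduction:

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        long sum = 0;
        int i;

        /* Distribute the loop iterations across threads and
           combine the per-thread partial sums at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < 1000000; i++) {
            sum += i;
        }

        printf("Sum: %ld (up to %d threads available)\n",
               sum, omp_get_max_threads());
        return 0;
    }

With GCC, OpenMP support is enabled with the -fopenmp flag; the single pragma is all that is needed to parallelize the loop, with no explicit thread creation or joining.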

Commodity hardware clusters

As well as single devices containing many CPUs, we also have groups of commodity off-the-shelf (COTS) computers, which can be networked together into a Local Area Network (LAN). These have commonly been referred to as Beowulf clusters.

In the late 1990s, thanks to the drop in the cost of computer hardware, the implementation of Beowulf clusters became a popular topic, with Wired magazine publishing a how-to guide in 2000:

http://www.wired.com/wired/archive/8.12/beowulf.html

The Beowulf cluster has its origins at NASA in the early 1990s, Beowulf being the name given to the concept of a Network Of Workstations (NOW) for scientific computing devised by Donald J. Becker and Thomas Sterling.

The implementation of commodity hardware clusters running technologies such as MPI lies behind the Raspberry Pi-based projects we will be building in this article.
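
To give a feel for the kind of program such a cluster runs, here is the classic MPI "hello world" in C, a hedged sketch using the standard MPI API rather than code from this article. Assuming an MPI implementation such as MPICH or Open MPI is installed, it would be built with mpicc and launched across processes (or nodes) with a command along the lines of mpiexec -n 4 ./mpi_hello:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        /* Start the MPI runtime and find out which process we are. */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Hello from process %d of %d\n", rank, size);

        /* Shut down the MPI runtime cleanly. */
        MPI_Finalize();
        return 0;
    }

Each node in the cluster runs a copy of the same program and reports its rank, which is the basic model used by the MPI-based projects discussed later.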

Cloud computing

The next topic we will look at is cloud computing. You have probably heard the term before, as it is something of a buzzword at the moment.

At the core of the term is a set of technologies that are distributed, scalable, metered (as with utilities), able to run in parallel, and often built on virtual hardware. Virtual hardware is software that mimics the role of a real hardware device and can be programmed as if it were in fact a physical machine.

Examples of such software include VirtualBox, Red Hat Enterprise Virtualization, and Parallel Virtual Machine (PVM), the latter allowing a network of computers to be treated as a single distributed parallel processor. You can learn more about PVM here:

http://www.csm.ornl.gov/pvm/

Over the past decade, many large Internet-based companies have invested in cloud technologies, the most famous perhaps being Amazon. Having realized they were underutilizing a large proportion of their data centers, Amazon implemented a cloud computing-based architecture, which eventually resulted in a platform open to the public known as Amazon Web Services (AWS).

Products such as Amazon's Elastic Compute Cloud (EC2) have opened up cloud computing to small businesses and home consumers by allowing them to rent virtual computers to run their own applications and services. This is especially useful for those interested in building their own virtual computing clusters.

Due to the elasticity of cloud computing services such as EC2, it is easy to spin up many server instances and link them together to experiment with technologies such as Hadoop.

One area where cloud computing has become of particular use, especially when implementing Hadoop, is in the processing of big data.

Big data

The term big data has come to refer to data sets spanning terabytes or more. Often found in fields ranging from genomics to astrophysics, big data sets are difficult to work with and require huge amounts of memory and computational power to query.

These data sets need to be mined for information. Parallel technologies such as MapReduce, as realized in Apache Hadoop, provide a tool for dividing a large task such as this amongst multiple machines. Once divided, tasks are run to locate and compile the needed data.

Another Apache application is Hive, a data warehouse system for Hadoop that allows the use of a SQL-like language called HiveQL to query the stored data.

As more data is produced year-on-year by more computational devices, ranging from sensors to cameras, the ability to handle large datasets and process them in parallel to speed up queries will become ever more important.

These big data problems have in turn helped push the boundaries of parallel computing further, as many companies have come into being with the purpose of extracting information from the sea of data that now exists.

Raspberry Pi and parallel computing

Having reviewed some of the key terms of High Performance Computing, it is now time to turn our attention to the Raspberry Pi and how and why we intend to implement many of the ideas explained so far.

This article assumes that you are familiar with the basics of the Raspberry Pi and how it works, and that you have a basic understanding of programming. Throughout this article, the term Raspberry Pi will refer to the Model B version.

For those of you new to the device, we recommend reading a little more about it at the official Raspberry Pi home page:

http://www.raspberrypi.org/

Other topics covered in this article, such as Apache Hadoop, will also be accompanied with links to information that provides a more in-depth guide to the topic at hand.

Due to its small size and low cost, the Raspberry Pi makes a good alternative to building a cluster from desktop PCs or in the cloud on Amazon or similar providers, both of which can be expensive.

The Raspberry Pi comes with a built-in Ethernet port, which allows you to connect it to a switch, router, or similar device. Multiple Raspberry Pi devices connected to a switch can then be formed into a cluster; this model will form the basis of our hardware configuration in the article.

Unlike your laptop or PC, which may contain more than one CPU, the Raspberry Pi contains just a single ARM processor; however, multiple Raspberry Pis combined give us more CPUs to work with.

One benefit of the Raspberry Pi is that it uses SD cards as secondary storage, which can easily be copied, allowing you to create an image of the Raspberry Pi's operating system and then clone it for re-use on multiple machines. When starting out with the Raspberry Pi, this is a useful feature.

The Model B contains two USB ports allowing us to expand the device's storage capacity (and the speed of accessing the data) by using a USB hard drive instead of the SD card.

From the perspective of writing software, the Raspberry Pi can run various versions of the Linux operating system, as well as other operating systems such as FreeBSD, together with the software and tools associated with development on them. This allows us to implement the types of technology found in Beowulf clusters and other parallel systems. We shall provide an overview of these development tools next.

Programming languages and frameworks

A number of programming languages, including Fortran, C/C++, and Java, are available on the Raspberry Pi via the standard repositories. These can be used to write parallel applications using implementations of MPI, Hadoop, and the other frameworks we discussed earlier in this article.

Fortran, C, and C++ have a long history with parallel computing and will all be examined to varying degrees throughout the article. We will also be installing Java in order to write Hadoop-based MapReduce applications.

Fortran, due to its early use on supercomputing projects, is still popular today for parallel computing application development, as a large body of existing Fortran code performs specialized scientific calculations.

Apache Hadoop is an open source Java-based MapReduce framework designed for distributed parallel application development.

A MapReduce framework allows an application to take, for example, a number of data sets, divide them up, and mine each data set independently. This can take place on separate devices, and the results are then combined into a single data set from which we finally extract a meaningful value.
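
To make the divide-and-combine idea concrete, here is a deliberately simplified, single-machine C sketch of the pattern. It is not Hadoop code (Hadoop jobs are written against its Java API); it only illustrates how each chunk of data is "mapped" to a partial result independently before a "reduce" step combines the partial results into one value:

    #include <stdio.h>

    #define CHUNKS 4
    #define CHUNK_SIZE 5

    /* "Map" step: each chunk is processed independently (in a real
       framework this would happen on a separate node). */
    static long map_sum(const int *chunk, int len)
    {
        long partial = 0;
        for (int i = 0; i < len; i++)
            partial += chunk[i];
        return partial;
    }

    /* "Reduce" step: combine the partial results into one value. */
    static long reduce_sum(const long *partials, int count)
    {
        long total = 0;
        for (int i = 0; i < count; i++)
            total += partials[i];
        return total;
    }

    int main(void)
    {
        int data[CHUNKS][CHUNK_SIZE] = {
            {1, 2, 3, 4, 5},     {6, 7, 8, 9, 10},
            {11, 12, 13, 14, 15}, {16, 17, 18, 19, 20}
        };
        long partials[CHUNKS];

        for (int c = 0; c < CHUNKS; c++)
            partials[c] = map_sum(data[c], CHUNK_SIZE);

        printf("Total: %ld\n", reduce_sum(partials, CHUNKS));
        return 0;
    }

In Hadoop, the chunks would be blocks of a distributed file system, the map step would run on many nodes at once, and the framework itself would handle shuffling the partial results to the reducers.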

Summary

This concludes our short introduction to parallel computing and the tools we will be using on the Raspberry Pi.

You should now have a basic idea of some of the terms related to parallel computing and why using the Raspberry Pi is a fun and cheap way to build your own computing cluster.

Our next task will be to set up our first Raspberry Pi, including installing its operating system. Once setup is complete, we can then clone its SD card and re-use it for future machines.
