Mastering C++ Multithreading
Write robust, concurrent, and parallel applications

Product type: Paperback
Published: July 2017
Publisher: Packt
ISBN-13: 9781787121706
Length: 244 pages
Edition: 1st

Author: Maya Posch

Table of Contents

Preface
1. Revisiting Multithreading
2. Multithreading Implementation on the Processor and OS
3. C++ Multithreading APIs
4. Thread Synchronization and Communication
5. Native C++ Threads and Primitives
6. Debugging Multithreaded Code
7. Best Practices
8. Atomic Operations - Working with the Hardware
9. Multithreading with Distributed Computing
10. Multithreading with GPGPU

MPI versus threads


One might think that the easiest approach would be to use MPI to allocate one instance of the MPI application to each CPU core on every cluster node, and this would indeed be the easiest approach. It would not, however, be the fastest solution.

Although MPI is likely the best choice in this context for communication between processes across a network, within a single system (whether single- or multi-CPU) using multithreading makes a lot of sense.

The main reason for this is simply that communication between threads is significantly faster than inter-process communication, especially when using a generalized communication layer such as MPI.
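
To make this difference concrete, the following is a minimal sketch using only standard C++11 facilities (the buffer size and values are arbitrary illustrations): two threads exchange data simply by writing to and reading from shared memory, whereas the equivalent exchange between MPI processes would have to copy the buffer between separate address spaces, for example via MPI_Send and MPI_Recv.

// Sketch: inter-thread "communication" via shared memory. The producer fills
// a shared buffer and raises a flag; the consumer waits for the flag and then
// reads the data in place. No copy between address spaces is required, unlike
// a message-passing exchange between processes.
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

int main() {
    std::vector<double> shared(1024);   // buffer visible to both threads
    std::atomic<bool> ready{false};

    std::thread producer([&] {
        for (std::size_t i = 0; i < shared.size(); ++i) {
            shared[i] = static_cast<double>(i) * 0.5;
        }
        ready.store(true, std::memory_order_release);    // publish the data
    });

    std::thread consumer([&] {
        while (!ready.load(std::memory_order_acquire)) {
            std::this_thread::yield();                    // wait for the data
        }
        double sum = 0.0;
        for (double v : shared) { sum += v; }             // read it in place
        (void)sum;
    });

    producer.join();
    consumer.join();
    return 0;
}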

One could instead write an application that uses MPI to communicate across the cluster's network, allocating one instance of the application to each MPI node. The application itself would then detect the number of CPU cores on that system and create one thread for each core (a minimal sketch of this pattern follows the list below). This hybrid MPI approach, as it is often called, is commonly used for the advantages it provides:

  • Faster...
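
The following is a minimal sketch of the hybrid pattern described above, assuming an MPI implementation that supports MPI_THREAD_FUNNELED together with C++11 threads; the per-core work() routine is a hypothetical placeholder, not part of any library.

// Hybrid MPI sketch: one MPI process per cluster node, one worker thread per
// detected CPU core on that node. Only the main thread makes MPI calls
// (MPI_THREAD_FUNNELED); the worker threads communicate through shared memory.
#include <mpi.h>
#include <thread>
#include <vector>

void work(int rank, unsigned core) {
    // Per-core computation for this node would go here.
    (void)rank; (void)core;
}

int main(int argc, char** argv) {
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Detect the number of CPU cores on this node and start one thread per core.
    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) { cores = 1; }

    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back(work, rank, c);
    }
    for (auto& t : workers) { t.join(); }

    // Inter-node communication via MPI would happen here, from the main thread.
    MPI_Finalize();
    return 0;
}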