C++ Programming for Linux Systems
Create robust enterprise software for Linux and Unix-based operating systems

Product type: Paperback
Published: Sep 2023
Publisher: Packt
ISBN-13: 9781805129004
Length: 288 pages
Edition: 1st Edition

Authors (2): Stanimir Lukanov, Desislav Andreev
Table of Contents (15)

Preface
Part 1: Securing the Fundamentals
Chapter 1: Getting Started with Linux Systems and the POSIX Standard
Chapter 2: Learning More about Process Management
Chapter 3: Navigating through the Filesystems
Chapter 4: Diving Deep into the C++ Object
Chapter 5: Handling Errors with C++
Part 2: Advanced Techniques for System Programming
Chapter 6: Concurrent System Programming with C++
Chapter 7: Proceeding with Inter-Process Communication
Chapter 8: Using Clocks, Timers, and Signals in Linux
Chapter 9: Understanding the C++ Memory Model
Chapter 10: Using Coroutines in C++ for System Programming
Index
Other Books You May Enjoy

What is concurrency?

Modern cars have become highly intricate machines that provide not only transportation but also various other functionalities. These functionalities include infotainment systems, which allow users to play music and videos, and heating and air conditioning systems, which regulate the temperature for passengers. Consider a scenario in which these features did not work simultaneously. In such a case, the driver would have to choose between driving the car, listening to music, or staying in a comfortable climate. This is not what we expect from a car, right? We expect all of these features to be available at the same time, enhancing our driving experience and providing a comfortable trip. To achieve this, these features must operate in parallel.

But do they really run in parallel, or do they just run concurrently? Is there any difference?

In computer systems, concurrency and parallelism are similar in certain ways, but they are not the same. Imagine you have some work to do, but this work can be done in separate smaller chunks. Concurrency refers to the situation where multiple chunks of the work begin, execute, and finish during overlapping time intervals, without a guaranteed specific order of execution. On the other hand, parallelism is an execution policy where these chunks execute simultaneously on hardware with multiple computing resources, such as a multi-core processor.

Concurrency happens when multiple chunks of work, which we call tasks, are executed in an unspecified order over a certain period of time. The operating system may run some of the tasks while forcing the rest to wait. In concurrent execution, each task continuously competes for an execution slot because the operating system does not guarantee that all of them will run at once. Furthermore, it is entirely possible that while a task is executing, it is suddenly suspended and another task starts executing. This is called preemption. Clearly, in concurrent execution, the order in which the tasks run is not guaranteed.

Let’s get back to our car example. In modern cars, the infotainment system is responsible for performing many activities at once. For example, it can run navigation while allowing you to listen to music. This is possible because the system runs these tasks concurrently: it executes the tasks related to route calculation while also processing the music content. If the hardware has only a single core, then these tasks must run concurrently, interleaved on that core:

Figure 6.1 – Concurrent task execution

From the preceding figure, you can see that each task gets a non-deterministic amount of execution time in an unpredictable order. In addition, there is no guarantee that one task will finish before the next one starts. This is where preemption happens: while your task is running, it is suddenly suspended and another task is scheduled for execution. Keep in mind that task switching is not cheap. The system consumes processor time to perform this action, that is, to make the context switch. The conclusion is that we have to design our systems with these limitations in mind.

On the other hand, parallelism is a form of concurrency that involves executing multiple operations simultaneously on separate processing units. For example, a computer with multiple CPU cores can execute multiple tasks in parallel, which can lead to significant performance improvements. You don’t have to worry about context switching and preemption. Parallelism has its drawbacks, though, and we will discuss them thoroughly later.

Figure 6.2 – Parallel task execution

Going back to our car example, if the CPU of the infotainment system is multi-core, then the tasks related to the navigation system could be executed on one core and the tasks for music processing on another. In that case, you don’t have to design your code specifically to tolerate preemption. Of course, this is only true if you are sure your code will be executed in such an environment.

The fundamental connection between concurrency and parallelism lies in the fact that parallelism can be applied to concurrent computations without affecting the accuracy of the outcome, but the presence of concurrency alone does not guarantee parallelism.

In summary, concurrency is an important concept in computing that allows multiple tasks to make progress during overlapping time periods, even though simultaneous execution is not guaranteed. This can lead to improved performance and efficient resource utilization, but at the cost of more complicated code that must respect the pitfalls concurrency brings. Truly parallel execution of code, on the other hand, is easier to handle from a software perspective but must be supported by the underlying system.

In the next section, we will get familiar with the difference between execution threads and processes in Linux.
