Asynchronous Programming with C++

You're reading from Asynchronous Programming with C++: Build blazing-fast software with multithreading and asynchronous programming for ultimate efficiency

Product type: Paperback
Published: November 2024
Publisher: Packt
ISBN-13: 9781835884249
Length: 424 pages
Edition: 1st
Authors (2): Javier Reguera Salgado and Juan Rufes
Table of Contents (21 chapters)

Preface
Part 1: Foundations of Parallel Programming and Process Management
Chapter 1: Parallel Programming Paradigms
Chapter 2: Processes, Threads, and Services
Part 2: Advanced Thread Management and Synchronization Techniques
Chapter 3: How to Create and Manage Threads in C++
Chapter 4: Thread Synchronization with Locks
Chapter 5: Atomic Operations
Part 3: Asynchronous Programming with Promises, Futures, and Coroutines
Chapter 6: Promises and Futures
Chapter 7: The Async Function
Chapter 8: Asynchronous Programming Using Coroutines
Part 4: Advanced Asynchronous Programming with Boost Libraries
Chapter 9: Asynchronous Programming Using Boost.Asio
Chapter 10: Coroutines with Boost.Cobalt
Part 5: Debugging, Testing, and Performance Optimization in Asynchronous Programming
Chapter 11: Logging and Debugging Asynchronous Software
Chapter 12: Sanitizing and Testing Asynchronous Software
Chapter 13: Improving Asynchronous Software Performance
Index
Other Books You May Enjoy

Threads

Processes and threads represent two fundamental ways of executing code concurrently, but they differ significantly in their operation and resource management. A process is an instance of a running program that owns its private set of resources, including memory, file descriptors, and execution context. Processes are isolated from each other, providing robust stability across the system since the failure of one process generally does not affect others.

Threads are a fundamental concept in computer science, representing a lightweight and efficient way to execute multiple tasks within a single process. In contrast to processes, which are independent entities with their own private memory space and resources, threads are closely intertwined with the process they belong to. This intimate relationship allows threads to share the same memory space and resources, including file descriptors, heap memory, and any other global data structures allocated by the process.

One of the key advantages of threads is their ability to communicate and share data efficiently. Since all threads within a process share the same memory space, they can directly access and modify common variables without the need for complex inter-process communication (IPC) mechanisms. This shared environment enables rapid data exchange and facilitates the implementation of concurrent algorithms and data structures.
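As a minimal sketch of this shared-memory model (an illustrative example, not the book's code), the following program launches two std::thread objects that increment one shared counter directly, with no pipes, sockets, or other IPC involved; std::atomic keeps the concurrent increments safe, where a plain int would not be:

#include <atomic>
#include <iostream>
#include <thread>

// One counter in the process's memory, visible to every thread.
std::atomic<int> counter{0};

void work() {
    for (int i = 0; i < 100000; ++i)
        ++counter;  // both threads modify the same memory location directly
}

int main() {
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::cout << counter << '\n';  // prints 200000
}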

However, sharing the same memory space also introduces the challenge of managing access to shared resources. To prevent data corruption and ensure the integrity of shared data, threads must employ synchronization mechanisms such as locks, semaphores, or mutexes. These mechanisms enforce rules and protocols for accessing shared resources, ensuring that only one thread can access a particular resource at any given time.

Effective synchronization is crucial in multithreaded programming to avoid race conditions, deadlocks, and other concurrency-related issues.

To address these challenges, various synchronization primitives and techniques have been developed. These include mutexes, which provide exclusive access to a shared resource; semaphores, which allow controlled access to a limited number of resources; and condition variables, which enable threads to wait for specific conditions to be met before proceeding.
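The following producer/consumer sketch (again illustrative, not taken from the book) combines two of these primitives: an std::mutex grants exclusive access to a shared queue, and an std::condition_variable lets the consumer sleep until data is available:

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::queue<int> items;  // shared resource protected by m
bool done = false;

void producer() {
    for (int i = 0; i < 5; ++i) {
        {
            std::lock_guard<std::mutex> lock(m);  // exclusive access
            items.push(i);
        }
        cv.notify_one();  // wake the waiting consumer
    }
    {
        std::lock_guard<std::mutex> lock(m);
        done = true;
    }
    cv.notify_one();
}

void consumer() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        // Wait until there is work or the producer has finished.
        cv.wait(lock, [] { return !items.empty() || done; });
        while (!items.empty()) {
            std::cout << "got " << items.front() << '\n';
            items.pop();
        }
        if (done) return;
    }
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}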

By carefully managing synchronization and employing appropriate concurrency patterns, developers can harness the power of threads to achieve high performance and scalability in their applications. Threads are particularly well-suited for tasks that can be parallelized, such as image processing, scientific simulations, and web servers, where multiple independent computations can be executed concurrently.

The threads described so far are system threads, meaning they are created and managed by the kernel. However, there are scenarios, which we will explore in depth in Chapter 8, where we require a multitude of threads; in such cases, the system might not have sufficient resources to create that many system threads. The solution to this problem is user threads. One approach to implementing user threads is coroutines, which have been part of the C++ standard since C++20.

Coroutines are a relatively new feature in C++. They can be defined as functions that can be paused and resumed at specific points, allowing cooperative multitasking within a single thread. Unlike standard functions, which run from start to finish without interruption, coroutines can suspend their execution and yield control back to the caller, which can later resume the coroutine from the point where it was paused.
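To make this concrete, here is a minimal generator coroutine, a hand-rolled sketch rather than the book's code. The IntGenerator type and its promise_type are illustrative names; the standard library only gained a ready-made std::generator in C++23, so C++20 code typically writes this boilerplate itself:

#include <coroutine>
#include <iostream>

struct IntGenerator {
    struct promise_type {
        int current = 0;
        IntGenerator get_return_object() {
            return IntGenerator{
                std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        std::suspend_always yield_value(int v) {  // called by co_yield
            current = v;
            return {};
        }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }
    };

    std::coroutine_handle<promise_type> handle;
    ~IntGenerator() { if (handle) handle.destroy(); }
    // A real type would also disable copying to avoid a double destroy.

    bool next() {            // resume; false once the coroutine is done
        handle.resume();
        return !handle.done();
    }
    int value() const { return handle.promise().current; }
};

// The coroutine: pauses at every co_yield and resumes where it left off.
IntGenerator counter(int from, int to) {
    for (int i = from; i <= to; ++i)
        co_yield i;
}

int main() {
    auto gen = counter(1, 3);
    while (gen.next())
        std::cout << gen.value() << '\n';  // prints 1, 2, 3
}

Each call to next() resumes the coroutine until the following co_yield; between calls, the function's state (here, the loop variable i) is preserved in the coroutine frame rather than on a thread's stack.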

Coroutines are much more lightweight than system threads: they can be created and destroyed much more quickly, and they require less overhead.

Coroutines are cooperative, which means that they must explicitly yield control to the caller in order to switch execution context. This can be a disadvantage in some cases, but it can also be an advantage, as it gives the user program more control over the execution of coroutines.

Coroutines can be used to create a variety of concurrency patterns. For example, coroutines can be used to implement tasks, which are lightweight units of work that can be scheduled and run concurrently. Coroutines can also be used to implement channels, which pass data between concurrently running tasks.

Coroutines can be classified into stackful and stackless categories. C++20 coroutines are stackless. We will see these concepts in depth in Chapter 8.

Overall, coroutines are a powerful tool for creating concurrent programs in C++. They are lightweight, cooperative, and can be used to implement a variety of concurrency patterns. They cannot implement parallelism on their own, however, because a coroutine still needs a CPU execution context, which only a thread can provide.

Thread life cycle

The life cycle of a system thread, often referred to as a lightweight process, encompasses the stages from its creation until its termination. Each stage plays a crucial role in managing and utilizing threads in a concurrent programming environment:

  1. Creation: This phase begins when a new thread is created in the system. On POSIX systems, this involves calling the pthread_create() function, which takes several parameters. One critical parameter is the thread’s attributes, such as its scheduling policy, stack size, and priority. Another essential parameter is the function that the thread will execute, known as the start routine. Upon successful creation, the thread is allocated its own stack and other resources.
  2. Execution: After creation, the thread starts executing its assigned start routine. During execution, the thread can perform various tasks independently or interact with other threads if necessary. Threads can also create and manage their own local variables and data structures, making them self-contained and capable of performing specific tasks concurrently.
  3. Synchronization: To ensure orderly access to shared resources and prevent data corruption, threads employ synchronization mechanisms. Common synchronization primitives include locks, semaphores, and barriers. Proper synchronization allows threads to coordinate their activities, avoiding race conditions, deadlocks, and other issues that can arise in concurrent programming.
  4. Termination: A thread can terminate in several ways. It can explicitly call pthread_exit() to terminate itself, or it can terminate by returning from its start routine. In some cases, a thread can be canceled by another thread using pthread_cancel(). Upon termination, the system reclaims the resources allocated to the thread, and any pending operations or locks held by the thread are released. The sketch after this list walks through the full cycle.
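As a minimal sketch of this life cycle using the POSIX thread API (assuming a POSIX platform; portable C++ code would more commonly use std::thread, covered in Chapter 3):

#include <pthread.h>
#include <cstdio>

// The start routine: execution begins here once the thread is created.
void* start_routine(void* arg) {
    int id = *static_cast<int*>(arg);
    std::printf("thread %d running\n", id);
    return nullptr;  // returning here is equivalent to pthread_exit(nullptr)
}

int main() {
    pthread_t tid;
    int id = 1;
    // Creation: default attributes (nullptr), start routine, and its argument.
    if (pthread_create(&tid, nullptr, start_routine, &id) != 0)
        return 1;
    // Termination: wait for the thread and let the system reclaim its resources.
    pthread_join(tid, nullptr);
    return 0;
}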

Understanding the life cycle of a system thread is essential for designing and implementing concurrent programs. By carefully managing thread creation, execution, synchronization, and termination, developers can create efficient and scalable applications that leverage the benefits of concurrency.

Thread scheduling

System threads, managed by the operating system kernel’s scheduler, are scheduled preemptively. The scheduler decides when to switch execution between threads based on factors such as thread priority, time-slice expiry, or blocking on a mutex. This context switch, controlled by the kernel, can incur significant overhead. The high cost of context switches, coupled with the resource usage of each thread (such as its own stack), makes coroutines a more efficient alternative for some applications, because we can run many coroutines on a single thread.

Coroutines offer several advantages. First, they reduce the overhead associated with context switches. Since switching on a coroutine yield or await is handled by user-space code rather than the kernel, the process is more lightweight and efficient. This results in significant performance gains, especially in scenarios where frequent context switching occurs.

Coroutines also provide greater control over scheduling. Developers can define custom scheduling policies based on the specific requirements of their application. This flexibility allows for fine-tuned task management, optimized resource utilization, and the performance characteristics the application demands.
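As an illustrative sketch (the Scheduler, Task, and Yield types are invented for this example, not part of any library), here is a tiny round-robin scheduler that interleaves two coroutines on a single thread, with every switch performed by user-space code at a co_await point:

#include <coroutine>
#include <deque>
#include <iostream>

// Round-robin scheduler: resumes queued coroutines in turn on this thread.
struct Scheduler {
    std::deque<std::coroutine_handle<>> ready;
    void run() {
        while (!ready.empty()) {
            auto h = ready.front();
            ready.pop_front();
            h.resume();                  // runs until the next co_await
            if (h.done()) h.destroy();   // reclaim the frame when finished
        }
    }
};

// Awaitable that suspends the current coroutine and re-queues it.
struct Yield {
    Scheduler& sched;
    bool await_ready() const noexcept { return false; }
    void await_suspend(std::coroutine_handle<> h) { sched.ready.push_back(h); }
    void await_resume() const noexcept {}
};

struct Task {
    struct promise_type {
        Task get_return_object() {
            return Task{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }
    };
    std::coroutine_handle<promise_type> handle;  // destroyed by the scheduler
};

Task worker(Scheduler& sched, const char* name, int steps) {
    for (int i = 0; i < steps; ++i) {
        std::cout << name << " step " << i << '\n';
        co_await Yield{sched};  // cooperative switch back to the scheduler
    }
}

int main() {
    Scheduler sched;
    Task a = worker(sched, "A", 3);
    Task b = worker(sched, "B", 3);
    sched.ready.push_back(a.handle);
    sched.ready.push_back(b.handle);
    sched.run();  // interleaves: A step 0, B step 0, A step 1, ...
}

Replacing the std::deque with a priority queue, or choosing which handle to resume next by any other rule, is exactly the kind of custom scheduling policy described above; no kernel involvement is needed.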

Another important feature of coroutines is that they are generally more lightweight than system threads. Stackless coroutines, such as those in C++20, do not maintain their own stack, which greatly reduces resource consumption and makes them suitable for resource-constrained environments.

Overall, coroutines offer a more efficient and flexible approach to concurrency, particularly in situations where frequent context switching is required or where fine-grained control over scheduling is essential. Threads can access the process’s memory, and this memory is shared among all the threads, so we need to control memory access carefully. This control is achieved through mechanisms called synchronization primitives.
