Asynchronous Programming with C++

Build blazing-fast software with multithreading and asynchronous programming for ultimate efficiency

Product type Paperback
Published in Nov 2024
Publisher Packt
ISBN-13 9781835884249
Length 424 pages
Edition 1st Edition
Languages
Authors (2): Javier Reguera Salgado, Juan Rufes

Table of Contents (21)

Preface
1. Part 1: Foundations of Parallel Programming and Process Management
2. Chapter 1: Parallel Programming Paradigms
3. Chapter 2: Processes, Threads, and Services
4. Part 2: Advanced Thread Management and Synchronization Techniques
5. Chapter 3: How to Create and Manage Threads in C++
6. Chapter 4: Thread Synchronization with Locks
7. Chapter 5: Atomic Operations
8. Part 3: Asynchronous Programming with Promises, Futures, and Coroutines
9. Chapter 6: Promises and Futures
10. Chapter 7: The Async Function
11. Chapter 8: Asynchronous Programming Using Coroutines
12. Part 4: Advanced Asynchronous Programming with Boost Libraries
13. Chapter 9: Asynchronous Programming Using Boost.Asio
14. Chapter 10: Coroutines with Boost.Cobalt
15. Part 5: Debugging, Testing, and Performance Optimization in Asynchronous Programming
16. Chapter 11: Logging and Debugging Asynchronous Software
17. Chapter 12: Sanitizing and Testing Asynchronous Software
18. Chapter 13: Improving Asynchronous Software Performance
19. Index
20. Other Books You May Enjoy

Processes in Linux

A process can be defined as an instance of a running program. It includes the program’s code, the threads belonging to the process (each with its own program counter), the stack (an area of memory containing temporary data such as function parameters, return addresses, and local variables), the heap, for dynamically allocated memory, and the data section, containing global and initialized variables. Each process operates within its own virtual address space and is isolated from other processes, ensuring that its operations do not interfere directly with those of others.

Process life cycle – creation, execution, and termination

The life cycle of a process can be broken down into three primary stages: creation, execution, and termination:

  • Creation: A new process is created with the fork() system call, which duplicates an existing one. The parent process is the one that calls fork(), and the newly created process is its child. This mechanism is essential for executing new programs within the system and is a precursor to executing different tasks concurrently.
  • Execution: After creation, the child process may execute the same code as the parent or use the exec() family of system calls to load and run a different program.

    If the parent process has more than one thread of execution, only the thread calling fork() is duplicated in the child process. Consequently, the child process contains a single thread: the one that executed the fork() system call.

    Since only the thread that called fork() is copied to the child, any mutexes (mutual exclusion locks), condition variables, or other synchronization primitives held by other threads at the time of the fork remain in their then-current state in the parent but do not carry over to the child in a usable way. This can lead to complex synchronization issues: mutexes that were locked by threads that do not exist in the child might remain locked, potentially causing deadlocks if the child tries to unlock or wait on them.

    At this stage, the process performs its designated operations such as reading from or writing to files and communicating with other processes.

  • Termination: A process terminates either voluntarily, by calling the exit() system call, or involuntarily, due to receiving a signal from another process that causes it to stop. Upon termination, the process returns an exit status to its parent process and releases its resources back to the system.

The process life cycle is integral to asynchronous operations as it enables the concurrent execution of multiple tasks.

Each process is uniquely identified by a Process ID (PID), an integer that the kernel uses to manage processes. PIDs are used to control and monitor processes. Parent processes also use PIDs to communicate with or control the execution of child processes, such as waiting for them to terminate or sending signals.

Linux provides mechanisms for process control and signaling, allowing processes to be managed and communicated with asynchronously. Signals are one of the primary means of IPC, enabling processes to interrupt one another or be notified of events. For example, the kill command can send signals to stop a process or to prompt it to reload its configuration files.

Process scheduling is how the Linux kernel allocates CPU time to processes. The scheduler determines which process runs at any given time, based on scheduling algorithms and policies that aim to optimize for factors such as responsiveness and efficiency. Processes can be in various states, such as running, waiting, or stopped, and the scheduler transitions them between these states to manage execution efficiently.

Exploring IPC

In the Linux operating system, processes operate in isolation, meaning that they cannot directly access the memory space of other processes. This isolated nature of processes presents challenges when multiple processes need to communicate and synchronize their actions. To address these challenges, the Linux kernel provides a versatile set of IPC mechanisms. Each IPC mechanism is tailored to suit different scenarios and requirements, enabling developers to build complex, high-performance applications that leverage asynchronous processing effectively.

Understanding these IPC techniques is crucial for developers aiming to create scalable and efficient applications. IPC allows processes to exchange data, share resources, and coordinate their activities, facilitating smooth and reliable communication between different components of a software system. By utilizing the appropriate IPC mechanism, developers can achieve improved throughput, reduced latency, and enhanced concurrency in their applications, leading to better performance and user experiences.

In a multitasking environment, where multiple processes run concurrently, IPC plays a vital role in enabling the efficient and coordinated execution of tasks. For example, consider a web server application that handles multiple concurrent requests from clients. The web server process might use IPC to communicate with the child processes responsible for processing each request. This approach allows the web server to handle multiple requests simultaneously, improving the overall performance and scalability of the application.

Another common scenario where IPC is essential is in distributed systems or microservice architectures. In such environments, multiple independent processes or services need to communicate and collaborate to achieve a common goal. IPC mechanisms such as message queues, sockets, and Remote Procedure Calls (RPCs) enable these processes to exchange messages, invoke methods on remote objects, and synchronize their actions, ensuring seamless and reliable communication.

By leveraging the IPC mechanisms provided by the Linux kernel, developers can design systems where multiple processes can work together harmoniously. This enables the creation of complex, high-performance applications that utilize system resources efficiently, handle concurrent tasks effectively, and scale to meet increasing demands effortlessly.

IPC mechanisms in Linux

Linux supports several IPC mechanisms, each with its unique characteristics and use cases.

The fundamental IPC mechanisms supported by the Linux operating system include shared memory, which is commonly employed for process communication on a single server, and sockets, which facilitate inter-server communication. There are other mechanisms (which are briefly described here), but shared memory and sockets are the most commonly used:

  • Pipes and named pipes: Pipes are one of the simplest forms of IPC, allowing for unidirectional communication between processes. A named pipe, or First-in-First-out (FIFO), extends this concept by providing a pipe that is accessible via a name in the filesystem, allowing unrelated processes to communicate.
  • Signals: Signals are a form of software interrupt that can be sent to a process to notify it of events. While they are not a method for transferring data, signals are useful for controlling process behavior and triggering actions within processes.
  • Message queues: Message queues allow processes to exchange messages in a FIFO manner. Unlike pipes, message queues support asynchronous communication, whereby messages are stored in a queue and can be retrieved by the receiving process at its convenience.
  • Semaphores: Semaphores are used for synchronization, helping processes manage access to shared resources. They prevent race conditions by ensuring that only a specified number of processes can access a resource at any given time.
  • Shared memory: Shared memory is a fundamental IPC concept that enables multiple processes to access and manipulate the same segment of physical memory. It offers a blazing-fast method for exchanging data between processes, reducing the need for time-consuming data copying operations. This technique is particularly advantageous when dealing with large datasets or when high-speed communication is required.

    The mechanism involves creating a shared memory segment, a dedicated portion of physical memory accessible by multiple processes. This segment is treated as a common workspace, allowing processes to read, write, and collaboratively modify data. To ensure data integrity and prevent conflicts, shared memory requires synchronization mechanisms such as semaphores or mutexes. These regulate access to the segment, preventing multiple processes from modifying the same data simultaneously. This coordination is crucial to maintain data consistency and avoid overwriting or corruption.

    Shared memory is often the preferred IPC mechanism in single-server environments where performance is paramount. Its primary advantage lies in its speed. Since data is directly shared in physical memory without the need for intermediate copying or context switching, it significantly reduces communication overhead and minimizes latency.

    However, shared memory also comes with certain considerations. It requires careful management to prevent race conditions and memory leaks. Processes accessing shared memory must adhere to well-defined protocols to ensure data integrity and avoid deadlocks. Additionally, shared memory is typically implemented as a system-level feature, requiring specific operating system support and potentially introducing platform-specific dependencies.

    Despite these considerations, shared memory remains a powerful and widely used IPC technique, particularly in applications where speed and performance are critical factors.

  • Sockets: Sockets are a fundamental mechanism for IPC in operating systems. They provide a way for processes to communicate with each other, either within the same machine or across networks. Sockets are used to establish and maintain connections between processes, and they support both connection-oriented and connectionless communication.

    Connection-oriented communication establishes a reliable connection between two processes before any data is transferred. It is often used for applications such as file transfer and remote login, where it is important that all data is delivered reliably and in the correct order. Connectionless communication transfers data without establishing a connection first. It is often used for applications such as streaming media and real-time gaming, where low latency matters more than guaranteed delivery of all data.

    Sockets are the backbone of networked applications. They are used by a wide variety of applications, including web browsers, email clients, and file-sharing applications. Sockets are also used by many operating system services, such as the Network File System (NFS) and the Domain Name System (DNS).

    Here are some of the key benefits of using sockets:

    • Reliability: Connection-oriented sockets provide reliable, ordered delivery between processes, even when those processes are located on different machines.
    • Scalability: Sockets can be used to support a large number of concurrent connections, making them ideal for applications that need to handle a lot of traffic.
    • Flexibility: Sockets can be used to implement a wide variety of communication protocols, making them suitable for a wide range of applications.
    • Use in IPC: Sockets are a powerful tool for IPC, used by a wide variety of applications and essential for building scalable, reliable, and flexible networked software.

Microservices-based applications are an example of asynchronous programming, with different processes communicating with each other asynchronously. A simple example is a log processor: different processes generate log entries and send them to another process for further processing, such as special formatting, deduplication, and statistics. The producers just send the log lines without waiting for any reply from the receiving process.

In this section, we looked at processes in Linux, their life cycle, and how IPC is implemented by the operating system. In the next section, we will introduce a special kind of Linux process called daemons.
