
Concurrency programming 101: Why do programmers hang by a thread?

  • 4 min read
  • 03 Apr 2018

A thread can be defined as an ordered stream of instructions that the operating system can schedule to run. Threads typically live within processes, and consist of a program counter, a stack, and a set of registers, as well as an identifier. A thread is the smallest unit of execution to which a processor can allocate time.

Threads can interact with shared resources, and communication between multiple threads is possible. They can also share memory, reading and writing the same memory addresses, but therein lies an issue. When two threads start sharing memory and you have no way to guarantee the order of their execution, you can start seeing subtle bugs that give you the wrong values or crash your system altogether. These issues are primarily caused by race conditions, an important topic for another post.
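Race conditions are easy to trigger in practice. The sketch below (not from the book) shows the classic lost-update problem: several threads incrementing a shared counter. The `threading.Lock` makes the read-modify-write atomic; remove it and the final count will often fall short.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # counter += 1 is a read-modify-write: without the lock, two
        # threads can read the same value and one update gets lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; typically less without it
```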

The following figure shows how multiple threads can exist on multiple different CPUs:

[Figure: multiple threads existing on multiple different CPUs]

Types of threads


Within a typical operating system, there are two distinct types of threads:

  • User-level threads: Threads that we can actively create, run, and kill for all of our various tasks
  • Kernel-level threads: Very low-level threads acting on behalf of the operating system


Python works at the user level, and thus everything we cover here will be focused primarily on these user-level threads.
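Creating and joining a user-level thread in Python takes only a few lines with the standard library's `threading` module. A minimal sketch (the worker function and its names are purely illustrative):

```python
import threading

def worker(task_name):
    # Runs in its own user-level thread, scheduled on our behalf
    print(f"{task_name} running in {threading.current_thread().name}")

t = threading.Thread(target=worker, args=("task-1",), name="worker-thread")
t.start()   # begin executing worker() concurrently
t.join()    # block until the thread has finished
```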

What is multithreading?


When people talk about multithreaded processors, they are typically referring to a processor that can run multiple threads by utilizing a single core that switches context between those threads very quickly. This context switching takes place in such a small amount of time that we could be forgiven for thinking that the threads are running in parallel when, in fact, they are not.

When trying to understand multithreading, it's best if you think of a multithreaded program as an office. In a single-threaded program, there would only be one person working in this office at all times, handling all of the work in a sequential manner. This would become an issue if we consider what happens when this solitary worker becomes bogged down with administrative paperwork, and is unable to move on to different work. They would be unable to cope, and wouldn't be able to deal with new incoming sales, thus costing our metaphorical business money.

With multithreading, our single solitary worker becomes an excellent multi-tasker, and is able to work on multiple things at different times. They can make progress on some paperwork, and then switch context to a new task when something starts preventing them from doing further work on said paperwork. By being able to switch context when something is blocking them, they are able to do far more work in a shorter period of time, and thus make our business more money.

In this example, it's important to note that we are still limited to only one worker, or processing core. If we wanted to improve the amount of work the business could do and complete work in parallel, then we would have to employ other workers, or processes as we would call them in Python.
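In Python, those extra "workers" come from the standard library's `multiprocessing` module. A minimal sketch (the function and pool size are illustrative, not from the book):

```python
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == "__main__":
    # Each worker is a separate process with its own interpreter,
    # so the squares are computed in parallel across CPU cores.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The `if __name__ == "__main__":` guard matters here: on platforms that spawn rather than fork, each worker process re-imports this module, and the guard stops the children from recursively creating pools of their own.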

Let's see a few advantages of threading:

  • Multiple threads are excellent for speeding up blocking, I/O-bound programs
  • They are lightweight in terms of memory footprint when compared to processes
  • Threads share resources, and thus communication between them is easier
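The I/O speed-up in the first advantage is easy to demonstrate. In this sketch (not from the book), `time.sleep` stands in for a blocking network or disk call; five such waits overlap instead of running back to back:

```python
import threading
import time

def fetch(results, i):
    time.sleep(0.2)      # stand-in for a blocking I/O call
    results[i] = i * 10  # record this "response"

results = [None] * 5
threads = [threading.Thread(target=fetch, args=(results, i)) for i in range(5)]

start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

print(results)            # [0, 10, 20, 30, 40]
print(f"{elapsed:.2f}s")  # ~0.2s: the waits overlap, rather than ~1.0s sequentially
```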


There are some disadvantages too, which are as follows:

  • CPython threads are hamstrung by the limitations of the global interpreter lock (GIL), which we'll go into in more depth in the next chapter
  • While communication between threads may be easier, you must be very careful not to implement code that is subject to race conditions
  • It's computationally expensive to switch context between multiple threads; by adding multiple threads, you could see a degradation in your program's overall performance

This is an excerpt from the book Learning Concurrency in Python by Elliot Forbes. To learn how to deal with issues such as deadlocks and race conditions that go hand in hand with concurrent programming, be sure to check out the book.

