Expert Python Programming – Fourth Edition

Interfaces, Patterns, and Modularity

In this chapter, we will dive deep into the realm of design patterns through the lens of interfaces, patterns, and modularity. We already approached this realm when we introduced the concept of programming idioms. Idioms can be understood as small, well-recognized patterns for solving equally small problems. The key characteristic of a programming idiom is that it is specific to a single programming language. While idioms can often be ported to a different language, it is not guaranteed that the resulting code will feel natural to "native" users of that language.

Idioms are generally concerned with small programming constructs, usually just a few lines of code. Design patterns, on the other hand, deal with much larger code structures: functions and classes. They are also far more universal. Design patterns are reusable solutions to common design problems in software engineering. They are often language-agnostic and can thus be expressed in many programming languages.

In this chapter, we will take a quite unusual approach to the topic of design patterns. Many programming books start by going back to the unofficial origin of software design patterns: the book Design Patterns: Elements of Reusable Object-Oriented Software by Gamma, Helm, Johnson, and Vlissides. What usually follows is a lengthy catalog of classic design patterns with more or less idiomatic examples of their Python implementations: singletons, factories, adapters, flyweights, bridges, visitors, strategies, and so on.

There are also countless web articles and blogs doing exactly the same, so if you are interested in learning the classic design patterns, you shouldn't have any problems finding resources online.

If you are interested in learning about the implementation of "classic" design patterns in Python, you can visit the https://python-patterns.guide site. It provides a comprehensive catalog of design patterns together with Python code examples.

Instead, we will focus on two key "design pattern enablers":

  • Interfaces
  • Inversion of control and dependency injectors

These two concepts are "enablers" because without them we wouldn't even have proper language terms to talk about design patterns. By discussing the topic of interfaces and inversion of control, we will be able to better understand what the challenges are for building modular applications. And only by deeply understanding those challenges will we be able to figure out why we actually need patterns.

We will of course use numerous classic design patterns on the way, but we won't focus on any specific pattern.

Technical requirements

The following are Python packages that are mentioned in this chapter that you can download from PyPI:

  • zope.interface
  • mypy
  • redis
  • flask
  • injector
  • flask-injector

Information on how to install packages is included in Chapter 2, Modern Python Development Environments.

The code files for this chapter can be found at https://github.com/PacktPublishing/Expert-Python-Programming-Fourth-Edition/tree/main/Chapter%205.

Interfaces

Broadly speaking, an interface is an intermediary that takes part in the interaction between two entities. For instance, the interface of a car consists mainly of the steering wheel, pedals, gear stick, dashboard, knobs, and so on. The interface of a computer traditionally consists of a mouse, keyboard, and display.

In programming, interface may mean two things:

  • The overall shape of the interaction plane that code can have
  • The abstract definition of possible interactions with the code that is intentionally separated from its implementation

In the spirit of the first meaning, the interface is a specific combination of symbols used to interact with the unit of code. The interface of a function, for instance, will be the name of that function, its input arguments, and the output it returns. The interface of an object will be all of its methods that can be invoked and all the attributes that can be accessed.

Collections of units of code (functions, objects, classes) are often grouped into libraries. In Python, libraries take the form of modules and packages (collections of modules). They also have interfaces. Contents of modules and packages usually can be used in various combinations and you don't have to interact with all of their contents. That makes them programmable applications, and that's why interfaces of libraries are often referred to as Application Programming Interfaces (APIs).

This meaning of interface can be expanded to other elements of the computing world. Operating systems have interfaces in the form of filesystems and system calls. Web and remote services have interfaces in the form of communication protocols.

The second meaning of interface can be understood as a formalization of the first. Here, an interface is understood as a contract that a specific element of the code declares to fulfill. Such a formal interface can be extracted from the implementation and can live as a standalone entity. This makes it possible to build applications that depend on a specific interface but don't care about the actual implementation, as long as one exists and fulfills the contract.

This formal meaning of interface can also be expanded to larger programming concepts:

  • Libraries: The C programming language defines the API of its standard library, also known as the ISO C Library. Unlike Python, the C standard library has numerous implementations. For Linux, the most common is probably the GNU C Library (glibc), but it has alternatives like dietlibc or musl. Other operating systems come with their own ISO C Library implementations.
  • Operating System: The Portable Operating System Interface (POSIX) is a collection of standards that define a common interface for operating systems. There are many systems that are certified to be compliant with that standard (macOS and Solaris to name a couple). There are also operating systems that are mostly compliant (Linux, Android, OpenBSD, and many more). Instead of using the term "POSIX compliance," we can say that those systems implement the POSIX interface.
  • Web services: OpenID Connect (OIDC) is an open authentication standard built as an identity layer on top of the OAuth 2.0 authorization framework. Services that want to implement the OIDC standard must provide the specific well-defined interfaces described in this standard.

Formal interfaces are an extremely important concept in object-oriented programming languages. In this context, the interface abstracts either the form or purpose of the modeled object. It usually describes a collection of methods and attributes that a class should implement to provide the desired behavior.

In a purist approach, the definition of interface does not provide any usable implementation of methods. It just defines an explicit contract for any class that wishes to implement the interface. Interfaces are often composable. This means that a single class can implement multiple interfaces at once. In this way, interfaces are the key building block of design patterns. A single design pattern can be understood as a composition of specific interfaces. Similar to interfaces, design patterns do not have an inherent implementation. They are just reusable scaffolding for developers to solve common problems.

Python developers prefer duck typing over explicit interface definitions, but having well-defined interaction contracts between classes can often improve the overall quality of the software and reduce the area of potential errors. For instance, creators of a new interface implementation get a clear list of methods and attributes that a given class needs to expose. With proper implementation, it is impossible to forget a method that is required by a given interface.

Support for abstract interfaces is a cornerstone of many statically typed languages. Java, for instance, has interfaces: explicit declarations that a class fulfills a specific contract. This allows Java programmers to achieve polymorphism without type inheritance, which can sometimes become problematic. Go, on the other hand, doesn't have classes and doesn't offer type inheritance, but interfaces in Go allow for selected object-oriented patterns and polymorphism without it. For both of those languages, interfaces are like an explicit version of duck typing: Java and Go use interfaces to verify type safety at compile time, rather than relying on duck typing to tie things together at runtime.

Python has a completely different typing philosophy than these languages, so it has no native support for interfaces verified at compile time. Still, if you would like more explicit control over your application's interfaces, there are a handful of solutions to choose from:

  • Using a third-party framework like zope.interface that adds a notion of interfaces
  • Using Abstract Base Classes (ABCs)
  • Leveraging type annotations, typing.Protocol, and static type analyzers

We will carefully review each of those solutions in the following sections.
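As a quick taste of the third option before we get there: typing.Protocol (available since Python 3.8) lets you declare a structural interface that classes satisfy without inheriting from it, and with @runtime_checkable it even works with isinstance(). The SupportsClose and Resource names below are illustrative, not part of this chapter's examples:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class SupportsClose(Protocol):
    def close(self) -> None: ...


class Resource:
    # Note: no explicit inheritance from SupportsClose is needed
    def close(self) -> None:
        print("closed")


# runtime_checkable enables structural isinstance() checks
assert isinstance(Resource(), SupportsClose)
assert not isinstance(42, SupportsClose)
```

Static type checkers such as mypy perform the same structural matching at analysis time, without the runtime_checkable decorator.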

A bit of history: zope.interface

There are a few frameworks that allow you to build explicit interfaces in Python. The most notable one is part of the Zope project: the zope.interface package. Although Zope is not as popular nowadays as it was a decade ago, the zope.interface package is still one of the main components of the still-popular Twisted framework. zope.interface is also one of the oldest and still actively maintained interface frameworks commonly used in Python. It predates mainstream Python features like ABCs, so we will start with it and later see how it compares to other interface solutions.

The zope.interface package was created by Jim Fulton to mimic the features of Java interfaces at the time of its inception.

The interface concept works best in areas where a single abstraction can have multiple implementations or can be applied to different objects that probably shouldn't be entangled in a common inheritance structure. To better present this idea, we will take the example of a problem that deals with different entities that share some common traits without being exactly the same thing.

We will try to build a simple collider system that can detect collisions between multiple overlapping objects. This is something that could be used in a simple game or simulation. Our solution will be rather trivial and inefficient. Remember that the goal here is to explore the concept of interfaces and not to build a bulletproof collision engine for a blockbuster game.

The algorithm we will use is called Axis-Aligned Bounding Box (AABB). It is a simple way to detect a collision between two axis-aligned (no rotation) rectangles. It assumes that all elements that will be tested can be constrained with a rectangular bounding box. The algorithm is fairly simple and needs to compare only four rectangle coordinates:

Figure 5.1: Rectangle coordinate comparisons in the AABB algorithm

We will start with a simple function that checks whether two rectangles overlap:

def rects_collide(rect1, rect2):
    """Check collision between rectangles
    Rectangle coordinates:
        ┌─────(x2, y2)
        │            │
        (x1, y1) ────┘
    """
    return (
        rect1.x1 < rect2.x2 and
        rect1.x2 > rect2.x1 and
        rect1.y1 < rect2.y2 and
        rect1.y2 > rect2.y1
    )

We haven't defined any typing annotations but from the above code, it should be clearly visible that we expect both arguments of the rects_collide() function to have four attributes: x1, y1, x2, y2. These correspond to the coordinates of the lower-left and upper-right corners of the bounding box.

Having the rects_collide() function, we can define another function that will detect all collisions within a batch of objects. It can be as simple as follows:

import itertools

def find_collisions(objects):
    return [
        (item1, item2)
        for item1, item2
        in itertools.combinations(objects, 2)
        if rects_collide(
            item1.bounding_box,
            item2.bounding_box
        )
    ]

What is left is to define some classes of objects that can be tested for collisions with each other. We will model a few different shapes: a square, a rectangle, and a circle. Each shape is different, so each will have a different internal structure, and there is no sensible class that we could make their common ancestor. To keep things simple, we will use dataclasses and properties. The following are all the initial definitions:

from dataclasses import dataclass

@dataclass
class Square:
    x: float
    y: float
    size: float

    @property
    def bounding_box(self):
        return Box(
            self.x,
            self.y,
            self.x + self.size,
            self.y + self.size
        )

@dataclass
class Rect:
    x: float
    y: float
    width: float
    height: float

    @property
    def bounding_box(self):
        return Box(
            self.x,
            self.y,
            self.x + self.width,
            self.y + self.height
        )

@dataclass
class Circle:
    x: float
    y: float
    radius: float

    @property
    def bounding_box(self):
        return Box(
            self.x - self.radius,
            self.y - self.radius,
            self.x + self.radius,
            self.y + self.radius
        )

The only common thing about those classes (apart from being dataclasses) is the bounding_box property that returns the Box class instance. The Box class is also a dataclass:

@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float

Definitions of dataclasses are quite simple and don't require explanation. We can test if our system works by passing a bunch of instances to the find_collisions() function as in the following example:

for collision in find_collisions([
    Square(0, 0, 10),
    Rect(5, 5, 20, 20),
    Square(15, 20, 5),
    Circle(1, 1, 2),
]):
    print(collision)

If we did everything right, the above code should yield the following output with three collisions:

(Square(x=0, y=0, size=10), Rect(x=5, y=5, width=20, height=20))
(Square(x=0, y=0, size=10), Circle(x=1, y=1, radius=2))
(Rect(x=5, y=5, width=20, height=20), Square(x=15, y=20, size=5))

Everything is fine, but let's do a thought experiment. Imagine that our application grew a little bit and was extended with additional elements. If it's a game, someone could include objects representing sprites, actors, or effect particles. Let's say that someone defined the following Point class:

@dataclass
class Point:
    x: float
    y: float

What would happen if an instance of that class was put on the list of possible colliders? You would probably see an exception traceback similar to the following:

Traceback (most recent call last):
  File "/.../simple_colliders.py", line 115, in <module>
    for collision in find_collisions([
  File "/.../simple_colliders.py", line 24, in find_collisions
    return [
  File "/.../simple_colliders.py", line 30, in <listcomp>
    item2.bounding_box
AttributeError: 'Point' object has no attribute 'bounding_box'

That provides some clue about what the issue is, but the question is whether we could do better and catch such problems earlier. We could at least verify all input objects of the find_collisions() function to check whether they are collidable. But how can we do that?

Because none of the collidable classes share a common ancestor, we cannot easily use the isinstance() function to see if their types match. We can check for the bounding_box attribute using the hasattr() function, but doing that deeply enough to see whether that attribute has the correct structure would lead us to ugly code.

Here is where zope.interface comes in handy. The core class of the zope.interface package is the Interface class. It allows you to explicitly define a new interface. Let's define an ICollidable class that will be our declaration of anything that can be used in our collision system:

from zope.interface import Interface, Attribute

class ICollidable(Interface):
    bounding_box = Attribute("Object's bounding box")

The common convention in Zope is to prefix interface class names with I. The Attribute constructor denotes a desired attribute of the objects implementing the interface. Any method defined in the interface class will be treated as an interface method declaration. Such methods should have empty bodies; the usual convention is to include only a docstring.

When you have such an interface defined, you must denote which of your concrete classes implement it. This style of interface implementation is called explicit interfaces and is similar in nature to interfaces in Java. In order to denote the implementation of a specific interface, you need to use the implementer() class decorator. In our case, this will look as follows:

from zope.interface import implementer

@implementer(ICollidable)
@dataclass
class Square:
    ...

@implementer(ICollidable)
@dataclass
class Rect:
    ...

@implementer(ICollidable)
@dataclass
class Circle:
    ...

The bodies of the dataclasses in the above example have been truncated for the sake of brevity.

It is common to say that the interface defines a contract that a concrete implementation needs to fulfill. The main benefit of this design pattern is being able to verify consistency between contract and implementation before the object is used. With the ordinary duck-typing approach, you only find inconsistencies when there is a missing attribute or method at runtime.

With zope.interface, you can introspect the actual implementation using two methods from the zope.interface.verify module to find inconsistencies early on:

  • verifyClass(interface, class_object): This verifies the class object for the existence of methods and correctness of their signatures without looking for attributes.
  • verifyObject(interface, instance): This verifies the methods, their signatures, and also attributes of the actual object instance.

It means that we can extend the find_collisions() function to perform initial verification of object interfaces before further processing. We can do that as follows:

from zope.interface.verify import verifyObject

def find_collisions(objects):
    for item in objects:
        verifyObject(ICollidable, item)
    ...

Now, if someone passes an instance of a class that does not have the @implementer(ICollidable) decorator to the find_collisions() function, they will receive an exception traceback similar to this one:

Traceback (most recent call last):
  File "/.../colliders_interfaces.py", line 120, in <module>
    for collision in find_collisions([
  File "/.../colliders_interfaces.py", line 26, in find_collisions
    verifyObject(ICollidable, item)
  File "/.../site-packages/zope/interface/verify.py", line 172, in verifyObject
    return _verify(iface, candidate, tentative, vtype='o')
  File "/.../site-packages/zope/interface/verify.py", line 92, in _verify
    raise MultipleInvalid(iface, candidate, excs)
zope.interface.exceptions.MultipleInvalid: The object Point(x=100, y=200) has failed to implement interface <InterfaceClass __main__.ICollidable>:
    Does not declaratively implement the interface
    The __main__.ICollidable.bounding_box attribute was not provided

The last two lines tell us about two errors:

  • Declaration error: Invalid item isn't explicitly declared to implement the interface and that's an error.
  • Structural error: Invalid item doesn't have all elements that the interface requires.

The latter error guards us against incomplete implementations. If the Point class had the @implementer(ICollidable) decorator but didn't include the bounding_box property, we would still receive the exception.

The verifyClass() and verifyObject() methods only verify the surface area of the interface and aren't able to traverse into attribute types. You can optionally do a more in-depth verification using the validateInvariants() method that every zope.interface interface class provides. It allows you to hook in functions that validate the values of interface attributes. So, if we would like to be extra safe, we can use the following pattern of interfaces and their validation:

from zope.interface import Interface, Attribute, invariant
from zope.interface.verify import verifyObject

class IBBox(Interface):
    x1 = Attribute("lower-left x coordinate")
    y1 = Attribute("lower-left y coordinate")
    x2 = Attribute("upper-right x coordinate")
    y2 = Attribute("upper-right y coordinate")

class ICollidable(Interface):
    bounding_box = Attribute("Object's bounding box")
    invariant(lambda self: verifyObject(IBBox, self.bounding_box))

def find_collisions(objects):
    for item in objects:
        verifyObject(ICollidable, item)
        ICollidable.validateInvariants(item)
    ...

Thanks to using the validateInvariants() method, we are able to check if input items have all attributes necessary to satisfy the ICollidable interface, and also verify whether the structure of those attributes (here bounding_box) satisfies deeper constraints. In our case, we use invariant() to verify the nested interface.

Using zope.interface is an interesting way to decouple your application. It allows you to enforce proper object interfaces without the need for the overblown complexity of multiple inheritance, and also allows you to catch inconsistencies early.

The biggest downside of zope.interface is the requirement to explicitly declare interface implementors. This is especially troublesome if you need to verify instances of classes coming from external libraries. The library provides some solutions for that problem, although they can eventually make the code overly verbose. You can, of course, handle such issues on your own by using the adapter pattern, or even by monkey-patching external classes. Anyway, the simplicity of such solutions is at least debatable.

Using function annotations and abstract base classes

Formal interfaces are meant to enable loose coupling in large applications, not to provide you with more layers of complexity. zope.interface is a great concept that may fit some projects well, but it is not a silver bullet. By using it, you may soon find yourself spending more time fixing issues with incompatible interfaces for third-party classes and providing never-ending layers of adapters instead of writing the actual implementation.

If you feel that way, then this is a sign that something went wrong. Fortunately, Python supports building a lightweight alternative to explicit interfaces. It's not a full-fledged solution like zope.interface or its alternatives, but it generally results in more flexible applications. You may need to write a bit more code, but in the end you will have something that is more extensible, handles external types better, and may be more future-proof.

Note that Python, at its core, does not have an explicit notion of interfaces, and probably never will have, but it has some of the features that allow building something that resembles the functionality of interfaces. The features are as follows:

  • ABCs
  • Function annotations
  • Type annotations

The core of our solution is abstract base classes, so we will feature them first.

As you probably know, direct type comparison is considered harmful and not Pythonic. You should always avoid comparisons as in the following example:

assert type(instance) == list

Comparing types in functions or methods this way completely breaks the ability to pass a class subtype as an argument to the function. A slightly better approach is to use the isinstance() function, which will take the inheritance into account:

assert isinstance(instance, list) 

The additional advantage of isinstance() is that you can use a larger range of types to check the type compatibility. For instance, if your function expects to receive some sort of sequence as the argument, you can compare it against the list of basic types:

assert isinstance(instance, (list, tuple, range)) 

Such type compatibility checking is OK in some situations, but it is still not perfect. It will work with any subclass of list, tuple, or range, but will fail if the user passes something that behaves exactly like one of these sequence types yet does not inherit from any of them. For instance, let's relax our requirements and say that you want to accept any kind of iterable as an argument. What would you do?

The list of built-in types that are iterable is actually pretty long: you would need to cover list, tuple, range, str, bytes, dict, set, generators, and a lot more. Even if you covered all of them, such a check would still reject a custom class that defines the __iter__() method but inherits directly from object.

And this is the kind of situation where ABCs are the proper solution. ABC is a class that does not need to provide a concrete implementation, but instead defines a blueprint of a class that may be used to check against type compatibility. This concept is very similar to the concept of abstract classes and virtual methods known in the C++ language.

Abstract base classes are used for two purposes:

  • Checking for implementation completeness
  • Checking for implicit interface compatibility

The usage of ABCs is quite simple. You start by defining a new class that either inherits from the abc.ABC base class or has abc.ABCMeta as its metaclass. We won't be discussing metaclasses until Chapter 8, Elements of Metaprogramming, so in this chapter, we'll be using only classic inheritance.

The following is an example of a basic abstract class that defines an interface that doesn't do anything particularly special:

from abc import ABC, abstractmethod

class DummyInterface(ABC):
    @abstractmethod
    def dummy_method(self): ...

    @property
    @abstractmethod
    def dummy_property(self): ...

The @abstractmethod decorator denotes a part of the interface that must be implemented (by overriding) in any class that subclasses our ABC. If a class has a non-overridden abstract method or property, you won't be able to instantiate it; any attempt to do so will result in a TypeError exception.
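A minimal sketch of that behavior, using a pared-down variant of the interface above (the Incomplete and Complete class names are made up for illustration):

```python
from abc import ABC, abstractmethod


class DummyInterface(ABC):
    @abstractmethod
    def dummy_method(self): ...


class Incomplete(DummyInterface):
    pass  # dummy_method() is not overridden


class Complete(DummyInterface):
    def dummy_method(self):
        return "done"


# Instantiating the incomplete subclass fails with TypeError
raised = False
try:
    Incomplete()
except TypeError:
    raised = True
assert raised

# The complete subclass can be instantiated and used normally
assert Complete().dummy_method() == "done"
```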

This approach is a great way to ensure implementation completeness and is as explicit as the zope.interface alternative. If we wanted to use ABCs instead of zope.interface in the example from the previous section, we could modify our class definitions as follows:

from abc import ABC, abstractmethod
from dataclasses import dataclass

class ColliderABC(ABC):
    @property
    @abstractmethod
    def bounding_box(self): ...

@dataclass
class Square(ColliderABC):
    ...

@dataclass
class Rect(ColliderABC):
    ...

@dataclass
class Circle(ColliderABC):
    ...

The bodies and properties of the Square, Rect, and Circle classes don't change as the essence of our interface doesn't change at all. What has changed is the way explicit interface declaration is done. We now use inheritance instead of the zope.interface.implementer() class decorator. If we still want to verify if the input of find_collisions() conforms to the interface, we need to use the isinstance() function. That will be a fairly simple modification:

def find_collisions(objects):
    for item in objects:
        if not isinstance(item, ColliderABC):
            raise TypeError(f"{item} is not a collider")
    ...

We had to use subclassing, so the coupling between components is a bit tighter, but still comparable to that of zope.interface. As long as we rely on interfaces and not on concrete implementations (so ColliderABC instead of Square, Rect, or Circle), the coupling is still considered loose.

But things could be more flexible. This is Python and we have full introspection power. Duck typing in Python allows us to use any object that "quacks like a duck" as if it was a duck. Unfortunately, usually it is in the spirit of "try and see." We assume that the object in the given context matches the expected interface. And the whole purpose of formal interfaces was to actually have a contract that we can validate against. Is there a way to check whether an object matches the interface without actually trying to use it first?

Yes, to some extent. Abstract base classes provide the special __subclasshook__(cls) method, which allows you to inject your own logic into the procedure that determines whether an object is an instance of a given class. Unfortunately, you need to provide that logic all by yourself, as the abc creators did not want to constrain developers in overriding the whole isinstance() mechanism. We have full power over it, but we are forced to write some boilerplate code.

Although you can do whatever you want to, usually the only reasonable thing to do in the __subclasshook__() method is to follow the common pattern. In order to verify whether the given class is implicitly compatible with the given abstract base class, we will have to check if it has all the methods of the abstract base class.

The standard procedure is to check whether the set of defined methods are available somewhere in the Method Resolution Order (MRO) of the given class. If we would like to extend our ColliderABC interface with a subclass hook, we could do the following:

class ColliderABC(ABC):
    @property
    @abstractmethod
    def bounding_box(self): ...

    @classmethod
    def __subclasshook__(cls, C):
        if cls is ColliderABC:
            if any("bounding_box" in B.__dict__ for B in C.__mro__):
                return True
        return NotImplemented

With the __subclasshook__() method defined that way, ColliderABC becomes an implicit interface. This means that any object will be considered an instance of ColliderABC as long as it has the structure that passes the subclass hook check. Thanks to this, we can add new components compatible with the ColliderABC interface without explicitly inheriting from it. The following is an example of the Line class that would be considered a valid subclass of ColliderABC:

from dataclasses import dataclass

@dataclass
class Line:
    p1: Point
    p2: Point
    @property
    def bounding_box(self):
        return Box(
            self.p1.x,
            self.p1.y,
            self.p2.x,
            self.p2.y,
        )

As you can see, the Line dataclass does not mention ColliderABC anywhere in its code. But you can verify the implicit interface compatibility of Line instances by comparing them against ColliderABC using the isinstance() function as in the following example:

>>> line = Line(Point(0, 0), Point(100, 100))
>>> line.bounding_box
Box(x1=0, y1=0, x2=100, y2=100)
>>> isinstance(line, ColliderABC)
True

We worked with properties, but the same approach may be used for methods as well. Unfortunately, this approach to verifying type compatibility and implementation completeness does not take the signatures of class methods into account. So, if the number of expected arguments differs in the implementation, it will still be considered compatible. In most cases, this is not an issue, but if you need such fine-grained control over interfaces, the zope.interface package allows for that. And as already said, the __subclasshook__() method does not stop you from adding much more complexity to the isinstance() logic to achieve a similar level of control.

Using collections.abc

ABCs are like small building blocks for creating a higher level of abstraction. They allow you to implement really usable interfaces, but are very generic and designed to handle a lot more than this single design pattern. You can unleash your creativity and do magical things, but building something generic and really usable may require a lot of work that may never pay off. Python's Standard Library and Python's built-in types fully embrace the abstract base classes.

The collections.abc module provides a lot of predefined ABCs that allow you to check the compatibility of types with common Python interfaces. With the base classes provided in this module, you can check, for example, whether a given object is callable, is a mapping, or supports iteration. Using them with the isinstance() function is way better than comparing against the base Python types. You should definitely know how to use these base classes even if you don't want to define your own custom interfaces with abc.ABC.

The most common abstract base classes from collections.abc that you will use quite often are:

  • Container: This interface means that the object supports the in operator and implements the __contains__() method.
  • Iterable: This interface means that the object supports iteration and implements the __iter__() method.
  • Callable: This interface means that it can be called like a function and implements the __call__() method.
  • Hashable: This interface means that the object is hashable (that is, it can be included in sets and used as a key in dictionaries) and implements the __hash__() method.
  • Sized: This interface means that the object has a size (that is, it can be a subject of the len() function) and implements the __len__() method.
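These checks plug directly into isinstance(). A quick sketch against built-in types, which satisfy the interfaces without ever inheriting from the ABCs:

```python
from collections.abc import Callable, Container, Hashable, Iterable, Sized

assert isinstance([1, 2, 3], Container)   # lists support the in operator
assert isinstance([1, 2, 3], Iterable)    # lists implement __iter__()
assert isinstance(len, Callable)          # functions implement __call__()
assert isinstance("text", Hashable)       # strings implement __hash__()
assert isinstance("text", Sized)          # strings implement __len__()
assert not isinstance(42, Iterable)       # integers do not implement __iter__()
```

This is exactly the subclass-hook mechanism from the previous section, already wired up for you by the standard library.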

A full list of the available abstract base classes from the collections.abc module is available in the official Python documentation under https://docs.python.org/3/library/collections.abc.html.

The collections.abc module shows pretty well where ABCs work best: creating contracts for small and simple protocols of objects. They won't be good tools to conveniently ensure the fine-grained structure of a large interface. They also don't come with utilities that would allow you to easily verify attributes or perform in-depth validation of function arguments and return types.

Fortunately, there is a completely different solution available for this problem: static type analysis and the typing.Protocol type.

Interfaces through type annotations

Type annotations in Python proved to be extremely useful in increasing the quality of software. More and more professional programmers use mypy or other static type analysis tools by default, leaving conventional type-less programming for prototypes and quick throwaway scripts.

Support for typing in the standard library and community projects grew greatly in recent years. Thanks to this, the flexibility of typing annotations increases with every Python release. It also allows you to use typing annotations in completely new contexts.

One such context is using type annotations to perform structural subtyping (or static duck typing). That's simply another approach to the concept of implicit interfaces. It also offers minimal runtime checking, in the spirit of ABC subclass hooks.

The core of structural subtyping is the typing.Protocol type. By subclassing this type, you can create a definition of your interface. The following is an example of base Protocol interfaces we could use in our previous examples of the collision detection system:

from typing import Protocol, runtime_checkable
@runtime_checkable
class IBox(Protocol):
    x1: float
    y1: float
    x2: float
    y2: float
@runtime_checkable
class ICollider(Protocol):
    @property
    def bounding_box(self) -> IBox: ...

This time we have used two interfaces. Tools like mypy will be able to perform deep type verification, so we can use additional interfaces to increase type safety. The @runtime_checkable decorator extends the protocol class with isinstance() checks. That is something we had to implement manually for ABCs using subclass hooks in the previous section. Here it comes almost for free.

We will learn more about the usage of static type analysis tools in Chapter 10, Testing and Quality Automation.

To take full advantage of static type analysis, we also must annotate the rest of the code with proper annotations. The following is the full collision checking code with runtime interface validation based on protocol classes:

import itertools
from dataclasses import dataclass
from typing import Iterable, Protocol, runtime_checkable
@runtime_checkable
class IBox(Protocol):
    x1: float
    y1: float
    x2: float
    y2: float
@runtime_checkable
class ICollider(Protocol):
    @property
    def bounding_box(self) -> IBox: ...
def rects_collide(rect1: IBox, rect2: IBox):
    """Check collision between rectangles
    Rectangle coordinates:
        ┌───(x2, y2)
        │       │
      (x1, y1)──┘
    """
    return (
        rect1.x1 < rect2.x2 and
        rect1.x2 > rect2.x1 and
        rect1.y1 < rect2.y2 and
        rect1.y2 > rect2.y1
    )
def find_collisions(objects: Iterable[ICollider]):
    for item in objects:
        if not isinstance(item, ICollider):
            raise TypeError(f"{item} is not a collider")
    return [
        (item1, item2)
        for item1, item2
        in itertools.combinations(objects, 2)
        if rects_collide(
            item1.bounding_box,
            item2.bounding_box
        )
    ]

We haven't included the code of the Rect, Square, and Circle classes, because their implementation doesn't have to change. And that's the real beauty of implicit interfaces: there is no explicit interface declaration in a concrete class beyond the inherent interface that comes from the actual implementation.

In the end, we could use any of the previous Rect, Square, and Circle class iterations (plain dataclasses, zope-declared classes, or ABC-descendants). They all would work with structural subtyping through the typing.Protocol class.

As you can see, despite the fact that Python lacks native support for interfaces (in the same way as, for instance, Java or the Go language do), we have plenty of ways to standardize contracts of classes, methods, and functions. This ability becomes really useful when implementing various design patterns to solve commonly occurring programming problems. Design patterns are all about reusability and the use of interfaces can help in structuring them into design templates that can be reused over and over again.

But the use of interfaces (and analogous solutions) doesn't end with design patterns. The ability to create a well-defined and verifiable contract for a single unit of code (function, class, or method) is also a crucial element of specific programming paradigms and techniques. Notable examples are inversion of control and dependency injection. These two concepts are tightly coupled so we will discuss them in the next section together.

Inversion of control and dependency injection

Inversion of Control (IoC) is a simple property of some software designs. According to Wiktionary, if a design exhibits IoC, it means that:

(…) the flow of control in a system is inverted in comparison to the traditional architecture.

But what is the traditional architecture? IoC isn't a new idea, and we can trace it back to at least David D. Clark's 1985 paper titled The Structuring of Systems Using Upcalls. "Traditional design" thus probably refers to the design of software that was common, or thought of as traditional, in the 1980s.

Clark describes the traditional architecture of a program as a layered structure of procedures where control always goes from top to bottom. Higher-level layers invoke procedures from lower layers.

Those invoked procedures gain control and can invoke even deeper-layered procedures before returning control upward. In practice, control is traditionally passed from application to library functions. Library functions may pass it deeper to even lower-level libraries but, eventually, return it back to the application.

IoC happens when a library passes control up to the application so that the application can take part in the library behavior. To better understand this concept, consider the following trivial example of sorting a list of integer numbers:

sorted([1,2,3,4,5,6])

The built-in sorted() function takes an iterable of items and returns a list of sorted items. Control goes from the caller (your application) directly to the sorted() function. When the sorted() function is done with sorting, it simply returns the sorted result and gives control back to the caller. Nothing special.

Now let's say we want to sort our numbers in a quite unusual way. That could be, for instance, sorting them by the absolute distance from number 3. Integers closest to 3 should be at the beginning of the result list and the farthest should be at the end. We can do that by defining a simple key function that will specify the order key of our elements:

def distance_from_3(item):
    return abs(item - 3)

Now we can pass that function as the callback key argument to the sorted() function:

sorted([1,2,3,4,5,6], key=distance_from_3)

Now the sorted() function will invoke the key function on every element of the iterable argument. Instead of comparing item values, it will compare the return values of the key function. Here is where IoC happens: the sorted() function "upcalls" back to the distance_from_3() function provided by the application as an argument. Now it is a library that calls a function from the application, and thus the flow of control is reversed.
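Running the call shows the effect of the inverted flow. The distances from 3 for the elements 1 through 6 are 2, 1, 0, 1, 2, and 3 respectively, and sorted() is stable, so ties keep their original order:

```python
def distance_from_3(item):
    return abs(item - 3)

# 3 has distance 0, then 2 and 4 (distance 1), then 1 and 5 (distance 2), then 6
result = sorted([1, 2, 3, 4, 5, 6], key=distance_from_3)
print(result)  # [3, 2, 4, 1, 5, 6]
```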

Callback-based IoC is also humorously referred to as the Hollywood principle in reference to the "don't call us, we'll call you" phrase.

Note that IoC is just a property of a design and not a design pattern by itself. An example with the sorted() function is the simplest example of callback-based IoC but it can take many different forms. For instance:

  • Polymorphism: When a custom class inherits from a base class and base methods are supposed to call custom methods
  • Argument passing: When the receiving function is supposed to call methods of the supplied object
  • Decorators: When a decorator function calls a decorated function
  • Closures: When a nested function calls a function outside of its scope
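The decorator form of IoC from the list above can be sketched in a few lines (the names here are purely illustrative):

```python
def logged(func):
    # The decorator returns a wrapper; it is the wrapper, not the
    # application, that later calls back into the decorated function.
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@logged
def add(a, b):
    return a + b

add(1, 2)  # prints "calling add" before our code runs
```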

As you see, IoC is a rather common aspect of object-oriented or functional programming paradigms. And it also happens quite often without you even realizing it. While it isn't a design pattern by itself, it is a key ingredient of many actual design patterns, paradigms, and methodologies. The most notable one is dependency injection, which we will discuss later in this chapter.

Clark's traditional flow of control in procedural programming also happens in object-oriented programming. In object-oriented programs, objects themselves are receivers of control. We can say that control is passed to the object whenever a method of that object is invoked. So the traditional flow of control would require objects to hold full ownership of all dependent objects that are required to fulfill the object's behavior.

Inversion of control in applications

To better illustrate the differences between various flows of control, we will build a small but practical application. It will initially start with a traditional flow of control and later on, we will see if it can benefit from IoC in selected places.

Our use case will be pretty simple and common. We will build a service that can track web page views using so-called tracking pixels and serve page view statistics over an HTTP endpoint. This technique is commonly used in tracking advertisement views or email openings. It can also be useful in situations when you make extensive use of HTTP caching and want to make sure that caching does not affect page view statistics.

Our application will have to track counts of page views in some persistent storage. That will also give us the opportunity to explore application modularity—a characteristic that cannot be implemented without IoC.

What we need to build is a small web backend application that will have two endpoints:

  • /track: This endpoint will return an HTTP response with a 1x1 pixel GIF image. Upon request, it will store the Referer header and increase the number of requests associated with that value.
  • /stats: This endpoint will read the top 10 most common Referer values received on the /track endpoint and return an HTTP response containing a summary of the results in JSON format.

The Referer header is an optional HTTP header that web browsers use to tell the web server the URL of the origin web page from which a resource is being requested. Take note of the misspelling of the word referrer. The header was first standardized in RFC 1945, Hypertext Transfer Protocol—HTTP/1.0 (see https://tools.ietf.org/html/rfc1945). By the time the misspelling was discovered, it was already too late to fix it.

We've already introduced Flask as a simple web microframework in Chapter 2, Modern Python Development Environments, so we will use it here as well. Let's start by importing some modules and setting up module variables that we will use on the way:

from collections import Counter
from http import HTTPStatus
from flask import Flask, request, Response
app = Flask(__name__)
storage = Counter()
PIXEL = (
    b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00'
    b'\x00\x00\xff\xff\xff!\xf9\x04\x01\x00'
    b'\x00\x00\x00,\x00\x00\x00\x00\x01\x00'
    b'\x01\x00\x00\x02\x01D\x00;'
)

The app variable is the core object of the Flask framework. It represents a Flask web application. We will use it later to register endpoint routes and also run the application development server.

The storage variable holds a Counter instance. It is a convenient data structure from the Standard Library that allows you to track counters of any immutable values. Our ultimate goal is to store page view statistics in a persistent way, but it will be a lot easier to start off with something simpler. That's why we will initially use this variable as our in-memory storage of page view statistics.

Last but not least is the PIXEL variable. It holds a byte representation of a 1x1 transparent GIF image. The actual visual appearance of the tracking pixel does not matter and will probably never change. It is also so small that there's no need to bother with loading it from the filesystem. That's why we inline it in our module to fit the whole application in a single Python module.

Once we're set, we can write code for the /track endpoint handler:

@app.route('/track')
def track():
    try:
        referer = request.headers["Referer"]
    except KeyError:
        return Response(status=HTTPStatus.BAD_REQUEST)
    storage[referer] += 1
    return Response(
        PIXEL, headers={
            "Content-Type": "image/gif",
            "Expires": "Mon, 01 Jan 1990 00:00:00 GMT",
            "Cache-Control": "no-cache, no-store, must-revalidate",
            "Pragma": "no-cache",
        }
    )

We use the extra Expires, Cache-Control, and Pragma headers to control the HTTP caching mechanism. We set them so that they disable any form of caching on most web browser implementations, in a way that should also disable caching by potential proxies. Take careful note of the Expires header value, which lies far in the past. In practice, it means that the resource is always considered expired.

Flask request handlers typically start with the @app.route(route) decorator that registers the following handler function for the given HTTP route. Request handlers are also known as views. Here we have registered the track() view as a handler of the /track route. This is the first occurrence of IoC in our application: we register our own handler implementation within the Flask framework. It is the framework that will call back our handlers on incoming requests that match the associated routes.

After the signature, we have simple code for handling the request. We check if the incoming request has the expected Referer header. That's the value which the browser uses to tell what URI the requested resource was included on (for instance, the HTML page we want to track). If there's no such header, we will return an error response with a 400 Bad Request HTTP status code.

If the incoming request has the Referer header, we will increase the counter value in the storage variable. The Counter structure has a dict-like interface and allows you to easily modify counter values for keys that haven't been registered yet. In such a case, it will assume that the initial value for the given key was 0. That way we don't need to check whether a specific Referer value was already seen and that greatly simplifies the code. After increasing the counter value, we return a pixel response that can be finally displayed by the browser.
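The behavior of Counter that we rely on here can be seen in isolation (the URLs are made up):

```python
from collections import Counter

storage = Counter()
# Missing keys are assumed to start at 0, so no initialization is needed:
storage["https://example.com/"] += 1
storage["https://example.com/"] += 1
print(storage["https://example.com/"])    # 2
print(storage["https://never-seen.org/"]) # 0 (no KeyError raised)
```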

Note that although the storage variable is defined outside the track() function, it is not yet an example of IoC. That's because whoever calls the track() function can't replace the implementation of storage. We will try to change that in the next iterations of our application.

The code for the /stats endpoint is even simpler:

@app.route('/stats')
def stats():
    return dict(storage.most_common(10))

In the stats() view, we again take advantage of the convenient interface of the Counter object. It provides the most_common(n) method, which returns up to n most common key-value pairs stored in the structure. We immediately convert that to a dictionary. We don't use the Response class, as Flask by default serializes the non-Response class return values to JSON and assumes a 200 OK status for the HTTP response.
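The most_common(n) method can likewise be tried in isolation (the paths are made up):

```python
from collections import Counter

storage = Counter({"/home": 7, "/about": 2, "/blog": 5})
# Returns up to n (key, count) pairs, most frequent first:
print(storage.most_common(2))        # [('/home', 7), ('/blog', 5)]
print(dict(storage.most_common(2)))  # {'/home': 7, '/blog': 5}
```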

In order to test our application easily, we finish our script with the simple invocation of the built-in development server:

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=8000)

If you store the application in the tracking.py file, you will be able to start the server using the python tracking.py command. It will start listening on port 8000. If you would like to test the application in your own browser, you can extend it with the following endpoint handler:

@app.route('/test')
def test():
    return """
    <html>
    <head></head>
    <body><img src="/track"></body>
    </html>
    """

If you open the address http://localhost:8000/test several times in your web browser and then go to http://localhost:8000/stats, you will see output similar to the following:

{"http://localhost:8000/test":6}

The problem with the current implementation is that it stores request counters in memory. Whenever the application is restarted, the existing counters will be reset and we'll lose important data. In order to keep the data between restarts, we will have to replace our storage implementation.

The options to provide data persistency are many. We could, for instance, use:

  • A simple text file
  • The built-in shelve module
  • A relational database management system (RDBMS) like MySQL, MariaDB, or PostgreSQL
  • An in-memory key-value or data struct storage service like Memcached or Redis

Depending on the context and scale of the workload our application needs to handle, the best solution will be different. If we don't know yet what is the best solution, we can also make the storage pluggable so we can switch storage backends depending on the actual user needs. To do so, we will have to invert the flow of control in our track() and stats() functions.

Good design dictates that we first define the interface of the object whose control is being inverted. The interface of the Counter class seems like a good starting point, as it is convenient to use. The only problem is that the += operation can be implemented through either the __add__() or the __iadd__() special method, and we definitely want to avoid such ambiguity. Also, the Counter class has way too many extra methods, while we need only two:

  • A method that allows you to increase the counter value by one
  • A method that allows you to retrieve the 10 most often requested keys
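The ambiguity of += mentioned above comes from the fact that Python falls back to __add__() when __iadd__() is not defined, and the two have different semantics (the class names here are illustrative):

```python
class InPlace:
    def __init__(self):
        self.value = 0
    def __iadd__(self, other):
        self.value += other  # mutates the existing object
        return self

class Rebinding:
    def __init__(self, value=0):
        self.value = value
    def __add__(self, other):
        return Rebinding(self.value + other)  # creates a brand-new object

a = InPlace()
a_before = a
a += 1
assert a is a_before      # same object, mutated in place

b = Rebinding()
b_before = b
b += 1                    # no __iadd__(), so Python falls back to __add__()
assert b is not b_before  # b now refers to a new object
```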

To keep things simple and readable, we will define our views storage interface as an abstract base class of the following form:

from abc import ABC, abstractmethod
from typing import Dict
class ViewsStorageBackend(ABC):
    @abstractmethod
    def increment(self, key: str): ...
    @abstractmethod
    def most_common(self, n: int) -> Dict[str, int]: ...

From now on, we can provide various implementations of the views storage backend. The following will be the implementation that adapts the previously used Counter class into the ViewsStorageBackend interface:

from collections import Counter
from typing import Dict
from .tracking_abc import ViewsStorageBackend
class CounterBackend(ViewsStorageBackend):
    def __init__(self):
        self._counter = Counter()
    def increment(self, key: str):
        self._counter[key] += 1
    def most_common(self, n: int) -> Dict[str, int]:
        return dict(self._counter.most_common(n))

If we would like to provide persistency through the Redis in-memory storage service, we could do so by implementing a new storage backend as follows:

from typing import Dict
from redis import Redis
class RedisBackend(ViewsStorageBackend):
    def __init__(
        self,
        redis_client: Redis,
        set_name: str
    ):
        self._client = redis_client
        self._set_name = set_name
    def increment(self, key: str):
        self._client.zincrby(self._set_name, 1, key)
    def most_common(self, n: int) -> Dict[str, int]:
        return {
            key.decode(): int(value)
            for key, value in
            self._client.zrange(
                self._set_name, 0, n-1,
                desc=True,
                withscores=True,
            )
        }

Redis is an in-memory data store. This means that by default, data is stored only in memory. Redis will persist data on disk during restart but may lose data in an unexpected crash (for instance, due to a power outage). Still, this is only a default behavior. Redis offers various modes for data persistence, some of which are comparable to other databases. This means Redis is a completely viable storage solution for our simple use case. You can read more about Redis persistence at https://redis.io/topics/persistence.

Both backends have the same interface loosely enforced with an abstract base class. It means instances of both classes can be used interchangeably. The question is, how will we invert control of our track() and stats() functions in a way that will allow us to plug in a different views storage implementation?

Let's recall the signatures of our functions:

@app.route('/stats')
def stats():
   ...
@app.route('/track')
def track():
   ...

In the Flask framework, the app.route() decorator registers a function as a specific route handler. You can think of it as a callback for HTTP request paths. You don't call that function manually anymore and Flask is in full control of the arguments passed to it. But we want to be able to easily replace the storage implementation. One way to do that would be through postponing the handler registration and letting our functions receive an extra storage argument. Consider the following example:

def track(storage: ViewsStorageBackend):
    try:
        referer = request.headers["Referer"]
    except KeyError:
        return Response(status=HTTPStatus.BAD_REQUEST)
    storage.increment(referer)
    return Response(
        PIXEL, headers={
            "Content-Type": "image/gif",
            "Expires": "Mon, 01 Jan 1990 00:00:00 GMT",
            "Cache-Control": "no-cache, no-store, must-revalidate",
            "Pragma": "no-cache",
        }
    )
def stats(storage: ViewsStorageBackend):
    return storage.most_common(10)

Our extra argument is annotated with the ViewsStorageBackend type so the type can be easily verified with an IDE or additional tools. Thanks to this we have inverted control of those functions and also achieved better modularity. Now you can easily switch the implementation of storage for different classes with a compatible interface. The extra benefit of IoC is that we can easily unit-test stats() and track() methods in isolation from storage implementations.

We will discuss the topic of unit-tests together with detailed examples of tests that leverage IoC in Chapter 10, Testing and Quality Automation.

The only part that is missing is actual route registration. We can no longer use the app.route() decorator directly on our functions. That's because Flask won't be able to resolve the storage argument on its own. We can overcome that problem by "pre-injecting" desired storage implementations into handler functions and create new functions that can be easily registered with the app.route() call.

The simple way to do that would be using the partial() function from the functools module. It takes a single function together with a set of arguments and keyword arguments and returns a new function that has selected arguments preconfigured. We can use that approach to prepare various configurations of our service. Here, for instance, is an application configuration that uses Redis as a storage backend:

from functools import partial
if __name__ == '__main__':
    views_storage = RedisBackend(Redis(host="redis"), "my-stats")
    app.route("/track", endpoint="track")(
        partial(track, storage=views_storage))
    app.route("/stats", endpoint="stats")(
        partial(stats, storage=views_storage))
    app.run(host="0.0.0.0", port=8000)
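In isolation, the mechanics of partial() used above can be sketched as follows:

```python
from functools import partial

def power(base, exponent):
    return base ** exponent

# square is a new callable with the exponent argument preconfigured:
square = partial(power, exponent=2)
print(square(4))  # 16
```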

The presented approach can be applied to many other web frameworks, as the majority of them have the same route-to-handler structure. It will work especially well for small services with only a handful of endpoints. Unfortunately, it may not scale well in large applications. It is simple to write but definitely not the easiest to read. Seasoned Flask programmers will surely feel that this approach is unnatural and needlessly repetitive, as it breaks the common convention of writing Flask handler functions.

The ultimate solution would be one that allows you to write and register view functions without the need to manually inject dependent objects. So, for instance:

@app.route('/track')
def track(storage: ViewsStorageBackend):
   ...

In order to do that, from the Flask framework we would need to:

  • Recognize extra arguments as dependencies of views.
  • Allow the definition of a default implementation for said dependencies.
  • Automatically resolve dependencies and inject them into views at runtime.

Such a mechanism is referred to as dependency injection, which we mentioned previously. Some web frameworks offer a built-in dependency injection mechanism, but in the Python ecosystem, it is a rather rare occurrence. Fortunately, there are plenty of lightweight dependency injection libraries that can be added on top of any Python framework. We will explore such a possibility in the next section.

Using dependency injection frameworks

When IoC is used at a great scale, it can easily become overwhelming. The example from the previous section was quite simple so it didn't require a lot of setup. Unfortunately, we have sacrificed a bit of readability and expressiveness for better modularity and responsibility isolation. For larger applications, this can be a serious problem.

Dedicated dependency injection libraries come to the rescue by combining a simple way to mark function or object dependencies with a runtime dependency resolution. All of that usually can be achieved with minimal impact on the overall code structure.

There are plenty of dependency injection libraries for Python, so definitely there is no need to build your own from scratch. They are often similar in implementation and functionality, so we will simply pick one and see how it could be applied in our view tracking application.

Our library of choice will be the injector library, which is freely available on PyPI. We will pick it up for several reasons:

  • Reasonably active and mature: Developed over more than 10 years with releases every few months.
  • Framework support: It has community support for various frameworks including Flask through the flask-injector package.
  • Typing annotation support: It allows writing unobtrusive dependency annotations and leveraging static typing analysis.
  • Simple: injector has a Pythonic API. It makes code easy to read and to reason about.

You can install injector in your environment using pip as follows:

$ pip install injector

You can find more information about injector at https://github.com/alecthomas/injector.

In our example, we will use the flask-injector package as it provides some initial boilerplate to integrate injector with Flask seamlessly. But before we do that, we will first separate our application into several modules that would better simulate a larger application. After all, dependency injection really shines in applications that have multiple components.

We will create the following Python modules:

  • interfaces: This will be the module holding our interfaces. It will contain ViewsStorageBackend from the previous section without any changes.
  • backends: This will be the module holding specific implementations of storage backends. It will contain CounterBackend and RedisBackend from the previous section without any changes.
  • tracking: This will be the module holding the application setup together with view functions.
  • di: This will be the module holding definitions for the injector library, which will allow it to automatically resolve dependencies.

The core of the injector library is a Module class. It defines a so-called dependency injection container—an atomic block of mapping between dependency interfaces and their actual implementation instances. The minimal Module subclass may look as follows:

from injector import Module, provider
class MyModule(Module):
    @provider
    def provide_dependency(self, *args) -> Type:
        return ...

The @provider decorator marks a Module method as a method providing the implementation for a particular Type interface. The creation of some objects may be complex, so injector allows modules to have additional nondecorated helper methods.

The method that provides dependency may also have its own dependencies. They are defined as method arguments with type annotations. This allows for cascading dependency resolution. injector supports composing dependency injection context from multiple modules so there's no need to define all dependencies in a single module.

Using the above template, we can create our first injector module in the di.py file. It will be CounterModule, which provides a CounterBackend implementation for the ViewsStorageBackend interface. The definition will be as follows:

from injector import Module, provider, singleton
from interfaces import ViewsStorageBackend
from backends import CounterBackend
class CounterModule(Module):
    @provider
    @singleton
    def provide_storage(self) -> ViewsStorageBackend:
        return CounterBackend()

CounterBackend doesn't take any arguments, so we don't have to define extra dependencies. The only difference from the general module template is the @singleton decorator. It is an explicit implementation of the singleton design pattern: a class that can have only a single instance. In this context, it means that every time this dependency is resolved, injector will return the same object. We need that because CounterBackend stores view counters under the internal _counter attribute. Without the @singleton decorator, every request for the ViewsStorageBackend implementation would return a completely new object, and we would constantly lose track of view counts.

The implementation of RedisModule will be only slightly more complex:

from injector import Module, provider, singleton
from redis import Redis
from interfaces import ViewsStorageBackend
from backends import RedisBackend
class RedisModule(Module):
    @provider
    def provide_storage(self, client: Redis) -> ViewsStorageBackend:
        return RedisBackend(client, "my-set")
    @provider
    @singleton
    def provide_redis_client(self) -> Redis:
        return Redis(host="redis")

The code files for this chapter provide a complete docker-compose environment with a preconfigured Redis Docker image so you don't have to install Redis on your own host.

In RedisModule, we take advantage of the injector library's ability to resolve cascading dependencies. The RedisBackend constructor requires a Redis client instance, so we declare it as an additional provide_storage() method argument. injector will recognize the type annotation and automatically match it with the method that provides the Redis class instance. We could go even further and extract the host argument into a separate configuration dependency, but we won't do that for the sake of simplicity.

Now we have to tie everything together in the tracking module. We will rely on injector to resolve the dependencies of our views. This means that we can finally define the track() and stats() handlers with extra storage arguments and register them with the @app.route() decorator as if they were normal Flask views. The updated signatures will be as follows:

@app.route('/stats')
def stats(storage: ViewsStorageBackend):
   ...
@app.route('/track')
def track(storage: ViewsStorageBackend):
   ...

What is left is the final configuration of the app, which designates the modules that should be used to provide interface implementations. If we wanted to use RedisBackend, we would finish our tracking module with the following code:

import di
if __name__ == '__main__':
    FlaskInjector(app=app, modules=[di.RedisModule()])
    app.run(host="0.0.0.0", port=8000)

The following is the complete code of the tracking module:

from http import HTTPStatus
from flask import Flask, request, Response
from flask_injector import FlaskInjector
from interfaces import ViewsStorageBackend
import di
app = Flask(__name__)
PIXEL = (
    b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00'
    b'\x00\x00\xff\xff\xff!\xf9\x04\x01\x00'
    b'\x00\x00\x00,\x00\x00\x00\x00\x01\x00'
    b'\x01\x00\x00\x02\x01D\x00;'
)
@app.route('/track')
def track(storage: ViewsStorageBackend):
    try:
        referer = request.headers["Referer"]
    except KeyError:
        return Response(status=HTTPStatus.BAD_REQUEST)
    storage.increment(referer)
    return Response(
        PIXEL, headers={
            "Content-Type": "image/gif",
            "Expires": "Mon, 01 Jan 1990 00:00:00 GMT",
            "Cache-Control": "no-cache, no-store, must-revalidate",
            "Pragma": "no-cache",
        }
    )
@app.route('/stats')
def stats(storage: ViewsStorageBackend):
    return storage.most_common(10)
@app.route("/test")
def test():
    return """
    <html>
    <head></head>
    <body><img src="/track"></body>
    </html>
    """
if __name__ == '__main__':
    FlaskInjector(app=app, modules=[di.RedisModule()])
    app.run(host="0.0.0.0", port=8000)

As you can see, the introduction of the dependency injection mechanism didn't change the core of our application much. The preceding code closely resembles the first and simplest iteration, which didn't have the IoC mechanism. At the cost of a few interface and injector module definitions, we've got scaffolding for a modular application that could easily grow into something much bigger. We could, for instance, extend it with additional storage backends that serve more analytical purposes, or provide a dashboard that allows you to view the data from different angles.

Another advantage of dependency injection is loose coupling. In our example, views create neither instances of storage backends nor their underlying service clients (the Redis client in the case of RedisBackend). They depend on shared interfaces but are independent of implementations. Loose coupling is usually a good foundation for a well-architected application.

It is of course hard to show the utility of IoC and dependency injection in an example as concise as the one we've just seen, because these techniques really shine in big applications. Nevertheless, we will revisit the use case of the pixel tracking application in Chapter 10, Testing and Quality Automation, where we will show that IoC greatly improves the testability of your code.

Summary

This chapter was a journey through time. Python is considered a modern language, but in order to better understand its patterns, we had to take a few historical trips.

We started with interfaces, a concept almost as old as object-oriented programming itself (the first OOP language, Simula, dates back to 1967!). We took a look at zope.interface, probably one of the oldest actively maintained interface libraries in the Python ecosystem, and learned some of its advantages and disadvantages. That allowed us to fully embrace the two mainstream Python alternatives: abstract base classes and structural subtyping through advanced type annotations.

After familiarizing ourselves with interfaces, we looked into inversion of control. Internet sources on this topic can be really confusing, as the concept is often conflated with dependency injection. To settle any disputes, we traced the origin of the term back to the 80s, when no one had yet dreamed of dependency injection containers. We learned how to recognize inversion of control in its various forms and saw how it can improve the modularity of applications. We tried to invert control in a simple application manually and saw that doing so can sometimes cost us readability and expressiveness. Thanks to this, we are now able to fully appreciate the value that comes from the simplicity of ready-made dependency injection libraries.

The next chapter should be refreshing. We will completely move away from the topics of object-oriented programming, language features, design patterns, and paradigms. It will be all about concurrency. We will learn how to write code that does a lot, in parallel, and—hopefully—does it fast.

Key benefits

  • Discover the new features of Python, such as dictionary merge, the zoneinfo module, and structural pattern matching
  • Create manageable code to run in various environments with different sets of dependencies
  • Implement effective Python data structures and algorithms to write, test, and optimize code

Description

This new edition of Expert Python Programming provides you with a thorough understanding of the process of building and maintaining Python apps. Complete with best practices, useful tools, and standards implemented by professional Python developers, this fourth edition has been extensively updated. Throughout this book, you’ll get acquainted with the latest Python improvements, syntax elements, and interesting tools to boost your development efficiency. The initial few chapters will allow experienced programmers coming from different languages to transition to the Python ecosystem. You will explore common software design patterns and various programming methodologies, such as event-driven programming, concurrency, and metaprogramming. You will also go through complex code examples and try to solve meaningful problems by bridging Python with C and C++, writing extensions that benefit from the strengths of multiple languages. Finally, you will understand the complete lifetime of any application after it goes live, including packaging and testing automation. By the end of this book, you will have gained actionable Python programming insights that will help you effectively solve challenging problems.

Who is this book for?

This book is intended for expert programmers who want to learn Python's advanced-level concepts and latest features. Anyone with basic Python skills should be able to follow the content, although it might require some additional effort from less experienced programmers. It should also be a good introduction to Python 3.9 for those who are still a bit behind and continue to use older versions.

What you will learn

  • Explore modern ways of setting up repeatable and consistent Python development environments
  • Effectively package Python code for community and production use
  • Learn modern syntax elements of Python programming, such as f-strings, enums, and lambda functions
  • Demystify metaprogramming in Python with metaclasses
  • Write concurrent code in Python
  • Extend and integrate Python with code written in C and C++

Product Details

Publication date : May 28, 2021
Length : 630 pages
Edition : 4th
Language : English
ISBN-13 : 9781801071109





Table of Contents

15 Chapters
Current Status of Python
Modern Python Development Environments
New Things in Python
Python in Comparison with Other Languages
Interfaces, Patterns, and Modularity
Concurrency
Event-Driven Programming
Elements of Metaprogramming
Bridging Python with C and C++
Testing and Quality Automation
Packaging and Distributing Python Code
Observing Application Behavior and Performance
Code Optimization
Other Books You May Enjoy
Index

Customer reviews

Rating distribution: 4.4 out of 5 (24 Ratings)
5 star: 79.2%
4 star: 8.3%
3 star: 0%
2 star: 0%
1 star: 12.5%

N Satpall, Jun 25, 2021 (5 stars)
This book is a great resource for expert programmers who want to learn about Python's advanced-level concepts and features in its newest releases. It focuses so well on tools and practices that are crucial for creating performant, reliable, and maintainable software in Python. It not only shows how Python is constantly changing, but also why it is changing. It showcases recent Python language additions and describes modern ways of setting up repeatable and consistent development environments for Python programmers. The book can also be a good resource for hobbyists who are interested in learning advanced-level concepts with Python, as also for programmers with experience in other languages by explaining how to integrate code written in different languages in their Python application. There are many practical illustrations of design patterns, programming paradigms, and metaprogramming techniques. It covers tools that can be used to assess code quality metrics and improve code style in fully automated way, while showing how to scale simple observability practices to large-scale distributed systems.
Amazon Verified review

Pax, Jul 29, 2021 (5 stars)
I have 7+ years of professional experience as a programmer and have been coding in Python since 2019. I love the comprehensiveness of this book and I learned so much not only "how to do X" but "why to do X" --- it gave best practices and I'm not just repeating what the book tagline says. For someone with limited experience working in big companies / teams, the book is very insightful. It is also easy to read; I finished this book in ~2 weeks by setting a goal of reading 35 pages/day. (Some days I read more) My only "complaint" is that there were a bit of typos but they weren't super critical. It was very obvious that they are typos and they're not a lot so it's not super distracting and most importantly, you won't be left "confused."
Amazon Verified review

R.Thompson, Aug 25, 2021 (5 stars)
Written by experts in python programming, learn code optimization, memory profiling, resource allocation and much more.
Amazon Verified review

hawkinflight, Jun 22, 2021 (5 stars)
I have about two years of experience using Python, and I find this book very helpful in moving forward. The first chapter gets everyone off to a great start by sharing ideas about how to stay up-to-date with Python, as the language inevitably changes, and where to find Python community. All of the chapters are important and contain great information. I particularly like Chapter 4 which compares Python to other languages, and emphasizes that just because you might be able to write code as you would in another language, that is not necessarily the "Python way", nor the best way to do it in Python. The chapter identifies places where programmers might try things they really shouldn't. I have combined C/C++ code with Python, and so, I enjoyed reviewing the chapter which covers this topic. I closely read and enjoyed the chapter on Optimizing Code, one way of course is via choosing the best data structure. This helps the programmer learn not just what data structures are available but provides info on how data structures can impact performance. I have not had to profile code, but I think Chapter 12 would be very useful on that. I am very interested in going further and carefully reading the chapters on Interfaces, Patterns, and Modularity, on Testing and QA, as well as Concurrency, and Meta-programming. There is a lot of great material here.
Amazon Verified review

Stephan Miller, Jun 26, 2021 (5 stars)
I have written Python code for about 15 years now. Even though many times that hasn't been what I wrote for my day job, it is still my favorite programming language. I didn't think there was much to learn, but then again I started as a Python 2 developer and only switched to Python 3 in the last few years. But this book is great and taught me a bunch of new things. I learned about Poetry, which I never heard of until now, and can't wait to use it on my next project. It also walks you through using Docker for development and explains why you may want to still use Vagrant in some cases even though it is an older technology. It even goes into writing C extensions to give your Python projects a performance boost. You should know some Python before reading this book, as it goes into some concepts in-depth, but I would recommend it to anyone who has been writing Python code for a few months.
Amazon Verified review