Chapter 1, About Performance, talks about performance. We'll dissect the term itself and try to find out what users actually mean when they say that a program is performing (or not performing) well. Then, we will move into the area of algorithm complexity. We'll skip all the boring mathematics and just mention the parts relevant to programming. We will also look at different ways of finding the slow (non-performant) parts of a program, from pure guesswork to measuring tools of varying sophistication, both homemade and commercial.
Chapter 2, Fixing the Algorithm, examines a few practical examples where changing an algorithm can speed up a program dramatically. In the first part, we'll look at graphical user interfaces and what we can do when a simple update to TListBox takes too long. The second part of the chapter explores the idea of caching and presents a fast, reusable caching class. In the last part, we'll revisit some code from Chapter 1, About Performance, and make it faster, again by changing the algorithm.
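To give a taste of the idea before the chapter develops it properly, here is a minimal caching sketch; the program and the names in it (Cache, ExpensiveSquare) are invented for illustration and are not the reusable class presented in the chapter. A dictionary remembers results that have already been computed, so the expensive work runs only once per input.

```pascal
program CachingSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Generics.Collections;

var
  Cache: TDictionary<Integer, Integer>;

// Returns a cached result when one exists and runs the
// expensive computation only on a cache miss.
function ExpensiveSquare(Value: Integer): Integer;
begin
  if not Cache.TryGetValue(Value, Result) then
  begin
    Sleep(100); // stand-in for a slow computation
    Result := Value * Value;
    Cache.Add(Value, Result);
  end;
end;

begin
  Cache := TDictionary<Integer, Integer>.Create;
  try
    Writeln(ExpensiveSquare(42)); // slow: computed and stored
    Writeln(ExpensiveSquare(42)); // fast: served from the cache
  finally
    Cache.Free;
  end;
end.
```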
Chapter 3, Fine-Tuning the Code, deals with lots of small things. Sometimes, performance lies in many small details, and this chapter shows how to use them to your advantage. We'll check the Delphi compiler settings and see which ones affect code speed. We'll look at the implementation details of built-in data types and method calls. Using the right type in the right way can make a big difference. Of course, we won't forget about the practical side. This chapter will give examples of different optimization techniques, such as extracting common expressions, using pointers to manipulate data, and implementing parts of the solution in assembler. In the end, we'll revisit the code from Chapter 1, About Performance, and make it even faster.
Chapter 4, Memory Management, is all about memory. It starts with a discussion on strings, arrays, and how their memory is managed. After that, we will move to the memory functions exposed by Delphi. We'll see how we can use them to manage memory. Next, we'll cover records—how to allocate them, how to initialize them, and how to create useful dynamically-allocated generic records. We'll then move into the murky waters of memory manager implementation. I'll sketch a very rough overview of FastMM, the default memory manager in Delphi. First, I'll explain why FastMM is excellent and then I'll show when and why it may slow you down. We'll see how to analyze memory performance problems and how to switch the memory manager for a different one. In the last part, we'll revisit the SlowCode program and reduce the number of memory allocations it makes.
Chapter 5, Getting Started with the Parallel World, moves the topic to parallel programming. In the introduction, I'll talk about processes and threads, and multithreading and multitasking to establish some common ground for discussion. After that, you'll start learning what not to do when writing parallel code. I'll explain how the user interface must be handled from background threads and what problems are caused by sharing data between threads. Then, I'll start fixing those problems by implementing various kinds of synchronization mechanisms and interlocked operations. We'll also deal with the biggest problem synchronization brings to the code—deadlocking. As synchronization inevitably slows the program down, I'll explain how to achieve the highest possible speed using data duplication, aggregation, and communication. At the end, I'll introduce two third-party libraries that contain helpful parallel functions and data structures.
Chapter 6, Working with Parallel Tools, focuses on a single topic, Delphi's TThread class. In the introduction, I'll explain why I believe that TThread is still important even in this modern age. I will explore different ways in which TThread-based threads can be managed in your code. After that, I'll go through the most important TThread methods and properties and explain what they're good for. In the second part of the chapter, I'll extend TThread into something more modern and easier to use. First, I'll add a communication channel so that you'll be able to send messages to the thread. After that, I'll implement a derived class designed to handle one specific usage pattern, and show how this approach greatly simplifies writing parallel code.
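For readers who have never subclassed TThread, the following minimal sketch shows the starting point that the chapter builds on; TWorkerThread is an invented name, and this is not the extended class developed in the chapter.

```pascal
program ThreadSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Classes;

type
  // Illustrative only; not the extended thread class from Chapter 6.
  TWorkerThread = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TWorkerThread.Execute;
begin
  // Keep working until the owner calls Terminate.
  while not Terminated do
  begin
    // ... background work would go here ...
    Sleep(100);
  end;
end;

var
  Worker: TWorkerThread;

begin
  Worker := TWorkerThread.Create(False); // False = start running immediately
  Sleep(500);                            // let the thread run for a moment
  Worker.Terminate;                      // ask it to stop ...
  Worker.WaitFor;                        // ... and wait until it does
  Worker.Free;
end.
```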
Chapter 7, Exploring Parallel Practices, moves multithreaded programming to a more abstract level. In this chapter, I'll discuss modern multithreading concepts: tasks and patterns. I'll look into Delphi's own implementation, the Parallel Programming Library, and demonstrate the use of TTask/ITask. We'll look at topics such as task management, exception handling, and thread pooling. After that, I'll move on to patterns and talk about all of the Parallel Programming Library patterns: Join, Future, and Parallel For. I will also introduce two custom patterns, Async/Await and Join/Await, and finish the chapter with a discussion of the Pipeline pattern from OmniThreadLibrary.
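As a first taste of the Parallel Programming Library, this is roughly what a trivial TTask/ITask call looks like; the sketch is illustrative only and is not code from the chapter.

```pascal
program TaskSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Threading;

var
  Task: ITask;

begin
  // TTask.Run schedules the anonymous method on a thread-pool thread.
  Task := TTask.Run(
    procedure
    begin
      Writeln('Working in the background ...');
    end);

  // Wait blocks the caller until the task has finished.
  Task.Wait;
  Writeln('Done.');
end.
```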
Chapter 8, Using External Libraries, admits that sometimes Delphi is not enough. Sometimes a problem is too complex for us to solve efficiently on our own. Sometimes Pascal simply isn't fast enough. In such cases, we can try to find an existing library that solves our problem. In most cases, it will not support Delphi directly but will provide some kind of C or C++ interface. This chapter looks into linking with C object files and describes the typical problems you'll encounter along the way. In the second half, I'll present a complete example of linking to a C++ library, from writing a proxy DLL to using it in Delphi.
Chapter 9, Introduction to Patterns, introduces the concept of patterns. We'll see why patterns are useful and how they should be used in programming. We'll explore the difference between design principles, patterns, and idioms, present a hierarchical overview of design patterns, talk a bit about anti-patterns, and finish with a description of some important design principles.
Chapter 10, Singleton, Dependency Injection, Lazy Initialization, and Object Pool, covers four patterns from the creational group. The chapter will first look into the singleton pattern, which makes sure that a class has only one instance. Next in line is the dependency injection pattern, which makes program architecture more flexible and better suited for unit testing. In the second half, the chapter explores two optimization patterns. The lazy initialization pattern saves time and resources, while the object pool pattern speeds up the creation of objects.
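As a preview, a bare-bones singleton might look like the sketch below; TConfiguration is an invented name, the sketch is deliberately simple (and not thread-safe), and the chapter goes into far more detail.

```pascal
program SingletonSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils;

type
  // Illustrative only; names are invented and the code is not thread-safe.
  TConfiguration = class
  strict private
    class var FInstance: TConfiguration;
  public
    AppTitle: string;
    // Every caller receives the same shared instance.
    class function Instance: TConfiguration;
    class destructor Destroy;
  end;

class function TConfiguration.Instance: TConfiguration;
begin
  if not Assigned(FInstance) then
    FInstance := TConfiguration.Create;
  Result := FInstance;
end;

class destructor TConfiguration.Destroy;
begin
  FreeAndNil(FInstance);
end;

begin
  TConfiguration.Instance.AppTitle := 'Demo';
  // The second call returns the very same object, so the title is preserved.
  Writeln(TConfiguration.Instance.AppTitle);
end.
```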
Chapter 11, Factory Method, Abstract Factory, Prototype, and Builder, examines four more creational patterns. The factory method pattern simplifies the creation of dependent objects. The concept can be extended into the abstract factory pattern, which functions as a factory of factories. The prototype pattern is used to create copies of objects. Last in this group, the builder pattern separates instructions for creating an object from its representation.
Chapter 12, Composite, Flyweight, Marker Interface, and Bridge, covers four patterns from the structural group. The composite pattern allows client code to treat simple and complex objects the same. The flyweight pattern can be used to minimize memory usage by introducing data sharing between objects. The marker interface pattern tags classes with an empty interface so that other code can detect the tag and act on it, opening the door to metaprogramming. The bridge pattern helps us separate an abstract interface from its implementation.
Chapter 13, Adapter, Proxy, Decorator, and Facade, explores four more structural patterns. The adapter pattern helps in adapting old code to new use cases. The proxy pattern wraps an object and exposes an identical interface to facilitate caching, remoting, and access control, among other things. The decorator pattern specifies how the functionality of existing objects can be expanded, while the facade pattern shows us how to create a simplified view of a complex system.
Chapter 14, Nullable Value, Template Method, Command, and State, covers four patterns from the behavioral group. The null object pattern can reduce the need for frequent if statements in the code. The template method pattern helps with creating adaptable algorithms. The command pattern shows how we can treat actions as objects. It is a basis for Delphi actions. The state pattern allows an object to change its behavior on demand and is useful when we are writing state machines.
Chapter 15, Iterator, Visitor, Observer, and Memento, examines four more behavioral patterns. The iterator pattern allows us to effectively access data structures in a structure-independent way. This pattern is the basis of Delphi's for..in construct. The visitor pattern allows us to extend classes in accordance with the Open/Closed design principle. To write loosely coupled programs that react to changes in the business model, we can use the observer pattern. When we need to store and later restore the state of a complex object, the memento pattern comes to the rescue.
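The link between the iterator pattern and for..in is easy to see with any RTL container that exposes an enumerator; the short sketch below is illustrative only.

```pascal
program IteratorSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Generics.Collections;

var
  Names: TList<string>;
  Name: string;

begin
  Names := TList<string>.Create;
  try
    Names.AddRange(['Alice', 'Bob', 'Carol']);

    // for..in works because TList<T> provides GetEnumerator;
    // the iterator pattern hides how the list is traversed.
    for Name in Names do
      Writeln(Name);
  finally
    Names.Free;
  end;
end.
```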
Chapter 16, Lock Patterns, is entirely dedicated to data protection in a multithreaded world and covers five concurrency patterns. The lock pattern enables the code to share data between threads and is the basis for other patterns from this chapter. The lock striping pattern specifies how we can optimize locking when accessing a granular structure, such as an array. The double-checked locking pattern optimizes the creation of shared resources, while the optimistic locking pattern speeds up this process even more. The readers-writer lock is a special version of the locking mechanism designed for situations where a shared resource is mostly read from, and only rarely written to.
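Delphi's runtime library already ships such a lock, TMultiReadExclusiveWriteSynchronizer from System.SysUtils; the program below is only a sketch of the usage pattern, not code from the chapter.

```pascal
program RWLockSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils;

var
  Lock: TMultiReadExclusiveWriteSynchronizer;
  SharedValue: Integer;

procedure ReadSharedData;
begin
  Lock.BeginRead;   // any number of readers may hold the lock at the same time
  try
    Writeln('Current value: ', SharedValue);
  finally
    Lock.EndRead;
  end;
end;

procedure WriteSharedData(NewValue: Integer);
begin
  Lock.BeginWrite;  // a writer gets exclusive access
  try
    SharedValue := NewValue;
  finally
    Lock.EndWrite;
  end;
end;

begin
  Lock := TMultiReadExclusiveWriteSynchronizer.Create;
  try
    WriteSharedData(42);
    ReadSharedData;
  finally
    Lock.Free;
  end;
end.
```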
Chapter 17, Thread Pool, Messaging, Future, and Pipeline, finishes the overview of design patterns by exploring four more concurrency patterns. As a specialized version of the object pool pattern, the thread pool pattern speeds up thread creation. The messaging pattern can be used to remove shared data access completely, and by doing so, can simplify and speed up the program. The future pattern specifies how we can integrate parallel execution of calculations into existing code. This chapter ends with a discussion of the pipeline pattern, which is a practical application of messaging designed to speed up tasks that are hard to parallelize with other approaches.