Chapter 1, Julia is Fast, is your introduction to Julia's unique performance. Julia is a high-performance language, capable of running code whose performance is competitive with code written in C. This chapter explains why Julia code is fast. It also provides context and sets the stage for the rest of the book.
Chapter 2, Analyzing Performance, shows you how to measure the speed of Julia programs and understand where the bottlenecks are. It also shows you how to measure the memory usage of Julia programs and the amount of time spent on garbage collection.
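For a quick preview of the kind of measurement this chapter teaches (a sketch, not an example from the book; it assumes the BenchmarkTools package is installed):

    using BenchmarkTools   # assumed installed; provides @btime

    v = rand(10^6)

    @time sum(v)    # built in: prints elapsed time, allocations, and GC time
    @btime sum($v)  # BenchmarkTools: runs many samples for a stable estimate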
Chapter 3, Types, Type Inference, and Stability, covers type information. Using type information is one of the principal ways in which Julia achieves its performance. This chapter describes how the Julia compiler uses this information to create fast machine code, and shows you how to write Julia code that provides effective type information to the compiler.
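As a small illustration of the kind of problem the chapter addresses (the function names here are hypothetical), a function whose return type depends on a run-time value is type-unstable, and @code_warntype makes that visible:

    # Type-unstable: returns an Int for negative inputs, a Float64 otherwise
    pos_sqrt(x::Float64) = x < 0 ? 0 : sqrt(x)

    # Type-stable rewrite: the return type is always Float64
    pos_sqrt_stable(x::Float64) = x < 0 ? 0.0 : sqrt(x)

    # Highlights the Union{Float64, Int64} return type of the unstable version
    @code_warntype pos_sqrt(2.0)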
Chapter 4, Making Fast Function Calls, explores functions. Functions are the primary artifacts for code organization in Julia, with multiple dispatch being the single most important design feature in the language. This chapter shows you how to use these facilities for fast code.
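To give a flavor of multiple dispatch (the types and functions here are illustrative, not taken from the chapter), a single generic function can have several methods, and Julia picks one based on the run-time types of the arguments:

    abstract type Shape end

    struct Circle <: Shape
        r::Float64
    end

    struct Square <: Shape
        side::Float64
    end

    # Two methods of one generic function; dispatch selects the right one
    area(c::Circle) = pi * c.r^2
    area(s::Square) = s.side^2

    area(Circle(1.0))   # calls the Circle method
    area(Square(2.0))   # calls the Square method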
Chapter 5, Fast Numbers, describes some internals of Julia's number types in relation to performance, and helps you understand the design decisions that were made to achieve that performance.
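One such decision, sketched here, is that Julia's default integers are fixed-width machine integers: arithmetic on them is fast, but it wraps around on overflow rather than growing automatically:

    # Machine integers wrap on overflow
    typemax(Int64) + 1 == typemin(Int64)   # true

    # BigInt trades that speed for arbitrary precision
    big(typemax(Int64)) + 1                # 9223372036854775808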
Chapter 6, Using Arrays, focuses on arrays. Arrays are one of the most important data structures in scientific programming. This chapter shows you how to get the best performance out of your arrays: how to store them, and how to operate on them.
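As a preview of one of the chapter's themes: Julia stores arrays in column-major order, so loops that vary the first index fastest (as in this hypothetical colsum) access memory sequentially:

    # Column-major traversal: the row index i varies fastest
    function colsum(A::Matrix{Float64})
        s = 0.0
        for j in axes(A, 2), i in axes(A, 1)
            s += A[i, j]
        end
        return s
    end

    colsum(rand(1000, 1000))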
Chapter 7, Accelerating Code with the GPU, covers the GPU. In recent years, general-purpose computing on GPUs has turned out to be one of the best ways of running fast parallel computations. Julia provides a unique method for compiling high-level code to the GPU. This chapter shows you how to use the GPU with Julia.
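As a taste of that approach (a sketch assuming an NVIDIA GPU and the CUDA.jl package, which may differ from the packages the chapter uses), an ordinary broadcast over GPU arrays is compiled to a GPU kernel:

    using CUDA   # assumes the CUDA.jl package and an NVIDIA GPU

    a = CUDA.rand(10_000)   # arrays allocated in GPU memory
    b = CUDA.rand(10_000)

    # This broadcast expression runs as a single compiled GPU kernel
    c = a .+ 2f0 .* b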
Chapter 8, Concurrent Programming with Tasks, looks at concurrent programming. Most programs in Julia run on a single thread, on a single processor core. However, certain concurrent primitives make it possible to run parallel, or seemingly parallel, operations, without the full complexities of shared memory multi-threading. In this chapter, we discuss how the concepts of tasks and asynchronous IO help create responsive programs.
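A minimal sketch of the idea: a task started with @async runs concurrently with the main code on the same thread, yielding control at blocking operations such as sleep or IO:

    # The task yields at `sleep`, so the main code runs in the meantime
    t = @async begin
        sleep(1)
        println("background work finished")
    end

    println("main code keeps running")
    wait(t)   # block until the task completes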
Chapter 9, Threads, moves on to look at Julia's new experimental support for shared memory multi-threading. In this chapter, we discuss the implementation details of this mode and see how it differs from other languages. We see how to speed up our computations using threads, and learn some of the limitations that currently exist in this model.
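As a brief illustration (set the JULIA_NUM_THREADS environment variable, or on newer versions pass the --threads flag, so that Julia starts with more than one thread):

    using Base.Threads

    # Iterations of this loop are divided among the available threads
    @threads for i in 1:nthreads()
        println("iteration $i ran on thread $(threadid())")
    end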
Chapter 10, Distributed Computing with Julia, recognizes that there comes a time in every large computation's life when living on a single machine is not enough. There is either too much data to fit in the memory of a single machine, or computations need to be finished more quickly than is possible on all the cores of a single processor. At that stage, computation moves from a single machine to many. Julia comes with advanced distributed computation facilities built in, which we describe in this chapter.
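A small taste of those facilities (a sketch, not the book's own example): the Distributed standard library can start local worker processes and farm work out to them with pmap:

    using Distributed

    addprocs(4)   # start four local worker processes

    # pmap runs the function on the workers and gathers the results
    squares = pmap(x -> x^2, 1:100)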