A brief history of functional programming
If you take a look at the mainstream languages of the past decade, you will notice that the prevailing programming paradigm is OOP. This might lead you to believe that FP is an upstart paradigm, far younger than the well-established object-oriented approach. Yet, when we look at the history of FP, we can trace its roots all the way back to the 1930s, long before programming existed in the modern-day sense.
FP has its origins in the Lambda calculus, developed in the 1930s by Alonzo Church. Church devised it as a formal system based on function abstraction and application, using variable binding. The calculus comes in two variants, typed and untyped, which directly parallels how programming languages today are either statically typed, such as Java and Go, or dynamically typed, such as Python. The untyped Lambda calculus was proven to be Turing-complete in 1937, again similar to how all mainstream programming languages today are Turing-complete.
The Lambda calculus predates modern programming by a few decades. To get to the first actual code that could be thought of as FP, in the way that we understand programming today, we have to move forward to the 1950s, when the LISt Processor (LISP) was created as a practical application of mathematical notation, directly influenced by Church's Lambda calculus.
LISP can be thought of as the first FP language to reach some level of popularity. It was especially popular in the field of artificial intelligence research but, over the decades, also made its way into industry. Derivatives of LISP remained popular for a long time, with notable achievements such as the game Crash Bandicoot and the Hacker News website being written in dialects of this language.
LISP was developed in the late 1950s by John McCarthy. To define LISP functions, he took inspiration from Church's Lambda calculus, but extended it beyond the mathematical system by introducing recursion, a fundamental concept for how functional languages work. Beyond recursion, LISP also treated functions as first-class citizens and pushed innovation in programming language design by including features such as garbage collection and conditional expressions.
In the early 1960s, Kenneth E. Iverson developed A Programming Language (APL). APL is another functional language, perhaps best known for its use of symbols and terse code. For example, the following image shows a code snippet that generates Conway's Game of Life:
Figure 1.2: Conway’s Game of Life in APL
Jumping ahead another decade, in 1973, we get a language called Meta Language (ML). This language introduced the polymorphic Hindley-Milner type system, that is, a type system in which types are inferred automatically without requiring explicit type annotations. In addition, it supported features such as function currying, which we will apply to our functional Go code later in this book. It also supported pattern matching on the arguments of a function, as we can see in the following snippet of a function to compute the factorial of a number:
fun fac 0 = 1 | fac n = n * fac (n - 1)
The pattern matcher in this example inspects the input value to the fac function and then continues with the first clause if the input value is 0, or with the second clause in all other cases. Notice that this is also a recursive function expressed quite beautifully. Sadly, pattern matching will not be explored further in this book, as Go currently offers no way of doing this. We will, however, see a similar type of function dispatching using maps and higher-order functions, as sketched below.
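As a taste of what is to come, the following is a minimal Go sketch, not the book's eventual implementation, that mimics the ML example: a recursive factorial combined with dispatching through a map of higher-order functions (the names fac and dispatch are purely illustrative):
package main

import "fmt"

// fac mirrors the ML snippet: a base case for 0 and a recursive case otherwise.
func fac(n int) int {
	if n == 0 {
		return 1
	}
	return n * fac(n-1)
}

func main() {
	// Instead of pattern matching, a map from names to functions lets us
	// dispatch behavior through higher-order functions.
	dispatch := map[string]func(int) int{
		"factorial": fac,
		"square":    func(n int) int { return n * n },
	}
	fmt.Println(dispatch["factorial"](5)) // 120
	fmt.Println(dispatch["square"](3))    // 9
}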
In 1977, John Backus created a language simply called FP, developed specifically to support the FP paradigm. While the language itself did not gain much traction outside of academia, the paper in which it was introduced (Can Programming Be Liberated from the von Neumann Style?) did spark a renewed interest in FP.
In the same decade as ML and FP, another language called Scheme was developed. This was the first dialect of LISP to use lexical scoping and to guarantee tail-call optimization, which made recursive algorithms practical to implement. While the details of tail-call optimization will be discussed in Chapter 7 of this book, briefly stated, it allows recursive algorithms to run efficiently, without using more memory than a traditional loop would, thus eliminating the "stack overflow exceptions" that would otherwise occur during deep recursion.
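To make the idea concrete, here is a small Go sketch of the tail-recursive form that such an optimization relies on. Note that Go itself does not guarantee tail-call optimization; the example only illustrates the shape that a Scheme implementation would execute in constant stack space:
package main

import "fmt"

// facTail is written in tail-recursive form: the recursive call is the very
// last operation, and the running result is carried in the acc parameter.
// A compiler with tail-call optimization can reuse the current stack frame
// for each call; Go's compiler does not guarantee this.
func facTail(n, acc int) int {
	if n == 0 {
		return acc
	}
	return facTail(n-1, n*acc)
}

func main() {
	fmt.Println(facTail(10, 1)) // 3628800
}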
Scheme is one of the most influential LISP dialects ever created, and it retains some popularity to this day. Although it was created in 1975, the most recent standard was defined as recently as 2013 (R7RS-Small). Scheme, in turn, influenced other LISP dialects, the most notable of which is perhaps Common Lisp. Interestingly, despite its roots in FP, Common Lisp introduced the Common Lisp Object System (CLOS), which facilitated OOP in LISP. With this, we can perhaps consider LISP a truly multi-paradigm language, not unlike Go.
The final language to look at before we jump to contemporary functional languages is Miranda. Miranda is a lazy, purely functional language. The key concept it introduced is lazy evaluation. When a language is said to support lazy evaluation, an expression is not evaluated until its value is actually needed. This can be used, for example, to implement infinite data structures. You could define a function that generates all Fibonacci numbers, a sequence that never ends; rather than creating the entire list (which is not possible), it only generates the subset of that list that is relevant to the problem you are solving. For example, the following snippet of Miranda code defines the infinite list of all square numbers:
squares = [ n*n | n <- [0..] ]
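Go does not offer lazy evaluation, but we can approximate an infinite sequence with a generator closure. The following is a rough sketch of the idea, not an equivalent of Miranda's semantics, and the name squares is purely illustrative:
package main

import "fmt"

// squares returns a generator closure: each call computes the next square
// number on demand. The infinite sequence is never materialized; values
// exist only when requested, loosely mimicking lazy evaluation.
func squares() func() int {
	n := 0
	return func() int {
		sq := n * n
		n++
		return sq
	}
}

func main() {
	next := squares()
	for i := 0; i < 5; i++ {
		fmt.Println(next()) // 0, 1, 4, 9, 16
	}
}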
With that, we have arrived at the next language to discuss briefly, namely Haskell.