Functional Python Programming

A classic example of functional programming

As part of our introduction, we'll look at a classic example of functional programming, based on the paper Why Functional Programming Matters by John Hughes. The paper appeared in a book titled Research Topics in Functional Programming, edited by D. Turner and published by Addison-Wesley in 1990.

Here's a link to the paper Why Functional Programming Matters:

http://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf

The paper is a profound discussion of functional programming in general, and it gives several examples. We'll look at just one: the Newton-Raphson algorithm for locating the roots of a function. In this case, we'll use it to compute square roots.

It's an important example because many versions of this algorithm rely on explicit state managed via loops. Indeed, the Hughes paper provides a snippet of Fortran code that emphasizes stateful, imperative processing.

The backbone of this approximation is the calculation of the next approximation from the current approximation. The next_() function takes x, an approximation of sqrt(n), and calculates the next value that brackets the proper root. Take a look at the following example:

def next_(n, x):
    return (x + n/x) / 2

This function computes a series of approximations, where a_{i+1} = (a_i + n/a_i)/2. The distance between successive values shrinks rapidly, so the sequence quickly converges on the value a such that a = n/a, which means a² = n, that is, a = sqrt(n). We don't want to call the function next() because this name would collide with a built-in function. We call it next_() so that we can follow the original presentation as closely as possible.
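
This fixed-point property is easy to check at the command prompt. The following snippet is only a quick sanity check, not part of the original presentation:

import math

# If x is already an exact square root, then n/x == x, so the average
# (x + n/x)/2 is x again: the true root is a fixed point of next_().
a = math.sqrt(2)
assert abs(next_(2, a) - a) < 1e-12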

Here's how the function looks when used at the command prompt:

>>> n = 2
>>> f = lambda x: next_(n, x)
>>> a0 = 1.0
>>> [round(x, 4) for x in (a0, f(a0), f(f(a0)), f(f(f(a0))))]
[1.0, 1.5, 1.4167, 1.4142]

We've defined f() as a lambda that will converge on sqrt(2). We started with 1.0 as the initial value for a_0. Then we evaluated a sequence of recursive evaluations: a_1 = f(a_0), a_2 = f(f(a_0)), and so on. We evaluated these expressions inside a list comprehension so that we could round off each value. This makes the output easier to read and easier to use with doctest. The sequence appears to converge rapidly on sqrt(2) ≈ 1.4142.

We can write a function which will (in principle) generate an infinite sequence of approximations converging on the proper square root:

def repeat(f, a):
    yield a
    for v in repeat(f, f(a)):
        yield v

This function will generate approximations using a function, f(), and an initial value, a. If we provide the next_() function defined earlier, we'll get a sequence of approximations to the square root of the n argument.
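
Because repeat() is lazy, we can peek at the first few approximations without computing the whole sequence. The following demonstration uses itertools.islice, which isn't part of the chapter's algorithm; it's only here to illustrate the lazy behavior:

import itertools

# The first four approximations to sqrt(2), starting from 1.0.
approximations = repeat(lambda x: next_(2, x), 1.0)
print([round(a, 4) for a in itertools.islice(approximations, 4)])
# [1.0, 1.5, 1.4167, 1.4142]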

Tip

The repeat() function expects f() to be a function of a single argument; however, our next_() function has two arguments. We can use a lambda object, lambda x: next_(n, x), to create a partial version of the next_() function with one of the two variables bound.
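
As a variation on this tip, the same binding can be done with functools.partial instead of a lambda; this is just a sketch of the alternative, not the form used in the rest of this chapter:

from functools import partial

# Bind the first positional argument, n, to 2; x remains free.
next_2 = partial(next_, 2)
assert next_2(1.0) == next_(2, 1.0) == 1.5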

Python generator functions can't be trivially recursive; they must explicitly iterate over the recursive results, yielding them individually. Attempting to use a simple return repeat(f, f(a)) will end the iteration, returning a generator object instead of yielding the sequence of values.

We have two ways to yield all the values instead of returning a generator object, which are as follows:

  • We can write an explicit for loop as follows:
    for x in some_iter: yield x.
  • We can use the yield from statement as follows:
    yield from some_iter.

Both techniques for yielding the values of a recursive generator function are equivalent. We'll try to emphasize yield from. In some cases, however, a yield with a complex expression will be clearer than the equivalent mapping or generator expression.
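
Using yield from, the repeat() function shown earlier can be written a little more compactly. This variant behaves exactly like the version with the explicit for loop:

def repeat(f, a):
    # Yield the seed, then delegate to the recursive generator for the rest.
    yield a
    yield from repeat(f, f(a))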

Of course, we don't want the entire infinite sequence. We will stop generating values when two values are so close to each other that we can call either one the square root we're looking for. The common symbol for this close-enough tolerance is the Greek letter epsilon, ε, which can be thought of as the largest error we will tolerate.

In Python, we'll have to be a little clever about taking items from an infinite sequence one at a time. It works out well to use a simple interface function that wraps a slightly more complex recursion. Take a look at the following code snippet:

def within(ε, iterable):
    def head_tail(ε, a, iterable):
        b = next(iterable)
        if abs(a - b) <= ε:
            return b
        return head_tail(ε, b, iterable)
    return head_tail(ε, next(iterable), iterable)

We've defined an internal function, head_tail(), which accepts the tolerance, ε, an item from the iterable sequence, a, and the rest of the iterable sequence, iterable. The next item from the iterable is bound to the name b. If abs(a - b) <= ε, the two values are close enough together that we've found the square root. Otherwise, we use the b value in a recursive invocation of the head_tail() function to examine the next pair of values.

Our within() function merely seeks to properly initialize the internal head_tail() function with the first value from the iterable parameter.
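
Here's within() applied to the approximations of sqrt(2) produced by repeat(); this is just a quick illustration using the definitions shown above:

>>> approximations = repeat(lambda x: next_(2, x), 1.0)
>>> round(within(0.0001, approximations), 4)
1.4142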

Some functional programming languages offer a technique that will put a value back into an iterable sequence. In Python, this might be a kind of unget() or previous() method that pushes a value back into the iterator. Python iterables don't offer this kind of rich functionality.

We can use the three functions next_(), repeat(), and within() to create a square root function, as follows:

def sqrt(a0, ε, n):
    return within(ε, repeat(lambda x: next_(n, x), a0))

We've used the repeat() function to generate a (potentially) infinite sequence of values based on the next_(n, x) function. Our within() function will stop consuming values from the sequence when it locates two consecutive values whose difference is no more than ε.

When we use this version of the sqrt() function, we need to provide an initial seed value, a0, and an ε value. An expression like sqrt(1.0, .0001, 3) will start with an approximation of 1.0 and compute the value of sqrt(3) to within 0.0001. For most applications, the initial a0 value can be 1.0. However, the closer it is to the actual square root, the more rapidly this method converges.
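
A quick check at the command prompt, assuming the three functions shown above have been defined, confirms the expected value:

>>> round(sqrt(1.0, .0001, 3), 4)
1.7321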

The original example of this approximation algorithm was shown in the Miranda language. It's easy to see that there are few profound differences between Miranda and Python. The biggest difference is Miranda's ability to cons a value back onto an iterable, doing a kind of unget. This parallelism between Miranda and Python gives us confidence that many kinds of functional programming can be easily done in Python.
