Installing Python

First, let's introduce all the settings you need in order to create a fully working data science environment, where you can test the examples in this book and experiment with the code we are going to provide.

Python is an open source, object-oriented, and cross-platform programming language. Compared to some of its direct competitors (for instance, C++ or Java), Python is very concise. It allows you to build a working software prototype in a very short time, yet that is not the only reason it has become the most used language in the data scientist's toolbox: it is also a general-purpose language, and it is very flexible thanks to the variety of available packages that solve a wide spectrum of problems and needs.

Python 2 or Python 3?

There are two main branches of Python: 2.7.x and 3.x. At the time of the revision of this third edition of the book, the Python Foundation (www.python.org/) is offering downloads for Python Version 2.7.15 and 3.6.5. Although the Python 3 version is the newest, the older Python 2 was still in use in both scientific (20% adoption) and commercial (30% adoption) areas in 2017, as depicted in detail by this survey by JetBrains: https://www.jetbrains.com/research/python-developers-survey-2017. If you are still using Python 2, the situation could soon turn quite problematic, because in just one year's time Python 2 will be retired and its maintenance will cease (pythonclock.org/ provides the countdown; for an official statement, read https://www.python.org/dev/peps/pep-0373/). Moreover, only a handful of libraries remain incompatible between the two versions (see py3readiness.org/), which is not reason enough to stay with the older version.

In addition to all these reasons, there is no immediate backward compatibility between Python 3 and 2. In fact, if you try to run some code developed for Python 2 with a Python 3 interpreter, it may not work. Major changes have been made to the newest version, and that has affected past compatibility. Some data scientists, having built most of their work on Python 2 and its packages, are reluctant to switch to the new version.

In this third edition of the book, we will continue to address the larger audience of data scientists, data analysts, and developers, who do not have such a strong legacy with Python 2. Consequently, we will continue working with Python 3, and we suggest using a version such as the most recently available Python 3.6. After all, Python 3 is the present and the future of Python. It is the only version that will be further developed and improved by the Python foundation, and it will be the default version of the future on many operating systems.

Anyway, if you are currently working with version 2 and you prefer to keep on working with it, you can still use this book and all of its examples. In fact, for the most part, our code will simply work on Python 2 if you precede it with these imports:

# Backport Python 3 behavior (true division, the print() function, and so on)
from __future__ import (absolute_import, division,
                        print_function, unicode_literals)
# Make Python 3-style built-ins and standard library names available on Python 2
from builtins import *
from future import standard_library
standard_library.install_aliases()
The from __future__ import commands should always occur at the beginning of your scripts, or else Python may report an error.

As described in the Python-future website (python-future.org), these imports will help convert several Python 3-only constructs to a form that's compatible with both Python 3 and Python 2 (and in any case, most Python 3 code should just simply work on Python 2, even without the aforementioned imports).

In order to run the preceding commands successfully, if the future package is not already available on your system, you should install it (version >= 0.15.2) by using the following command, which is to be executed from a shell:

$> pip install -U future

If you're interested in understanding the differences between Python 2 and Python 3 further, we recommend reading the wiki page offered by the Python foundation itself: https://wiki.python.org/moin/Python2orPython3.

Step-by-step installation

Novice data scientists who have never used Python (who likely don't have the language readily installed on their machines) need to first download the installer from the main website of the project, www.python.org/downloads/, and then install it on their local machine.

Installing Python step by step provides you with full control over what gets installed on your machine. This is very useful when you have to set up single machines to deal with different data science tasks. Anyway, please be warned that a step-by-step installation really takes time and effort. Installing a ready-made scientific distribution instead will lessen the burden of the installation procedure, and it may be well suited when you are first starting out and learning, because it saves you time and sometimes even trouble, though it will put a large number of packages (most of which we won't use) on your computer all at once. Therefore, if you want to start immediately with an easy installation procedure, just skip this part and proceed to the next section, Scientific distributions.

Python being a multiplatform programming language, you'll find installers for machines that run either Windows or Unix-like operating systems.

Remember that some of the latest versions of most Linux distributions (such as CentOS, Fedora, Red Hat Enterprise, and Ubuntu) have Python 2 packaged in their repositories. In such a case, or if you already have some Python version on your computer (since our examples run on Python 3), you first have to check exactly which version you are running. To do such a check, just follow these instructions:

  1. Open a Python shell: type python in the terminal, or click on any Python icon you find on your system.
  2. Then, after starting Python, test the installation by running the following code in the Python interactive shell or REPL:
>>> import sys
>>> print(sys.version_info)
  3. If your Python version has the major=2 attribute, you are running a Python 2 instance. Otherwise, if the attribute is valued 3, or if the print statement reports back something like v3.x.x (for instance, v3.5.1), you are running the right version of Python and you are ready to move forward.
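
For reference, here is what the check looks like on a Python 3.6.5 interpreter (shown purely as an illustration; your numbers will differ), together with a one-line guard you can place in your own scripts:

>>> print(sys.version_info)
sys.version_info(major=3, minor=6, micro=5, releaselevel='final', serial=0)
>>> assert sys.version_info.major == 3, 'This book requires Python 3'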

To clarify the operations we have just mentioned, when a command is given in the terminal command line, we prefix the command with $>. Otherwise, if it's for the Python REPL, it's preceded by >>>.

Installing the necessary packages

Python won't come bundled with everything you need unless you take a specific pre-made distribution. Therefore, to install the packages you need, you can use either pip or easy_install. Both of these tools run in the command line and make the process of installing, upgrading, and removing Python packages a breeze. To check which tools have been installed on your local machine, run the following command:

$> pip

To install pip, follow the instructions given at https://pip.pypa.io/en/latest/installing/.

Alternatively, you can also run the following command:

$> easy_install

If both of these commands end up with an error, you need to install one of them. We recommend that you use pip because it is thought of as an improvement over easy_install; moreover, easy_install is going to be dropped in the future. It is preferable to install everything using pip for the following reasons:

  • It is the preferred package manager for Python 3. Starting with Python 2.7.9 and Python 3.4, it is included by default with the Python binary installers
  • It provides an uninstall functionality
  • It rolls back and leaves your system clear if, for whatever reason, the package's installation fails

Using easy_install in spite of pip's advantages makes sense if you are working on Windows, because pip won't always install pre-compiled binary packages. Sometimes, it will try to build a package's extensions directly from the C source, thus requiring a properly configured compiler (and that's not an easy task on Windows). Whether this happens depends on whether the package is distributed as an egg (pip cannot directly use its binaries, and needs to build from the source code) or as a wheel (in which case pip can install binaries if available, as explained at http://pythonwheels.com/). Instead, easy_install will always install available binaries from both eggs and wheels. Therefore, if you are experiencing unexpected difficulties installing a package, easy_install can save your day (at some price, as we just mentioned in the preceding list).

The most recent versions of Python should already have pip installed by default, so you may have it on your system already. If not, the safest way is to download the get-pip.py script from https://bootstrap.pypa.io/get-pip.py and then run it by using the following:

$> python get-pip.py

The script will also install setuptools (pypi.org/project/setuptools), which also contains easy_install.

You're now ready to install the packages you need in order to run the examples provided in this book. To install the generic <package-name> package, you just need to run the following command:

$> pip install <package-name>

Alternatively, you can run the following command:

$> easy_install <package-name>

Note that, in some systems, pip might be named as pip3 and easy_install as easy_install-3 to stress the fact that both operate on packages for Python 3. If you're unsure, check the version of Python that pip is operating on with:

$> pip -V

For easy_install, the command is slightly different:

$> easy_install --version

After this, the <package-name> package and all of its dependencies will be downloaded and installed. If you're not certain whether a library has been installed, just try to import a module from it. If the Python interpreter raises an ImportError, it can be concluded that the package has not been installed.

This is what happens when the NumPy library has been installed:

>>> import numpy

This is what happens if it's not installed:

>>> import numpy

Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named numpy

In the latter case, you'll need to first install it through pip or easy_install.

Take care that you don't confuse packages with modules. With pip, you install a package; in Python, you import a module. Sometimes, the package and the module have the same name, but in many cases, they don't match. For example, the sklearn module is included in the package named Scikit-learn.
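
For instance, continuing the example just mentioned, you install Scikit-learn under its package name but import it under its module name:

$> pip install scikit-learn

>>> import sklearn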

Finally, to search and browse the Python packages available for Python, look at pypi.org.

Package upgrades

More often than not, you will find yourself in a situation where you have to upgrade a package because either the new version is required by a dependency or it has additional features that you would like to use. First, check the version of the library you have installed by glancing at its __version__ attribute, as shown in the following example with numpy:

>>> import numpy
>>> numpy.__version__ # 2 underscores before and after
'1.11.0'

Now, if you want to update it to a newer release, say the 1.12.1 version, you can run the following command from the command line:

$> pip install -U numpy==1.12.1

Alternatively, you can use the following command:

$> easy_install --upgrade numpy==1.12.1

Finally, if you're interested in upgrading it to the latest available version, simply run the following command:

$> pip install -U numpy

You can alternatively run the following command:

$> easy_install --upgrade numpy

Scientific distributions

As you've read so far, creating a working environment is a time-consuming operation for a data scientist. You first need to install Python, and then, one by one, you can install all the libraries that you will need (sometimes, the installation procedures may not go as smoothly as you'd hope).

If you want to save time and effort and want to ensure that you have a fully working Python environment that is ready to use, you can just download, install, and use a scientific Python distribution. Apart from Python, these distributions also include a variety of preinstalled packages, and sometimes, they even have additional tools and an IDE. A few of them are very well known among data scientists, and in the sections that follow, you will find some of the key features of each of these distributions.

We suggest that you first promptly download and install a scientific distribution, such as Anaconda (which is the most complete one), and after practicing the examples in this book, decide whether or not to fully uninstall the distribution and set up Python alone, which can be accompanied by just the packages you need for your projects.

Anaconda

Anaconda (https://www.anaconda.com/download/) is a Python distribution offered by Continuum Analytics that includes nearly 200 packages, including NumPy, SciPy, pandas, Jupyter, Matplotlib, Scikit-learn, and NLTK. It's a cross-platform distribution (Windows, Linux, and macOS) that can be installed on machines alongside other existing Python distributions and versions. Its base version is free, while add-ons that contain advanced features are charged separately. Anaconda introduces conda, a binary package manager, as a command-line tool to manage your package installations.

As stated on the website, Anaconda's goal is to provide enterprise-ready Python distribution for large-scale processing, predictive analytics, and scientific computing.

Leveraging conda to install packages

If you've decided to install an Anaconda distribution, you can take advantage of the conda binary installer we mentioned previously. conda is an open source package management system, and consequently, it can be installed separately from an Anaconda distribution. The core difference from pip is that conda can be used to install any package (not just Python ones) in a conda environment (that is, an environment where you have installed conda and you are using it to provide packages). There are many advantages to using conda over pip, as described by Jake VanderPlas in this famous blog post of his: jakevdp.github.io/blog/2016/08/25/conda-myths-and-misconceptions.

You can test immediately whether conda is available on your system. Open a shell and type the following:

$> conda -V

If conda is available, your version will appear; otherwise, an error will be reported. If conda is not available, you can quickly install it on your system by going to http://conda.pydata.org/miniconda.html and installing the Miniconda software that's suitable for your computer. Miniconda is a minimal installation that only includes conda and its dependencies.

Conda can help you with two tasks: installing packages and creating virtual environments. In this section, we will explore how conda can help you easily install most of the packages you may need in your data science projects.

Before starting, please check that you have the latest version of conda at hand:

$> conda update conda

Now you can install any package you need. To install the <package-name> generic package, you just need to run the following command:

$> conda install <package-name>

You can also install a particular version of the package just by pointing it out:

$> conda install <package-name>=1.11.0

Similarly, you can install multiple packages at once by listing all their names:

$> conda install <package-name-1> <package-name-2> 

If you just need to update a package that you previously installed, you can keep on using conda:

$> conda update <package-name>

You can update all the available packages simply by using the --all argument:

$> conda update --all

Finally, conda can also uninstall packages for you:

$> conda remove <package-name>

If you would like to know more about conda, you can read its documentation at http://conda.pydata.org/docs/index.html. In summary, its main advantage is that it handles binaries even better than easy_install (always providing a successful installation on Windows without any need to compile packages from source), but without easy_install's problems and limitations. With conda, packages are easy to install (and installation is always successful), update, and even uninstall. On the other hand, conda cannot install directly from a git server (so it cannot access the latest version of many packages under development), and it doesn't cover all the packages available on PyPI, as pip itself does.

Enthought Canopy

Enthought Canopy (https://www.enthought.com/products/canopy/) is a Python distribution by Enthought Inc. It includes more than 200 preinstalled packages, such as NumPy, SciPy, Matplotlib, Jupyter, and pandas. This distribution is targeted at engineers, data scientists, quantitative and data analysts, and enterprises. Its base version (named Canopy Express) is free, but if you need advanced features, you have to buy the full version. It's a multi-platform distribution, and its command-line installation tool is canopy_cli.

WinPython

WinPython (http://winpython.github.io/) is a free, open-source Python distribution that's maintained by the community. It is designed for scientists and includes many packages such as NumPy, SciPy, Matplotlib, and Jupyter. It also includes Spyder as an IDE. It is free and portable. You can put WinPython into any directory, or even into a USB flash drive, and at the same time maintain multiple copies and versions of it on your system. It only works on Microsoft Windows, and its command-line tool is the WinPython Package Manager (WPPM).

Explaining virtual environments

No matter whether you have chosen to install a standalone Python or a scientific distribution, you may have noticed that you are actually bound to the Python version you have installed on your system. The only exception, for Windows users, is to use a WinPython distribution, since it is a portable installation and you can have as many different installations as you need.

A simple solution to breaking free of such a limitation is to use virtualenv, which is a tool for creating isolated Python environments. That means, by using different Python environments, you can easily achieve the following things:

  • Testing any new package installation or doing experimentation on your Python environment without any fear of breaking anything in an irreparable way. In this case, you need a version of Python that acts as a sandbox.
  • Having at hand multiple Python versions (both Python 2 and Python 3), geared with different versions of installed packages. This can help you in dealing with different versions of Python for different purposes (for instance, some of the packages we are going to present on Windows OS only work when using Python 3.4, which is not the latest release).
  • Taking a replicable snapshot of your Python environment easily and having your data science prototypes work smoothly on any other computer or in production. In this case, your main concern is the immutability and replicability of your working environment.

You can find documentation about virtualenv at http://virtualenv.readthedocs.io/en/stable/, though we are going to provide you with all the directions you need to start using it immediately. In order to take advantage of virtualenv, you first have to install it on your system:

$> pip install virtualenv

After the installation completes, you can start building your virtual environments. Before proceeding, you have to make a few decisions:

  • If you have more versions of Python installed on your system, you have to decide which version to pick. Otherwise, virtualenv will take the Python version that was used when virtualenv itself was installed on your system. In order to set a different Python version, you have to type the argument -p followed by the version of Python you want, or insert the path of the Python executable to be used (for instance, by using -p python2.7, or by just pointing to a Python executable such as -p c:\Anaconda2\python.exe).
  • With virtualenv, when required to install a certain package, it will install it from scratch, even if it is already available at a system level (on the python directory you created the virtual environment from). This default behavior makes sense because it allows you to create a completely separated empty environment. In order to save disk space and limit the time of installation of all the packages, you may instead decide to take advantage of already available packages on your system by using the argument --system-site-packages.
  • You may want to be able to later move around your virtual environment across Python installations, even among different machines. Therefore, you may want to make the functionality of all of the environment's scripts relative to the path it is placed in by using the argument --relocatable.

After deciding which Python version to use, whether to link to existing global packages, and whether the virtual environment should be relocatable, in order to start, you just need to launch the command from a shell, declaring the name you would like to assign to your new environment:

$> virtualenv clone

virtualenv will just create a new directory using the name you provided, in the path from which you launched the command. To start using it, just enter the directory and type activate:

$> cd clone
$> activate

At this point, you can start working on your separated Python environment, installing packages, and working with code.

If you need to install multiple packages at once, you may want to use a special pip command, pip freeze, which lists all the packages (and their versions) you have installed on your system. You can record the entire list in a text file by using the following command:

$> pip freeze > requirements.txt

After saving the list in a text file, just take it into your virtual environment and install all the packages in a breeze with a single command:

$> pip install -r requirements.txt

Each package will be installed according to the order in the list (packages are listed in a case-insensitive sorted order). If a package requires other packages that are later in the list, that's not a big deal because pip automatically manages such situations. So, if your package requires NumPy and NumPy is not yet installed, pip will install it first.

When you've finished installing packages and using your environment for scripting and experimenting, in order to return to your system defaults, just issue the following command:

$> deactivate

If you want to remove the virtual environment completely, after deactivating and getting out of the environment's directory, you just have to get rid of the environment's directory itself by performing a recursive deletion. For instance, on Windows, you just do the following:

$> rd /s /q clone

On Linux and macOS, the command will be as follows:

$> rm -r -f clone

If you are working extensively with virtual environments, you should consider using virtualenvwrapper, which is a set of wrappers for virtualenv that helps you manage multiple virtual environments easily. It can be found at bitbucket.org/dhellmann/virtualenvwrapper. If you are operating on a Unix system (Linux or macOS), another solution worth mentioning is pyenv, which can be found at https://github.com/yyuu/pyenv. It lets you set your main Python version, allows for the installation of multiple versions, and creates virtual environments. Its peculiarity is that it does not depend on Python being installed and works perfectly at the user level (no need for sudo commands).

Conda for managing environments

If you have installed the Anaconda distribution, or you have tried conda by using a Miniconda installation, you can also take advantage of the conda command to run virtual environments as an alternative to virtualenv. Let's see how to use conda for that in practice. We can check what environments we have available like this:

$> conda info -e

This command will report to you what environments you can use on your system based on conda. Most likely, your only environment will be root, pointing to your Anaconda distribution folder.

As an example, we can create an environment based on Python Version 3.6, having all the necessary Anaconda-packaged libraries installed. This makes sense, for instance, when installing a particular set of packages for a data science project. In order to create such an environment, just perform the following:

$> conda create -n python36 python=3.6 anaconda

The preceding command asks for a particular Python version, 3.6, and requires the installation of all packages that are available on the Anaconda distribution (the argument anaconda). It names the environment as python36 using the argument -n. The complete installation should take a while, given a large number of packages in the Anaconda installation. After having completed all of the installations, you can activate the environment:

$> activate python36

If you need to install additional packages to your environment when activated, you just use the following:

$> conda install -n python36 <package-name1> <package-name2>

That is, you make the list of the required packages follow the name of your environment. Naturally, you can also use pip install, as you would do in a virtualenv environment.

You can also use a file instead of listing all the packages by name yourself. You can create a list in an environment using the list argument and pipe the output to a file:

$> conda list -e > requirements.txt

Then, in your target environment, you can install the entire list by using the following:

$> conda install --file requirements.txt

You can even create an environment, based on a requirements list:

$> conda create -n python36 python=3.6 --file requirements.txt

Finally, after having used the environment, to close the session, you simply use the following command:

$> deactivate

Contrary to virtualenv, there is a specialized argument in order to completely remove an environment from your system:

$> conda remove -n python36 --all

A glance at the essential packages

We mentioned previously that the two most relevant characteristics of Python are its ability to integrate with other languages and its mature package system, which is well embodied by PyPI (the Python Package Index: pypi.org), a common repository for the majority of Python open source packages that are constantly maintained and updated.

The packages that we are now going to introduce are strongly analytical and they will constitute a complete data science toolbox. All of the packages are made up of extensively tested and highly optimized functions for both memory usage and performance, ready to achieve any scripting operation with successful execution. A walkthrough on how to install them is provided in the following section.

Partially inspired by similar tools present in R and MATLAB environments, we will explore how a few selected Python commands can allow you to efficiently handle data and then explore, transform, experiment, and learn from it without having to write too much code or reinvent the wheel.

NumPy

NumPy, which is Travis Oliphant's creation, is the true analytical workhorse of the Python language. It provides the user with multidimensional arrays, along with a large set of functions to perform a multiplicity of mathematical operations on these arrays. Arrays are blocks of data that are arranged along multiple dimensions and that implement mathematical vectors and matrices. Characterized by optimal memory allocation, arrays are useful not just for storing data, but also for fast matrix operations (vectorization), which are indispensable when you wish to solve ad hoc data science problems:

  • Website: http://www.numpy.org/
  • Version at the time of print: 1.12.1
  • Suggested install command: pip install numpy

As a convention largely adopted by the Python community, when importing NumPy, it is suggested that you alias it as np:

import numpy as np

We will be doing this throughout the course of this book.
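
As a minimal taste of the vectorized operations that arrays enable (a small sketch with made-up numbers, just for illustration):

import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(a + b)                  # element-wise sum: [5. 7. 9.]
print(a.dot(b))               # dot product: 32.0
print(a.reshape(3, 1) * b)    # broadcasting yields a 3x3 matrix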

SciPy

An original project by Travis Oliphant, Pearu Peterson, and Eric Jones, SciPy completes NumPy's functionalities, offering a larger variety of scientific algorithms for linear algebra, sparse matrices, signal and image processing, optimization, fast Fourier transforms, and much more:

  • Website: http://www.scipy.org/
  • Version at time of print: 1.1.0
  • Suggested install command: pip install scipy
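
As a small example of the kind of functionality SciPy adds on top of NumPy (sparse matrices, in this case; the numbers are purely illustrative):

import numpy as np
from scipy import sparse

# build a 3x3 sparse matrix from a mostly-zero dense array
dense = np.array([[0, 0, 3], [4, 0, 0], [0, 0, 0]])
sparse_matrix = sparse.csr_matrix(dense)
print(sparse_matrix.nnz)          # number of stored (non-zero) elements: 2
print(sparse_matrix.toarray())    # back to a dense NumPy array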

pandas

The pandas package deals with everything that NumPy and SciPy cannot do. Thanks to its specific data structures, namely DataFrames and Series, pandas allows you to handle complex tables of data of different types (which is something that NumPy's arrays cannot do) and time series. Created by Wes McKinney, it lets you easily and smoothly load data from a variety of sources. You can then slice, dice, handle missing elements, add, rename, aggregate, reshape, and finally visualize your data at will.

Conventionally, the pandas package is imported as pd:

import pandas as pd
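
A tiny sketch of what working with a DataFrame looks like (the data is invented purely for illustration):

import pandas as pd

df = pd.DataFrame({'city': ['Rome', 'Madrid', 'London'],
                   'population_m': [2.9, 3.2, 8.9]})
print(df.dtypes)                    # mixed column types in a single table
print(df['population_m'].mean())    # 5.0
print(df.sort_values('population_m', ascending=False))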

pandas-profiling

This is a GitHub project that easily allows you to create a report from a pandas DataFrame. The package will present the following measures in an interactive HTML report, which is used to evaluate the data at hand for a data science project:

  • Essentials, such as type, unique values, and missing values
  • Quantile statistics, such as minimum value, Q1, median, Q3, maximum, range, and interquartile range
  • Descriptive statistics such as mean, mode, standard deviation, sum, median absolute deviation, the coefficient of variation, kurtosis, and skewness
  • Most frequent values
  • Histograms
  • Correlations highlighting highly correlated variables, and Spearman and Pearson matrixes

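A minimal usage sketch (assuming the package is installed; the DataFrame here is a toy example, and ProfileReport is the class the package exposes for building the report):

import pandas as pd
import pandas_profiling

df = pd.DataFrame({'a': [1, 2, 3, 4, 5], 'b': [1.5, 2.5, 3.5, 2.5, 1.5]})
report = pandas_profiling.ProfileReport(df)
report.to_file('report.html')    # writes the interactive HTML report to disk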

Scikit-learn

Started as part of SciKits (SciPy Toolkits), Scikit-learn is the core of data science operations in Python. It offers all that you may need in terms of data preprocessing, supervised and unsupervised learning, model selection, validation, and error metrics. Expect us to talk at length about this package throughout this book. Scikit-learn started in 2007 as a Google Summer of Code project by David Cournapeau. Since 2013, it has been taken over by the researchers at INRIA (Institut national de recherche en informatique et en automatique, that is, the French Institute for Research in Computer Science and Automation).

Note that the imported module is named sklearn.
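
As a minimal preview of the workflow you will see throughout the book (the dataset and model here are chosen only for illustration):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)    # a small built-in example dataset
model = LogisticRegression()
model.fit(X, y)                      # supervised learning: fit the model on the data
print(model.score(X, y))             # accuracy on the training data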

Jupyter

A scientific approach requires the fast experimentation of different hypotheses in a reproducible fashion. Initially named IPython and limited to working only with the Python language, Jupyter was created by Fernando Perez in order to address the need for an interactive Python command shell (based on the shell, a web browser, and the application interface), with graphical integration, customizable commands, rich history (in the JSON format), and computational parallelism for enhanced performance. Jupyter is our favored choice throughout this book; it is used to clearly and effectively illustrate operations with scripts and data, and the consequent results:

  • Website: http://jupyter.org/
  • Version at the time of print: 4.4.0 (ipykernel = 4.8.2)
  • Suggested install command: pip install jupyter
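
Once installed, you can start the Notebook server from a shell, and the interface will open in your browser:

$> jupyter notebook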

JupyterLab

JupyterLab is the next user interface for the Jupyter project, which is currently in beta. It is an environment devised for interactive and reproducible computing that offers the usual notebook, terminal, text editor, file browser, rich outputs, and so on, arranged in a more flexible and powerful user interface. JupyterLab will eventually replace the classic Jupyter Notebook after it reaches Version 1.0. Therefore, we introduce this package now in order to make you aware of it and of its functionalities.

Matplotlib

Originally developed by John Hunter, matplotlib is a library that contains all the building blocks that are required to create quality plots from arrays and to visualize them interactively.

You can find all the MATLAB-like plotting frameworks inside the PyLab module:

  • Website: http://matplotlib.org/
  • Version at the time of print: 2.2.2
  • Suggested install command: pip install matplotlib

You can simply import what you need for your visualization purposes with the following command:

import matplotlib.pyplot as plt
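
A minimal plotting sketch (the values are arbitrary, just to show the basic flow):

import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
y = [1, 4, 9, 16]
plt.plot(x, y, 'o-')    # points joined by a line
plt.xlabel('x')
plt.ylabel('x squared')
plt.show()              # opens an interactive window (or renders inline in Jupyter)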

Seaborn

Working out beautiful graphics using matplotlib can be really time-consuming; for this reason, Michael Waskom (http://www.cns.nyu.edu/~mwaskom/) developed Seaborn, a high-level visualization package based on matplotlib and integrated with pandas data structures (such as Series and DataFrames) that is capable of producing informative and beautiful statistical visualizations.

You can simply import what you need for your visualization purposes with the following command:

import seaborn as sns
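
A small sketch of a typical Seaborn call (load_dataset fetches one of Seaborn's example datasets, so an internet connection is assumed):

import seaborn as sns
import matplotlib.pyplot as plt

tips = sns.load_dataset('tips')                    # example dataset shipped by Seaborn
sns.regplot(x='total_bill', y='tip', data=tips)    # scatter plot with a fitted regression line
plt.show()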

Statsmodels

Previously a part of SciKits, statsmodels was conceived as a complement to SciPy's statistical functions. It features generalized linear models, discrete choice models, time series analysis, and a series of descriptive statistics, as well as parametric and non-parametric tests.
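
A small sketch of an ordinary least squares fit with statsmodels (the data is simulated on the spot, just for illustration):

import numpy as np
import statsmodels.api as sm

x = np.arange(20)
y = 2.0 * x + 1.0 + np.random.normal(scale=0.5, size=20)    # noisy linear data
X = sm.add_constant(x)           # add the intercept column
results = sm.OLS(y, X).fit()
print(results.params)            # estimated intercept and slope
print(results.summary())         # full statistical report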

Beautiful Soup

Beautiful Soup, a creation of Leonard Richardson, is a great tool for scraping data out of HTML and XML files retrieved from the internet. It works incredibly well, even in the case of tag soups (hence the name), which are collections of malformed, contradictory, and incorrect tags. After choosing your parser (the HTML parser included in Python's standard library works fine), thanks to Beautiful Soup, you can navigate through the objects in the page and extract text, tables, and any other information that you may find useful.

Note that the imported module is named bs4.
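
A minimal scraping sketch using Python's built-in HTML parser (the HTML string is invented for illustration):

from bs4 import BeautifulSoup

html = "<html><body><p class='title'>Hello, <b>soup</b>!</p></body></html>"
soup = BeautifulSoup(html, 'html.parser')
print(soup.p.get_text())      # Hello, soup!
print(soup.find('b').text)    # soup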

NetworkX

Developed by the Los Alamos National Laboratory, NetworkX is a package specialized in the creation, manipulation, analysis, and graphical representation of real-life network data (it can easily operate with graphs made up of a million nodes and edges). Besides specialized data structures for graphs and fine visualization methods (2D and 3D), it provides the user with many standard graph measures and algorithms, such as the shortest path, centrality, components, communities, clustering, and PageRank. We will mainly use this package in Chapter 6, Social Network Analysis.

Conventionally, NetworkX is imported as nx:

import networkx as nx  
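
A tiny sketch of building and querying a graph (the nodes and edges are made up):

import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 1), (3, 4)])
print(nx.shortest_path(G, 1, 4))    # [1, 3, 4]
print(nx.degree_centrality(G))      # a simple centrality measure per node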

NLTK

The Natural Language Toolkit (NLTK) provides access to corpora and lexical resources, and to a complete suite of functions for Natural Language Processing (NLP), ranging from tokenizers to part-of-speech taggers and from tree models to named-entity recognition. Initially, Steven Bird and Edward Loper created the package as an NLP teaching infrastructure for their course at the University of Pennsylvania. Now it is a fantastic tool that you can use to prototype and build NLP systems:

  • Website: http://www.nltk.org/
  • Version at the time of print: 3.3
  • Suggested install command: pip install nltk
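
A minimal sketch of tokenization with NLTK (the punkt models are downloaded once on first use):

import nltk

nltk.download('punkt')    # one-off download of the sentence/word tokenizer models
print(nltk.word_tokenize("Python makes natural language processing accessible."))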

Gensim

Gensim, programmed by Radim Řehůřek, is an open source package that is suitable for the analysis of large textual collections with the help of parallel distributable online algorithms. Among its advanced functionalities, it implements Latent Semantic Analysis (LSA), topic modeling by Latent Dirichlet Allocation (LDA), and Google's word2vec, a powerful algorithm that transforms text into vector features that can be used in supervised and unsupervised machine learning.
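
A hedged word2vec sketch (the corpus is a toy one; note that the size parameter shown here is the gensim 3.x name, later renamed vector_size):

from gensim.models import Word2Vec

sentences = [['data', 'science', 'with', 'python'],
             ['machine', 'learning', 'with', 'python']]
model = Word2Vec(sentences, size=10, min_count=1, seed=42)
print(model.wv['python'])                 # the learned 10-dimensional vector
print(model.wv.most_similar('python'))    # words closest in the vector space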

PyPy

PyPy is not a package; it is an alternative implementation of Python 3.5.3 that supports most of the commonly used Python standard packages (unfortunately, NumPy is currently not fully supported). As an advantage, it offers enhanced speed and memory handling. Thus, it is very useful for heavy-duty operations on large chunks of data, and it should be part of your big data handling strategies.

XGBoost

XGBoost is a scalable, portable, and distributed gradient boosting library (a tree ensemble machine learning algorithm). Initially created by Tianqi Chen at the University of Washington, it has been enriched by a Python wrapper by Bing Xu and an R interface by Tong He (you can read the story behind XGBoost directly from its principal creator at http://homes.cs.washington.edu/~tqchen/2016/03/10/story-and-lessons-behind-the-evolution-of-xgboost.html). XGBoost is available for Python, R, Java, Scala, Julia, and C++, and it can work on a single machine (leveraging multithreading) as well as in Hadoop and Spark clusters.

Detailed instructions for installing XGBoost on your system can be found at https://github.com/dmlc/xgboost/blob/master/doc/build.md.

The installation of XGBoost on both Linux and macOS is quite straightforward, whereas it is a little bit trickier for Windows users, though the recent release of a pre-built binary wheel for Python has made the procedure a piece of cake for everyone. You simply have to type this on your shell:

$> pip install xgboost

If you want to install XGBoost from scratch because you need the most recent bug fixes or GPU support, you need to first build the shared library from C++ (libxgboost.so for Linux/macOS and xgboost.dll for Windows) and then install the Python package. On a Linux/macOS system, you just have to build the executable by the make command, but on Windows, things are a little bit more tricky.

Generally, refer to https://xgboost.readthedocs.io/en/latest/build.html#, which provides the most recent instructions for building from scratch. For a quick reference, here, we are going to provide specific installation steps to get XGBoost working on Windows:

  1. First, download and install Git for Windows (https://git-for-windows.github.io/).
  2. Then, you need a MinGW compiler present on your system. You can download it from http://www.mingw.org/ or http://tdm-gcc.tdragon.net/, according to the characteristics of your system.
  3. From the command line, execute the following:
$> git clone --recursive https://github.com/dmlc/xgboost
$> cd xgboost
$> git submodule init
$> git submodule update
  4. Then, still from the command line, copy the configuration for 64-bit systems to be the default one:
$> copy make\mingw64.mk config.mk
  5. Alternatively, copy the plain 32-bit version:
$> copy make\mingw.mk config.mk
  6. After copying the configuration file, you can run the compiler, setting it to use four threads in order to speed up the compiling procedure:
$> mingw32-make -j4
  7. In MinGW, the make command comes with the name mingw32-make. If you are using a different compiler, the previous command may not work; if so, you can simply try this:
$> make -j4
  8. Finally, if the compiler completes its work without errors, you can install the package in your Python installation by using the following:
$> cd python-package
$> python setup.py install
After following all the preceding instructions, if you try to import XGBoost in Python and it doesn't load and results in an error, it may well be that Python cannot find MinGW's g++ runtime libraries.
You just need to find the location on your computer of MinGW's binaries (in our case, it was in C:\mingw-w64\mingw64\bin; just modify the following code and put yours) and place the following code snippet before importing XGBoost:
import os

# Note the raw string: plain backslashes (such as \b) would otherwise be
# interpreted as escape sequences
mingw_path = r'C:\mingw-w64\mingw64\bin'
os.environ['PATH'] = mingw_path + ';' + os.environ['PATH']
import xgboost as xgb
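
Once the installation succeeds (whether by wheel or from source), a minimal usage sketch with XGBoost's scikit-learn-style wrapper could look like this (the dataset is chosen only for illustration):

from sklearn.datasets import load_iris
from xgboost import XGBClassifier

X, y = load_iris(return_X_y=True)
clf = XGBClassifier(n_estimators=10)    # a small boosted-tree ensemble
clf.fit(X, y)
print(clf.predict(X[:5]))               # class predictions for the first five rows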

LightGBM

LightGBM is a gradient boosting framework that was developed by Microsoft that uses the tree-based learning algorithm in a different fashion than other GBMs, favoring exploration of more promising leaves (leaf-wise) instead of developing level-wise.


In graph terminology, LightGBM pursues a depth-first search strategy rather than a breadth-first search one.

It has been designed to be distributed (parallel and GPU learning are supported), and its unique approach really does achieve faster training speed with lower memory usage (thus allowing it to handle data at a larger scale).

The installation of LightGBM requires some more actions on your side than usual Python packages do. If you are operating on a Windows system, open a shell and issue the following commands:

$> git clone --recursive https://github.com/Microsoft/LightGBM
$> cd LightGBM
$> mkdir build
$> cd build
$> cmake -G "MinGW Makefiles" ..
$> mingw32-make.exe -j4

You may need to install CMake (https://cmake.org) on your system first, and you may also need to run the cmake -G "MinGW Makefiles" .. command again if an "sh.exe was found in your PATH" error is reported.

If you are instead operating on a Linux system, you just need to type the following in a shell:

$> git clone --recursive https://github.com/Microsoft/LightGBM
$> cd LightGBM
$> mkdir build
$> cd build
$> cmake ..
$> make -j4

After you have completed compiling the package, no matter whether you are on Windows or Linux, you just import it on your Python command line:

import lightgbm as lgbm

You can also build the package using MPI for parallel computing architectures, HDFS, or GPU versions. You can find all the detailed instructions at https://github.com/Microsoft/LightGBM/blob/master/docs/Installation-Guide.rst.
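
After importing it, a minimal usage sketch with LightGBM's scikit-learn-style wrapper could look like this (the dataset is chosen only for illustration):

import lightgbm as lgbm
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
clf = lgbm.LGBMClassifier(n_estimators=10)    # leaf-wise boosted trees
clf.fit(X, y)
print(clf.predict(X[:5]))                     # class predictions for the first five rows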

CatBoost

Developed by Yandex researchers and engineers, CatBoost (which stands for categorical boosting) is a gradient boosting algorithm, based on decision trees, that is optimized for handling categorical features (non-numeric features expressing a quality, such as a color, a brand, or a type) without much preprocessing. Since in most databases the majority of features are categorical, CatBoost can really boost your prediction results.

CatBoost requires msgpack, which can be easily installed by using the pip install msgpack command.
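
A minimal sketch showing how categorical columns are passed to CatBoost without any manual encoding (the toy data is invented for illustration):

from catboost import CatBoostClassifier

X = [['red', 1.0], ['blue', 2.0], ['red', 3.0], ['green', 4.0]]
y = [0, 1, 0, 1]
model = CatBoostClassifier(iterations=10, verbose=False)
model.fit(X, y, cat_features=[0])       # column 0 is categorical
print(model.predict([['blue', 2.5]]))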

TensorFlow

TensorFlow was initially developed by the Google Brain team for internal use at Google, and was then released to the larger public. On November 9, 2015, it was distributed under the Apache 2.0 open source license, and since then it has become the most widespread open source software library for high-performance numerical computation (mostly used for deep learning). It is capable of running computations across a variety of platforms (systems with multiple CPUs, GPUs, and TPUs), from desktops to clusters of servers to mobile and edge devices.

In this book, we will use TensorFlow as the backend of Keras; that is, we won't use it directly, but we will need to have it running on our system.

Installing TensorFlow on a CPU system is quite straightforward: just use pip install tensorflow. But if you have an NVIDIA GPU (you actually need a GPU card with CUDA Compute Capability 3.0 or higher) on your system, the requirements ramp up and you first have to install the following:

  • CUDA Toolkit 9.0
  • The NVIDIA drivers associated with CUDA Toolkit 9.0
  • cuDNN v7.0

For each operation, you need to accomplish various steps depending on your system, as detailed on the NVIDIA website. You can find all the directions for installation depending on your system (Ubuntu, Windows, or macOS) at https://www.tensorflow.org/install/.

After having accomplished all the necessary steps, pip install tensorflow-gpu will install the TensorFlow package that's optimized for GPU computations.
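
After either installation, you can verify that TensorFlow is working with a tiny check (the session-based snippet assumes the 1.x API that was current for this edition):

import tensorflow as tf

print(tf.__version__)
hello = tf.constant('Hello, TensorFlow')
with tf.Session() as sess:       # TensorFlow 1.x style execution
    print(sess.run(hello))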

Keras

Keras is a minimalist and highly modular neural networks library, written in Python and capable of running on top of TensorFlow (the open source software library for numerical computation released by Google) as well as Microsoft Cognitive Toolkit (previously known as CNTK), Theano, or MXNet. Its primary developer and maintainer is François Chollet, a machine learning researcher working at Google:

  • Website: https://keras.io/
  • Version at the time of print: 2.2.0
  • Suggested install command: pip install keras

As an alternative, you can install the latest available version (which is advisable since the package is in continuous development) by using the following command:

$> pip install git+git://github.com/fchollet/keras.git
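
As a brief preview of the Keras API we will use later in the book (the layer sizes are arbitrary; a 10-feature binary classification problem is assumed):

from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(8, activation='relu', input_shape=(10,)))    # hidden layer
model.add(Dense(1, activation='sigmoid'))                    # output layer
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()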