Machine Learning with R

Machine Learning with R: R gives you access to the cutting-edge software you need to prepare data for machine learning. No previous knowledge required – this book will take you methodically through every stage of applying machine learning.

eBook: €8.99 (list price €36.99)
Paperback: €45.99
Subscription: Free Trial, renews at €18.99 per month

What do you get with Print?

  • Instant access to your digital eBook copy whilst your Print order is shipped
  • Paperback book shipped to your preferred address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM FREE - read whenever, wherever and however you want

Machine Learning with R

Chapter 1. Introducing Machine Learning

If science fiction stories are to be believed, teaching machines to learn will inevitably lead to apocalyptic wars between machines and their makers. In the early stages, computers are taught to play simple games of tic-tac-toe and chess. Later, machines are given control of traffic lights and communications, followed by military drones and missiles. The machines' evolution takes an ominous turn once the computers become sentient and learn how to teach themselves. Having no more need for human programmers, humankind is then "deleted."

Thankfully, at the time of this writing, machines still require user input.

Your impressions of machine learning may be heavily influenced by these types of mass media depictions of artificial intelligence. And even though there may be a hint of truth to such tales, in reality machine learning is focused on more practical applications. The task of teaching a computer to learn is tied more closely to a specific problem than would be the case for a computer that can play games, ponder philosophy, or answer trivia questions. Machine learning is more like training an employee than raising a child.

Putting these stereotypes aside, by the end of this chapter, you will have gained a far more nuanced understanding of machine learning. You will be introduced to the fundamental concepts that define and differentiate the most commonly used machine learning approaches.

You will learn:

  • The origins and practical applications of machine learning

  • How knowledge is defined and represented by computers

  • The basic concepts that differentiate machine learning approaches

In a single sentence, you could say that machine learning provides a set of tools that use computers to transform data into actionable knowledge. To learn more about how the process works, read on.

The origins of machine learning


Since birth, we are inundated with data. Our body's sensors—the eyes, ears, nose, tongue, and nerves—are continually assailed with raw data that our brain translates into sights, sounds, smells, tastes, and textures. Using language, we are able to share these experiences with others.

The earliest databases recorded information from the observable environment. Astronomers recorded patterns of planets and stars; biologists noted results from experiments crossbreeding plants and animals; and cities recorded tax payments, disease outbreaks, and populations. Each of these required a human being to first observe and second, record the observation. Today, such observations are increasingly automated and recorded systematically in ever-growing computerized databases.

The invention of electronic sensors has additionally contributed to an increase in the richness of recorded data. Specialized sensors see, hear, smell, or taste. These sensors process the data far differently than a human being would, and in many ways, this is a benefit. Without the need for translation into human language, the raw sensory data remains objective.

Tip

It is important to note that although a sensor does not have a subjective component to its observations, it does not necessarily report truth (if such a concept can be defined). A camera taking photographs in black and white might provide a far different depiction of its environment than one shooting pictures in color. Similarly, a microscope provides a far different depiction of reality than a telescope.

Between databases and sensors, many aspects of our lives are recorded. Governments, businesses, and individuals are recording and reporting all manners of information from the monumental to the mundane. Weather sensors record temperature and pressure data, surveillance cameras watch sidewalks and subway tunnels, and all manner of electronic behaviors are monitored: transactions, communications, friendships, and many others.

This deluge of data has led some to state that we have entered an era of Big Data, but this may be a bit of a misnomer. Human beings have always been surrounded by data. What makes the current era unique is that we have easy data. Larger and more interesting data sets are increasingly accessible through the tips of our fingers, only a web search away. We now live in a period with vast quantities of data that can be directly processed by machines. Much of this information has the potential to inform decision making, if only there were a systematic way of making sense of it all.

The field of study interested in the development of computer algorithms for transforming data into intelligent action is known as machine learning. This field originated in an environment where the available data, statistical methods, and computing power rapidly and simultaneously evolved. Growth in data necessitated additional computing power, which in turn spurred the development of statistical methods for analyzing large datasets. This created a cycle of advancement allowing even larger and more interesting data to be collected.

A closely related sibling of machine learning, data mining, is concerned with the generation of novel insight from large databases (not to be confused with the pejorative term "data mining," describing the practice of cherry-picking data to support a theory). Although there is some disagreement over how widely the two fields overlap, a potential point of distinction is that machine learning tends to be focused on performing a known task, whereas data mining is about the search for hidden nuggets of information. For instance, you might use machine learning to teach a robot to drive a car, whereas you would utilize data mining to learn what type of cars are the safest.

Tip

Machine learning algorithms are virtually a prerequisite for data mining, but the opposite is not true. In other words, you can apply machine learning to tasks that do not involve data mining, but if you are using data mining methods, you are almost certainly using machine learning.

Uses and abuses of machine learning


At its core, machine learning is primarily interested in making sense of complex data. This is a broadly applicable mission, and largely application agnostic. As you might expect, machine learning is used widely. For instance, it has been used to:

  • Predict the outcomes of elections

  • Identify and filter spam messages from e-mail

  • Foresee criminal activity

  • Automate traffic signals according to road conditions

  • Produce financial estimates of storms and natural disasters

  • Examine customer churn

  • Create auto-piloting planes and auto-driving cars

  • Identify individuals with the capacity to donate

  • Target advertising to specific types of consumers

For now, don't worry about exactly how the machines learn to perform these tasks; we will get into the specifics later. But across each of these contexts, the process is the same. A machine learning algorithm takes data and identifies patterns that can be used for action. In some cases, the results are so successful that they seem to reach near-legendary status.

One possibly apocryphal tale is of a large retailer in the United States, which employed machine learning to identify expectant mothers for targeted coupon mailings. If mothers-to-be were targeted with substantial discounts, the retailer hoped they would become loyal customers who would then continue to purchase profitable items like diapers, formula, and toys.

By applying machine learning methods to purchase data, the retailer believed it had learned some useful patterns. Certain items, such as prenatal vitamins, lotions, and washcloths, could be used to identify with a high degree of certainty not only whether a woman was pregnant, but also when the baby was due.

After using this data for a promotional mailing, an angry man contacted the retailer and demanded to know why his teenage daughter was receiving coupons for maternity items. He was furious that the merchant seemed to be encouraging teenage pregnancy. Later on, as a manager called to offer an apology, it was the father that ultimately apologized; after confronting his daughter, he had discovered that she was indeed pregnant.

Whether completely true or not, there is certainly an element of truth to the preceding tale. Retailers do, in fact, routinely analyze their customers' transaction data. If you've ever used a shopper's loyalty card at your grocer, coffee shop, or another retailer, it is likely that your purchase data is being used for machine learning.

Retailers use machine learning methods for advertising, targeted promotions, inventory management, or the layout of the items in the store. Some retailers have even equipped checkout lanes with devices that print coupons for promotions based on the items in the current transaction. Websites also routinely do this to serve advertisements based on your web browsing history. Given the data from many individuals, a machine learning algorithm learns typical patterns of behavior that can then be used to make recommendations.

Although I am familiar with the machine learning methods working behind the scenes, it still feels a bit like magic when a retailer or website seems to know me better than I know myself. Others may be less thrilled to discover that their data is being used in this manner. Therefore, any person wishing to utilize machine learning or data mining would be remiss not to at least briefly consider the ethical implications of the art.

Ethical considerations

Due to the relative youth of machine learning as a discipline and the speed at which it is progressing, the associated legal issues and social norms are often quite uncertain and constantly in flux. Caution should be exercised when obtaining or analyzing data in order to avoid breaking laws, violating terms of service or data use agreements, abusing trust, or violating the privacy of customers or the public.

Tip

The informal corporate motto of Google, an organization that collects perhaps more data on individuals than any other, is "don't be evil." This may serve as a reasonable starting point for forming your own ethical guidelines, but it may not be sufficient.

Certain jurisdictions may prevent you from using racial, ethnic, religious, or other protected class data for business reasons, but keep in mind that excluding this data from your analysis may not be enough—machine learning algorithms might inadvertently learn this information independently. For instance, if a certain segment of people generally live in a certain region, buy a certain product, or otherwise behave in a way that uniquely identifies them as a group, some machine learning algorithms can infer the protected information from seemingly innocuous data. In such cases, you may need to fully "de-identify" these people by excluding any potentially identifying data in addition to the protected information.

Apart from the legal consequences, using data inappropriately may hurt your bottom line. Customers may feel uncomfortable or become spooked if aspects of their lives they consider private are made public. Recently, several high-profile web applications have experienced a mass exodus of users who felt exploited when the applications' terms of service agreements changed and their data was used for purposes beyond what the users had originally agreed upon. The fact that privacy expectations differ by context, by age cohort, and by locale adds complexity to deciding the appropriate use of personal data. It would be wise to consider the cultural implications of your work before you begin your project.

Tip

The fact that you can use data for a particular end does not always mean that you should.

How do machines learn?


A commonly cited formal definition of machine learning, proposed by computer scientist Tom M. Mitchell, says that a machine is said to learn if it is able to take experience and utilize it such that its performance improves upon similar experiences in the future. This definition is fairly exact, yet says little about how machine learning techniques actually learn to transform data into actionable knowledge.

Tip

Although it is not strictly necessary to understand the theoretical basis of machine learning prior to using it, this foundation provides an insight into the distinctions among machine learning algorithms. Because machine learning algorithms are modeled in many ways on human minds, you may even discover yourself examining your own mind in a different light.

Regardless of whether the learner is a human or a machine, the basic learning process is similar. It can be divided into three components as follows:

  • Data input: It utilizes observation, memory storage, and recall to provide a factual basis for further reasoning.

  • Abstraction: It involves the translation of data into broader representations.

  • Generalization: It uses abstracted data to form a basis for action.

To better understand the learning process, think about the last time you studied for a difficult test, perhaps for a university final exam or a career certification. Did you wish for an eidetic (that is, photographic) memory? If so, you may be disappointed to learn that perfect recall is unlikely to save you much effort. Without a higher understanding, your knowledge is limited exactly to the data input, meaning only what you had seen before and nothing more. Therefore, without knowledge of all the questions that could appear on the exam, you would be stuck attempting to memorize answers to every question that could conceivably be asked. Obviously, this is an unsustainable strategy.

Instead, a better strategy is to spend time selectively managing only a smaller set of key ideas. The commonly used learning strategies of creating an outline or a concept map are similar to how a machine performs knowledge abstraction. The tools define relationships among information and in doing so, depict difficult ideas without needing to memorize them word-for-word. It is a more advanced form of learning because it requires that the learner puts the topic into his or her own words.

It is always a tense moment when the exam is graded and the learning strategies are either vindicated or implicated with a high or low mark. Here, one discovers whether the learning strategies generalized to the questions that the teacher or professor had selected. Generalization requires a breadth of abstracted data, as well as a higher-level understanding of how to apply such knowledge to unforeseen topics. A good teacher can be quite helpful in this regard.

Keep in mind that although we have illustrated the learning process as three distinct steps, they are merely organized this way for illustrative purposes. In reality, the three components of learning are inextricably linked. In particular, the stages of abstraction and generalization are so closely related that it would be impossible to perform one without the other. In human beings, the entire process happens subconsciously. We recollect, deduce, induct, and intuit. Yet for a computer, these processes must be made explicit. On the other hand, this is a benefit of machine learning. Because the process is transparent, the learned knowledge can be examined and utilized for future action.

Abstraction and knowledge representation

Representing raw input data in a structured format is the quintessential task for a learning algorithm. Prior to this point, the data is merely ones and zeros on a disk or in memory; it has no meaning. The work of assigning a meaning to data occurs during the abstraction process.

The connection between ideas and reality is exemplified by the famous René Magritte painting The Treachery of Images.

The painting depicts a tobacco pipe with the caption Ceci n'est pas une pipe ("this is not a pipe"). The point Magritte was illustrating is that a representation of a pipe is not truly a pipe. In spite of the fact that the pipe is not real, anybody viewing the painting easily recognizes that the picture is a pipe, suggesting that observers' minds are able to connect the picture of a pipe to the idea of a pipe, which can then be connected to an actual pipe that could be held in the hand. Abstracted connections like this are the basis of knowledge representation, the formation of logical structures that assist with turning raw sensory information into a meaningful insight.

During the process of knowledge representation, the computer summarizes raw inputs in a model, an explicit description of the structured patterns among data. There are many different types of models. You may already be familiar with some. Examples include:

  • Equations

  • Diagrams such as trees and graphs

  • Logical if/else rules

  • Groupings of data known as clusters

The choice of model is typically not left up to the machine. Instead, the model is dictated by the learning task and the type of data being analyzed. Later in this chapter, we will discuss methods for choosing the type of model in more detail.

The process of fitting a particular model to a dataset is known as training. Why is this not called learning? First, note that the learning process does not end with the step of data abstraction. Learning requires an additional step to generalize the knowledge to future data. Second, the term training more accurately describes the actual process undertaken when the model is fitted to the data. Learning implies a sort of inductive, bottom-up reasoning. Training better connotes the fact that the machine learning model is imposed by the human teacher onto the machine student, providing the computer with a structure it attempts to model after.
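
As a minimal illustration (not one of the book's own examples), one of the simplest equation-style models can be trained in R with a single function call, using the built-in cars dataset of vehicle speeds and stopping distances:

> # Train (fit) a linear model: stopping distance as a function of speed
> model <- lm(dist ~ speed, data = cars)
> # The trained model abstracts the 50 observations into just two coefficients
> coef(model)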

When the model has been trained, the data has been transformed into an abstract form that summarizes the original information. It is important to note that the model does not itself provide additional data, yet it is sometimes interesting on its own. How can this be? The reason is that by imposing an assumed structure on the underlying data, it gives insight into the unseen and provides a theory about how the data is related. Take for instance the discovery of gravity. By fitting equations to observational data, Sir Isaac Newton deduced the concept of gravity. But gravity was always present. It simply wasn't recognized as a concept until the model noted it in abstract terms—specifically, by becoming the g term in a model that explains observations of falling objects.

Most models will not result in the development of theories that shake up scientific thought for centuries. Still, your model might result in the discovery of previously unseen relationships among data. A model trained on genomic data might find several genes that when combined are responsible for the onset of diabetes; banks might discover a seemingly innocuous type of transaction that systematically appears prior to fraudulent activity; psychologists might identify a combination of characteristics indicating a new disorder. The underlying relationships were always present; but in conceptualizing the information in a different format, a model presents the connections in a new light.

Generalization

Recall that the learning process is not complete until the learner is able to use its abstracted knowledge for future action. Yet an issue remains before the learner can proceed—there are countless underlying relationships that might be identified during the abstraction process and myriad ways to model these relationships. Unless the number of potential theories is limited, the learner will be unable to utilize the information. It would be stuck where it started, with a large pool of information but no actionable insight.

The term generalization describes the process of turning abstracted knowledge into a form that can be utilized for action. Generalization is a somewhat vague process that is a bit difficult to describe. Traditionally, it has been imagined as a search through the entire set of models (that is, theories) that could have been abstracted during training. Specifically, if you imagine a hypothetical set containing every possible theory that could be established from the data, generalization involves the reduction of this set into a manageable number of important findings.

Generally, it is not feasible to reduce the number of potential concepts by examining them one-by-one and determining which are the most useful. Instead, machine learning algorithms generally employ shortcuts that more quickly divide the set of concepts. Toward this end, the algorithm will employ heuristics, or educated guesses about where to find the most important concepts.

Tip

Because the heuristics utilize approximations and other rules of thumb, they are not guaranteed to find the optimal set of concepts that model the data. However, without utilizing these shortcuts, finding useful information in a large dataset would be infeasible.

Heuristics are routinely used by human beings to quickly generalize experience to new scenarios. If you have ever utilized gut instinct to make a snap decision prior to fully evaluating your circumstances, you were intuitively using mental heuristics.

For example, the availability heuristic is the tendency for people to estimate the likelihood of an event by how easily examples can be recalled. The availability heuristic might help explain the prevalence of the fear of airline travel relative to automobile travel, despite automobiles being statistically more dangerous. Accidents involving air travel are highly publicized and traumatic events, and are likely to be very easily recalled, whereas car accidents barely warrant a mention in the newspaper.

The preceding example illustrates the potential for heuristics to result in illogical conclusions. Browsing a list of common logical fallacies, one is likely to note many that seem rooted in heuristic-based thinking. For instance, the gambler's fallacy, or the belief that a run of bad luck implies that a stretch of better luck is due, may result from the application of the representativeness heuristic, which erroneously leads the gambler to believe that all random sequences are balanced because most random sequences are balanced.

The folly of misapplied heuristics is not limited to human beings. The heuristics employed by machine learning algorithms also sometimes result in erroneous conclusions. If the conclusions are systematically imprecise, the algorithm is said to have a bias. For example, suppose that a machine learning algorithm learned to identify faces by finding two circles, or eyes, positioned side-by-side above a line for a mouth. The algorithm might then have trouble with, or be biased against faces that do not conform to its model. This may include faces with glasses, turned at an angle, looking sideways, or with darker skin tones. Similarly, it could be biased toward faces with lighter eye colors or other characteristics that do not conform to its understanding of the world.

In modern usage, the word bias has come to carry quite negative connotations. Various forms of media frequently claim to be free from bias, and claim to report the facts objectively, untainted by emotion. Still, consider for a moment the possibility that a little bias might be useful. Without a bit of arbitrariness, might it be a bit difficult to decide among several competing choices, each with distinct strengths and weaknesses? Indeed, some recent studies in the field of psychology have suggested that individuals born with damage to portions of the brain responsible for emotion are ineffectual at decision making, and might spend hours debating simple decisions such as what color shirt to wear or where to eat lunch. Paradoxically, bias is what blinds us from some information while also allowing us to utilize other information for action.

Assessing the success of learning

Bias is a necessary evil associated with the abstraction and generalization process inherent in any machine learning task. Every learner has its weaknesses and is biased in a particular way; there is no single model to rule them all. Therefore, the final step in the generalization process is to determine the model's success in spite of its biases.

After a model has been trained on an initial dataset, the model is tested on a new dataset, and judged on how well its characterization of the training data generalizes to the new data. It's worth noting that it is exceedingly rare for a model to perfectly generalize to every unforeseen case.
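
A minimal sketch of this train-then-test idea in R, assuming the built-in mtcars data and a simple linear model (any other model type would be evaluated the same way):

> set.seed(123)                             # make the random split reproducible
> train_rows <- sample(nrow(mtcars), 22)    # hold out roughly a third of the data
> model <- lm(mpg ~ wt + hp, data = mtcars[train_rows, ])
> pred <- predict(model, newdata = mtcars[-train_rows, ])
> sqrt(mean((mtcars$mpg[-train_rows] - pred)^2))   # error on data the model never saw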

In part, the failure for models to perfectly generalize is due to the problem of noise, or unexplained variations in data. Noisy data is caused by seemingly random events, such as:

  • Measurement error due to imprecise sensors that sometimes add or subtract a bit from the reading

  • Issues with reporting data, such as respondents reporting random answers to survey questions in order to finish more quickly

  • Errors caused when data is recorded incorrectly, including missing, null, truncated, incorrectly coded, or corrupted values

Trying to model the noise in data is the basis of a problem called overfitting. Because noise is unexplainable by definition, attempting to explain the noise will result in erroneous conclusions that do not generalize well to new cases. Attempting to generate theories to explain the noise also results in more complex models that are more likely to ignore the true pattern the learner is trying to identify. A model that seems to perform well during training but does poorly during testing is said to be overfitted to the training dataset as it does not generalize well.
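
The following small simulation (a sketch, not from the book) shows overfitting directly: a needlessly flexible model chases the noise in its training data and then typically performs worse on new data than a simpler model does:

> set.seed(1)
> x <- runif(20); y <- x + rnorm(20, sd = 0.2)                # true pattern plus noise
> x_new <- runif(20); y_new <- x_new + rnorm(20, sd = 0.2)    # fresh data from the same process
> simple  <- lm(y ~ x)                                        # models the pattern
> complex <- lm(y ~ poly(x, 15))                              # also models the noise
> mean((y_new - predict(simple,  data.frame(x = x_new)))^2)   # test error of the simple model
> mean((y_new - predict(complex, data.frame(x = x_new)))^2)   # typically much larger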

Solutions to the problem of overfitting are specific to particular machine learning approaches. For now, the important point is to be aware of the issue. How well models are able to handle noisy data is an important source of distinction among them.

Steps to apply machine learning to your data


Any machine learning task can be broken down into a series of more manageable steps. This book has been organized according to the following process:

  1. Collecting data: Whether the data is written on paper, recorded in text files and spreadsheets, or stored in an SQL database, you will need to gather it in an electronic format suitable for analysis. This data will serve as the learning material an algorithm uses to generate actionable knowledge.

  2. Exploring and preparing the data: The quality of any machine learning project is based largely on the quality of data it uses. This step in the machine learning process tends to require a great deal of human intervention. An often cited statistic suggests that 80 percent of the effort in machine learning is devoted to data. Much of this time is spent learning more about the data and its nuances during a practice called data exploration.

  3. Training a model on the data: By the time the data has been prepared for analysis, you are likely to have a sense of what you are hoping to learn from the data. The specific machine learning task will inform the selection of an appropriate algorithm, and the algorithm will represent the data in the form of a model.

  4. Evaluating model performance: Because each machine learning model results in a biased solution to the learning problem, it is important to evaluate how well the algorithm learned from its experience. Depending on the type of model used, you might be able to evaluate the accuracy of the model using a test dataset, or you may need to develop measures of performance specific to the intended application.

  5. Improving model performance: If better performance is needed, it becomes necessary to utilize more advanced strategies to augment the performance of the model. Sometimes, it may be necessary to switch to a different type of model altogether. You may need to supplement your data with additional data, or perform additional preparatory work as in step two of this process.

After these steps have been completed, if the model appears to be performing satisfactorily, it can be deployed for its intended task. As the case may be, you might utilize your model to score data for predictions (possibly in real time), for projections of financial data, to generate useful insight for marketing or research, or to automate tasks such as mail delivery or flying aircraft. The successes and failures of the deployed model might even provide additional data to train the next generation of your model.
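
As a rough sketch of steps 1 through 4, using R's built-in iris flower measurements and the nearest neighbor classifier from the class package that ships with R (the algorithm itself is covered in Chapter 3):

> set.seed(123)
> library(class)                              # step 1: the iris data is already collected
> summary(iris)                               # step 2: explore and prepare the data
> train_rows <- sample(nrow(iris), 100)
> train <- iris[train_rows, 1:4]
> test  <- iris[-train_rows, 1:4]
> pred  <- knn(train, test, cl = iris$Species[train_rows], k = 3)   # step 3: train a model
> mean(pred == iris$Species[-train_rows])     # step 4: evaluate performance on held-out data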

Choosing a machine learning algorithm


The process of choosing a machine learning algorithm involves matching the characteristics of the data to be learned to the biases of the available approaches. Since the choice of a machine learning algorithm is largely dependent upon the type of data you are analyzing and the proposed task at hand, it is often helpful to be thinking about this process while you are gathering, exploring, and cleaning your data.

Tip

It may be tempting to learn a couple of machine learning techniques and apply them to everything, but resist this temptation. No machine learning approach is best for every circumstance. This fact is described by the No Free Lunch theorem, introduced by David Wolpert in 1996. For more information, visit: http://www.no-free-lunch.org.

Thinking about the input data

All machine learning algorithms require input training data. The exact format may differ, but in its most basic form, input data takes the form of examples and features.

An example is literally a single exemplary instance of the underlying concept to be learned; it is one set of data describing the atomic unit of interest for the analysis. If you were building a learning algorithm to identify spam e-mail, the examples would be data from many individual electronic messages. To detect cancerous tumors, the examples might comprise biopsies from a number of patients.

The phrase unit of observation is used to describe the units that the examples are measured in. Commonly, the unit of observation is in the form of transactions, persons, time points, geographic regions, or measurements. Other possibilities include combinations of these such as person years, which would denote cases where the same person is tracked over multiple time points.

A feature is a characteristic or attribute of an example, which might be useful for learning the desired concept. In the previous examples, attributes in the spam detection dataset might consist of the words used in the e-mail messages. For the cancer dataset, the attributes might be genomic data from the biopsied cells, or measured characteristics of the patient such as weight, height, or blood pressure.

Imagine a spreadsheet holding a dataset in matrix format, which means that each example has the same number of features. In matrix data, each row in the spreadsheet is an example and each column is a feature. Here, the rows might indicate examples of automobiles, while the columns record various features of the cars such as the price, mileage, color, and transmission. Matrix format data is by far the most common form used in machine learning, though as you will see in later chapters, other forms are used occasionally in specialized cases.
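
Such a dataset maps directly onto an R data frame; here is a tiny made-up version of the automobile data just described (the values are purely illustrative):

> used_cars <- data.frame(
+   price        = c(14900, 21500, 12500),
+   mileage      = c(46000, 9600, 74500),
+   color        = c("yellow", "silver", "black"),
+   transmission = c("AUTO", "AUTO", "MANUAL")
+ )
> used_cars   # each row is an example (a car), each column is a feature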

Features come in various forms as well. If a feature represents a characteristic measured in numbers, it is unsurprisingly called numeric. Alternatively, if it measures an attribute that is represented by a set of categories, the feature is called categorical or nominal. A special case of categorical variables is called ordinal, which designates a nominal variable with categories falling in an ordered list. Some examples of ordinal variables include clothing sizes such as small, medium, and large, or a measurement of customer satisfaction on a scale from 1 to 5. It is important to consider what the features represent because the type and number of features in your dataset will assist with determining an appropriate machine learning algorithm for your task.
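
In R, these distinctions map onto data types: numeric features are stored as numbers, nominal features as factors, and ordinal features as ordered factors (a brief illustration, not from the book):

> price <- c(14900, 21500, 12500)                       # numeric feature
> color <- factor(c("yellow", "silver", "black"))       # nominal (categorical) feature
> size  <- factor(c("small", "large", "medium"),
+                 levels = c("small", "medium", "large"),
+                 ordered = TRUE)                        # ordinal feature
> size < "large"                                        # ordered levels can be compared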

Thinking about types of machine learning algorithms

Machine learning algorithms can be divided into two main groups: supervised learners that are used to construct predictive models, and unsupervised learners that are used to build descriptive models. Which type you will need to use depends on the learning task you hope to accomplish.

A predictive model is used for tasks that involve, as the name implies, the prediction of one value using other values in the dataset. The learning algorithm attempts to discover and model the relationship among the target feature (the feature being predicted) and the other features. Despite the common use of the word "prediction" to imply forecasting, predictive models need not necessarily foresee future events. For instance, a predictive model could be used to predict past events, such as the date of a baby's conception using the mother's hormone levels; or, predictive models could be used in real time to control traffic lights during rush hours.

Because predictive models are given clear instruction on what they need to learn and how they are intended to learn it, the process of training a predictive model is known as supervised learning. The supervision does not refer to human involvement, but rather the fact that the target values provide a supervisory role, which indicates to the learner the task it needs to learn. Specifically, given a set of data, the learning algorithm attempts to optimize a function (the model) to find the combination of feature values that result in the target output.

The often used supervised machine learning task of predicting which category an example belongs to is known as classification. It is easy to think of potential uses for a classifier. For instance, you could predict whether:

  • A football team will win or lose

  • A person will live past the age of 100

  • An applicant will default on a loan

  • An earthquake will strike next year

The target feature to be predicted is a categorical feature known as the class and is divided into categories called levels. A class can have two or more levels, and the levels need not necessarily be ordinal. Because classification is so widely used in machine learning, there are many types of classification algorithms.

Supervised learners can also be used to predict numeric data such as income, laboratory values, test scores, or counts of items. To predict such numeric values, a common form of numeric prediction fits linear regression models to the input data. Although regression models are not the only type of numeric models, they are by far the most widely used. Regression methods are widely used for forecasting, as they quantify in exact terms the association between the inputs and the target, including both the magnitude and uncertainty of the relationship.
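
For instance, the coefficient table of a fitted linear regression in R reports both the magnitude (Estimate) and the uncertainty (Std. Error) of each relationship; a quick sketch using the built-in mtcars data:

> fit <- lm(mpg ~ wt, data = mtcars)    # regress fuel efficiency on vehicle weight
> summary(fit)$coefficients             # estimates with standard errors, t values, and p values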

Tip

Since it is easy to convert numbers to categories (for example, ages 13 to 19 are teenagers) and categories to numbers (for example, assign 1 to all males, 0 to all females), the boundary between classification models and numeric prediction models is not necessarily firm.
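
Both conversions are one-liners in R; a small sketch of the tip above (the breakpoints and coding are arbitrary choices):

> age <- c(12, 15, 19, 22)
> cut(age, breaks = c(0, 12, 19, Inf), labels = c("child", "teenager", "adult"))  # numbers to categories
> sex <- c("male", "female", "male")
> ifelse(sex == "male", 1, 0)                                                     # categories to numbers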

A descriptive model is used for tasks that would benefit from the insight gained from summarizing data in new and interesting ways. As opposed to predictive models that predict a target of interest, in a descriptive model no single feature is more important than any other. In fact, because there is no target to learn, the process of training a descriptive model is called unsupervised learning. Although it can be more difficult to think of applications for descriptive models—after all, what good is a learner that isn't learning anything in particular—they are used quite regularly for data mining.

For example, the descriptive modeling task called pattern discovery is used to identify frequent associations within data. Pattern discovery is often used for market basket analysis on transactional purchase data. Here, the goal is to identify items that are frequently purchased together, such that the learned information can be used to refine the marketing tactics. For instance, if a retailer learns that swimming trunks are commonly purchased at the same time as sunscreen, the retailer might reposition the items more closely in the store, or run a promotion to "up-sell" customers on associated items.

Tip

Originally used only in retail contexts, pattern discovery is now starting to be used in quite innovative ways. For instance, it can be used to detect patterns of fraudulent behavior, screen for genetic defects, or prevent criminal activity.

The descriptive modeling task of dividing a dataset into homogeneous groups is called clustering. This is sometimes used for segmentation analysis that identifies groups of individuals with similar purchasing, donating, or demographic information so that advertising campaigns can be tailored to particular audiences. Although the machine is capable of identifying the groups, human intervention is required to interpret them. For example, given five different clusters of shoppers at a grocery store, the marketing team will need to understand the differences among the groups in order to create a promotion that best suits each group. However, this is almost certainly easier than trying to create a unique appeal for each customer.
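
R's built-in kmeans() function performs exactly this kind of grouping; a quick sketch on the numeric columns of the iris data (clustering is covered in depth in Chapter 9):

> set.seed(123)
> clusters <- kmeans(iris[, 1:4], centers = 3)   # divide the examples into three homogeneous groups
> table(clusters$cluster, iris$Species)          # a human must still interpret what each group means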

Matching your data to an appropriate algorithm

The following table lists the general types of machine learning algorithms covered in this book, each of which may be implemented in several ways. Although this covers only some of the entire set of all machine learning algorithms, learning these methods will provide a sufficient foundation for making sense of other methods as you encounter them.

Model                            Task                  Chapter

Supervised Learning Algorithms
  Nearest Neighbor               Classification        Chapter 3
  naive Bayes                    Classification        Chapter 4
  Decision Trees                 Classification        Chapter 5
  Classification Rule Learners   Classification        Chapter 5
  Linear Regression              Numeric prediction    Chapter 6
  Regression Trees               Numeric prediction    Chapter 6
  Model Trees                    Numeric prediction    Chapter 6
  Neural Networks                Dual use              Chapter 7
  Support Vector Machines        Dual use              Chapter 7

Unsupervised Learning Algorithms
  Association Rules              Pattern detection     Chapter 8
  k-means Clustering             Clustering            Chapter 9

To match a learning task to a machine learning approach, you will need to begin with one of the four types of tasks: classification, numeric prediction, pattern detection, or clustering. Certain tasks make the choice of algorithm simpler. For instance, if you are undertaking pattern detection, you will likely employ association rules. Similarly, a clustering problem will likely utilize the k-means algorithm while numeric prediction will utilize regression analysis or regression trees.

For classification, more thought is needed to match a learning problem to an appropriate classifier. In these cases, it is helpful to consider the various distinctions among the algorithms. For instance, within classification problems, decision trees result in models that are readily understood, while the models of neural networks are notoriously difficult to interpret. If you were designing a credit-scoring model, this could be an important distinction, because the law often requires that the applicant be notified about the reasons he or she was rejected for the loan. Even if the neural network is better at predicting loan defaults, if its predictions cannot be explained, it is useless.

In each chapter, the key strengths and weaknesses of each approach will be listed. Although you will sometimes find that these characteristics exclude certain models from consideration, in many cases the choice of model is arbitrary. When this is true, feel free to use whichever algorithm you are most comfortable with. Other times, when predictive accuracy is primary, you may need to test several and choose the one that fits best. In later chapters, we will even look at methods of combining models that utilize the best properties of each.

Using R for machine learning


Many of the algorithms needed for machine learning in R are not included as part of the base installation. Thanks to R being free open source software, there is no additional charge for this functionality. Instead, the algorithms needed for machine learning are made available as add-ons created by a large community of experts who contribute to the software. A collection of R functions that can be shared among users is called a package. Free packages exist for each of the machine learning algorithms covered in this book. In fact, this book only covers a small portion of the more popular machine learning packages.

If you are interested in the breadth of R packages (4,209 packages were available at the time of writing), you can view a list at the Comprehensive R Archive Network (CRAN), a collection of web and FTP sites located around the world that provide the most up-to-date versions of R software and packages for download. If you obtained the R software via download, it was most likely from CRAN. The CRAN website is available at:

http://cran.r-project.org/index.html.

Tip

If you do not already have R, the CRAN website also provides installation instructions and information on where to find help if you have trouble.

The Packages link on the left side of the page will take you to a page where you can browse the packages in alphabetical order or sorted by publication date. Perhaps even better, the CRAN Task Views provide organized lists of packages by subject area. The task view for machine learning, which lists the packages covered in this book (and many more), is available at:

http://cran.r-project.org/web/views/MachineLearning.html

Installing and loading R packages

Despite the vast set of available R add-ons, the package format makes installation and use a virtually effortless process. To demonstrate the use of packages, we will install and load the RWeka package, which was developed by Kurt Hornik, Christian Buchta, and Achim Zeileis (see Open-Source Machine Learning: R Meets Weka in Computational Statistics 24: 225-232 for more information). The RWeka package provides a collection of functions that give R access to the machine learning algorithms in the Java-based Weka software package by Ian H. Witten and Eibe Frank. For more information on Weka, see:

http://www.cs.waikato.ac.nz/~ml/weka/.

Tip

To use the RWeka package, you will need to have Java installed if it isn't already (many computers come with Java preinstalled). Java is a set of programming tools, available for free, which allow for the use of cross-platform applications such as Weka. For more information and to download Java for your system, visit: http://java.com.

Installing an R package

The most direct way to install a package is via the install.packages() function. To install the RWeka package, at the R command prompt simply type:

> install.packages("RWeka")

R will then connect to CRAN and download the package in the correct format for your operating system. Some packages such as RWeka require additional packages to be installed before they can be used (these are called dependencies). By default, the installer will automatically download and install any dependencies.

Tip

The first time you install a package, R may ask you to choose a CRAN mirror. If this happens, choose the mirror residing at a location close to you. This will generally provide the fastest download speed.

The default installation options are appropriate for most systems. However, in some cases, you may want to install a package to another location. For example, if you do not have root or administrator privileges on your system, you may need to specify an alternative installation path. This can be accomplished using the lib option, as follows:

> install.packages("RWeka", lib="/path/to/library")

The installation function also provides additional options for installing from a local file, installing from source, or using experimental versions. You can read about these options in the help file by using the following command:

> ?install.packages

Installing a package using the point-and-click interface

As an alternative to typing the install.packages() command, R provides a graphical user interface (GUI) for package installation. On a Microsoft Windows system, this can be accessed from the Install package(s) command item under the Packages menu. On Mac OS X, the command is labeled Package Installer and is located under the Packages & Data menu.

On Windows, after launching the package installer (and choosing a CRAN mirror location if you haven't already), a large list of packages will appear. Simply scroll to the RWeka package and click on the OK button to install the package and all dependencies to the default location.

On Mac OS X, the package installer menu provides additional options. To load the list of packages, click on the Get List button. Scroll to the RWeka package (or use the Package Search feature) and click on Install Selected. Note that by default, the Mac OS X Package Installer does not install dependencies unless the Install Dependencies checkbox is selected.

Loading an R package

In order to conserve memory, R does not load every installed package by default. Instead, packages are loaded by users as they are needed using the library() function.

Tip

The name of this function leads some people to incorrectly use the terms library and package interchangeably. However, to be precise, a library refers to the location where packages are installed and never to a package itself.

To load the RWeka package we installed previously, you would type the following:

> library(RWeka)
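
To confirm that the package loaded correctly and to see the functions it makes available, you can list its exported objects (a quick check, not part of the book's instructions):

> ls("package:RWeka")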

Aside from RWeka, there are several other R packages that will be used in later chapters. Installation instructions will be provided as additional packages are used.

Summary


Machine learning originated at the intersection of statistics, database science, and computer science. It is a powerful tool, capable of finding actionable insight in large quantities of data. Still, caution must be used in order to avoid common abuses of machine learning in the real world.

In conceptual terms, learning involves the abstraction of data into a structured representation, and the generalization of this structure into action. In more practical terms, a machine learner uses data containing examples and features of the concept to be learned, and summarizes this data in the form of a model, which is then used for predictive or descriptive purposes. These can be further divided into specific tasks including classification, numeric prediction, pattern detection, and clustering. Among the many options, machine learning algorithms are chosen on the basis of the input data and the learning task.

R provides support for machine learning in the form of community-authored packages. These powerful tools are free to download, but need to be installed before they can be used. In the next chapter, we will further introduce the basic R commands that are used to manage and prepare data for machine learning.


Key benefits

  • Harness the power of R for statistical computing and data science
  • Use R to apply common machine learning algorithms with real-world applications
  • Prepare, examine, and visualize data for analysis
  • Understand how to choose between machine learning models
  • Packed with clear instructions to explore, forecast, and classify data

Description

Machine learning, at its core, is concerned with transforming data into actionable knowledge. This fact makes machine learning well suited to the present-day era of "big data" and "data science". Given the growing prominence of R—a cross-platform, zero-cost statistical programming environment—there has never been a better time to start applying machine learning. Whether you are new to data science or a veteran, machine learning with R offers a powerful set of methods for quickly and easily gaining insight from your data. "Machine Learning with R" is a practical tutorial that uses hands-on examples to step through real-world applications of machine learning. Without shying away from technical details, it explores machine learning with R using clear and practical examples, and it is well suited to machine learning beginners as well as those with some experience. How can we use machine learning to transform data into action? Using practical examples, we will explore how to prepare data for analysis, choose a machine learning method, and measure the success of the process. We will learn how to apply machine learning methods to a variety of common tasks including classification, prediction, forecasting, market basket analysis, and clustering. By applying the most effective machine learning methods to real-world problems, you will gain hands-on experience that will transform the way you think about data. "Machine Learning with R" will provide you with the analytical tools you need to quickly gain insight from complex data.

Who is this book for?

This book is intended for those who want to learn how to use R's machine learning capabilities to gain insight from their data. Perhaps you already know a bit about machine learning but have never used R, or perhaps you know a little R but are new to machine learning. In either case, this book will get you up and running quickly. It would be helpful to have a bit of familiarity with basic programming concepts, but no prior experience is required.

What you will learn

  • Understand the basic terminology of machine learning and how to differentiate among various machine learning approaches
  • Use R to prepare data for machine learning
  • Explore and visualize data with R
  • Classify data using nearest neighbor methods
  • Learn about Bayesian methods for classifying data
  • Predict values using decision trees, rules, and support vector machines
  • Forecast numeric values using linear regression
  • Model data using neural networks
  • Find patterns in data using association rules for market basket analysis
  • Group data into clusters for segmentation
  • Evaluate and improve the performance of machine learning models
  • Learn specialized machine learning techniques for text mining, social network data, and "big" data

Product Details

Publication date : Oct 25, 2013
Length : 396 pages
Edition : 1st
Language : English
ISBN-13 : 9781782162148


Packt Subscriptions

See our plans and pricing
€18.99 billed monthly
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Simple pricing, no contract

€189.99 billed annually
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just €5 each
  • Exclusive print discounts

€264.99 billed in 18 months
  • Unlimited access to Packt's library of 7,000+ practical books and videos
  • Constantly refreshed with 50+ new titles a month
  • Exclusive early access to books as they're written
  • Solve problems while you work with advanced search and reference features
  • Offline reading on the mobile app
  • Choose a DRM-free eBook or Video every month to keep
  • PLUS own as many other DRM-free eBooks or Videos as you like for just €5 each
  • Exclusive print discounts

Frequently bought together


Practical Data Analysis: €41.99
Machine Learning with R: €45.99
Building Machine Learning Systems with Python: €41.99
Total: €129.97

Table of Contents

12 Chapters

  1. Introducing Machine Learning
  2. Managing and Understanding Data
  3. Lazy Learning – Classification Using Nearest Neighbors
  4. Probabilistic Learning – Classification Using Naive Bayes
  5. Divide and Conquer – Classification Using Decision Trees and Rules
  6. Forecasting Numeric Data – Regression Methods
  7. Black Box Methods – Neural Networks and Support Vector Machines
  8. Finding Patterns – Market Basket Analysis Using Association Rules
  9. Finding Groups of Data – Clustering with k-means
  10. Evaluating Model Performance
  11. Improving Model Performance
  12. Specialized Machine Learning Topics

Customer reviews

Top Reviews

Rating distribution: 4.4 out of 5 (73 ratings)
  • 5 star: 71.2%
  • 4 star: 17.8%
  • 3 star: 1.4%
  • 2 star: 2.7%
  • 1 star: 6.8%

Leif C. Ulstrup, Nov 16, 2014 (5 stars)
I have a large collection of books on programming, R, and machine learning and I am constantly looking for new material on state of the art practices related to data science. I think this is one of the best in terms of readability, straightforward and practical examples that demonstrate the key concepts in real-world terms, and up-to-date information about the use of advanced R packages for parallel processing and very large datasets. Unlike many other books on the subject, author Brett Lantz presents the material in a crisp and clear manner and does an excellent job integrating the machine learning concepts, underlying statistical foundations, R programming constructs, and practical examples. I think this will be a constant reference in my work. I look forward to future publications from this author and hope he has a blog or other means to keep up with his ideas and insights.
Amazon Verified review
Leon Shernoff Jul 10, 2015
Rated 5 out of 5 stars
A well-designed book which is organized as a tour of different kinds of machine learning. The order in which the learners are treated is a little unusual, but it does a good job of bringing up the more abstract issues in a way that makes them easy to understand.
Amazon Verified review
Amazon Customer Feb 16, 2016
Rated 5 out of 5 stars
Really good application in R for beginners. You can understand it even if you know nothing about ML or the R language. But if you want to get more detailed information about the methods, such as the algorithms, this is not a good choice.
Amazon Verified review
Kurt J. Aug 19, 2016
Rated 5 out of 5 stars
I really liked this book. Very well written and instructive. I had a decent knowledge of R and got A's in classes at college, but I learned more about it with this book. It taught me a good bit about the topic, all the things I really wanted to know that weren't covered in my statistics master's program. I took a week off work and covered it by spending about 2-4 hours a day. I feel like I added a skill. I'm ready to go back and see where I can apply supervised and unsupervised learning algorithms to my job.
Amazon Verified review
John L. Whiteman Sep 14, 2014
Rated 5 out of 5 stars
I would recommend this book for those who are new to machine learning and want to learn R. Each chapter starts with a core machine learning concept that Lantz presents in a very readable manner. The exercises that follow put these concepts into practice with just the right amount of R. I am currently enrolled in a machine learning course at a university. As students we could choose any language, so I decided to go with R. This book has helped me to feel good about my decision.
Amazon Verified review

FAQs

What is the delivery time and cost of the print book?

Shipping Details

USA:

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing on the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time over the weekend, will begin printing two business days later. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are taxes imposed on imported goods, charged by special authorities and bodies created by local governments, and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to the countries listed under the EU27 will not incur customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

A customs duty or localized taxes may be applicable to shipments delivered outside the EU27. These are charged by the recipient country, must be paid by the customer, and are not included in the shipping charges on the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions such as weight, and other criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% (in this case $9.50) to the courier service in order to receive your package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (in this case €3.96) to the courier service in order to receive your package.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, you can contact us at customercare@packt.com when you receive it and use the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (i.e., Packt Publishing agrees to replace your printed book if it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact our Customer Relations Team at customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered an eBook, Video, or Print Book incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace or refund the item cost.
  2. If your eBook or Video file is faulty, or a fault occurs while the eBook or Video is being made available to you (i.e., during download), contact the Customer Relations Team within 14 days of purchase at customercare@packt.com, and they will resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are requesting a refund for only one book from a multi-item order, we will refund the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt of the book with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on applicable laws and regulations). A localized VAT fee is charged only to our European and UK customers on the eBooks, Videos, and subscriptions that they buy. GST is charged to Indian customers for eBook and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal