Datasets
Data is undoubtedly the most important component of machine learning; without data, there would be no common purpose. In most cases, the purpose for which the data is collected defines the problem itself. Since variables can be of several types, the way the data is stored and organized is also very important.
Lee and Elder (1997) considered a series of datasets and introduced the need for ensemble models. We will begin by looking at the details of the datasets considered in their paper, and we will then refer to other important datasets later on in the book.
Hypothyroid
The hypothyroid dataset Hypothyroid.csv is available in the book's code bundle, located at /…/Chapter01/Data. While the dataset contains 26 variables, we will only be using seven of them, and the number of observations is n = 3163. The dataset is downloaded from http://archive.ics.uci.edu/ml/datasets/thyroid+disease, where the filename is hypothyroid.data (http://archive.ics.uci.edu/ml/machine-learning-databases/thyroid-disease/hypothyroid.data). After a few tweaks, such as relabeling certain values, the CSV file is made available in the book's code bundle. The purpose of the study is to classify a patient with a thyroid problem based on the information provided by the other variables. There are multiple variants of the dataset, and the reader can delve into the details on the following web page: http://archive.ics.uci.edu/ml/machine-learning-databases/thyroid-disease/HELLO.
Here, the column representing the variable of interest is named Hypothyroid, and it shows that we have 151 patients with thyroid problems; the remaining 3012 tested negative. Clearly, this dataset is an example of unbalanced data, which means that one of the two classes heavily outnumbers the other: for each thyroid case, we have about 20 negative cases. Such problems need to be handled differently, and we need to get into the subtleties of the algorithms to build meaningful models. The additional variables or covariates that we will use while building the predictive models are Age, Gender, TSH, T3, TT4, T4U, and FTI. The data is first imported into an R session and subset according to the variables of interest as follows:
> HT <- read.csv("../Data/Hypothyroid.csv", header = TRUE, stringsAsFactors = F)
> HT$Hypothyroid <- as.factor(HT$Hypothyroid)
> HT2 <- HT[,c("Hypothyroid","Age","Gender","TSH","T3","TT4","T4U","FTI")]
The first line of code imports the data from the Hypothyroid.csv file using the read.csv function, the second converts the variable of interest into a factor, and the third subsets the data to the target and the seven covariates. The dataset has a lot of missing values in these variables, as seen here:
> sapply(HT2, function(x) sum(is.na(x)))
Hypothyroid         Age      Gender         TSH          T3         TT4 
          0         446          73         468         695         249 
        T4U         FTI 
        248         247 
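If it helps to gauge the extent of the missingness relative to the n = 3163 observations, the counts can also be expressed as percentages. This is a small optional sketch and not part of the original workflow:
> round(100 * colMeans(is.na(HT2)), 1)  # percentage of missing values per variable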
Consequently, we remove all the rows that have a missing value, and then split the data into training and testing datasets. We will also create a formula for the classification problem:
> HT2 <- na.omit(HT2)
> set.seed(12345)
> Train_Test <- sample(c("Train","Test"), nrow(HT2), replace=TRUE, prob=c(0.7,0.3))
> head(Train_Test)
[1] "Test"  "Test"  "Test"  "Test"  "Train" "Train"
> HT2_Train <- HT2[Train_Test=="Train",]
> HT2_TestX <- within(HT2[Train_Test=="Test",], rm(Hypothyroid))
> HT2_TestY <- HT2[Train_Test=="Test", c("Hypothyroid")]
> HT2_Formula <- as.formula("Hypothyroid~.")
The set.seed function ensures that the results are reproducible each time we run the program. After removing the missing observations with the na.omit function, we split the hypothyroid data into training and testing parts: the former is used to build the model and the latter is used to validate it on data that was not used to build the model. Quinlan – the inventor of the popular tree algorithm C4.5 – used this dataset extensively.
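Since the hypothyroid data is unbalanced, it is also worth confirming that the random split carries roughly the same class proportions into both parts. A minimal check, assuming the objects created above are still in the workspace:
> prop.table(table(HT2_Train$Hypothyroid))  # class proportions in the training part
> prop.table(table(HT2_TestY))              # class proportions in the test part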
Waveform
This dataset is an example of a simulation study. Here, we have twenty-one input or independent variables and a class variable referred to as classes. The data is generated using the mlbench.waveform function from the mlbench R package. For more details, refer to the following link: ftp://ftp.ics.uci.edu/pub/machine-learning-databases. We will simulate 5,000 observations for this dataset. As mentioned earlier, the set.seed function guarantees reproducibility. Since we are solving binary classification problems, we will reduce the three classes generated by the waveform function to two, and then partition the data into training and testing parts for model building and testing purposes:
> library(mlbench)
> set.seed(123)
> Waveform <- mlbench.waveform(5000)
> table(Waveform$classes)
   1    2    3 
1687 1718 1595 
> Waveform$classes <- ifelse(Waveform$classes!=3,1,2)
> Waveform_DF <- data.frame(cbind(Waveform$x,Waveform$classes)) # Data Frame
> names(Waveform_DF) <- c(paste0("X",".",1:21),"Classes")
> Waveform_DF$Classes <- as.factor(Waveform_DF$Classes)
> table(Waveform_DF$Classes)
   1    2 
3405 1595 
The R function mlbench.waveform creates a new object of the mlbench class. Since this object consists of two sub-parts, x and classes, we convert it into a data.frame after some further manipulation. The cbind function binds the two objects x (a matrix) and classes (a numeric vector) into a single matrix, and the data.frame function converts the matrix object into a data frame, which is the class required for the rest of the program.
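A quick check, not part of the original code, confirms that the conversion produced the expected object; 22 columns are expected, namely the 21 inputs plus Classes:
> class(Waveform_DF)  # should be "data.frame"
> dim(Waveform_DF)    # 5000 rows and 22 columns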
Next, we partition the data into training and testing parts and create the required formula for the waveform dataset:
> set.seed(12345)
> Train_Test <- sample(c("Train","Test"), nrow(Waveform_DF), replace = TRUE,
+                      prob = c(0.7,0.3))
> head(Train_Test)
[1] "Test"  "Test"  "Test"  "Test"  "Train" "Train"
> Waveform_DF_Train <- Waveform_DF[Train_Test=="Train",]
> Waveform_DF_TestX <- within(Waveform_DF[Train_Test=="Test",], rm(Classes))
> Waveform_DF_TestY <- Waveform_DF[Train_Test=="Test","Classes"]
> Waveform_DF_Formula <- as.formula("Classes~.")
German Credit
Loans are not always repaid in full, and there are defaulters. It therefore becomes important for a bank to identify potential defaulters based on the available information. Here, we adapt the GC dataset from the RSADBE package so that the labels of the factor variables are properly reflected; the transformed dataset is available as GC2.RData in the data folder. The GC dataset itself is mainly an adaptation of the version available at https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data). Here, we have 1,000 observations and 20 covariate/independent variables, such as the status of the existing checking account, duration, and so forth. The final status of whether the loan was repaid in full is available in the good_bad column. We will partition the data into training and testing parts and create the formula too:
> library(RSADBE)
> load("../Data/GC2.RData")
> table(GC2$good_bad)
 bad good 
 300  700 
> set.seed(12345)
> Train_Test <- sample(c("Train","Test"), nrow(GC2), replace = TRUE, prob=c(0.7,0.3))
> head(Train_Test)
[1] "Test"  "Test"  "Test"  "Test"  "Train" "Train"
> GC2_Train <- GC2[Train_Test=="Train",]
> GC2_TestX <- within(GC2[Train_Test=="Test",], rm(good_bad))
> GC2_TestY <- GC2[Train_Test=="Test","good_bad"]
> GC2_Formula <- as.formula("good_bad~.")
Iris
Iris is probably the most famous classification dataset. The great statistician Sir R. A. Fisher popularized the dataset, using it to classify three types of iris plants based on length and width measurements of their petals and sepals. Fisher used this dataset to pioneer the statistical classifier known as linear discriminant analysis. Since there are three species of iris, we convert this into a binary classification problem, partition the dataset, and create a formula, as seen here:
> data("iris") > ir2 <- iris > ir2$Species <- ifelse(ir2$Species=="setosa","S","NS") > ir2$Species <- as.factor(ir2$Species) > set.seed(12345) > Train_Test <- sample(c("Train","Test"),nrow(ir2),replace = TRUE,prob=c(0.7,0.3)) > head(Train_Test) [1] "Test" "Test" "Test" "Test" "Train" "Train" > ir2_Train <- ir2[Train_Test=="Train",] > ir2_TestX <- within(ir2[Train_Test=="Test",],rm(Species)) > ir2_TestY <- ir2[Train_Test=="Test","Species"] > ir2_Formula <- as.formula("Species~.")
Pima Indians Diabetes
Diabetes is a health hazard that is mostly incurable, and patients who are diagnosed with it have to adjust their lifestyles to manage the condition. Based on variables such as pregnant, glucose, pressure, triceps, insulin, mass, pedigree, and age, the problem here is to classify a person as diabetic or not. Here, we have 768 observations. This dataset is drawn from the mlbench package:
> data("PimaIndiansDiabetes") > set.seed(12345) > Train_Test <- sample(c("Train","Test"),nrow(PimaIndiansDiabetes),replace = TRUE, + prob = c(0.7,0.3)) > head(Train_Test) [1] "Test" "Test" "Test" "Test" "Train" "Train" > PimaIndiansDiabetes_Train <- PimaIndiansDiabetes[Train_Test=="Train",] > PimaIndiansDiabetes_TestX <- within(PimaIndiansDiabetes[Train_Test=="Test",], + rm(diabetes)) > PimaIndiansDiabetes_TestY <- PimaIndiansDiabetes[Train_Test=="Test","diabetes"] > PID_Formula <- as.formula("diabetes~.")
The five datasets described up to this point pose classification problems. We will now look at one example each of regression, time series, survival, clustering, and outlier detection problems.
US Crime
A study of the crime rate per million of the population across 47 states of the US is undertaken here, and an attempt is made to find its dependency on 13 variables. These include age distribution, an indicator of southern states, average number of schooling years, and so on. As with the earlier datasets, we partition this one into training and testing parts in the following chunk of R code:
> library(ACSWR)
Warning message:
package 'ACSWR' was built under R version 3.4.1 
> data(usc)
> str(usc)
'data.frame':   47 obs. of  14 variables:
 $ R  : num  79.1 163.5 57.8 196.9 123.4 ...
 $ Age: int  151 143 142 136 141 121 127 131 157 140 ...
 $ S  : int  1 0 1 0 0 0 1 1 1 0 ...
 $ Ed : int  91 113 89 121 121 110 111 109 90 118 ...
 $ Ex0: int  58 103 45 149 109 118 82 115 65 71 ...
 $ Ex1: int  56 95 44 141 101 115 79 109 62 68 ...
 $ LF : int  510 583 533 577 591 547 519 542 553 632 ...
 $ M  : int  950 1012 969 994 985 964 982 969 955 1029 ...
 $ N  : int  33 13 18 157 18 25 4 50 39 7 ...
 $ NW : int  301 102 219 80 30 44 139 179 286 15 ...
 $ U1 : int  108 96 94 102 91 84 97 79 81 100 ...
 $ U2 : int  41 36 33 39 20 29 38 35 28 24 ...
 $ W  : int  394 557 318 673 578 689 620 472 421 526 ...
 $ X  : int  261 194 250 167 174 126 168 206 239 174 ...
> set.seed(12345)
> Train_Test <- sample(c("Train","Test"), nrow(usc), replace = TRUE, prob=c(0.7,0.3))
> head(Train_Test)
[1] "Test"  "Test"  "Test"  "Test"  "Train" "Train"
> usc_Train <- usc[Train_Test=="Train",]
> usc_TestX <- within(usc[Train_Test=="Test",], rm(R))
> usc_TestY <- usc[Train_Test=="Test","R"]
> usc_Formula <- as.formula("R~.")
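To illustrate how the formula and the partition fit together for a regression problem, a simple linear model could be fitted on the training part and scored on the test part. This is only a sketch of the mechanics, not a model developed in this chapter:
> usc_lm <- lm(usc_Formula, data = usc_Train)       # baseline linear regression on the training part
> usc_pred <- predict(usc_lm, newdata = usc_TestX)  # predictions for the test covariates
> sqrt(mean((usc_pred - usc_TestY)^2))              # root mean squared error on the test part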
In each example discussed in this section thus far, we had reason to believe that the observations are independent of each other. This assumption simply means that the regressand and regressors of one observation have no relationship with the regressands and regressors of other observations. It is a simple and reasonable assumption. However, there is another class of observations/datasets where such an assumption is not practical. For example, the maximum temperature of a day is not completely independent of the previous day's temperature. If that were the case, we could have a scorchingly hot day, followed by a wintry one, followed by another hot day, which in turn is followed by a day of very heavy rain. Weather does not behave in this way; on successive days, the weather depends on that of the previous days. In the next example, we consider the number of overseas visitors to New Zealand.
Overseas visitors
The New Zealand overseas visitors dataset is dealt with in detail in Chapter 10 of Tattar et al. (2017). Here, the number of overseas visitors is captured on a monthly basis from January 1977 to December 1995, giving visitor data for 228 months. The osvisit.dat file is available at multiple web links, including https://www.stat.auckland.ac.nz/~ihaka/courses/726-/osvisit.dat and https://github.com/AtefOuni/ts/blob/master/Data/osvisit.dat. It is also available in the book's code bundle. We will import the data into R, convert it into a time series object, and visualize it:
> osvisit <- read.csv("../Data/osvisit.dat", header = FALSE)
> osv <- ts(osvisit$V1, start = 1977, frequency = 12)
> class(osv)
[1] "ts"
> plot.ts(osv)
Here, the dataset is not partitioned! Time series data can't be arbitrarily partitioned into training and testing parts. The reason is quite simple: if we have five observations in time-sequential order y1, y2, y3, y4, y5, and we believe that the order of impact is y1→y2→y3→y4→y5, an arbitrary selection such as y1, y2, y5 will behave differently; it does not carry the same information as three consecutive observations. Consequently, the partitioning of a time series has to preserve its dependency structure, and we keep the most recent part of the series as the test data. For the five-observations example, we would keep y1, y2, y3 as the training data and y4, y5 as the test data. The partitioning is simple, and we will cover this in Chapter 11, Ensembling Time Series Models.
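For instance, if we later wanted to hold out the final year of the osv series for validation, a split that respects the time order might look as follows. This is only a sketch; the actual partitioning is deferred to Chapter 11:
> osv_train <- window(osv, end = c(1994, 12))    # all observations up to December 1994
> osv_test  <- window(osv, start = c(1995, 1))   # the most recent 12 months held out as test data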
Life-testing experiments rarely yield complete observations. In reliability analysis, as well as in survival analysis/clinical trials, the units/patients are observed up to a predefined time and a note is made of whether a specific event, usually failure or death, has occurred. A considerable fraction of observations will not have failed by the pre-decided time, and the analysis cannot wait for all units to fail; one reason to curtail the study is that the time by which all units would have failed might be very large, and it would be expensive to continue the study until then. Consequently, we are left with incomplete observations: we only know that the lifetime of such units lasts for at least the predefined time at which the study was called off, and the event of interest may occur at some time in the future. Such observations are said to be censored, and the data is referred to as censored data. Special statistical methods are required for the analysis of such datasets. We will give an example of this type of dataset next, and analyze it later, in Chapter 10, Ensembling Survival Models.
Primary Biliary Cirrhosis
The pbc dataset from the survival package is a benchmark dataset in the domain of clinical trials. Mayo Clinic collected the data, which is concerned with primary biliary cirrhosis (PBC) of the liver. The study was conducted between 1974 and 1984. More details can be found by running library(survival) followed by ?pbc at the R terminal. Here, the main time to the event of interest is the number of days between registration and either death, transplantation, or the study analysis in July 1986, and this is captured in the time variable. As in any survival study, the event might be censored, and the indicator is given in the status column. The time to event needs to be understood, factoring in variables such as trt, age, sex, ascites, hepato, spiders, edema, bili, chol, albumin, copper, alk.phos, ast, trig, platelet, protime, and stage.
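A minimal sketch of how such censored data is set up in R, assuming the status coding documented in the survival package (0 = censored, 1 = transplant, 2 = dead), is as follows; the actual analysis is taken up in Chapter 10:
> library(survival)
> pbc_Surv <- Surv(time = pbc$time, event = pbc$status == 2)  # death is the event; everything else is censored
> head(pbc_Surv)                                              # censored times are printed with a trailing "+"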
The eight datasets discussed up until this point have a target variable, or a regressand/dependent variable, and are examples of the supervised learning problem. On the other hand, there are practical cases in which we simply attempt to understand the data and find useful patterns and groups/clusters in it. Of course, it is important to note that the purpose of clustering is to find homogeneous groups and give them sensible labels. For instance, if we are trying to group cars based on characteristics such as length, width, horsepower, engine cubic capacity, and so on, we may find groups that might be labeled as hatch, sedan, and saloon classes, while another clustering solution might result in labels of basic, premium, and sports variant groups. The two main problems posed in clustering are the choice of the number of groups and the formation of robust clusters. We consider a simple dataset from the factoextra R package.
Multishapes
The multishapes dataset from the factoextra package consists of three variables: x, y, and shape. It comprises different shapes, with each shape forming a cluster. Here, we have two concentric circle shapes, two parallel rectangles/beds, and one cluster of points at the bottom right. Outliers are also scattered throughout the plot. Some brief R code gives a useful display:
> library(factoextra)
> data("multishapes")
> names(multishapes)
[1] "x"     "y"     "shape"
> table(multishapes$shape)
  1   2   3   4   5   6 
400 400 100 100  50  50 
> plot(multishapes[,1], multishapes[,2], col=multishapes[,3])
This dataset includes a column named shape, as it is a hypothetical dataset. In true clustering problems, we will have neither a cluster group indicator nor the visualization luxury of only two variables. Later in this book, we will see how ensemble clustering techniques help overcome the problems of deciding the number of clusters and the consistency of cluster membership.
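To see why the choice of the number of clusters matters here, a quick k-means run with a guessed number of centers can be compared against the known shape column. This is only an illustrative sketch, not the ensemble clustering approach developed later in the book:
> set.seed(123)
> km <- kmeans(multishapes[, c("x","y")], centers = 5)  # a guess of five clusters
> table(km$cluster, multishapes$shape)                  # compare the k-means partition with the true shapes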
Although it doesn't happen that often, frustration can arise when fine-tuning parameters, fitting different models, and other tricks all fail to produce a useful working model. The culprit is often an outlier. A single outlier is known to wreak havoc on an otherwise potentially useful model, and its detection is of paramount importance. Traditionally, parametric and nonparametric outlier detection has been a matter of deep expertise, and in complex scenarios identification can be an insurmountable task. A consensus on whether an observation is an outlier can be achieved using the ensemble outlier framework. To illustrate this, we will consider the board stiffness dataset, and we will see how an outlier is pinned down in the conclusion of this book.
Board Stiffness
The board stiffness dataset is available in the ACSWR package as the data frame stiff. The dataset consists of four measures of stiffness for 30 boards. The first measure of stiffness is obtained by sending a shock wave down the board, the second is obtained by vibrating the board, and the remaining two are obtained from static tests. A quick method of identifying outliers in a multivariate dataset is to use the Mahalanobis distance function: the further an observation is from the center, the more likely it is to be an outlier:
> data(stiff)
> sort(mahalanobis(stiff, colMeans(stiff), cov(stiff)), decreasing = TRUE)
 [1] 16.8474070168 12.2647549939  9.8980384087  7.6166439053
 [5]  6.2837628235  5.4770195915  5.2076098038  5.0557446013
 [9]  4.9883497928  4.5767867224  3.9900602512  3.5018290410
[13]  3.3979804418  2.9951752177  2.6959023813  2.5838186338
[17]  2.5385575365  2.3816049840  2.2191408683  1.9307771418
[21]  1.4876569689  1.4649908273  1.3980776252  1.3632123553
[25]  1.0792484215  0.7962095966  0.7665399704  0.6000128595
[29]  0.4635158597  0.1295713581
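A common rule of thumb, which is an assumption added here rather than part of the original text, is to flag observations whose squared Mahalanobis distance exceeds an upper chi-square quantile with degrees of freedom equal to the number of variables:
> md <- mahalanobis(stiff, colMeans(stiff), cov(stiff))
> cutoff <- qchisq(0.975, df = ncol(stiff))  # 97.5% quantile of a chi-square with 4 degrees of freedom
> which(md > cutoff)                         # observations flagged as potential outliers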