Chapter 3. Data Preprocessing
Real-world data is usually noisy and inconsistent, and often contains missing observations. No classification, regression, or clustering model can extract reliable information from raw, unprocessed data.
Data preprocessing consists of cleaning, filtering, transforming, and normalizing raw observations using statistics in order to correlate features or groups of features, identify trends and models, and filter out noise. The purpose of cleansing raw data is twofold:
- Extract some basic knowledge from raw datasets
- Evaluate the quality of data and generate clean datasets for unsupervised or supervised learning
You should not underestimate the power of traditional statistical analysis methods to infer and classify information from textual or unstructured data.
In this chapter, you will learn how to:
- Apply commonly used moving average techniques to detect long-term trends in a time series (a minimal sketch follows this list)
- Identify market and sector cycles using discrete Fourier series
- Leverage the Kalman filter to...
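As a preview of the first item, the following is a minimal sketch of a simple moving average over a time series, assuming the series is represented as a plain Vector[Double] of, say, daily prices. The object name SimpleMovingAverage and the method sma are illustrative only; the chapter develops its own, more general implementation.

```scala
// Illustrative sketch of a simple moving average (SMA) of period p.
// Assumption: the time series is a Vector[Double]; the first p-1 points
// have no defined average and are returned as None.
object SimpleMovingAverage {
  def sma(series: Vector[Double], p: Int): Vector[Option[Double]] = {
    require(p > 0 && p <= series.size,
      s"Invalid period $p for a series of ${series.size} observations")
    series.indices.map { i =>
      if (i < p - 1) None                                   // not enough history yet
      else Some(series.slice(i - p + 1, i + 1).sum / p)     // average of the last p points
    }.toVector
  }

  def main(args: Array[String]): Unit = {
    val prices = Vector(10.0, 10.5, 11.2, 10.8, 11.5, 12.0, 11.7)
    val smoothed = sma(prices, 3)
    println(smoothed.map(_.map(v => f"$v%.2f").getOrElse("-")).mkString(", "))
  }
}
```

Averaging over a window of p observations attenuates short-term fluctuations so that the long-term trend becomes easier to see; the larger the period, the smoother (and more lagged) the resulting series.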