Chapter 3. Data Preprocessing
Real-world observations are usually noisy and inconsistent, and often contain missing values. No classification, regression, or clustering model can extract reliable information from data that has not been cleansed, filtered, or analyzed.
Data preprocessing consists of cleaning, filtering, transforming, and normalizing raw observations, using statistics to correlate features or groups of features, identify trends, build models, and filter out noise. The purpose of cleansing raw data is twofold:
- Identify flaws in raw input data
- Provide unsupervised or supervised learning with a clean and reliable dataset
You should not underestimate the power of traditional statistical analysis methods to infer and classify information from textual or unstructured data.
In this chapter, you will learn how to do the following:
- Apply commonly used moving average techniques to detect long-term trends in a time series (a minimal sketch follows this list)
- Identify market and sector cycles using the discrete Fourier series
- Leverage...
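As a brief preview of the first topic, the following sketch applies a simple moving average to a synthetic noisy trend. It is an illustrative, language-agnostic example rather than code from this chapter; the function name `simple_moving_average`, the NumPy-based implementation, and the synthetic data are assumptions introduced here.

```python
import numpy as np

def simple_moving_average(series: np.ndarray, window: int) -> np.ndarray:
    """Smooth a time series with a simple moving average of the given window size."""
    if window < 1 or window > len(series):
        raise ValueError("window must be between 1 and the length of the series")
    # Equal weights over the window; convolution slides the window across the series
    weights = np.ones(window) / window
    # 'valid' keeps only positions where the window fully overlaps the series
    return np.convolve(series, weights, mode="valid")

# Hypothetical usage: a noisy upward trend and its smoothed counterpart
rng = np.random.default_rng(0)
trend = np.linspace(0.0, 10.0, 200)
noisy = trend + rng.normal(scale=1.5, size=trend.size)
smoothed = simple_moving_average(noisy, window=20)
```

Averaging over a window of 20 observations attenuates the high-frequency noise while preserving the slow upward drift, which is exactly the kind of long-term trend extraction discussed in this chapter.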