SVMs can also be employed efficiently for regression tasks. However, this requires a slightly different loss function, one that tolerates discrepancies between the prediction and the target value up to a maximum threshold. The most common choice is the ε-insensitive loss (which we have already seen in passive-aggressive regression):
![](https://static.packt-cdn.com/products/9781789347999/graphics/assets/54d588b7-f6bc-4822-a0f0-3e9e9f2b4a47.png)
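For reference, the ε-insensitive loss can be written compactly as follows (using the standard definition, with ŷ denoting the prediction):

```latex
L_{\epsilon}(y, \hat{y}) = \max\left(0, \; |y - \hat{y}| - \epsilon\right)
```

Any prediction whose absolute error is smaller than ε incurs no penalty, while larger errors are penalized linearly.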
In this case, we treat the problem as a standard SVM in which the separating hyperplane and the (soft) margins are built sequentially so as to minimize the prediction error. The following diagram shows a schema of this process:
![](https://static.packt-cdn.com/products/9781789347999/graphics/assets/77aa1123-059a-4ff2-8c72-f3ee0aa32232.png)
The goal is to find the optimal parameters so that all predictions lie inside the margins (whose width is controlled by the parameter ε). This condition minimizes the ε-insensitive loss.
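A minimal sketch of this idea with scikit-learn's `SVR` class, using a hypothetical noisy linear dataset (the data and parameter values here are illustrative assumptions, not from the text):

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical toy dataset: a noisy linear relationship y = 2x + noise
rng = np.random.RandomState(0)
X = np.linspace(0.0, 5.0, 50).reshape(-1, 1)
y = 2.0 * X.ravel() + rng.normal(scale=0.1, size=50)

# epsilon controls the width of the insensitive tube: prediction errors
# smaller than epsilon incur no loss and do not affect the solution
svr = SVR(kernel='linear', C=1.0, epsilon=0.1)
svr.fit(X, y)

# Check how many training points fall inside the eps-tube around the fit
y_pred = svr.predict(X)
inside = np.abs(y - y_pred) <= svr.epsilon + 1e-6
print(f"Fraction of points inside the eps-tube: {inside.mean():.2f}")
```

With a well-chosen ε relative to the noise level, most points end up inside the tube and only the points on or outside its boundary become support vectors.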