With real datasets, SVMs can extract a very large number of support vectors to increase accuracy, and this strategy slows down both training and prediction. To find a trade-off between precision and the number of support vectors, it's possible to employ a slightly different model called ν-SVM. The problem (with kernel support and n samples denoted by xi) becomes the following:
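The formulation is not reproduced here, so the following is a standard statement of the ν-SVM primal problem (following Schölkopf et al.), written with the margin variable denoted τ and φ denoting the kernel feature map:

```latex
\begin{aligned}
\min_{\mathbf{w},\,b,\,\tau,\,\boldsymbol{\xi}} \quad
  & \frac{1}{2}\lVert \mathbf{w} \rVert^{2}
    \;-\; \nu\tau
    \;+\; \frac{1}{n}\sum_{i=1}^{n}\xi_{i} \\
\text{subject to} \quad
  & y_{i}\left(\mathbf{w}^{T}\phi(\mathbf{x}_{i}) + b\right) \geq \tau - \xi_{i}
    \quad \forall i \in \{1,\dots,n\}, \\
  & \xi_{i} \geq 0 \quad \forall i \in \{1,\dots,n\}, \\
  & \tau \geq 0
\end{aligned}
```

Compared with the standard C-SVM, the fixed penalty C is replaced by ν, and the margin width τ becomes a variable that the optimizer is rewarded (via the −ντ term) for enlarging.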
Parameter ν is bounded between 0 (excluded) and 1, and controls, at the same time, the number of support vectors (it is a lower bound on their fraction, so greater values increase their number) and the training error (it is an upper bound on the fraction of margin errors, so lower values reduce the number of errors allowed). The formal proof of these results requires expressing the problem through its Lagrangian; however, the dynamics can be understood intuitively by considering the boundary cases. When ν → 0, the τ variable...
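The effect of ν on the number of support vectors can be observed directly with scikit-learn's NuSVC, which implements this model. The following is a minimal sketch on a synthetic dataset; the dataset parameters and the chosen ν values are illustrative assumptions:

```python
# Sketch: larger nu values produce more support vectors, since nu is
# a lower bound on the fraction of training points that become SVs.
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

# Illustrative synthetic binary classification problem (200 samples)
X, y = make_classification(n_samples=200, n_features=2,
                           n_informative=2, n_redundant=0,
                           random_state=0)

for nu in (0.05, 0.2, 0.5):
    clf = NuSVC(nu=nu, kernel='rbf', gamma='scale').fit(X, y)
    n_sv = int(clf.n_support_.sum())
    print(f"nu={nu:.2f}: {n_sv} support vectors "
          f"({n_sv / len(X):.2f} of the training set)")
```

With ν = 0.5, at least half of the training points are retained as support vectors, while ν = 0.05 keeps the model much sparser at the cost of a tighter constraint on margin errors.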