To put it simply, SVM algorithms search for separating hyperplanes in order to build classifiers and regressors. The mathematics behind them is nothing short of amazing. The core idea is to look for a better perspective on the data, typically a higher-dimensional space, in which a hyperplane can cleanly separate the data points, making it possible to handle classes that are not linearly separable in the original space.
In other words, two classes may be linearly inseparable in the X-Y plane, but you can apply a transformation (a feature map) that gives the data an extra dimension (Z). Looking from this new perspective, you might be able to find a hyperplane that separates the distinct classes well, as in the sketch below. In an extreme scenario, depending on the problem at hand, this process could make the number of dimensions explode right in our faces. Lucky for us, there is the kernel trick.
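Here is a minimal sketch of that idea, assuming a toy dataset of two concentric rings (the dataset and the specific transformation Z = x² + y² are illustrative choices, not anything prescribed above):

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: no straight line in 2-D separates them.
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_2d = SVC(kernel="linear").fit(X, y)
print("linear SVM in 2-D:", linear_2d.score(X, y))  # poor accuracy

# Add Z = x^2 + y^2 (squared distance from the origin). In 3-D the inner
# ring sits low and the outer ring sits high, so a flat hyperplane can
# slice between them.
Z = (X ** 2).sum(axis=1, keepdims=True)
X_3d = np.hstack([X, Z])

linear_3d = SVC(kernel="linear").fit(X_3d, y)
print("linear SVM in 3-D:", linear_3d.score(X_3d, y))  # near-perfect
```

The same data that defeats a linear separator in two dimensions becomes trivially separable once the third coordinate encodes the distance from the center.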
However, there is no need to actually know which transformation would do the job: the kernel trick lets us compute inner products in the transformed space directly, through a kernel function, without ever constructing the transformed features themselves.
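A short sketch of the trick in action, reusing the same toy rings as above (the choice of the RBF kernel and the gamma value are illustrative assumptions):

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# The RBF kernel k(a, b) = exp(-gamma * ||a - b||^2) corresponds to an
# implicit feature map we never have to write down: the SVM works with
# kernel values alone, in the original 2-D space.
rbf = SVC(kernel="rbf", gamma=1.0).fit(X, y)
print("RBF-kernel SVM on the raw 2-D data:", rbf.score(X, y))
```

Notice that no extra dimension is ever materialized here; the kernel computes the inner products the optimizer needs as if the transformation had been applied.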