In this example, we ran a decision tree in R by extracting a sample from the Spark DataFrame and fitting the tree model in base R. While that approach is perfectly acceptable (and it forced you to think about sampling), in many instances it is more efficient to run the model directly on the Spark DataFrame using MLlib or an equivalent library.
In the version of Spark you should be working with (2.1), decision tree algorithms are not available from R. Fortunately, native Spark decision trees are already implemented in Python and Scala, so we will illustrate the example in Python to show you the options that are available. If you follow algorithm development in Spark, you will find that algorithms are often written first in Scala, since that is Spark's native language.
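As a rough illustration of what running the model directly on the Spark DataFrame looks like, here is a minimal PySpark sketch of fitting a decision tree with the ML library. The DataFrame source, the feature column names (x1, x2, x3), and the label column are placeholders, not names from the earlier example; substitute the columns of your own Spark DataFrame.

# Minimal sketch: fit a decision tree directly on a Spark DataFrame
# with PySpark's ML library. Column names below are placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier

spark = SparkSession.builder.appName("dtree_example").getOrCreate()

# Hypothetical source table; replace with your own Spark DataFrame,
# which is assumed to have numeric features x1, x2, x3 and a label column.
df = spark.table("my_table")

# Spark ML estimators expect a single vector column of features.
assembler = VectorAssembler(inputCols=["x1", "x2", "x3"], outputCol="features")
assembled = assembler.transform(df)

train, test = assembled.randomSplit([0.7, 0.3], seed=42)

# Fit the tree on the full Spark DataFrame -- no sampling into base R required.
dt = DecisionTreeClassifier(labelCol="label", featuresCol="features", maxDepth=5)
model = dt.fit(train)

# Score the held-out data and inspect a few predictions.
predictions = model.transform(test)
predictions.select("label", "prediction").show(5)

The key design difference from the base R approach is that both the feature assembly and the model fitting are Spark transformations, so the full dataset stays distributed across the cluster rather than being sampled down to fit in a single R session.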