Again, if you have read Chapter 4, Gensim - Vectorizing Text and Transformations and n-grams, Chapter 5, POS-Tagging and Its Applications, and Chapter 6, NER-Tagging and Its Applications, you will already be comfortable with the theory behind training our own models in spaCy. We recommend that you go back and read the Vector transformations in Gensim section from Chapter 4 and the Training our own POS-taggers section from Chapter 5 to refresh your understanding of what exactly training means in the context of machine learning and, in particular, spaCy.
Once again, the advantage of spaCy is that we don't need to worry about the algorithm being used under the hood, or which features are best to select for dependency parsing - this is usually the hardest part of machine learning research. We know that an optimal learning algorithm has been selected, and all...