If you've followed every chapter of this book up to this one, you will already have finished dependency parsing your data multiple times: each run of your text through the pipeline annotated the words in each sentence of your document with their dependencies on the other words in that sentence. Let's set up our model again, just as we did in the previous chapters.
import spacy

# load the English language model, as in the previous chapters
nlp = spacy.load('en')
Now that our pipeline is ready, we can begin analyzing our sentences.
The parsing component of spaCy's pipeline performs both phrasal parsing and dependency parsing: this means we can find out which noun and verb chunks appear in a sentence, as well as how the words in that sentence depend on each other (a short example follows below).
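To make this concrete, here is a minimal sketch of both kinds of output; the example sentence and the variable names are chosen purely for illustration.

doc = nlp('The quick brown fox jumps over the lazy dog.')

# phrasal parsing: the noun chunks the parser identifies
for chunk in doc.noun_chunks:
    print(chunk.text)

# dependency parsing: each word, its dependency label, and its head word
for token in doc:
    print(token.text, token.dep_, token.head.text)

For this sentence, we would expect noun chunks such as 'The quick brown fox' and 'the lazy dog', while the dep_ attribute carries each token's dependency label, for example nsubj for 'fox', whose head is the verb 'jumps'.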
Phrasal parsing is also referred to as chunking, because it yields the chunks (phrases) that make up sentences...