Python Natural Language Processing Cookbook

You're reading from Python Natural Language Processing Cookbook, Second Edition: Over 60 recipes for building powerful NLP solutions using Python and LLM libraries.

Product type: Paperback
Published in: Sep 2024
Publisher: Packt
ISBN-13: 9781803245744
Length: 312 pages
Edition: 2nd Edition
Authors: Saurabh Chakravarty, Zhenya Antić
Getting the dependency parse

A dependency parse shows the grammatical dependencies between the words in a sentence. For example, in the sentence The cat wore a hat, the root of the sentence is the verb, wore, and both the subject, the cat, and the object, a hat, are its dependents. The dependency parse can be very useful in many NLP tasks since it exposes the grammatical structure of the sentence (the subject, the main verb, the object, and so on), which can then be used in downstream processing.

The spaCy NLP engine does the dependency parse as part of its overall analysis. The dependency parse tags explain the role of each word in the sentence. ROOT is the main word that all other words depend on, usually the verb.

Getting ready

We will use spaCy to create the dependency parse. The required packages are part of the Poetry environment.

How to do it…

We will take a sentence from the sherlock_holmes1.txt file to illustrate the dependency parse. The steps are as follows:

  1. Run the file and language utility notebooks:
    %run -i "../util/file_utils.ipynb"
    %run -i "../util/lang_utils.ipynb"
  2. Define the sentence we will be parsing:
    sentence = 'I have seldom heard him mention her under any other name.'
  3. Define a function that will print the word, its grammatical function embedded in the dep_ attribute, and the explanation of that attribute. The dep_ attribute of the Token object shows the grammatical function of the word in the sentence:
    def print_dependencies(sentence, model):
        doc = model(sentence)
        for token in doc:
            print(token.text, "\t", token.dep_, "\t", 
                spacy.explain(token.dep_))
  4. Now, let’s use this function on our sentence. We can see that the verb heard is the ROOT word of the sentence, with all other words depending on it:
    print_dependencies(sentence, small_model)

    The result should be as follows:

    I    nsubj    nominal subject
    have    aux    auxiliary
    seldom    advmod    adverbial modifier
    heard    ROOT    root
    him    nsubj    nominal subject
    mention    ccomp    clausal complement
    her    dobj    direct object
    under    prep    prepositional modifier
    any    det    determiner
    other    amod    adjectival modifier
    name    pobj    object of preposition
    .    punct    punctuation
  5. To explore the dependency parse structure, we can use the attributes of the Token class. Using the ancestors and children attributes, we can get the tokens that this token depends on and the tokens that depend on it, respectively. The function to print the ancestors is as follows:
    def print_ancestors(sentence, model):
        doc = model(sentence)
        for token in doc:
            print(token.text, [t.text for t in token.ancestors])
  6. Now, let’s use this function on our sentence:
    print_ancestors(sentence, small_model)

    The output will be as follows. In the result, we see that heard has no ancestors since it is the main word in the sentence. All other words depend on it, and in fact, contain heard in their ancestor lists.

    The dependency chain can be seen by following the ancestor links for each word. For example, if we look at the word name, we see that its ancestors are under, mention, and heard. The immediate parent of name is under, the parent of under is mention, and the parent of mention is heard. A dependency chain will always lead to the root, or the main word, of the sentence:

    I ['heard']
    have ['heard']
    seldom ['heard']
    heard []
    him ['mention', 'heard']
    mention ['heard']
    her ['mention', 'heard']
    under ['mention', 'heard']
    any ['name', 'under', 'mention', 'heard']
    other ['name', 'under', 'mention', 'heard']
    name ['under', 'mention', 'heard']
    . ['heard']
  7. To see all the children, use the following function. This function prints out each word followed by the words that depend on it, that is, its children:
    def print_children(sentence, model):
        doc = model(sentence)
        for token in doc:
            print(token.text,[t.text for t in token.children])
  8. Now, let’s use this function on our sentence:
    print_children(sentence, small_model)

    The result should be as follows. Now, the word heard has a list of words that depend on it since it is the main word in the sentence:

    I []
    have []
    seldom []
    heard ['I', 'have', 'seldom', 'mention', '.']
    him []
    mention ['him', 'her', 'under']
    her []
    under ['name']
    any []
    other []
    name ['any', 'other']
    . []
  9. We can also see left and right children in separate lists. In the following function, we print the children as two separate lists, left and right. This can be useful when doing grammatical transformations in the sentence:
    def print_lefts_and_rights(sentence, model):
        doc = model(sentence)
        for token in doc:
            print(token.text,
                [t.text for t in token.lefts],
                [t.text for t in token.rights])
  10. Let’s use this function on our sentence:
    print_lefts_and_rights(sentence, small_model)

    The result should be as follows:

    I [] []
    have [] []
    seldom [] []
    heard ['I', 'have', 'seldom'] ['mention', '.']
    him [] []
    mention ['him'] ['her', 'under']
    her [] []
    under [] ['name']
    any [] []
    other [] []
    name ['any', 'other'] []
    . [] []
  11. We can also see the subtree that the token is in by using this function:
    def print_subtree(sentence, model):
        doc = model(sentence)
        for token in doc:
            print(token.text, [t.text for t in token.subtree])
  12. Let’s use this function on our sentence:
    print_subtree(sentence, small_model)

    The result should be as follows. From the subtrees that each word is part of, we can see the grammatical phrases that appear in the sentence, such as the noun phrase, any other name, and the prepositional phrase, under any other name:

    I ['I']
    have ['have']
    seldom ['seldom']
    heard ['I', 'have', 'seldom', 'heard', 'him', 'mention', 'her', 'under', 'any', 'other', 'name', '.']
    him ['him']
    mention ['him', 'mention', 'her', 'under', 'any', 'other', 'name']
    her ['her']
    under ['under', 'any', 'other', 'name']
    any ['any']
    other ['other']
    name ['any', 'other', 'name']
    . ['.']

See also

The dependency parse can be visualized graphically using the displaCy package, which is part of spaCy. Please see Chapter 7, Visualizing Text Data, for a detailed recipe on how to do the visualization.
