Python Natural Language Processing Cookbook

You're reading from Python Natural Language Processing Cookbook: Over 60 recipes for building powerful NLP solutions using Python and LLM libraries

Product type: Paperback
Published: Sep 2024
Publisher: Packt
ISBN-13: 9781803245744
Length: 312 pages
Edition: 2nd Edition
Authors (2): Zhenya Antić and Saurabh Chakravarty

Dividing text into sentences

When we work with text, we can operate on units at different scales: the document itself (such as a newspaper article), the paragraph, the sentence, or the word. Sentences are the main unit of processing in many NLP tasks. For example, when we send data to Large Language Models (LLMs), we frequently want to add some context to the prompt, and in some cases we would like that context to include sentences from a text so that the model can extract important information from it. In this section, we will show you how to divide a text into sentences.

Getting ready

For this part, we will be using the text of the book The Adventures of Sherlock Holmes. You can find the whole text in the book’s GitHub repository (https://github.com/PacktPublishing/Python-Natural-Language-Processing-Cookbook-Second-Edition/blob/main/data/sherlock_holmes.txt). For this recipe, we will need just the beginning of the book, which can be found in the file at https://github.com/PacktPublishing/Python-Natural-Language-Processing-Cookbook-Second-Edition/blob/main/data/sherlock_holmes_1.txt.

To complete this task, you will need the NLTK package and its sentence tokenizers, which are included in the Poetry file. Directions for installing Poetry are given in the Technical requirements section.
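
If you are not using the Poetry setup, one simple alternative (our assumption, not part of the book’s instructions) is to install the required packages directly with pip:

# Assumed alternative to the Poetry setup described above
pip install nltk spacy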

How to do it…

We will now divide a small piece of The Adventures of Sherlock Holmes into sentences, outputting a list of sentences. (Reference notebook: https://github.com/PacktPublishing/Python-Natural-Language-Processing-Cookbook-Second-Edition/blob/main/Chapter01/dividing_text_into_sentences_1.1.ipynb.) Here, we assume that you are running the notebook, so all paths are relative to the notebook location:

  1. Import the file utility functions from the util folder (https://github.com/PacktPublishing/Python-Natural-Language-Processing-Cookbook-Second-Edition/blob/main/util/file_utils.ipynb):
    %run -i "../util/file_utils.ipynb"
  2. Read in the book part text:
    sherlock_holmes_part_of_text = read_text_file("../data/sherlock_holmes_1.txt")

    The read_text_file function is located in the util notebook we imported previously. Here is its source code:

    def read_text_file(filename):
        # Open with explicit UTF-8 encoding; "with" closes the file automatically
        with open(filename, "r", encoding="utf-8") as file:
            return file.read()
  3. Print out the resulting text to make sure everything worked correctly and the file loaded:
    print(sherlock_holmes_part_of_text)

    The beginning of the printout will look like this:

    To Sherlock Holmes she is always _the_ woman. I have seldom heard him
    mention her under any other name. In his eyes she eclipses and
    predominates the whole of her sex…
  4. Import the nltk package:
    import nltk
  5. If this is the first time you are running the code, you will need to download tokenizer data. You will not need to run this command after that:
    nltk.download('punkt')
  6. Initialize the tokenizer:
    tokenizer = nltk.data.load("tokenizers/punkt/english.pickle")
  7. Divide the text into sentences using the tokenizer. The result will be a list of sentences:
    sentences_nltk = tokenizer.tokenize(
        sherlock_holmes_part_of_text)
  8. Print the result:
    print(sentences_nltk)

    It should look like this. The newline characters (\n) inside the sentences come from the book’s line formatting; they are not sentence endings (see the cleanup snippet after these steps):

    ['To Sherlock Holmes she is always _the_ woman.', 'I have seldom heard him\nmention her under any other name.', 'In his eyes she eclipses and\npredominates the whole of her sex.', 'It was not that he felt any emotion\nakin to love for Irene Adler.', 'All emotions, and that one particularly,\nwere abhorrent to his cold, precise but admirably balanced mind.', 'He\nwas, I take it, the most perfect reasoning and observing machine that\nthe world has seen, but as a lover he would have placed himself in a\nfalse position.', 'He never spoke of the softer passions, save with a gibe\nand a sneer.', 'They were admirable things for the observer—excellent for\ndrawing the veil from men’s motives and actions.', 'But for the trained\nreasoner to admit such intrusions into his own delicate and finely\nadjusted temperament was to introduce a distracting factor which might\nthrow a doubt upon all his mental results.', 'Grit in a sensitive\ninstrument, or a crack in one of his own high-power lenses, would not\nbe more disturbing than a strong emotion in a nature such as his.', 'And\nyet there was but one woman to him, and that woman was the late Irene\nAdler, of dubious and questionable memory.']
  9. Print the number of sentences in the result; there should be 11 sentences in total:
    print(len(sentences_nltk))

    This gives the result:

    11
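
Since the newlines inside each sentence are formatting artifacts, you may want to normalize them before further processing. This is a small optional sketch of our own, not a step from the recipe (as a shortcut, nltk.sent_tokenize(text) is also a documented one-call wrapper around the same punkt English tokenizer):

# Optional cleanup (our own addition): replace in-sentence newlines with spaces
clean_sentences = [sentence.replace("\n", " ") for sentence in sentences_nltk]
print(clean_sentences[1])
# I have seldom heard him mention her under any other name.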

How it works…

Although it might seem straightforward to divide a text into sentences by simply splitting it at periods with a regular expression, in reality it is more complicated. Periods appear in places other than sentence endings, such as after abbreviations: consider “Dr. Smith will see you now.” Similarly, while every English sentence starts with a capital letter, capital letters also appear in proper names, so capitalization alone does not mark a sentence boundary. The approach used in NLTK takes all these points into consideration; it is an implementation of the unsupervised Punkt algorithm presented in https://aclanthology.org/J06-4003.pdf.
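
To see why the naive approach fails, here is a small illustrative sketch of our own (not code from the book) comparing a regular-expression split with the punkt tokenizer on text containing an abbreviation:

import re
import nltk

text = "Dr. Smith will see you now. Please take a seat."

# Naive approach: split at whitespace that follows a period
print(re.split(r"(?<=\.)\s+", text))
# ['Dr.', 'Smith will see you now.', 'Please take a seat.']  <- wrong split after "Dr."

# punkt knows common English abbreviations, so it should keep "Dr. Smith" together
tokenizer = nltk.data.load("tokenizers/punkt/english.pickle")
print(tokenizer.tokenize(text))
# Expected: ['Dr. Smith will see you now.', 'Please take a seat.']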

There’s more…

We can also use a different strategy to parse the text into sentences, employing another very popular NLP package, spaCy. Here is how it works:

  1. Import the spaCy package:
    import spacy
The first time you run the notebook, you will need to download a spaCy model. The model is trained on a large amount of English text, and several tools can be used with it, including the sentence tokenizer. Here, we download the smallest model, but you might try other ones (see https://spacy.io/usage/models/):
    !python -m spacy download en_core_web_sm
  3. Initialize the spaCy engine:
    nlp = spacy.load("en_core_web_sm")
  4. Process the text using the spaCy engine. This line assumes that you have the sherlock_holmes_part_of_text variable initialized. If not, you need to run one of the earlier cells where the text is read into this variable:
    doc = nlp(sherlock_holmes_part_of_text)
  5. Get the sentences from the processed doc object, and print the resulting array and its length:
    sentences_spacy = [sentence.text for sentence in doc.sents]
    print(sentences_spacy)
    print(len(sentences_spacy))

    The result will look like this:

    ['To Sherlock Holmes she is always _the_ woman.', 'I have seldom heard him\nmention her under any other name.', 'In his eyes she eclipses and\npredominates the whole of her sex.', 'It was not that he felt any emotion\nakin to love for Irene Adler.', 'All emotions, and that one particularly,\nwere abhorrent to his cold, precise but admirably balanced mind.', 'He\nwas, I take it, the most perfect reasoning and observing machine that\nthe world has seen, but as a lover he would have placed himself in a\nfalse position.', 'He never spoke of the softer passions, save with a gibe\nand a sneer.', 'They were admirable things for the observer—excellent for\ndrawing the veil from men’s motives and actions.', 'But for the trained\nreasoner to admit such intrusions into his own delicate and finely\nadjusted temperament was to introduce a distracting factor which might\nthrow a doubt upon all his mental results.', 'Grit in a sensitive\ninstrument, or a crack in one of his own high-power lenses, would not\nbe more disturbing than a strong emotion in a nature such as his.', 'And\nyet there was but one woman to him, and that woman was the late Irene\nAdler, of dubious and questionable memory.']
    11

An important difference between spaCy and NLTK is the time it takes to complete the sentence-splitting process. The reason for this is that spaCy loads a language model and runs several tools in addition to the tokenizer, while the NLTK tokenizer has only one function: to separate the text into sentences. We can time the execution by using the time package and wrapping the sentence-splitting code for each package in its own function:

import time

# Each approach is wrapped in a function so that it can be timed separately
def split_into_sentences_nltk(text):
    sentences = tokenizer.tokenize(text)
    return sentences

def split_into_sentences_spacy(text):
    doc = nlp(text)
    sentences = [sentence.text for sentence in doc.sents]
    return sentences

# Time the NLTK tokenizer
start = time.time()
split_into_sentences_nltk(sherlock_holmes_part_of_text)
print(f"NLTK: {time.time() - start} s")

# Time the full spaCy pipeline
start = time.time()
split_into_sentences_spacy(sherlock_holmes_part_of_text)
print(f"spaCy: {time.time() - start} s")

The spaCy algorithm takes 0.019 seconds, while the NLTK algorithm takes 0.0002 seconds. The elapsed time is calculated by subtracting the start time, recorded at the beginning of each block, from the current time (time.time()). You may get slightly different values on your machine.
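
For a more stable measurement than a single run, you could average over many runs with the standard timeit module; this is a sketch of our own, not code from the book:

import timeit

# Average each function over 100 runs; assumes the functions defined above
nltk_time = timeit.timeit(
    lambda: split_into_sentences_nltk(sherlock_holmes_part_of_text), number=100)
spacy_time = timeit.timeit(
    lambda: split_into_sentences_spacy(sherlock_holmes_part_of_text), number=100)
print(f"NLTK: {nltk_time / 100} s per run")
print(f"spaCy: {spacy_time / 100} s per run")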

The main reason to use spaCy is that you are likely doing other processing with the package in addition to sentence splitting. The spaCy pipeline does many other things, which is why it takes longer. If you are already using other spaCy features, there is no reason to bring in NLTK just for sentence splitting; it is better to employ spaCy for the whole pipeline.

It is also possible to use only the tokenizer, without the other spaCy tools. Please see the documentation for more information: https://spacy.io/usage/processing-pipelines.
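
For example, here is a minimal sketch of our own (based on spaCy’s documented sentencizer component, not code from the book) that splits sentences without loading a full statistical model:

import spacy

# A blank English pipeline with only the rule-based sentencizer component;
# faster than en_core_web_sm because no statistical model is loaded
nlp_fast = spacy.blank("en")
nlp_fast.add_pipe("sentencizer")

doc = nlp_fast("To Sherlock Holmes she is always _the_ woman. I have seldom heard him mention her under any other name.")
print([sentence.text for sentence in doc.sents])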

Important note

spaCy might be slower, but it is doing many more things in the background, and if you are using its other features, use it for sentence splitting as well.

See also

You can use NLTK and spaCy to divide texts in languages other than English. NLTK includes tokenizer models for Czech, Danish, Dutch, Estonian, Finnish, French, German, Greek, Italian, Norwegian, Polish, Portuguese, Slovene, Spanish, Swedish, and Turkish. To load one of these models, use the name of the language followed by the .pickle extension:

tokenizer = nltk.data.load("tokenizers/punkt/spanish.pickle")

See the NLTK documentation to find out more: https://www.nltk.org/index.html.

Likewise, spaCy has models for other languages: Chinese, Dutch, English, French, German, Greek, Italian, Japanese, Portuguese, Romanian, Spanish, and others. These models are trained on text in the corresponding language, and you have to download each one separately. For example, for Spanish, use this command to download the model:

python -m spacy download es_core_news_sm

Then, put this line in the code to use it:

nlp = spacy.load("es_core_news_sm")

See the spaCy documentation to find out more: https://spacy.io/usage/models.
