Real-time versus semi-real-time ingestion

As we have discussed, real-time ingestion refers to the process of collecting, processing, and loading data almost instantaneously as it is generated. This approach is critical for applications that require immediate insights and actions, such as fraud detection, stock trading, and live monitoring systems. Real-time ingestion provides the lowest latency, enabling businesses to react to events as they occur. However, it demands robust infrastructure and continuous resource allocation, making it complex and potentially expensive to maintain.

Semi-real-time ingestion, by contrast, is also known as near-real-time ingestion and involves processing data with minimal delay, typically seconds or minutes rather than instantly. This approach strikes a balance between real-time and batch processing, providing timely insights while reducing the resource intensity and complexity associated with true real-time systems. Semi-real-time ingestion is suitable for applications such as social media monitoring, customer feedback analysis, and operational dashboards, where near-immediate data processing is beneficial but not critically time-sensitive.

Common use cases for near-real-time ingestion

Let’s look at some of the common use cases in which near-real-time ingestion is a good fit.

Real-time analytics

Streaming enables organizations to continuously monitor data as it flows in, allowing for real-time dashboards and visualizations. This is critical in industries such as finance, where stock prices, market trends, and trading activities need to be tracked live. It also allows for instant report generation, facilitating timely decision-making and reducing the latency between data generation and analysis.

Social media and sentiment analysis

Companies track mentions and sentiment on social media in real time to manage brand reputation and respond to customer feedback promptly. Streaming data allows for the continuous analysis of public sentiment towards brands, products, or events, providing immediate insights that can influence marketing and PR strategies.

Customer experience enhancement

Near-real-time processing allows support teams to access up-to-date information on customer issues and behavior, enabling quicker and more accurate responses to customer inquiries. Businesses can also use near-real-time data to update customer profiles and trigger personalized marketing messages, such as emails or notifications, shortly after a customer interacts with their website or app.

Semi-real-time mode with an example

Transitioning from real-time to semi-real-time data processing involves adjusting the example to introduce a more structured approach to handling data updates, rather than processing each record immediately upon arrival. This can be achieved by batching data updates over short intervals, which allows for more efficient processing while still maintaining a responsive pipeline. Let’s have a look at the example; as always, you can find the code in the GitHub repository at https://github.com/PacktPublishing/Python-Data-Cleaning-and-Preparation-Best-Practices/blob/main/chapter01/3.semi_real_time.py:

  1. Generating mock data is unchanged from the previous example: records are produced continuously, with a slight delay (time.sleep(0.1)) between them.
  2. For processing in semi-real-time, we can use a deque to buffer incoming records. This function processes records when either the specified time interval has elapsed or the buffer reaches a specified size (batch_size). It then converts the deque to a list (list(buffer)) before passing it to transform_data, ensuring the data is processed in a batch:
    def process_semi_real_time(batch_size, interval):
        buffer = deque()
        start_time = time.time()
        for record in generate_mock_data():
            buffer.append(record)
  3. Check whether the interval has elapsed, or the buffer size has been reached:
            if (time.time() - start_time) >= interval or len(buffer) >= batch_size:
  4. Process and clear the buffer:
                transformed_batch = transform_data(list(buffer))  # Convert deque to list
                print(f"Batch of {len(transformed_batch)} records before loading:")
                for rec in transformed_batch:
                    print(rec)
                load_data(transformed_batch)
                buffer.clear()
                start_time = time.time()  # Reset start time
  5. Then, we transform each record in the batch and load the data. There are no changes to these two steps from the previous example; a minimal sketch of the helper functions they rely on appears after this list.
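
Since the previous example’s helpers are not repeated in this excerpt, the following is a minimal sketch of what generate_mock_data, transform_data, and load_data could look like, along with the imports the snippet above needs. The record fields and the transformation applied are illustrative assumptions, not the book’s exact implementation:

    import random
    import time
    from collections import deque

    def generate_mock_data():
        # Yield mock records indefinitely, pausing briefly between records
        while True:
            yield {"user_id": random.randint(1, 100), "value": random.random()}
            time.sleep(0.1)

    def transform_data(records):
        # Illustrative transformation: add a scaled value to each record
        return [{**rec, "value_scaled": rec["value"] * 100} for rec in records]

    def load_data(records):
        # Simulate loading by reporting how many records were "written"
        print(f"Loaded {len(records)} records into the simulated database")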

When you run this code, it continuously generates mock data records. Records are buffered until either the specified time interval (interval) has elapsed or the buffer reaches the specified size (batch_size). Once either condition is met, the buffered records are processed as a batch, transformed, and then “loaded” (printed) into the simulated database.
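
To try the pipeline, call the function with a batch size and interval of your choosing; the values below are illustrative, and because the mock generator never stops, you interrupt the loop with Ctrl + C:

    # Flush the buffer every 5 records or every 2 seconds, whichever comes first
    process_semi_real_time(batch_size=5, interval=2)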

When discussing the different types of data sources that are suitable for batch, streaming, or semi-real-time processing, it’s essential to consider the diversity and characteristics of these sources. Data can originate from various sources, such as databases, logs, IoT devices, social media, or sensors, as we will see in the next section.

You have been reading a chapter from
Python Data Cleaning and Preparation Best Practices
Published in: Sep 2024
Publisher: Packt
ISBN-13: 9781837634743