Real-time analytics – the myth and the reality

One of the biggest truths about real-time analytics is that nothing is actually real-time; it's a myth. In reality, it is close to real-time. Depending on the performance and capability of a solution, and on how far operational latencies can be reduced, analytics can come very close to real-time; but while we bridge the gap between real-time and near-real-time day by day, it is practically impossible to eliminate it entirely because of computational, operational, and network latencies.

Before we go further, let's have a quick overview of the high-level expectations from these so-called real-time analytics solutions. The following figure captures these expectations at a high level: in terms of data, we are looking for a system that can process millions of transactions over a variety of structured and unstructured data sets. My processing engine should be ultra-fast and capable of handling very complex, joined, and diverse business logic, and at the end it is also expected to generate astonishingly accurate reports, answer my ad-hoc queries in a split second, and render my visualizations and dashboards with no latency:

As if the previous expectations were not sufficient, to roll such solutions out to production in today's data-generating, zero-downtime era, one of the basic expectations is that the system should be self-managed, or managed with minimal effort, and inherently built to be fault tolerant and auto-recovering, handling most if not all failure scenarios. It should also provide a familiar, basic SQL-like interface in a similar or close format.

However outrageous the previous expectations sound, they are perfectly normal, minimal expectations from any big data solution today. Nevertheless, coming back to our topic of real-time analytics: now that we have briefly touched upon the system-level expectations in terms of data, processing, and output, systems are being devised and designed to process zillions of transactions and to apply complex data science and machine learning algorithms on the fly, computing results as close to real time as possible. The terms now used are close to real-time, near real-time, or human real-time. Let's dedicate a moment to the following figure, which captures the relationship between computation time and the context and significance of the final insight:

As evident in the previous figure, in the context of time:

  • Ad-hoc queries over zettabytes of data take computation time on the order of hours and are therefore typically described as batch. The noteworthy aspect of the previous figure is that the size of each circle is an analogy for the volume of data being processed.
  • Ad impressions/hashtag trends/deterministic workflows/tweets: These use cases are predominantly termed online, and the compute time is generally on the order of 500 ms to 1 second. Although the compute time is considerably lower than in the previous case, the data volume being processed is also significantly smaller: a very rapidly arriving data stream of a few GB in magnitude.
  • Financial tracking/mission-critical applications: Here, the data volume is low, the data arrival rate is extremely high, the processing demands are extremely stringent, and low-latency results are produced within time windows of a few milliseconds.

Apart from computation time, there are other significant differences between batch and real-time processing and solution design:

Batch processing | Real-time processing
Data is at rest | Data is in motion
Batch size is bounded | Data comes in as a stream and is essentially unbounded
Access to entire data | Access to data in the current transaction/sliding window
Data is processed in batches | Processing is done at event, window, or at most micro-batch level
Efficient, easier administration | Real-time insights, but systems are fragile compared to batch

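The contrast in the previous table is easiest to see in code. The following is a minimal sketch, not taken from this book's examples, assuming Apache Flink's DataSet and DataStream Java APIs (the file path, hostname, and port are placeholders): the same word count runs once over a bounded file at rest, and continuously over an unbounded socket stream in five-second windows.

```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class BatchVersusStream {

    // Splits a line into (word, 1) pairs; shared by both the batch and streaming jobs.
    public static class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
            for (String word : line.toLowerCase().split("\\W+")) {
                if (!word.isEmpty()) {
                    out.collect(new Tuple2<>(word, 1));
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Batch: data at rest, bounded, the whole dataset is visible to the job.
        ExecutionEnvironment batchEnv = ExecutionEnvironment.getExecutionEnvironment();
        batchEnv.readTextFile("/tmp/events.txt")   // placeholder path
                .flatMap(new Tokenizer())
                .groupBy(0)
                .sum(1)
                .print();

        // Streaming: data in motion, unbounded, processed window by window as it arrives.
        StreamExecutionEnvironment streamEnv = StreamExecutionEnvironment.getExecutionEnvironment();
        streamEnv.socketTextStream("localhost", 9999)   // placeholder source
                 .flatMap(new Tokenizer())
                 .keyBy(0)
                 .timeWindow(Time.seconds(5))
                 .sum(1)
                 .print();
        streamEnv.execute("Windowed word count over an unbounded stream");
    }
}
```

The batch job sees the whole file before it emits a result; the streaming job never terminates on its own and emits a fresh result per window, which is exactly the bounded-versus-unbounded distinction captured in the table.
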
Towards the end of this section, all I would like to emphasize is that a near real-time (NRT) solution is as close to true real-time as it is practically possible to attain. So, as stated, RT is actually a myth (or hypothetical), while NRT is the reality. We deal with and see NRT applications on a daily basis in connected vehicles, prediction and recommendation engines, healthcare, and wearable appliances.

There are some critical aspects that introduce latency into the total turnaround time, or TAT as we call it: the time that elapses between the occurrence of an event and the moment an actionable insight is generated from it. The main contributors are listed below (a short sketch after the list illustrates how they add up):

  • The data/events generally travel from diverse geographical locations over the wire (internet/telecom channels) to the processing hub. Some time is lost in this transit.
  • Processing:
    • Data landing: Due to security aspects, data generally lands on an edge node and is then ingested into the cluster
    • Data cleansing: The data veracity aspect needs to be catered for, to eliminate bad/incorrect data before processing
    • Data massaging and enriching: Binding and enriching transactional data with dimensional data
    • Actual processing
    • Storing the results
      • All previous aspects of processing incur:
        • CPU cycles
        • Disk I/O
        • Network I/O
        • Marshalling and unmarshalling (serialization and deserialization) of data

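To make the notion of TAT concrete, here is a minimal, self-contained Java sketch. The stage names mirror the list above, but the sleep durations are purely illustrative stand-ins rather than measured figures; it simply times each stage and sums the latencies into an approximate turnaround figure.

```java
import java.util.concurrent.TimeUnit;

public class TurnaroundTimeSketch {

    // Placeholder for one pipeline stage; a real stage would do I/O, CPU work, and so on.
    interface Stage {
        void run() throws Exception;
    }

    // Times a single stage and returns its latency in milliseconds.
    static long timed(String name, Stage stage) throws Exception {
        long start = System.nanoTime();
        stage.run();
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
        System.out.printf("%-22s %d ms%n", name, elapsedMs);
        return elapsedMs;
    }

    public static void main(String[] args) throws Exception {
        long tat = 0;
        // Each stage below is a stand-in; sleep() merely simulates work.
        tat += timed("network transit", () -> Thread.sleep(20));
        tat += timed("landing on edge node", () -> Thread.sleep(15));
        tat += timed("cleansing", () -> Thread.sleep(10));
        tat += timed("massaging/enriching", () -> Thread.sleep(15));
        tat += timed("actual processing", () -> Thread.sleep(30));
        tat += timed("storing results", () -> Thread.sleep(25));
        System.out.println("Approximate TAT: " + tat + " ms");
    }
}
```
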
So, now that we understand the reality of real-time analytics, let's look a little deeper into the architectural segments of such solutions.
