Near real-time solution – an architecture that works

In this section, we will look at the architectural patterns that make it possible to build a scalable, sustainable, and robust real-time solution.

A high-level NRT solution recipe looks fairly straightforward, with a data collection funnel, a distributed processing engine, and a few other ingredients such as an in-memory cache, stable storage, and dashboard plugins.

At a high level, the basic analytics process can be segmented into three shards, which are depicted in the preceding figure:

  • Real-time collection of the streaming data
  • Distributed high-performance computation on the flowing data
  • Exploring and visualizing the generated insights in the form of a queryable consumable layer/dashboards

If we delve a level deeper, there are two proven, contending streaming computation technologies on the market: Storm and Spark. In the coming sections, we will take a deeper look at high-level NRT solutions derived from each of these stacks.

NRT – The Storm solution

This solution captures the high-level streaming data in real time and routes it through a queue/broker such as Kafka or RabbitMQ. The distributed processing part is then handled through a Storm topology and, once the insights are computed, they can be written to a fast-write data store like Cassandra, or to another queue like Kafka for further real-time downstream processing:

As per the figure, we collect real-time streaming data from diverse data sources through push/pull collection agents like Flume, Logstash, Fluentd, or Kafka adapters. The data is written to Kafka partitions; Storm topologies pull/read the streaming data from Kafka, process this flowing data in their bolts, and write the insights/results to Cassandra or to other real-time dashboards.
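
To make the flow concrete, here is a minimal sketch of the Storm side of this pipeline, assuming the storm-kafka-client spout and Storm 2.x. The broker address, the events topic, and the ProcessBolt are illustrative placeholders rather than anything prescribed here; a production topology would add a Cassandra (or Kafka) writer bolt and be submitted with StormSubmitter instead of running on a local cluster:

    import org.apache.storm.Config;
    import org.apache.storm.LocalCluster;
    import org.apache.storm.kafka.spout.KafkaSpout;
    import org.apache.storm.kafka.spout.KafkaSpoutConfig;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Tuple;

    public class NrtStormPipeline {

        // Hypothetical bolt standing in for the real insight computation;
        // a real pipeline would emit results to a Cassandra/Kafka sink bolt.
        public static class ProcessBolt extends BaseBasicBolt {
            @Override
            public void execute(Tuple tuple, BasicOutputCollector collector) {
                // "value" is the Kafka record payload emitted by the spout
                System.out.println("processing: " + tuple.getStringByField("value"));
            }

            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                // Terminal bolt in this sketch, so no output fields declared
            }
        }

        public static void main(String[] args) throws Exception {
            // The spout reads the raw events that Flume/Logstash pushed into Kafka
            KafkaSpoutConfig<String, String> spoutConfig =
                    KafkaSpoutConfig.builder("localhost:9092", "events").build();

            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig), 1);
            builder.setBolt("process", new ProcessBolt(), 2)
                   .shuffleGrouping("kafka-spout");

            // Local cluster for illustration only; use StormSubmitter on a real cluster
            try (LocalCluster cluster = new LocalCluster()) {
                cluster.submitTopology("nrt-pipeline", new Config(),
                        builder.createTopology());
                Thread.sleep(60_000);
            }
        }
    }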

NRT – The Spark solution

At a very high level, the data flow pipeline with Spark is very similar to the Storm architecture depicted in the previous figure, but one of the most critical aspects of this flow is that Spark leverages HDFS as a distributed storage layer. Have a look at it before we get into further dissection of the overall flow and its nitty-gritty:

As with a typical real-time analytics pipeline, we ingest the data using one of the streaming data-grabbing agents, like Flume or Logstash. We introduce Kafka to decouple the source agents from the rest of the system. Then, the Spark Streaming component provides a distributed computing platform for processing the data, before we dump the results to some stable storage unit, a dashboard, or a Kafka queue.
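
As a counterpart to the Storm sketch above, here is a minimal Spark Streaming sketch of this flow, assuming the spark-streaming-kafka-0-10 connector. Again, the broker address, the events topic, and the console output are illustrative placeholders; real results would be written to stable storage, a dashboard, or back to Kafka:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    public class NrtSparkPipeline {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf()
                    .setAppName("nrt-spark-pipeline")
                    .setMaster("local[2]"); // local mode for illustration only
            // The 5-second batch interval is the "micro-batch" size discussed below;
            // shrinking it moves the pipeline closer to real time
            JavaStreamingContext ssc =
                    new JavaStreamingContext(conf, Durations.seconds(5));

            Map<String, Object> kafkaParams = new HashMap<>();
            kafkaParams.put("bootstrap.servers", "localhost:9092");
            kafkaParams.put("key.deserializer", StringDeserializer.class);
            kafkaParams.put("value.deserializer", StringDeserializer.class);
            kafkaParams.put("group.id", "nrt-demo");

            JavaInputDStream<ConsumerRecord<String, String>> stream =
                    KafkaUtils.createDirectStream(
                            ssc,
                            LocationStrategies.PreferConsistent(),
                            ConsumerStrategies.<String, String>Subscribe(
                                    Collections.singletonList("events"), kafkaParams));

            // Placeholder computation: print a sample of each micro-batch
            stream.map(ConsumerRecord::value)
                  .foreachRDD(rdd -> rdd.take(10).forEach(System.out::println));

            ssc.start();
            ssc.awaitTermination();
        }
    }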

One essential difference between the previous two architectural paradigms is that, while Storm is essentially a real-time transactional processing engine that is, by default, good at processing the incoming data event by event, Spark works on the concept of micro-batching. It is essentially a pseudo real-time compute engine, where close-to-real-time compute expectations can be met by reducing the micro-batch size.

Storm is designed for lightning-fast processing, so all transformations happen in memory, because any disk operation generates latency; this is both a boon and a bane for Storm (memory is volatile, so if things break, everything has to be reworked and intermediate results are lost). Spark, on the other hand, is backed by HDFS and is more robust and fault tolerant, as intermediate results are backed up in HDFS.

Over the last couple of years, big data applications have seen a remarkable shift, as per the following sequence:

  1. Batch-only applications (early Hadoop implementations)
  2. Streaming-only applications (early Storm implementations)
  3. Could be both (custom-made combinations of the previous two)
  4. Should be both (Lambda architecture)

Now, the question is: why did this evolution take place? Well, the answer is that, when folks first became acquainted with the power of Hadoop, they really liked building applications that could process virtually any amount of data and scale up to any level in a seamless, fault-tolerant, non-disruptive way. Then we moved to an era where people realized the power of now, and ambitious processing became the need, with the advent of scalable, distributed processing engines like Storm. Storm was scalable and came with lightning-fast processing power and guaranteed processing.

But then something changed. We realized the limitations and strengths of both Hadoop batch systems and Storm real-time systems: the former catered to my appetite for volume, while the latter was excellent at velocity. My real-time applications were perfect, but they operated over a small window of the entire dataset and had no mechanism for correcting the data/results at a later time. Conversely, while my Hadoop implementations were accurate and robust, they took a long time to arrive at any conclusive insight. We reached a point where we replicated complete or partial solutions to arrive at a combination of both batch and real-time implementations.

One of the very recent NRT architectural patterns is the Lambda architecture, a highly sought-after solution that combines the best of both batch and real-time implementations, without needing to replicate and maintain two separate solutions. It gives me both volume and velocity, which is an edge over the earlier architectures, and it can cater to a wider set of use cases.
