Solutions for complex distributed use cases
Now that we understand the power that real-time solutions can bring to various industry verticals, let's explore the options we have for processing the vast amounts of data being generated at a very fast pace.
The Hadoop solution
Hadoop is one solution for problems that require dealing with humongous volumes of data. It works by executing jobs in a clustered setup.
MapReduce is a programming paradigm for processing large data sets: a mapper function processes key-value pairs and generates intermediate output, again in the form of key-value pairs, and a reduce function then operates on the mapper output, merging the values associated with the same intermediate key to generate the result.
The preceding figure demonstrates a simple word count MapReduce job (a code sketch follows the list), where:
- There is a huge Big Data store, which can run to petabytes or even zettabytes.
- Input datasets or files are split into blocks of a configured size, and each block is replicated onto multiple nodes in the Hadoop cluster depending upon the replication factor.
- Each mapper job counts the number of words on the data blocks allocated to it.
- Once a mapper is done, the words (which are actually the keys) and their counts are stored in a local file on the mapper node.
- The reducers then run the reduce function, which combines the mapper outputs by merging the counts for each word, and the final results are generated.
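To make the flow concrete, here is a minimal sketch of the word count job written against Hadoop's Java MapReduce API. The Mapper and Reducer contracts are the standard ones from the org.apache.hadoop.mapreduce package; the class name and the input/output paths are illustrative:

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits a (word, 1) pair for every word in its input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums all the intermediate counts that share the same word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on mapper nodes
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The mapper emits (word, 1) pairs, the framework groups them by key, and the reducer sums the values for each word, mirroring the steps listed above.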
Big data solutions such as Hadoop, as we know, can process and generate results from humongous volumes of data, but they are predominantly batch processing systems and have almost no utility in real-time use cases.
A custom solution
Here we talk about a solution that was used in the social media world before we had a scalable framework such as Storm. A simplistic version of the problem could be that you need a real-time count of the tweets by each user; Twitter solved the problem using the mechanism shown in the following figure:
Here is how the preceding mechanism works in detail:
- The custom solution has a firehose or queue onto which all the tweets are pushed.
- A set of worker nodes read data from the queue, parse the messages, and maintain counts of tweets by each user. The solution is scalable, as we can increase the number of workers to handle more load in the system, but the sharding algorithm that randomly distributes the data among these worker nodes should ensure an equal distribution of data to all workers.
- These workers assimilate these first-level counts into the next set of queues.
- A second level of workers picks from these level-1 queues. Here, the data distribution among the workers is neither equal nor random. The load balancing or sharding logic has to ensure that tweets from the same user always go to the same worker so that the counts are correct. For example, let's assume we have users A, K, M, P, R, and L, and two workers, worker A and worker B. Tweets from users A, K, and M always go to worker A, and tweets from users P, R, and L go to worker B; so the tweet counts for A, K, and M are always maintained by worker A, as sketched below. Finally, these counts are dumped into a data store.
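To make the second-level sharding concrete, here is a small, hypothetical Java sketch (the names and the in-memory maps are purely illustrative, not Twitter's actual implementation). Hashing the user ID deterministically picks a worker, so tweets from the same user always land on the same worker:

```java
import java.util.HashMap;
import java.util.Map;

public class UserShardingSketch {

  // Deterministic shard choice: the same user always maps to the same worker.
  // Math.floorMod avoids the negative results that hashCode() % n can produce.
  static int workerFor(String user, int numWorkers) {
    return Math.floorMod(user.hashCode(), numWorkers);
  }

  public static void main(String[] args) {
    int numWorkers = 2;
    // One count map per worker, standing in for each worker's local state.
    Map<Integer, Map<String, Integer>> workerCounts = new HashMap<>();

    // Authors of incoming tweets, standing in for messages read off the queues.
    String[] tweetAuthors = {"A", "K", "M", "P", "A", "R", "L", "K", "A"};

    for (String user : tweetAuthors) {
      int worker = workerFor(user, numWorkers);
      workerCounts.computeIfAbsent(worker, w -> new HashMap<>())
                  .merge(user, 1, Integer::sum);
    }

    // Each worker holds complete, consistent counts for the users it owns.
    workerCounts.forEach((w, counts) ->
        System.out.println("worker " + w + " -> " + counts));
  }
}
```

Note that this simple modulo scheme guarantees per-user consistency but not an even load; a user who tweets far more than the others can still overload one worker.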
The queue-worker solution described in the preceding points works fine for our specific use case, but it has the following serious limitations:
- It's very complex and specific to the use case
- Redeployment and reconfiguration are huge tasks
- Scaling is very tedious
- The system is not fault tolerant
Licensed proprietary solutions
After the open source Hadoop solution and the custom queue-worker solution, let's discuss the licensed, proprietary options in the market that cater to distributed real-time processing needs.
A lot of big companies have invested in such products because they clearly see where the future of computing is moving. They can foresee the demand for such solutions and support them in almost every vertical and domain. They have developed solutions and products that let us do complex batch and real-time computing, but these come at a heavy license cost. A few solutions worth naming come from companies such as:
- IBM: IBM has developed InfoSphere Streams for real-time ingestion, analysis, and correlation of data.
- Oracle: Oracle has a product called Real Time Decisions (RTD) that provides analysis, machine learning, and predictions in a real-time context.
- GigaSpaces: GigaSpaces has come up with a product called XAP that provides in-memory computation to deliver real-time results.
Other real-time processing tools
There are a few other technologies that have some traits and features similar to Apache Storm, such as S4 from Yahoo, but S4 lacks guaranteed message processing. Spark is essentially a batch processing system with some micro-batching features, which can be utilized for near-real-time results, as sketched below.
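To illustrate the micro-batching idea, here is a minimal sketch using Spark Streaming's Java API (Spark 2.x signatures assumed; the host and port are illustrative). Each "real-time" result is actually computed over a small one-second batch:

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

import scala.Tuple2;

public class MicroBatchWordCount {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("MicroBatchWordCount");

    // The batch interval is what makes this micro-batching rather than
    // true record-at-a-time streaming: results arrive once per second.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

    // Text stream read from a socket; host and port are illustrative.
    JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);

    JavaDStream<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());
    JavaPairDStream<String, Integer> counts =
        words.mapToPair(word -> new Tuple2<>(word, 1)).reduceByKey(Integer::sum);

    counts.print();          // print the counts for each one-second batch
    jssc.start();
    jssc.awaitTermination();
  }
}
```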