Who is creating big data?
Data is growing exponentially and comes from multiple sources that emit it continuously. In some domains, we have to analyze data produced by machines, sensors, equipment, and other data points. Some of the sources creating big data are listed as follows:
- Monitoring sensors: Climate or ocean wave monitoring sensors generate data continuously and in large volumes, and there can be millions of such sensors capturing data.
- Posts to social media sites: Social media websites such as Facebook, Twitter, and others hold petabytes of data.
- Digital pictures and videos posted online: Websites such as YouTube, Netflix, and others process petabytes of digital video and related data.
- Transaction records of online purchases: E-commerce sites such as eBay, Amazon, Flipkart, and others process thousands of transactions at a time.
- Server/application logs: Applications generate log data that grows constantly, and analyzing this data becomes difficult.
- CDRs (call data records): Roaming data and cell phone GPS signals, to name a few.
- Scientific research: Genomics, biogeochemical, biological, and other complex and/or interdisciplinary research.
Big data use cases
Let's look at a credit card issuer use case (demonstrated by MapR).
A credit card issuer wants to improve its existing recommendation system, which is lagging; faster recommendations could translate into potentially huge profits.
The existing system is an Enterprise Data Warehouse (EDW), which is very costly and slow at generating recommendations, which, in turn, limits potential profits. As Hadoop is cheaper and faster, it can generate far greater profits than the existing system.
Usually, data like the following is available for a credit card customer:
- Customer purchase history (large)
- Merchant designations
- Merchant special offers
Let's compare the existing EDW platform with a big data solution. The recommendation system is built using Mahout (a scalable machine learning library) and Solr/Lucene. Recommendations are based on a co-occurrence matrix implemented as a search index.
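To make the co-occurrence idea concrete, here is a minimal, framework-free Python sketch, not the actual Mahout/Solr implementation: in the real design, the matrix would be computed at scale with Mahout and loaded into a Solr/Lucene index to be queried as a search. The purchase histories, item names, and scoring below are illustrative assumptions only.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical purchase histories: one set of merchant categories per customer.
purchase_histories = [
    {"grocery", "fuel", "coffee"},
    {"grocery", "coffee", "books"},
    {"fuel", "coffee"},
]

# Build the co-occurrence matrix: how often two items appear together
# in the same customer's history.
cooccurrence = defaultdict(int)
for history in purchase_histories:
    for a, b in combinations(sorted(history), 2):
        cooccurrence[(a, b)] += 1
        cooccurrence[(b, a)] += 1

def recommend(item, top_n=3):
    """Return the items most frequently co-purchased with the given item."""
    scores = {b: count for (a, b), count in cooccurrence.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("coffee"))  # items most often bought alongside 'coffee'
```

In the production system, the rows of this matrix are stored as indexed documents, so generating a recommendation becomes a fast search query rather than a batch computation.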
The benchmarked processing time improved from 20 hours to just 3 hours, a more than six-fold reduction, as shown in the following image:
In the web tier, shown in the following image, the improvement is from 8 hours down to 3 minutes:
So, eventually, we can say that processing time decreases, revenue increases, and Hadoop offers a cost-effective solution; hence, profit increases, as shown in the following image: