Mastering MongoDB 6.x
Expert techniques to run high-volume and fault-tolerant database solutions using MongoDB 6.x

Author: Alex Giamas
Publisher: Packt, August 2022
Paperback, 460 pages, 3rd Edition
ISBN-13: 9781803243863
Table of Contents

Preface
Part 1 – Basic MongoDB – Design Goals and Architecture
  Chapter 1: MongoDB – A Database for the Modern Web
  Chapter 2: Schema Design and Data Modeling
Part 2 – Querying Effectively
  Chapter 3: MongoDB CRUD Operations
  Chapter 4: Auditing
  Chapter 5: Advanced Querying
  Chapter 6: Multi-Document ACID Transactions
  Chapter 7: Aggregation
  Chapter 8: Indexing
Part 3 – Administration and Data Management
  Chapter 9: Monitoring, Backup, and Security
  Chapter 10: Managing Storage Engines
  Chapter 11: MongoDB Tooling
  Chapter 12: Harnessing Big Data with MongoDB
Part 4 – Scaling and High Availability
  Chapter 13: Mastering Replication
  Chapter 14: Mastering Sharding
  Chapter 15: Fault Tolerance and High Availability
Index
Other Books You May Enjoy

Big data use case with servers on-premises

Putting all of this into action, we will develop a fully working system using a data source, a Kafka message broker, an Apache Spark cluster on top of HDFS feeding a Hive table, and a MongoDB database. Our Kafka message broker will ingest data from an API, streaming market data for the Monero (XMR)/Bitcoin (BTC) currency pair. This data will be passed on to an Apache Spark algorithm on HDFS that predicts the price at the next ticker timestamp, based on the following factors (sketched in code after the list):

  • The corpus of historical prices already stored on HDFS
  • The streaming market data arriving from the API
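
To make the two inputs concrete, here is a minimal PySpark sketch, assuming a Hive table named market.xmr_btc_history and a Kafka topic named xmr_btc_ticks; these names, the column names, and the toy average are illustrative assumptions, not the book's exact setup:

```python
# Minimal PySpark sketch of the two prediction inputs, assuming a Hive table
# "market.xmr_btc_history" and a Kafka topic "xmr_btc_ticks" (both names are
# illustrative, not from the book).
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg

spark = (
    SparkSession.builder
    .appName("xmr_btc_predictor")
    .enableHiveSupport()  # read the Hive table that fronts the HDFS corpus
    .getOrCreate()
)

# Input 1: the corpus of historical prices already stored on HDFS (via Hive).
history = spark.table("market.xmr_btc_history")
mean_price = history.agg(avg("price")).first()[0]  # toy stand-in for a model

# Input 2: the streaming market data arriving from the API through Kafka.
# Requires the spark-sql-kafka connector package on the classpath.
ticks = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "xmr_btc_ticks")
    .load()
    .selectExpr("CAST(value AS STRING) AS tick_json")
)
```

Whatever model replaces the toy average, the shape stays the same: a static DataFrame for the historical corpus and a streaming DataFrame for the live ticks.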

This predicted price will then be stored in MongoDB using the MongoDB Connector for Hadoop. MongoDB will also receive data directly from the Kafka message broker, storing it in a special collection whose documents expire 1 minute after insertion (a TTL collection; a minimal sketch follows). This collection will hold the latest orders, which our system will use to buy or sell, using...
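
As an illustration of the expiring collection, here is a minimal pymongo sketch using MongoDB's TTL index feature; the database, collection, and field names are assumptions for the example:

```python
# Minimal pymongo sketch of the expiring "latest orders" collection. The
# database, collection, and field names are assumptions for the example.
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["trading"]["latest_orders"]

# A TTL index makes MongoDB delete documents once their "createdAt" value
# is more than 60 seconds in the past.
orders.create_index("createdAt", expireAfterSeconds=60)

orders.insert_one({
    "pair": "XMR/BTC",
    "price": 0.0021,  # illustrative value
    "createdAt": datetime.now(timezone.utc),  # must be a BSON date for TTL
})
```

Note that MongoDB's TTL monitor runs about once every 60 seconds, so documents disappear shortly after expiring rather than at the exact moment.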
