Java Concurrency and Parallelism

You're reading from Java Concurrency and Parallelism: Master advanced Java techniques for cloud-based applications through concurrency and parallelism

Product type: Paperback
Published: Aug 2024
Publisher: Packt
ISBN-13: 9781805129264
Length: 496 pages
Edition: 1st Edition

Author: Jay Wang
Table of Contents (20 chapters)

Preface
Part 1: Foundations of Java Concurrency and Parallelism in Cloud Computing
  Chapter 1: Concurrency, Parallelism, and the Cloud: Navigating the Cloud-Native Landscape
  Chapter 2: Introduction to Java’s Concurrency Foundations: Threads, Processes, and Beyond
  Chapter 3: Mastering Parallelism in Java
  Chapter 4: Java Concurrency Utilities and Testing in the Cloud Era
  Chapter 5: Mastering Concurrency Patterns in Cloud Computing
Part 2: Java's Concurrency in Specialized Domains
  Chapter 6: Java and Big Data – a Collaborative Odyssey
  Chapter 7: Concurrency in Java for Machine Learning
  Chapter 8: Microservices in the Cloud and Java’s Concurrency
  Chapter 9: Serverless Computing and Java’s Concurrent Capabilities
Part 3: Mastering Concurrency in the Cloud – The Final Frontier
  Chapter 10: Synchronizing Java’s Concurrency with Cloud Auto-Scaling Dynamics
  Chapter 11: Advanced Java Concurrency Practices in Cloud Computing
  Chapter 12: The Horizon Ahead
Index
Other Books You May Enjoy
Appendix A: Setting up a Cloud-Native Java Environment
Appendix B: Resources and Further Reading

Hadoop – the foundation for distributed data processing

As a Java developer, you’re in the perfect position to harness this power. Hadoop is built with Java, offering a rich set of tools and APIs to craft scalable big data solutions. Let’s dive into Hadoop’s core components: HDFS and MapReduce. Here’s a detailed explanation of each.

Hadoop Distributed File System

The Hadoop Distributed File System (HDFS) is the primary storage system used by Hadoop applications. It is designed to store massive amounts of data across multiple commodity hardware nodes, providing scalability and fault tolerance. The key characteristics of HDFS include the following:

  • Scaling out, not up: HDFS splits large files into smaller blocks (typically 128 MB) and distributes them across multiple nodes in a cluster. This allows for parallel processing and enables the system to handle files that are larger than the capacity of a single node (the short Java sketch after this list shows how a client interacts with HDFS).
  • Resilience...
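
To make this concrete, here is a minimal sketch of how a Java client might write and then read a file on HDFS using the org.apache.hadoop.fs API. The NameNode URI (hdfs://namenode:9000), the /data/sample.txt path, and the explicit 128 MB block size are illustrative assumptions, and the sketch assumes the Hadoop client libraries are on the classpath.

import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsQuickTour {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; point this at your own cluster.
        conf.set("fs.defaultFS", "hdfs://namenode:9000");
        // Ask for 128 MB blocks when creating files (the common HDFS default).
        conf.setLong("dfs.blocksize", 128L * 1024 * 1024);

        try (FileSystem fs = FileSystem.get(conf)) {
            Path path = new Path("/data/sample.txt");

            // Write: HDFS transparently splits the stream into blocks and
            // replicates each block across DataNodes for fault tolerance.
            try (FSDataOutputStream out = fs.create(path, true)) {
                out.write("hello, HDFS".getBytes(StandardCharsets.UTF_8));
            }

            // Read the file back; the client fetches blocks from whichever
            // DataNodes hold them.
            try (FSDataInputStream in = fs.open(path)) {
                IOUtils.copyBytes(in, System.out, 4096, false);
            }
        }
    }
}

The FileSystem abstraction hides block placement and replication from application code, which is what lets the same client logic scale from a single test node to a large cluster.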