Mastering Ceph

You're reading from Mastering Ceph: Redefine your storage system

Product type: Paperback
Published: May 2017
Publisher: Packt
ISBN-13: 9781785888786
Length: 240 pages
Edition: 1st Edition
Author: Nick Fisk
Table of Contents (12 Chapters)

Preface
1. Planning for Ceph
2. Deploying Ceph
3. BlueStore
4. Erasure Coding for Better Storage Efficiency
5. Developing with Librados
6. Distributed Computation with Ceph RADOS Classes
7. Monitoring Ceph
8. Tiering with Ceph
9. Tuning Ceph
10. Troubleshooting
11. Disaster Recovery

How Ceph works

The core storage layer in Ceph is RADOS (Reliable Autonomic Distributed Object Store), which, as the name suggests, provides a distributed object store on which the higher-level storage protocols are built. The RADOS layer in Ceph consists of a number of OSDs (Object Storage Daemons). Each OSD is completely independent and forms peer-to-peer relationships with the other OSDs to build the cluster. Each OSD is typically mapped to a single physical disk via a basic host bus adapter (HBA), in contrast to the traditional approach of presenting a number of disks to the OS via a Redundant Array of Independent Disks (RAID) controller.

The other key components in a Ceph cluster are the monitors; these are responsible for forming cluster quorum via the use of Paxos. By forming quorum, the monitors can be sure that they are in a state where they are allowed to make authoritative decisions for the cluster, avoiding split-brain scenarios. The monitors are not directly involved in the data path and do not have the same performance requirements as OSDs. They are mainly used to provide a known cluster state, including membership, configuration, and statistics, via the use of various cluster maps. These cluster maps are used by both Ceph cluster components and clients to describe the cluster topology and enable data to be safely stored in the right location.
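The monitors' view of the cluster can be queried directly from a client. The following short Python sketch, which is not from the book, uses the librados Python bindings to ask the monitors for their current quorum; the configuration file path is an assumption and may differ on your system.

import json
import rados

# Connect to the cluster using the (assumed) default configuration file.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Ask the monitors for the current quorum status as JSON.
cmd = json.dumps({"prefix": "quorum_status", "format": "json"})
ret, outbuf, errs = cluster.mon_command(cmd, b'')
if ret == 0:
    status = json.loads(outbuf)
    print("Monitors in quorum:", status["quorum_names"])

cluster.shutdown()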

Because of the scale at which Ceph is intended to operate, tracking the state and placement of every single object in the cluster would become computationally very expensive. Ceph solves this problem by using CRUSH (Controlled Replication Under Scalable Hashing) to place objects into groups of objects named placement groups (PGs). This reduces the need to track millions of objects down to a much more manageable number in the thousands range.
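The following Python sketch illustrates the idea only; it is not Ceph's actual hashing or CRUSH code. Object names hash into a fixed, small number of placement groups, so only the PG-to-OSD mapping ever needs to be tracked; the pool size of 128 PGs is purely an example value.

import hashlib

PG_NUM = 128  # example value; a real pool chooses its PG count at creation time

def object_to_pg(object_name):
    # Ceph actually uses its own hash (rjenkins) and a "stable mod";
    # a plain hash-and-modulo is enough to show the concept.
    digest = hashlib.md5(object_name.encode()).digest()
    return int.from_bytes(digest[:4], 'little') % PG_NUM

for name in ("image-0001", "image-0002", "backup.tar"):
    print(name, "-> pg", object_to_pg(name))

However many millions of objects are stored, the cluster only has to track where each of those 128 PGs lives.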

Librados is a Ceph library that can be used to build applications that interact directly with the RADOS cluster to store and retrieve objects.
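As a brief, hedged example of what this looks like, the following Python snippet uses the librados bindings to store and read back a single object; the pool name 'rbd' and the configuration file path are assumptions for illustration. Chapter 5, Developing with Librados, covers this in far more detail.

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed config path
cluster.connect()

ioctx = cluster.open_ioctx('rbd')  # open an I/O context on an existing pool
ioctx.write_full('hello_object', b'hello from librados')  # store an object
print(ioctx.read('hello_object'))  # read the object back

ioctx.close()
cluster.shutdown()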

For more information on how the internals of Ceph work, it is strongly recommended that you read the official Ceph documentation and also the thesis written by Sage Weil, the creator and primary architect of Ceph.
