High Availability in vRealize Operations 6.6

One of the features introduced in vRealize Operations 6.0 was the ability to configure the cluster in an HA mode to prevent data loss, and it remains an impressive feature in vRealize Operations 6.6. Enabling HA makes two major changes to the Operations Manager cluster:

  • The primary effect of HA is that all sharded data is duplicated by the Controller layer to a primary and backup copy in both the GemFire cache and GemFire Persistence layers.
  • The secondary effect is that a master replica is created on a chosen data node for replication of the database. This node then takes over the role of the master node in the event that the original master fails (a simple model of this promotion is sketched after this list).
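
To make the second of these changes concrete, here is a minimal Python sketch of a master replica being promoted when the original master node goes offline. All of the class, function, and node names are invented for illustration; this is not vRealize Operations code, just a model of the behaviour described above.

    # Hypothetical sketch (names invented, not vRealize Operations code) of the
    # master/master-replica roles that enabling HA introduces.
    class Node:
        def __init__(self, name, role):
            self.name = name          # e.g. "node-1"
            self.role = role          # "master", "master-replica", or "data"
            self.online = True

    def promote_replica(nodes):
        """If the master is offline, promote the master replica in its place."""
        master = next(n for n in nodes if n.role == "master")
        if master.online:
            return master
        replica = next(n for n in nodes if n.role == "master-replica" and n.online)
        replica.role = "master"
        return replica

    nodes = [Node("node-1", "master"),
             Node("node-2", "master-replica"),
             Node("node-3", "data")]
    nodes[0].online = False                    # the original master fails
    print(promote_replica(nodes).name)         # -> node-2 takes over as master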

How do HA and data duplication work?

As we just said, HA duplicates all incoming resource data so that two copies exist instead of one in both the GemFire cache and Persistence layers. This is done by creating a secondary copy of each piece of data; the secondary copy is used in queries if the node hosting the primary copy is unavailable.

It is important to note that because HA simply creates a secondary copy of each piece of data, only one node failure can be sustained at a time (N-1) without data loss, regardless of the cluster size. If a node is down, a new secondary shard of the data is not created unless the original node is removed from the cluster permanently.
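
The N-1 limit follows directly from there being exactly two copies of each shard. The following hypothetical Python sketch (the placement scheme and node names are invented purely for illustration and do not reflect how GemFire actually distributes shards) shows why any single node failure is survivable, while a simultaneous two-node failure can lose data.

    # Each shard gets a primary and a secondary copy on two different nodes.
    def place_shard(shard_id, nodes):
        """Return (primary, secondary) nodes for a shard, always distinct."""
        primary = nodes[shard_id % len(nodes)]
        secondary = nodes[(shard_id + 1) % len(nodes)]
        return primary, secondary

    nodes = ["node-1", "node-2", "node-3", "node-4"]
    placement = {s: place_shard(s, nodes) for s in range(8)}

    def lost_shards(failed_nodes):
        """A shard is lost only if BOTH of its copies sit on failed nodes."""
        return [s for s, copies in placement.items()
                if set(copies) <= set(failed_nodes)]

    print(lost_shards(["node-2"]))             # [] -> any single failure is safe
    print(lost_shards(["node-2", "node-3"]))   # [1, 5] -> beyond N-1, data is lost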

When a failed node becomes available again, the node is placed into recovery mode. During this time, data is synchronized with the other cluster members, and once the synchronization is complete, the node is returned to active status:

Let's run through this process using the preceding diagram as an example of how incoming data, or the creation of a new object, is handled in an HA configuration. In the diagram, R3 represents our new resource and R3' represents its secondary copy (a condensed code sketch of these steps follows the list):

  1. A running adapter instance receives data from vCenter. A new resource is required for the new object, so a discovery task is created.
  2. The discovery task is passed to the cluster. This task could be passed to any one node in the cluster, and once assigned, that node is responsible for completing the task.
  3. A new analytics item is created for the new object in the GemFire cache on any node in the cluster.
  4. A secondary copy of the data is created on a different node to protect against failure.
  5. The system then saves the data to the Persistence layer. The object is created in the inventory (HIS), and its statistics are stored in the FSDB.
  6. A secondary copy of the saved (GemFire Persistence sharding) HIS and FSDB data is stored on a different node to protect against data loss.
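
As a rough illustration of steps 1 through 6, here is a condensed, hypothetical Python sketch. The node names, the GEMFIRE_CACHE and PERSISTENCE dictionaries, and the helper functions are all invented for this example and do not reflect vRealize Operations internals; the sketch only mirrors the ordering of the steps above.

    import random

    NODES = ["node-1", "node-2", "node-3"]
    GEMFIRE_CACHE = {n: {} for n in NODES}     # in-memory analytics layer
    PERSISTENCE = {n: {} for n in NODES}       # HIS inventory + FSDB statistics

    def two_distinct_nodes():
        """Pick the node holding the primary copy plus a different backup node."""
        primary = random.choice(NODES)
        backup = random.choice([n for n in NODES if n != primary])
        return primary, backup

    def handle_discovery_task(resource_id, metrics):
        # Steps 2-3: the task is assigned to one node, which creates the
        # analytics item for the new object in the GemFire cache.
        cache_primary, cache_backup = two_distinct_nodes()
        GEMFIRE_CACHE[cache_primary][resource_id] = metrics
        # Step 4: a secondary copy of the cached data goes to a different node.
        GEMFIRE_CACHE[cache_backup][resource_id + "'"] = metrics
        # Step 5: the data is saved to the persistence layer (HIS + FSDB).
        disk_primary, disk_backup = two_distinct_nodes()
        PERSISTENCE[disk_primary][resource_id] = metrics
        # Step 6: the persisted HIS/FSDB data is also sharded to another node.
        PERSISTENCE[disk_backup][resource_id + "'"] = metrics

    # Step 1: the adapter sees a new vCenter object and a discovery task is created.
    handle_discovery_task("R3", {"cpu|usage": 42.0})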

The following diagram shows the equivalent process, but this time for a non-HA setup:

In a non-HA scenario, the following happens when a new object is discovered (a short sketch follows the list):

  1. A new object is discovered by the adapter, which is located in the Collector.
  2. The Collector receives the object’s metric and property information from the adapter.
  3. The Collector sends the object information to the Controller.
  4. The global database is updated with the new object type information. The object is created in the central database.
  5. The object is also cached by the analytics component.
  6. The Alerts database is updated with object information.
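
For contrast with the HA flow, here is an equally hypothetical sketch of the non-HA path. The dictionary names stand in for the central (global) database, the analytics cache, and the alerts database from the list above; none of them are real vRealize Operations structures.

    # Hypothetical, self-contained model of the non-HA flow: each component
    # holds a single copy of the object, so there is no duplication step.
    central_db, analytics_cache, alerts_db = {}, {}, {}

    def on_object_discovered(obj_id, metrics, properties):
        # Steps 1-3: adapter -> Collector -> Controller (modelled as this call).
        # Step 4: the object is created in the central (global) database.
        central_db[obj_id] = {"metrics": metrics, "properties": properties}
        # Step 5: the analytics component caches the object.
        analytics_cache[obj_id] = metrics
        # Step 6: the alerts database is updated with the object's information.
        alerts_db[obj_id] = {"alerts": []}

    on_object_discovered("vm-101", {"cpu|usage": 12.0}, {"guest_os": "linux"})

The absence of any secondary copy in this path is exactly what enabling HA changes in the flow shown earlier.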