vSphere High Performance Cookbook - Second Edition

Recipes to tune your vSphere for maximum performance

Product type: Paperback
Published: June 2017
ISBN-13: 9781786464620
Length: 338 pages
Edition: 2nd Edition
Authors (3): Christopher Kusek, Prasenjit Sarkar, Kevin Elder
Table of Contents (11 chapters)

Preface
1. CPU Performance Design
2. Memory Performance Design
3. Networking Performance Design
4. DRS, SDRS, and Resource Control Design
5. vSphere Cluster Design
6. Storage Performance Design
7. Designing vCenter on Windows for Best Performance
8. Designing VCSA for Best Performance
9. Virtual Machine and Virtual Environment Performance Design
10. Performance Tools

CPU scheduler - processor topology/cache-aware

The ESXi Server has an advanced CPU scheduler geared towards providing high performance, fairness, and isolation of VMs running on Intel/AMD x86 architectures.

The ESXi CPU scheduler is designed with the following objectives:

  • Performance isolation: Multi-VM fairness
  • Co-scheduling: The illusion that all vCPUs are concurrently online
  • Performance: High throughput, low latency, high scalability, and low overhead
  • Power efficiency: Saving power without losing performance
  • Wide adoption: Enabling all the optimizations on diverse processor architectures

There can be only one active process per physical CPU at any given instant. Multiple vCPUs can run on the same pCPU, just not at the same moment, and there are often more processes than CPUs, so queuing occurs. The scheduler is responsible for managing the queue, handling priorities, and preempting use of the CPU.

The main task of the CPU scheduler is to choose which world is scheduled to a processor. To give each world a chance to run, the scheduler allocates a time slice, the duration for which a world is allowed to execute (usually 10-20 ms; 50 ms for the VMkernel by default), to each world and then moves the world between the run, wait, co-stop, and ready states.

ESXi implements a proportional share-based algorithm: each world is associated with a share of the CPU resource across all VMs. This share is called the entitlement and is calculated from the user-provided resource specifications, namely shares, reservations, and limits. A toy illustration of this idea follows.
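The following is a small, purely illustrative Python sketch of the idea, not the actual ESXi algorithm: host CPU capacity is split in proportion to shares, with each VM's reservation acting as a floor and its limit as a cap. All names and numbers here are hypothetical.

    # Toy model only: a simplified proportional share-based split of host CPU
    # capacity. The real ESXi entitlement calculation is considerably more involved.
    def entitlements(vms, capacity_mhz):
        """Split capacity_mhz across VMs in proportion to their shares,
        honouring each VM's reservation (floor) and limit (cap)."""
        total_shares = sum(v["shares"] for v in vms)
        result = {}
        for v in vms:
            share_based = capacity_mhz * v["shares"] / total_shares
            result[v["name"]] = min(max(share_based, v["reservation"]), v["limit"])
        return result

    print(entitlements(
        [{"name": "vm1", "shares": 2000, "reservation": 500, "limit": 4000},
         {"name": "vm2", "shares": 1000, "reservation": 0, "limit": 2000}],
        capacity_mhz=6000))
    # {'vm1': 4000.0, 'vm2': 2000.0}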

Getting ready

To step through this recipe, you need a running ESXi Server, a VM that is powered off, and the vSphere Web Client. No other prerequisites are required.

How to do it...

Let's get started:

  1. Open up vSphere Web Client.
  2. On the home screen, navigate to Hosts and Clusters.
  3. Expand the left-hand navigation list.
  4. In the VM inventory, right-click on the virtual machine and click on Edit Settings. The Virtual Machine Edit Settings dialog box appears.
  5. Click on the VM Options tab.
  6. Under the Advanced section, click on Edit Configuration.
  7. At the bottom, enter sched.cpu.vsmpConsolidate as Name and True as Value, then click on Add.
  8. The final screen should look like the following screenshot. Once it does, click on OK to save the setting. (The same change can also be made programmatically, as sketched below.)
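If you prefer to automate this step, the same advanced setting can be added through the vSphere API. The following is a minimal pyVmomi sketch, not part of the original recipe; the vCenter address, the credentials, and the VM name web01 are placeholders, and error handling is omitted:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to vCenter (placeholder address and credentials).
    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Locate the powered-off VM by name.
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "web01")

    # Append sched.cpu.vsmpConsolidate=TRUE to the VM's advanced configuration.
    spec = vim.vm.ConfigSpec(extraConfig=[
        vim.option.OptionValue(key="sched.cpu.vsmpConsolidate", value="TRUE")])
    vm.ReconfigVM_Task(spec)  # returns a Task; wait on it in real automation

    Disconnect(si)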

How it works...

The CPU scheduler uses processor topology information to optimize the placement of vCPUs onto different sockets.

Cores within a single socket typically share a last-level cache. This shared last-level cache can improve vCPU performance if the VM is running memory-intensive workloads whose vCPUs benefit from sharing cached data.

By default, the CPU scheduler spreads the load across all the sockets in under-committed systems. This improves performance by maximizing the aggregate amount of cache available to the running vCPUs. For the cache-sharing, memory-intensive workloads described above, however, it can be beneficial to schedule all of a VM's vCPUs on the same socket, with a shared last-level cache, even when the ESXi host is under-committed. In such scenarios, you can override the default behavior of spreading vCPUs across packages by adding the following option to the VM's VMX configuration file: sched.cpu.vsmpConsolidate=TRUE. However, it is usually better to stick with the default behavior.
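Whether consolidation is worthwhile depends on the host's socket and core layout, since each socket typically has its own last-level cache. The short pyVmomi sketch below (again with a placeholder vCenter address and credentials) lists the processor topology of every host, so you can see what the scheduler has to work with:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Walk every ESXi host and print its socket/core/thread counts.
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        cpu = host.hardware.cpuInfo
        print("{}: {} sockets, {} cores, {} threads".format(
            host.name, cpu.numCpuPackages, cpu.numCpuCores, cpu.numCpuThreads))

    Disconnect(si)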
