
Supported filesystems


VMware ESXi supports two filesystems for virtual machine storage: Virtual Machine File System (VMFS) and Network File System (NFS).
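
A quick way to see which of these filesystems a host is actually using is the esxcli command line on the ESXi host itself. The following is a minimal check; the exact column layout can vary slightly between ESXi builds:

    # List every filesystem currently mounted by the host (VMFS, NFS, vfat, and so on)
    esxcli storage filesystem list

The Type column in the output distinguishes VMFS volumes from NFS mounts, which is often the first thing worth confirming when a datastore appears to be missing.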

VMFS

One of the most common ESXi storage configurations utilizes a purpose-built, high-performance clustered filesystem called VMFS. VMFS is a distributed storage architecture that facilitates concurrent read and write access from multiple ESXi hosts. Any supported SCSI-based block device, whether local, Fibre Channel, or network-attached, may be formatted as a VMFS datastore. See the protocol summary later in this section for more information on the various storage protocols vSphere supports.
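
To see how an existing VMFS datastore maps back to its underlying SCSI device, or to format a new one from the command line, something like the following can be used. This is a sketch only: the datastore label Datastore01 and the naa device identifier are placeholders, and the vmkfstools command assumes partition 1 already exists on a blank LUN (it is destructive to any data on that partition):

    # Show which SCSI device(s) back each VMFS datastore
    esxcli storage vmfs extent list

    # Format partition 1 of a device with VMFS5 and label it Datastore01 (placeholder values)
    vmkfstools -C vmfs5 -S Datastore01 /vmfs/devices/disks/naa.60000000000000000000000000000001:1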

NFS

NFS, like VMFS, is a distributed filesystem, and it has been around for nearly 20 years. NFS, however, is strictly network-attached and uses Remote Procedure Call (RPC) to access remote files just as if they were stored locally. vSphere, as it stands today, supports NFSv3 over TCP/IP, allowing the ESXi host to mount an NFS volume and use it for any storage need, including storage for virtual machines. An NFS datastore does not contain a VMFS partition; when utilizing NFS, the NAS storage array handles the underlying filesystem and exports a share to which ESXi simply attaches as a mount point.
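
Mounting an NFS export from the ESXi command line is a single esxcli call. In this sketch, the server address 192.168.1.50, the export path /vol/nfs_export, and the datastore name NFS01 are all placeholders for your own NAS details:

    # Mount an NFSv3 export as a datastore named NFS01
    esxcli storage nfs add -H 192.168.1.50 -s /vol/nfs_export -v NFS01

    # Confirm the mount and its state
    esxcli storage nfs list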

Raw disk

Although not technically a filesystem, vSphere also supports storing virtual machine guest files on a raw disk. This is configured by selecting Raw Device Mapping (RDM) when adding a new virtual disk to a VM. In general, this allows a guest OS to utilize its preferred filesystem directly on the SAN. An RDM may be presented in one of two compatibility modes: physical or virtual. In physical mode, all commands except REPORT LUNS are sent directly to the storage device; REPORT LUNS is masked in order to allow the VMkernel to isolate the LUN from the virtual machine. In virtual mode, only read and write commands are sent directly to the storage device, while the VMkernel handles all other commands from the virtual machine. Virtual mode allows you to take advantage of many of vSphere's features, such as file locking and snapshotting, whereas physical mode does not.
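
When an RDM mapping file is created by hand, the two compatibility modes correspond to the -z (physical) and -r (virtual) switches of vmkfstools. The device identifier and mapping file paths below are placeholders, and the mapping file itself must live on a VMFS datastore:

    # Physical compatibility mode RDM pointer file
    vmkfstools -z /vmfs/devices/disks/naa.6000000000000000000000000000000a /vmfs/volumes/Datastore01/vm1/vm1_rdm_physical.vmdk

    # Virtual compatibility mode RDM pointer file (allows vSphere snapshots of the disk)
    vmkfstools -r /vmfs/devices/disks/naa.6000000000000000000000000000000a /vmfs/volumes/Datastore01/vm1/vm1_rdm_virtual.vmdk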

The following summarizes each of the supported storage connections in vSphere:

Fibre Channel
Description: Remote blocks are accessed by encapsulating SCSI commands and data into FC frames that are transmitted over the FC network.
Filesystem support: VMFS (block)
Interface: Requires a dedicated Host Bus Adapter (HBA).
Load Balancing/Failover: Uses VMware's Pluggable Storage Architecture to provide standard path selection and failover mechanisms.
Security: Utilizes zoning between the hosts and the FC targets to isolate storage devices from hosts.

FCoE
Description: Remote blocks are accessed by encapsulating SCSI commands and data into Ethernet frames. FCoE shares many of the same characteristics as Fibre Channel, apart from the Ethernet transport.
Filesystem support: VMFS (block)
Interface: Requires either a hardware converged network adapter or a NIC with FCoE capabilities used in conjunction with the built-in software FCoE initiator.
Load Balancing/Failover: Uses VMware's Pluggable Storage Architecture to provide standard path selection and failover mechanisms.
Security: Utilizes zoning between the hosts and the FC targets to isolate storage devices from hosts.

iSCSI
Description: Remote blocks are accessed by encapsulating SCSI commands and data into TCP/IP packets that are transmitted over the Ethernet network.
Filesystem support: VMFS (block)
Interface: Requires either a dependent or independent hardware iSCSI initiator, or a NIC with iSCSI capabilities utilizing the built-in software iSCSI initiator and a VMkernel port.
Load Balancing/Failover: Utilizes VMware's Pluggable Storage Architecture as well as the built-in iSCSI port binding functionality.
Security: Utilizes Challenge Handshake Authentication Protocol (CHAP) to allow different hosts to see different LUNs.

NFS
Description: ESXi hosts access metadata and files located on the NFS server by utilizing file devices that are presented over the network.
Filesystem support: NFS (file)
Interface: Requires a NIC and the use of a VMkernel port.
Load Balancing/Failover: Because NFS uses a single session, no load balancing is available. Aggregate bandwidth can be achieved by manually accessing the NFS server across different paths, and failover can be configured only in an active/standby configuration.
Security: Depends on the NFS storage device. Most implement an access control list (ACL) type deployment to allow hosts to see certain NFS exports.
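
To verify which path selection policy the Pluggable Storage Architecture has applied to a block device, and which physical paths are currently active, the following read-only commands are a useful starting point (the output naturally depends on your storage configuration):

    # Show the path selection policy (Fixed, Most Recently Used, Round Robin) per device
    esxcli storage nmp device list

    # Show every physical path to every device, including its current state
    esxcli storage core path list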
