
How-To Tutorials - Virtualization

115 Articles

VMware vRealize Operations Performance and Capacity Management

Packt
08 May 2015
4 min read
Virtualization is what allows companies like Dropbox and Spotify to operate internationally with ever-growing customer bases. From virtualizing desktops, applications, and operating systems to creating highly available platforms that let developers quickly host operating systems and entire content delivery networks, this book centers on the tools, techniques, and platforms that administrators and developers use to decouple hardware and infrastructure resources and put them to work powering applications and web services.

Key pointers

- vCenter, vSphere, VMware, VM, Virtualization, SDDC
- Counters, key counters, metric groups, vRealize, ESXi
- Cluster, Datastore, Datastore Cluster, Datacenter
- CPU, Network, Disk, Storage, Contention, Utilization, Memory
- vSwitch, vMotion, Capacity Management, Performance Management, Dashboards, vC Ops

What the book covers

Content-wise, the book is split into two main parts. The first part provides the foundation and theory; the second provides the solutions and sample use cases. It aims to clear up the misunderstandings that customers have about the SDDC. It explains why a VM is radically different from a physical server, and hence why a virtual data center is fundamentally different from a physical data center, and then covers the aspects of management that are affected. The second part covers the practical side, showing how sample solutions are implemented. The chapters cover both performance management and capacity management.

How the book differs

Virtualization is one of the biggest shifts in IT history. Almost all large enterprises are embarking on a journey to transform the IT department into a service provider. VMware vRealize Operations Management is a suite of products that automates operations management using patented analytics and an integrated approach to performance, capacity, and configuration management.
vCenter Operations Manager, the most important component of this suite, helps administrators maintain and troubleshoot their VMware environment as well as their physical environment. Written in light, easy-to-follow language, the book stands out by covering the complex topic of managing performance and capacity when the data center is software-defined. It sets the foundation by demystifying deep-rooted misunderstandings about virtualization and virtual machines.

How the book will help you

- Master the not-so-obvious differences between a physical server and a virtual machine that customers struggle with when managing a virtual data center
- Educate and convince your peers on why and how performance and capacity management change in a virtual data center
- Correct many misperceptions about virtualization
- Know how your peers operationalize their vRealize Operations
- Master all the key metrics in vSphere and vRealize Operations
- Be confident in performance troubleshooting with vSphere and vRealize Operations
- See real-life examples of how super metrics and advanced dashboards make management easier
- Develop rich, custom dashboards with interaction and super metrics
- Unlearn the knowledge that makes performance and capacity management difficult in the SDDC
- Master the counters in vCenter and vRealize Operations by knowing what they mean and how they interdepend
- Build rich dashboards using a practical, easy-to-follow approach supported by real-life examples

Summary

The book teaches how to get the best out of vCenter Operations when managing performance and capacity in a software-defined data center. It starts by explaining the difference between a software-defined data center and a classic physical data center, and how that difference impacts both architecture and operations. From this strategic view, the book then zooms into the most common challenge, which is performance management.
The book then covers all the key counters in both vSphere and vRealize Operations, explains their dependencies, and provides practical guidance on the values you should expect in a healthy environment. At the end, the book puts the theory together and provides real-life examples created together with customers. This book is an invaluable resource for those embarking on a journey to master virtualization.


Symmetric Messages and Asynchronous Messages (Part 1)

Packt
05 May 2015
31 min read
In this article, Kingston Smiler. S, author of the book OpenFlow Cookbook, describes the steps involved in sending and processing symmetric messages and asynchronous messages in the switch, covering the following recipes:

- Sending and processing a hello message
- Sending and processing an echo request and a reply message
- Sending and processing an error message
- Sending and processing an experimenter message
- Handling a Get Asynchronous Configuration message from the controller, which is used to fetch the list of asynchronous events that will be sent from the switch
- Sending a packet-in message to the controller
- Sending a flow-removed message to the controller
- Sending a port-status message to the controller
- Sending a controller role-status message to the controller
- Sending a table-status message to the controller
- Sending a request-forward message to the controller
- Handling a packet-out message from the controller
- Handling a barrier message from the controller

Symmetric messages can be sent from both the controller and the switch without any solicitation between them. The OpenFlow switch should be able to send and process the following symmetric messages to or from the controller (error messages, however, are not processed by the switch):

- Hello message
- Echo request and echo reply message
- Error message
- Experimenter message

Asynchronous messages are sent by both the controller and the switch when there is any state change in the system. Like symmetric messages, asynchronous messages should also be sent without any solicitation between the switch and the controller.
The switch should be able to send the following asynchronous messages to the controller:

- Packet-in message
- Flow-removed message
- Port-status message
- Table-status message
- Controller role-status message
- Request-forward message

Similarly, the switch should be able to receive and process the following controller-to-switch messages:

- Packet-out message
- Barrier message

The controller can program or instruct the switch to send only a subset of interesting asynchronous messages, using an asynchronous configuration message. Based on this configuration, the switch should send only that subset of asynchronous messages via the communication channel. The switch should replicate and send asynchronous messages to all the controllers, based on the information present in the asynchronous configuration message sent from each controller, and should maintain the asynchronous configuration information on a per-communication-channel basis.

Sending and processing a hello message

The OFPT_HELLO message is used by both the switch and the controller to identify and negotiate the OpenFlow version supported by both devices. Hello messages should be sent from the switch once the TCP/TLS connection is established, and are considered part of the communication channel establishment procedure. The switch should send a hello message to the controller immediately after establishing the TCP/TLS connection with the controller.

How to do it...

As hello messages are transmitted by both the switch and the controller, the switch should be able to send, receive, and process the hello message. The following sections explain these procedures in detail.

Sending the OFPT_HELLO message

The message format to be used to send the hello message from the switch is as follows. This message includes the OpenFlow header along with zero or more elements that have variable size:

/* OFPT_HELLO. This message includes zero or more
   hello elements having variable size. */
struct ofp_hello {
    struct ofp_header header;
    /* Hello element list */
    struct ofp_hello_elem_header elements[0]; /* List of elements */
};

The version field in the ofp_header should be set to the highest OpenFlow protocol version supported by the switch. The elements field is optional and might contain the element definition, which takes the following TLV format:

/* Version bitmap Hello Element */
struct ofp_hello_elem_versionbitmap {
    uint16_t type;   /* OFPHET_VERSIONBITMAP. */
    uint16_t length; /* Length in bytes of this element. */
    /* Followed by:
     * - Exactly (length - 4) bytes containing the bitmaps, then
     * - Exactly (length + 7)/8*8 - (length) (between 0
     *   and 7) bytes of all-zero bytes */
    uint32_t bitmaps[0]; /* List of bitmaps - supported versions */
};

The type field should be set to OFPHET_VERSIONBITMAP. The length field should be set to the length of this element. The bitmaps field should be set to the list of the OpenFlow versions the switch supports; the number of bitmaps included depends on the highest version number supported by the switch. ofp_versions 0 to 31 are encoded in the first bitmap, ofp_versions 32 to 63 in the second bitmap, and so on. For example, if the switch supports only version 1.0 (ofp_version 0x01) and version 1.3 (ofp_version 0x04), then the first bitmap should be set to 0x00000012. Refer to the send_hello_message() function in the of/openflow.c file for the procedure to build and send the OFPT_HELLO message.

Receiving the OFPT_HELLO message

The switch should be able to receive and process OFPT_HELLO messages sent from the controller. The controller uses the same message format, structures, and enumerations as defined in the previous section of this recipe.
Once the switch receives the hello message, it should calculate the protocol version to be used for messages exchanged with the controller, as follows:

- If the hello message received from the controller contains an optional OFPHET_VERSIONBITMAP element and the bitmap field contains a valid value, then the negotiated version should be the highest version common to the switch's supported protocol versions and the versions set in the bitmap field of the OFPHET_VERSIONBITMAP element.
- If the hello message doesn't contain any OFPHET_VERSIONBITMAP element, then the negotiated version should be the smaller of the switch-supported protocol version and the version field set in the OpenFlow header of the received hello message.

If the negotiated version is supported by the switch, then the OpenFlow connection between the controller and the switch continues. Otherwise, the switch should send an OFPT_ERROR message with the type field set to OFPET_HELLO_FAILED, the code field set to OFPHFC_INCOMPATIBLE, and an optional ASCII string explaining the situation in the data field, and then terminate the connection.

There's more…

Once the switch and the controller have negotiated the OpenFlow protocol version to be used, the connection setup procedure is complete. From then on, both the controller and the switch can send OpenFlow protocol messages to each other.

Sending and processing an echo request and a reply message

Echo request and reply messages are used by both the controller and the switch to maintain and verify the liveness of the controller-switch connection. Echo messages are also used to calculate the latency and bandwidth of the controller-switch connection. On reception of an echo request message, the switch should respond with an echo reply message.

How to do it...

As echo messages are transmitted by both the switch and the controller, the switch should be able to send, receive, and process them.
The following section explains these procedures in detail.

Sending the OFPT_ECHO_REQUEST message

The OpenFlow specification doesn't state how frequently an echo message has to be sent from the switch; the switch might choose to send an echo request message to the controller periodically, at a configured interval. Similarly, the specification doesn't state what the timeout (the longest period of time the switch should wait) for receiving an echo reply message from the controller should be. After sending an echo request message to the controller, the switch should wait for the echo reply message for the configured timeout period; if the switch doesn't receive the echo reply message within this period, it should initiate the connection interruption procedure.

The OFPT_ECHO_REQUEST message contains an OpenFlow header followed by an undefined data field of arbitrary length. The data field might be filled with the timestamp at which the echo request message was sent (for latency measurement), with various lengths or values (to measure bandwidth), or might be zero-size (for just checking the liveness of the connection). In most open source implementations of OpenFlow, the echo request message contains only the header field and no body. Refer to the send_echo_request() function in the of/openflow.c file for the procedure to build and send the echo_request message.

Receiving OFPT_ECHO_REQUEST

The switch should be able to receive and process OFPT_ECHO_REQUEST messages sent from the controller. The controller uses the same message format, structures, and enumerations as defined in the previous section of this recipe. Once the switch receives the echo request message, it should build the OFPT_ECHO_REPLY message, which consists of an ofp_header and an arbitrary-length data field. While forming the echo reply message, the switch should copy the content present in the arbitrary-length field of the request message into the reply message.
Refer to the process_echo_request() function in the of/openflow.c file for the procedure to handle an echo request message and send the echo reply.

Processing the OFPT_ECHO_REPLY message

The switch should be able to receive the echo reply message from the controller. If the switch sent the echo request message to calculate latency or bandwidth, then on receiving the echo reply message it should parse the arbitrary-length data field and compute the bandwidth, latency, and so on.

There's more…

If the OpenFlow switch implementation is divided into multiple layers, then the processing of the echo request and reply should be handled in the deepest possible layer. For example, if the implementation is divided into user-space processing and kernel-space processing, then echo request and reply handling should live in the kernel space.

Sending and processing an error message

Error messages are used by both the controller and the switch to notify the other end of the connection about any problem. Error messages are typically used by the switch to inform the controller about a failure to execute a request sent from the controller.

How to do it...

Whenever the switch wants to send an error message to the controller, it should build the OFPT_ERROR message, which takes the following format:

/* OFPT_ERROR: Error message (datapath -> controller). */
struct ofp_error_msg {
    struct ofp_header header;
    uint16_t type;
    uint16_t code;
    uint8_t data[0]; /* Variable-length data. Interpreted based
                        on the type and code. No padding. */
};

The type field indicates the high-level type of error. The code value is interpreted based on the type. The data field is variable-length data that is interpreted based on both the type and the code; it should contain an ASCII text string that adds detail about why the error occurred.
Unless specified otherwise, the data field should contain at least 64 bytes of the failed message that caused this error. If the failed message is shorter than 64 bytes, then the data field should contain the full message without any padding. If the switch needs to send an error message in response to a specific message from the controller (say, OFPET_BAD_REQUEST, OFPET_BAD_ACTION, OFPET_BAD_INSTRUCTION, OFPET_BAD_MATCH, or OFPET_FLOW_MOD_FAILED), then the xid field of the OpenFlow header in the error message should be set to that of the offending request message. Refer to the send_error_message() function in the of/openflow.c file for the procedure to build and send an error message. If the switch sends an error message for a request message from the controller (because of an error condition), then the switch need not send a reply message to that request.

Sending and processing an experimenter message

Experimenter messages provide a way for the switch to offer additional, vendor-defined functionality.

How to do it...

The controller sends the experimenter message in the experimenter message format. Once the switch receives this message, it should invoke the appropriate vendor-specific functions.

Handling a Get Asynchronous Configuration message from the controller

The OpenFlow specification provides a mechanism for the controller to fetch the list of asynchronous events that will be sent from the switch over the controller channel. This is achieved by sending the Get Asynchronous Configuration message (OFPT_GET_ASYNC_REQUEST) to the switch.

How to do it...

The Get Asynchronous Configuration request message (OFPT_GET_ASYNC_REQUEST) doesn't have any body other than the ofp_header. On receiving this OFPT_GET_ASYNC_REQUEST message, the switch should respond with the OFPT_GET_ASYNC_REPLY message.
The switch should fill the property list with the asynchronous configuration events / property types that the relevant controller channel is preconfigured to receive; the switch should get this information from its internal data structures. Refer to the process_async_config_request() function in the of/openflow.c file for the procedure to process the Get Asynchronous Configuration request message from the controller.

Sending a packet-in message to the controller

Packet-in messages (OFPT_PACKET_IN) are sent from the switch to the controller to transfer a packet, received from one of the switch ports, to the controller for further processing. By default, a packet-in message should be sent to all the controllers that are in the equal (OFPCR_ROLE_EQUAL) and master (OFPCR_ROLE_MASTER) roles; it should not be sent to controllers that are in the slave state. There are three ways by which the switch can send a packet-in event to the controller:

- Table-miss entry: When there is no matching flow entry for the incoming packet, the switch can send the packet to the controller.
- TTL checking: When the TTL value in a packet reaches zero, the switch can send the packet to the controller.
- The "send to the controller" action in the matching entry (either the flow table entry or the group table entry) of the packet.

How to do it...

When the switch wants to send a packet received in its data path to the controller, the following message format should be used:

/* Packet received on port (datapath -> controller). */
struct ofp_packet_in {
    struct ofp_header header;
    uint32_t buffer_id;  /* ID assigned by datapath. */
    uint16_t total_len;  /* Full length of frame. */
    uint8_t reason;      /* Reason packet is being sent
                          * (one of OFPR_*) */
    uint8_t table_id;    /* ID of the table that was looked up */
    uint64_t cookie;     /* Cookie of the flow entry that was
                          * looked up. */
    struct ofp_match match; /* Packet metadata. Variable size. */
    /* The variable size and padded match is always followed by:
     * - Exactly 2 all-zero padding bytes, then
     * - An Ethernet frame whose length is inferred from header.length.
     * The padding bytes preceding the Ethernet frame ensure that the IP
     * header (if any) following the Ethernet header is 32-bit aligned. */
    uint8_t pad[2];  /* Align to 64 bit + 16 bit */
    uint8_t data[0]; /* Ethernet frame */
};

The buffer_id field should be set to the opaque value generated by the switch. When the packet is buffered, the data portion of the packet-in message should contain some bytes of data from the incoming packet. If the packet is sent to the controller because of the "send to the controller" action of a table entry, then the max_len field of ofp_action_output should be used as the size of the packet to be included in the packet-in message. If the packet is sent to the controller for any other reason, then the miss_send_len field of the OFPT_SET_CONFIG message should be used to determine the size of the packet. If the packet is not buffered, either because of unavailability of buffers or because of an explicit configuration via OFPCML_NO_BUFFER, then the entire packet should be included in the data portion of the packet-in message, with the buffer_id value set to OFP_NO_BUFFER.

The data field should be set to the complete packet or a fraction of it, and the total_len field should be set to the length of the packet included in the data field. The reason field should be set to one of the following values defined in the enumeration, based on the context that triggered the packet-in event:

/* Why is this packet being sent to the controller? */
enum ofp_packet_in_reason {
    OFPR_TABLE_MISS = 0,   /* No matching flow (table-miss
                            * flow entry). */
    OFPR_APPLY_ACTION = 1, /* Output to controller in
                            * apply-actions. */
    OFPR_INVALID_TTL = 2,  /* Packet has invalid TTL */
    OFPR_ACTION_SET = 3,   /* Output to controller in action set. */
    OFPR_GROUP = 4,        /* Output to controller in group bucket. */
    OFPR_PACKET_OUT = 5,   /* Output to controller in packet-out. */
};

If the packet-in message was triggered by a flow entry's "send to the controller" action, then the cookie field should be set to the cookie of the flow entry that caused the packet to be sent to the controller. This field should be set to -1 if the cookie cannot be associated with a particular flow.

When the packet-in message is triggered by the "send to the controller" action of a table entry, there is a possibility that some changes have already been applied to the packet in previous stages of the pipeline. This information needs to be carried along with the packet-in message, and it can be carried in the match field of the packet-in message with a set of OXM (short for OpenFlow Extensible Match) TLVs. If the switch includes an OXM TLV in the packet-in message, then the match field should contain a set of OXM TLVs that include context fields. The standard context fields that can be added into the OXM TLVs are OFPXMT_OFB_IN_PORT, OFPXMT_OFB_IN_PHY_PORT, OFPXMT_OFB_METADATA, and OFPXMT_OFB_TUNNEL_ID.

When the switch receives the packet on a physical port, and this packet information needs to be carried in the packet-in message, then OFPXMT_OFB_IN_PORT and OFPXMT_OFB_IN_PHY_PORT should have the same value: the OpenFlow port number of that physical port. When the switch receives the packet on a logical port and this packet information needs to be carried in the packet-in message, then the switch should set the logical port's port number in OFPXMT_OFB_IN_PORT and the physical port's port number in OFPXMT_OFB_IN_PHY_PORT. For example, consider a packet received on a tunnel interface defined over a Link Aggregation Group (LAG) with two member ports.
Then the packet-in message should carry the tunnel interface's port_no in the OFPXMT_OFB_IN_PORT field and the physical interface's port_no in the OFPXMT_OFB_IN_PHY_PORT field. Refer to the send_packet_in_message() function in the of/openflow.c file for the procedure to send a packet-in message event to the controller.

How it works...

The switch can send either the entire packet it receives from the switch port to the controller, or only a fraction of the packet. When the switch is configured to send only a fraction of the packet, it should buffer the packet in its memory and send a portion of the packet data; this is controlled by the switch configuration. If the switch is configured to buffer the packet, and it has sufficient memory to do so, then the packet-in message should contain the following:

- A fraction of the packet. The size of the packet to be included in the packet-in message is configured via the switch configuration message, and by default it is 128 bytes. When the packet-in message results from a table-entry action, the output action itself can specify the size of the packet to be sent to the controller; for all other packet-in messages, it is defined in the switch configuration.
- The buffer ID to be used by the controller when it wants to forward the message at a later point in time.

There's more…

A switch that implements buffering is expected to expose some details, such as the amount of available buffers and the period of time for which buffered data will be available, through documentation. The switch should implement a procedure to release the buffered packet when there is no response from the controller to the packet-in event.

Sending a flow-removed message to the controller

A flow-removed message (OFPT_FLOW_REMOVED) is sent from the switch to the controller when a flow entry is removed from the flow table.
This message should be sent to the controller only when the OFPFF_SEND_FLOW_REM flag in the flow entry is set. The switch should send this message only on the controller channel through which the controller asked the switch to send this event; the controller can express its interest in receiving this event by sending the switch configuration message to the switch. By default, OFPT_FLOW_REMOVED should be sent to all the controllers that are in the equal (OFPCR_ROLE_EQUAL) and master (OFPCR_ROLE_MASTER) roles; it should not be sent to a controller that is in the slave state.

How to do it...

When the switch removes an entry from the flow table, it should build an OFPT_FLOW_REMOVED message with the following format and send it to the controllers that have already shown interest in this event:

/* Flow removed (datapath -> controller). */
struct ofp_flow_removed {
    struct ofp_header header;
    uint64_t cookie;        /* Opaque controller-issued identifier. */
    uint16_t priority;      /* Priority level of flow entry. */
    uint8_t reason;         /* One of OFPRR_*. */
    uint8_t table_id;       /* ID of the table */
    uint32_t duration_sec;  /* Time flow was alive in seconds. */
    uint32_t duration_nsec; /* Time flow was alive in nanoseconds
                             * beyond duration_sec. */
    uint16_t idle_timeout;  /* Idle timeout from original flow mod. */
    uint16_t hard_timeout;  /* Hard timeout from original flow mod. */
    uint64_t packet_count;
    uint64_t byte_count;
    struct ofp_match match; /* Description of fields. Variable size. */
};

The cookie field should be set to the cookie of the flow entry, the priority field to the priority of the flow entry, and the reason field to one of the following values defined in the enumeration:

/* Why was this flow removed? */
enum ofp_flow_removed_reason {
    OFPRR_IDLE_TIMEOUT = 0, /* Flow idle time exceeded idle_timeout. */
    OFPRR_HARD_TIMEOUT = 1, /* Time exceeded hard_timeout. */
    OFPRR_DELETE = 2,       /* Evicted by a DELETE flow mod. */
    OFPRR_GROUP_DELETE = 3, /* Group was removed. */
    OFPRR_METER_DELETE = 4, /* Meter was removed. */
    OFPRR_EVICTION = 5,     /* Switch eviction to free resources. */
};

The duration_sec and duration_nsec fields should be set to the elapsed lifetime of the flow entry in the switch; the total duration in nanoseconds can be computed as duration_sec * 10^9 + duration_nsec. All the other fields, such as idle_timeout, hard_timeout, and so on, should be set to the appropriate values from the flow entry; that is, these values can be copied directly from the flow mod that created the entry. The packet_count and byte_count fields should be set to the packet count and byte count associated with the flow entry, respectively; if these values are not available, the fields should be set to the maximum possible value. Refer to the send_flow_removed_message() function in the of/openflow.c file for the procedure to send a flow-removed event message to the controller.

Sending a port-status message to the controller

Port-status messages (OFPT_PORT_STATUS) are sent from the switch to the controller when there is any change in port status, or when a new port is added, removed, or modified in the switch's data path. The switch should send this message only on the controller channel through which the controller asked the switch to send it; the controller can express its interest in receiving this event by sending an asynchronous configuration message to the switch. By default, the port-status message should be sent to all controllers configured in the switch, including controllers in the slave role (OFPCR_ROLE_SLAVE).

How to do it...
The switch should construct an OFPT_PORT_STATUS message with the following format and send it to the controllers that have already shown interest in this event:

/* A physical port has changed in the datapath */
struct ofp_port_status {
    struct ofp_header header;
    uint8_t reason; /* One of OFPPR_*. */
    uint8_t pad[7]; /* Align to 64-bits. */
    struct ofp_port desc;
};

The reason field should be set to one of the following values as defined in the enumeration:

/* What changed about the physical port */
enum ofp_port_reason {
    OFPPR_ADD = 0,    /* The port was added. */
    OFPPR_DELETE = 1, /* The port was removed. */
    OFPPR_MODIFY = 2, /* Some attribute of the port has changed. */
};

The desc field should be set to the port description. Not all properties of the port description need to be filled in by the switch: the switch should fill in the properties that have changed, and may optionally include the unchanged ones. Refer to the send_port_status_message() function in the of/openflow.c file for the procedure to send a port-status message to the controller.

Sending a controller role-status message to the controller

Controller role-status messages (OFPT_ROLE_STATUS) are sent from the switch to the set of controllers when the role of a controller changes as a result of an OFPT_ROLE_REQUEST message. For example, if there are three controllers connected to a switch (say controller1, controller2, and controller3) and controller1 sends an OFPT_ROLE_REQUEST message to the switch, then the switch should send an OFPT_ROLE_STATUS message to controller2 and controller3.

How to do it...

The switch should build the OFPT_ROLE_STATUS message with the following format and send it to all the other controllers:

/* Role status event message. */
struct ofp_role_status {
    struct ofp_header header; /* Type OFPT_ROLE_STATUS. */
    uint32_t role;            /* One of OFPCR_ROLE_*. */
    uint8_t reason;           /* One of OFPCRR_*. */
    uint8_t pad[3];           /* Align to 64 bits. */
    uint64_t generation_id;   /* Master Election Generation Id */
    /* Role Property list */
    struct ofp_role_prop_header properties[0];
};

The reason field should be set to one of the following values as defined in the enumeration:

/* What changed about the controller role */
enum ofp_controller_role_reason {
    OFPCRR_MASTER_REQUEST = 0, /* Another controller asked
                                * to be master. */
    OFPCRR_CONFIG = 1,         /* Configuration changed on the
                                * switch. */
    OFPCRR_EXPERIMENTER = 2,   /* Experimenter data changed. */
};

The role field should be set to the new role of the controller, and the generation_id field to the generation ID of the OFPT_ROLE_REQUEST message that triggered the OFPT_ROLE_STATUS message. If the reason code is OFPCRR_EXPERIMENTER, then the role property list should be set in the following format:

/* Role property types. */
enum ofp_role_prop_type {
    OFPRPT_EXPERIMENTER = 0xFFFF, /* Experimenter property. */
};

/* Experimenter role property */
struct ofp_role_prop_experimenter {
    uint16_t type;         /* One of OFPRPT_EXPERIMENTER. */
    uint16_t length;       /* Length in bytes of this property. */
    uint32_t experimenter; /* Experimenter ID, which takes the same
                            * form as in struct
                            * ofp_experimenter_header. */
    uint32_t exp_type;     /* Experimenter defined. */
    /* Followed by:
     * - Exactly (length - 12) bytes containing the experimenter data,
     * - Exactly (length + 7)/8*8 - (length) (between 0 and 7)
     *   bytes of all-zero bytes */
    uint32_t experimenter_data[0];
};

The experimenter field takes the same form as in the experimenter header structure. Refer to the send_role_status_message() function in the of/openflow.c file for the procedure to send a role-status message to the controller.
Sending a table-status message to the controller

Table-status messages (OFPT_TABLE_STATUS) are sent from the switch to the controller when there is any change in the table status; for example, when the number of entries in the table crosses a threshold value, called the vacancy threshold. The switch should send this message only on the controller channel through which the controller requested the switch to send it. The controller can express its interest in receiving this event by sending an asynchronous configuration message to the switch.

How to do it...

The switch should build an OFPT_TABLE_STATUS message with the following format and send this message to the controllers that have already shown interest in this event:

/* A table config has changed in the datapath */
struct ofp_table_status {
    struct ofp_header header;
    uint8_t reason;              /* One of OFPTR_*. */
    uint8_t pad[7];              /* Pad to 64 bits */
    struct ofp_table_desc table; /* New table config. */
};

The reason field should be set to one of the following values as defined in the enumeration:

/* What changed about the table */
enum ofp_table_reason {
    OFPTR_VACANCY_DOWN = 3, /* Vacancy down threshold event. */
    OFPTR_VACANCY_UP = 4,   /* Vacancy up threshold event. */
};

When the number of free entries in the table falls below the vacancy_down threshold, the switch should set the reason code to OFPTR_VACANCY_DOWN. Once the vacancy down event has been generated, the switch should not generate any further vacancy down events until a vacancy up event has been generated. Similarly, when the number of free entries in the table rises above the vacancy_up threshold, the switch should set the reason code to OFPTR_VACANCY_UP; once the vacancy up event has been generated, the switch should not generate any further vacancy up events until a vacancy down event has been generated. The table field should be set to the table description.
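The alternation between the two vacancy events is effectively a hysteresis rule: each event arms the other and suppresses itself. The following C sketch is one hypothetical way to implement that rule; the threshold representation (a percentage of free entries) and the vacancy_event() helper are assumptions for illustration, not part of the of/openflow.c sources.

```c
#include <stdint.h>

enum ofp_table_reason { OFPTR_VACANCY_DOWN = 3, OFPTR_VACANCY_UP = 4 };

/* Per-table vacancy state: the two thresholds (as a percentage of free
 * entries) and the last event generated, so that the switch alternates
 * between the two events as the text requires. */
struct vacancy_state {
    uint8_t vacancy_down;  /* free space (%) below which DOWN fires */
    uint8_t vacancy_up;    /* free space (%) above which UP fires */
    int     last_event;    /* last reason sent, or -1 if none yet */
};

/* Returns the reason code to send (OFPTR_VACANCY_DOWN or
 * OFPTR_VACANCY_UP), or -1 if no event is due. */
static int vacancy_event(struct vacancy_state *s, uint8_t free_pct)
{
    if (free_pct < s->vacancy_down && s->last_event != OFPTR_VACANCY_DOWN) {
        s->last_event = OFPTR_VACANCY_DOWN;
        return OFPTR_VACANCY_DOWN;
    }
    if (free_pct > s->vacancy_up && s->last_event != OFPTR_VACANCY_UP) {
        s->last_event = OFPTR_VACANCY_UP;
        return OFPTR_VACANCY_UP;
    }
    return -1;  /* suppressed: the same event never fires twice in a row */
}
```

With thresholds of 20% and 80%, a table dropping to 10% free would fire one vacancy down event, stay silent while it keeps shrinking, and only fire again with a vacancy up event once free space climbs back above 80%.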
Refer to the send_table_status_message() function in the of/openflow.c file for the procedure to send a table status message to the controller.

Sending a request-forward message to the controller

When a switch receives a modify request message from the controller to modify the state of a group or meter entries, then after successful modification of the state, the switch should forward this request message to all other controllers as a request forward message (OFPT_REQUESTFORWARD). The switch should send this message only on the controller channel through which the controller requested the switch to send this event. The controller can express its interest in receiving this event by sending an asynchronous configuration message to the switch.

How to do it...

The switch should build the OFPT_REQUESTFORWARD message with the following format and send this message to the controllers that have already shown interest in this event:

/* Group/Meter request forwarding. */
struct ofp_requestforward_header {
    struct ofp_header header;  /* Type OFPT_REQUESTFORWARD. */
    struct ofp_header request; /* Request being forwarded. */
};

The request field should be set to the request that was received from the controller. Refer to the send_request_forward_message() function in the of/openflow.c file for the procedure to send request_forward_message to the controller.

Handling a packet-out message from the controller

Packet-out (OFPT_PACKET_OUT) messages are sent from the controller to the switch when the controller wishes to send a packet out through the switch's data path via a switch port.

How to do it...

There are two ways in which the controller can send a packet-out message to the switch:

Construct the full packet: In this case, the controller generates the complete packet and adds the action list field to the packet-out message. The action field contains a list of actions defining how the packet should be processed by the switch.
If the switch receives a packet_out message with buffer_id set to OFP_NO_BUFFER, then the switch should look into the action list and, based on the action to be performed, it can do one of the following:

Modify the packet and send it via the switch port mentioned in the action list
Hand over the packet to OpenFlow's pipeline processing, based on the OFPP_TABLE port specified in the action list

Use a packet buffer in the switch: In this mechanism, the switch should use the buffer that was created at the time of sending the packet-in message to the controller. While sending the packet_in message to the controller, the switch adds the buffer_id to the packet_in message. When the controller wants to send a packet_out message that uses this buffer, the controller includes this buffer_id in the packet_out message. On receiving the packet_out message with a valid buffer_id, the switch should fetch the packet from the buffer and send it via the switch port. Once the packet is sent out, the switch should free the memory allocated to the cached buffer.

Handling a barrier message from the controller

The switch implementation could arbitrarily reorder the messages sent from the controller in order to maximize its performance. So, if the controller wants to enforce the processing of the messages in order, then barrier messages are used. Barrier messages (OFPT_BARRIER_REQUEST) are sent from the controller to the switch to ensure message ordering. The switch should not reorder any messages across the barrier message. For example, if the controller sends a group add message followed by a flow add message referencing that group, a barrier message between the two ensures that the group is created before the flow entry that references it.

How to do it...

When the controller wants to send messages that are related to each other, it sends a barrier message between these messages.
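The ordering guarantee amounts to a simple rule in the switch's receive loop: everything ahead of a barrier request is fully processed before the barrier reply goes out, and the reply echoes the request's transaction ID. The following C sketch illustrates the idea; the queue representation, the numeric message-type values, and the process_message()/send_barrier_reply() helpers are hypothetical stand-ins for illustration, not functions from the of/openflow.c sources.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical message-type values; real code would use openflow.h. */
enum { OFPT_BARRIER_REQUEST = 20, OFPT_BARRIER_REPLY = 21 };

struct ofp_header {
    uint8_t  version;
    uint8_t  type;
    uint16_t length;
    uint32_t xid;
};

static int processed;            /* messages fully processed so far */
static uint32_t last_reply_xid;  /* xid copied into the barrier reply */

/* Stand-in for full message handling (flow mods, group mods, ...). */
static void process_message(const struct ofp_header *msg)
{
    (void)msg;
    processed++;
}

/* A barrier reply has no body: just an ofp_header carrying the
 * xid copied from the barrier request. */
static void send_barrier_reply(uint32_t xid)
{
    last_reply_xid = xid;
}

/* Drain a batch of controller messages in order, honouring barrier
 * semantics: every message before a barrier request is handled first,
 * then the reply is sent, then processing resumes. */
static void drain(const struct ofp_header *msgs, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (msgs[i].type == OFPT_BARRIER_REQUEST)
            send_barrier_reply(msgs[i].xid);  /* all earlier msgs done */
        else
            process_message(&msgs[i]);
    }
}
```

The key point is that the loop never reorders work around the barrier: by the time send_barrier_reply() runs, every earlier message, including its replies or errors, has completed.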
The switch should process these messages as follows:

1. Messages before a barrier request should be processed fully before the barrier, including sending any resulting replies or errors.
2. The barrier request message should then be processed and a barrier reply should be sent. While sending the barrier reply message, the switch should copy the xid value from the barrier request message.
3. The switch should then process the remaining messages.

Both the barrier request and barrier reply messages don't have any body; they consist only of the ofp_header.

Summary

This article covers the list of symmetric and asynchronous messages sent and received by the OpenFlow switch, along with the procedure for handling these messages.

Resources for Article:

Further resources on this subject:
The OpenFlow Controllers [article]
Untangle VPN Services [article]
Getting Started [article]

Packt
05 May 2015
7 min read

Solving Some Not-so-common vCenter Issues

In this article by Chuck Mills, author of the book vCenter Troubleshooting, we will review some of the not-so-common vCenter issues that administrators could face while they work with the vSphere environment. The article will cover the following issues and provide the solutions:

The vCenter inventory shows no objects after you log in
You get the VPXD must be stopped to perform this operation message
Removing the vCenter plugins when they are no longer needed

(For more resources related to this topic, see here.)

Solving the problem of no objects in vCenter

After successfully completing the vSphere 5.5 installation (not an upgrade) process with no error messages whatsoever, you log in to vCenter with the account you used for the installation. In this case, it is the local administrator account. Surprisingly, you are presented with an inventory of 0. The first thing to do is make sure you have given vCenter enough time to start. Considering the previously mentioned account was the one used to install vCenter, you would assume that it has been granted the appropriate rights to manage your vCenter Server. Also consider the fact that you can log in, yet receive no objects from vCenter. Then, you might try logging in with your domain administrator account, with the same result. This makes you wonder, "What is going on here?"

After installing vCenter 5.5 using the Windows option, remember that the administrator@vsphere.local user will have administrator privileges for both the vCenter Single Sign-On Server and vCenter Server. You log in using the administrator@vsphere.local account with the password you defined during the installation of the SSO server.

vSphere attaches the permissions along with assigning the role of administrator to the default account administrator@vsphere.local. These privileges are given for both the vCenter Single Sign-On server and the vCenter Server system. You must log in with this account after the installation is complete.
After logging in with this account, you can configure your domain as an identity source. You can also give your domain administrator access to vCenter Server. Remember, the installation does not assign any administrator rights to the user account that was used to install vCenter. For additional information, review the Prerequisites for Installing vCenter Single Sign-On, Inventory Service, and vCenter Server document found at https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc%2FGUID-C6AF2766-1AD0-41FD-B591-75D37DDB281F.html.

Now that you understand what is going on with the vCenter account, use the following steps to enable the use of your Active Directory account for managing vCenter.

Add or verify your AD domain as an identity source using the following procedure:

1. Log in with administrator@vsphere.local.
2. Select Administration from the menu.
3. Choose Configuration under the Single Sign-On option. You will see the Single Sign-On | Configuration option only when you log in using the administrator@vsphere.local account.
4. Select the Identity Sources tab and verify that the AD domain is listed. If not, choose Active Directory (Integrated Windows Authentication) found at the top of the window.
5. Enter your Domain name and click on OK at the bottom of the window.
6. Verify that your domain was added to Identity Sources, as shown in the following screenshot:

Add the permissions for the AD account using the following steps:

1. Click on Home at the top left of the window.
2. Select vCenter from the menu options.
3. Select vCenter Servers and then choose the vCenter Server object.
4. Select the Manage tab and then the Permissions tab found in the vCenter Object window. Review the image that follows the steps to verify the process.
5. Click on the green + icon to add permission.
6. Choose the Add button located at the bottom of the window.
7. Select the AD domain found in the drop-down option at the top of the window.
8. Choose a user or group you want to assign permission to (the account named Chuck was selected for this example).
9. Verify that the user or group is selected in the window.
10. Use the drop-down options to choose the level of permissions (verify that Propagate to children is checked).

Now, you should be able to log into vCenter with your AD account. See the results of the successful login in the following screenshot:

By adding the permissions to the account, you are able to log into vCenter using your AD credentials. The preceding screenshot shows the results of the changes, which are much different from the earlier attempt.

Fixing the VPXD must be stopped to perform this operation message

It has been mentioned several times in this article that the Virtual Center Service Appliance (VCSA) is the direction VMware is moving in when it comes to managing vCenter. As the number of administrators using it keeps increasing, the number of problems will also increase. One of the components an administrator might have problems with is the Virtual Center Server service. This service should not be running during any changes to the database or the account settings. However, as with most vSphere components, there are times when something happens and you need to stop or start a service in order to fix the problem. There are times when an administrator working within the VCSA appliance encounters the following error:

This service can be stopped using the web console, by performing the following steps:

1. Log into the console using https://ip-of-vcsa:5480.
2. Enter your username and password.
3. Choose vCenter Server after logging in.
4. Make sure the Summary tab is selected.
5. Click on the Stop button to stop the server.

This should work most of the time but, if you find that using the web console is not working, then you need to log into the VCSA appliance directly and use the following procedure to stop the server:

1. Connect to the appliance by using an SSH client such as PuTTY or mRemote.
2. Type the command chkconfig. This will list all the services and their current status.
3. Verify that vmware-vpxd is on.
4. Stop the service by using the service vmware-vpxd stop command.

After completing your work, you can start the server using one of the following methods:

Restart the VCSA appliance
Use the web console by clicking on the Start button on the vCenter Summary page
Type service vmware-vpxd start on the SSH command line

This should fix the issues that occur when you see the VPXD must be stopped to perform this operation message.

Removing unwanted plugins in vSphere

Administrators add and remove tools from their environment based on their needs and also the life of the tool. This is no different for the vSphere environment. As the needs of the administrator change, so does the usage of the plugins used in vSphere. The following section can be used to remove any unwanted plugins from your current vCenter. So, if you have plugins that are no longer needed, use the following procedure to remove them:

1. Log into your vCenter using http://vCenter_name or IP_address/mob and enter your username and password.
2. Click on the content link under Properties.
3. Click on ExtensionManager, which is found in the VALUE column.
4. Highlight, right-click, and Copy the extension to be removed. Check out Knowledge Base 1025360, found at http://Kb.vmware.com/kb/1025360, to get an overview of the plugins and their names.
5. Select UnregisterExtension near the bottom of the page.
6. Right-click on the plugin name and Paste it into the Value field.
7. Click on Invoke Method to remove the plugin. This will give you the Method Invocation Result: void message, which informs you that the selected plugin has been removed.

You can repeat this process for each plugin that you want to remove.

Summary

In this article, we covered some of the not-so-common challenges an administrator could encounter in the vSphere environment.
It provided the troubleshooting steps along with the solutions to the following issues:

Seeing no objects after logging into vCenter with the account you used to install it
How to get past the VPXD must be stopped error when you are performing certain tasks within vCenter
Removing the unwanted plugins from vCenter Server

Resources for Article:

Further resources on this subject:
Availability Management [article]
The Design Documentation [article]
Design, Install, and Configure [article]

Packt
27 Mar 2015
31 min read

An Overview of Horizon View Architecture and its Components

In this article by Peter von Oven and Barry Coombs, authors of the book Mastering VMware Horizon 6, we will introduce you to the architecture and architectural components that make up the core VMware Horizon solution, concentrating on the virtual desktop elements of Horizon with Horizon View Standard. This article will cover the core Horizon View functionality of brokering virtual desktop machines that are hosted on the VMware vSphere platform.

In this article, we will discuss the role of each of the Horizon View components and explain how they fit into the overall infrastructure and the benefits they bring, followed by a deep-dive into how Horizon View works.

(For more resources related to this topic, see here.)

Introducing the key Horizon components

To start with, we are going to introduce, at a high level, the main components that make up the Horizon View product. All of the VMware Horizon components described are included as part of the licensed product, and the features that are available to you depend on whether you have the View Standard Edition, the Advanced Edition, or the Enterprise Edition. Horizon licensing also includes ESXi and vCenter licensing to support the ability to deploy the core hosting infrastructure. You can deploy as many ESXi hosts and vCenter Servers as you require to host the desktop infrastructure. The key elements of Horizon View are outlined in the following diagram:

In the next section, we are going to start drilling down deeper into the architecture of how these high-level components fit together and how they work.

A high-level architectural overview

In this article, we will cover the core Horizon View functionality of brokering virtual desktop machines that are hosted on the VMware vSphere platform. The Horizon View architecture is pretty straightforward to understand, as its foundations lie in the standard VMware vSphere products (ESXi and vCenter).
So, if you have the necessary skills and experience of working with this platform, then you are already halfway there. Horizon View builds on the vSphere infrastructure, taking advantage of some of the features of the ESX hypervisor and vCenter Server. Horizon View requires adding a number of virtual machines to perform the various View roles and functions. An overview of the View architecture is shown in the following diagram:

View components run as applications that are installed on the Microsoft Windows Server operating system, so they could actually run on physical hardware as well. However, there are a great number of benefits available when you run them as virtual machines, such as delivering HA and DR, as well as the typical cost savings that can be achieved through virtualization. The following sections will cover each of these roles/components of the View architecture in greater detail.

The Horizon View Connection Server

The Horizon View Connection Server, sometimes referred to as the Connection Broker or View Manager, is the central component of the View infrastructure. Its primary role is to connect a user to their virtual desktop by means of performing user authentication and then delivering the appropriate desktop resources based on the user's profile and user entitlement. When logging on to your virtual desktop, it is the Connection Server that you are communicating with.

How does the Connection Server work?

A user typically connects to their virtual desktop from their device by launching the View Client. Once the View Client has launched, the user enters the address details of the View Connection Server, which in turn responds by asking them to provide their network login details (their Active Directory (AD) domain username and password). It's worth noting that Horizon View now supports different AD function levels.
These are detailed in the following screenshot:

Based on their entitlements, these credentials are authenticated with AD and, if successful, the user is able to continue the logon process. Depending on what they are entitled to, the user could see a launch screen that displays a number of different desktop shortcuts available for login. These desktops represent the desktop pools that the user has been entitled to use. A pool is basically a collection of virtual desktops; for example, it could be a pool for the marketing department where the desktops contain specific applications/software for that department.

Once authenticated, the View Manager makes a call to the vCenter Server to create a virtual desktop machine, and then vCenter makes a call to View Composer (if you are using linked clones) to start the build process of the virtual desktop if there is not one already available. Once built, the virtual desktop is displayed/delivered within the View Client window, using the chosen display protocol (PCoIP or RDP). The process is described in detail in the following diagram:

There are other ways to deploy VDI solutions that do not require a connection broker and allow a user to connect directly to a virtual desktop; in fact, there might be a specific use case for doing this, such as having a large number of branches, where having local infrastructure allows trading to continue in the event of a WAN outage or poor network communication with the branch. VMware has a solution for what's referred to as a "Brokerless View": the VMware Horizon View Agent Direct-Connection Plugin. However, don't forget that, in a Horizon View environment, the View Connection Server provides greater functionality and does much more than just connecting users to desktops.

The Horizon View Connection Server runs as an application on a Windows Server, which in turn could be either a physical or a virtual machine.
Running as a virtual machine has many advantages; for example, it means that you can easily add high-availability features, which are key if you think about it, as you could potentially have hundreds of virtual user desktops running on a single host server. Along with managing the connections for the users, the Connection Server also works with vCenter Server to manage the virtual desktop machines. For example, when using linked clones and powering on virtual desktops, these tasks might be initiated by the Connection Server, but they are executed at the vCenter Server level.

Minimum requirements for the Connection Server

To install the View Connection Server, you need to meet the following minimum requirements to run on physical or virtual machines:

Hardware requirements: The following screenshot shows the hardware required:

Supported operating systems: The View Connection Server must be installed on one of the following operating systems:

The Horizon View Security Server

The Horizon View Security Server is another instance of the View Connection Server but, this time, it sits within your DMZ so that you can allow end users to securely connect to their virtual desktop machine from an external network or the Internet. You cannot install the View Security Server on the same machine that is already running as a View Connection Server or any of the other Horizon View components.

How does the Security Server work?

The user login process at the start is the same as when using a View Connection Server for internal access but, now, we have added an extra security layer with the Security Server. The idea is that users can access their desktop externally without needing a VPN connection to the network first. The process is described in detail in the following diagram:

The Security Server is paired with a View Connection Server, configured by the use of a one-time password during installation.
It's a bit like pairing your phone's Bluetooth with the hands-free kit in your car. When the user logs in from the View Client, they access the View Connection Server, which in turn authenticates the user against AD. If the View Connection Server is configured as a PCoIP gateway, then it will pass the connection and addressing information to the View Client. This connection information allows the View Client to connect to the View Security Server using PCoIP, shown in the diagram by the green arrow (1). The View Security Server then forwards the PCoIP connection to the virtual desktop machine (2), creating the connection for the user. The virtual desktop machine is displayed/delivered within the View Client window (3) using the chosen display protocol (PCoIP or RDP).

The Horizon View Replica Server

The Horizon View Replica Server, as the name suggests, is a replica or copy of a View Connection Server that is used to add high availability to your Horizon View environment. Having a replica of your View Connection Server means that, if the Connection Server fails, users are still able to connect to their virtual desktop machines. You will need to change the IP address or update the DNS record to match this server if you are not using a load balancer.

How does the Replica Server work?

So, the first question is, what actually gets replicated? The View Connection Broker stores all its information relating to the end users, desktop pools, virtual desktop machines, and other View-related objects in an Active Directory Application Mode (ADAM) database. Then, using the Lightweight Directory Access Protocol (LDAP) (a replication method similar to the one AD uses), this View information gets copied from the original View Connection Server to the Replica Server.
As both the Connection Server and the Replica Server are now identical to each other, if your Connection Server fails, you essentially have a backup that steps in and takes over so that end users can still continue to connect to their virtual desktop machines. Just like with the other components, you cannot install the Replica Server role on the same machine that is running as a View Connection Server or any of the other Horizon View components.

Persistent or nonpersistent desktops

In this section, we are going to talk about the different types of desktop assignments that can be deployed with Horizon View; these could also potentially have an impact on storage requirements and on the way in which desktops are provisioned to the end users. One of the questions that always gets asked is about having a dedicated (persistent) or a floating (nonpersistent) desktop assignment. Desktops can either be individual virtual machines that are dedicated to a user on a 1:1 basis (as in a physical desktop deployment, where each user effectively has their own desktop), or a user can receive a new, vanilla desktop that gets provisioned, personalized, and then assigned at the time of login, chosen at random from a pool of available desktops. This is the model that is used to build the user's desktop. The two options are described in more detail as follows:

Persistent desktop: Users are allocated a desktop that retains all of their documents, applications, and settings between sessions. The desktop is statically assigned the first time that the user connects and is then used for all subsequent sessions. No other user is permitted access to the desktop.

Nonpersistent desktop: Users might be connected to different desktops from the pool each time that they connect. Environmental or user data does not persist between sessions and is delivered as the user logs on to their desktop. The desktop is refreshed or reset when the user logs off.
In most use cases, a nonpersistent configuration is the best option; the key reason is that, in this model, you don't need to build all the desktops upfront for each user. You only need to power on a virtual desktop as and when it's required. All users start with the same basic desktop, which then gets personalized before delivery. This helps with concurrency rates. For example, you might have 5,000 people in your organization, but only 2,000 ever log in at the same time; therefore, you only need to have 2,000 virtual desktops available. Otherwise, you would have to build a desktop for each one of the 5,000 users that might ever log in, resulting in more server infrastructure and certainly a lot more storage capacity. We will talk about storage in the next section.

The other thing that we often see some confusion over is the difference between dedicated and floating desktops, and how linked clones fit in. Just to make it clear, linked clones and full clones are not what we are talking about when we refer to dedicated and floating desktops. Cloning operations refer to how a desktop is built, whereas the terms persistent and nonpersistent refer to how a desktop is assigned to a user. Dedicated and floating desktops are purely about user assignment and whether a user has a dedicated desktop or one allocated from a pool on demand. Linked clones and full clones are features of Horizon View, which uses View Composer to create a desktop image for each user from a master or parent image. This means that, regardless of having a floating or dedicated desktop assignment, the virtual desktop machine could still be a linked or full clone. So, here's a summary of the benefits:

It is operationally efficient. All users start from a single or smaller number of desktop images, so organizations reduce the amount of image and patch management.

It is efficient storage-wise.
The amount of storage required to host the nonpersistent desktop images will be smaller than keeping separate instances of unique user desktops.

In the next section, we are going to cover an in-depth overview of Horizon View Composer and linked clones, and the advantages the technology delivers.

Horizon View Composer and linked clones

One of the main reasons a virtual desktop project fails to deliver, or doesn't even get out of the starting blocks, is heavy infrastructure costs, largely down to storage requirements. The storage requirements are often seen as a huge cost burden, which can be attributed to the fact that people are approaching this in the same way they would approach a physical desktop environment's requirements. This would mean that each user gets their own dedicated virtual desktop and the hard disk space that comes with it, albeit a virtual disk; this then gets scaled out for the entire user population, so each user is allocated a virtual desktop with some storage. Let's take an example. If you had 1,000 users and allocated 250 GB per user's desktop, you would need 1,000 * 250 GB = 250 TB for the virtual desktop environment. That's a lot of storage just for desktops, and the cost of deploying that amount of storage in the data center could render the project cost-ineffective compared to physical desktop deployments.

A new approach to deploying storage for a virtual desktop environment is needed, and this is where linked clone technology comes into play. In a nutshell, linked clones are designed to reduce the amount of disk space required, and to simplify the deployment and management of images to multiple virtual desktop machines: a centralized and much easier process.

Linked clone technology

Starting at a high level, a clone is a copy of an existing or parent virtual machine.
This parent virtual machine (VM) is typically your gold build from which you want to create new virtual desktop machines. When a clone is created, it becomes a separate, new virtual desktop machine with its own unique identity. This process is not unique to Horizon View; it's actually a function of vSphere and vCenter, and in the case of Horizon View, we add in another component, View Composer, to manage the desktop images. There are two types of clones that we can deploy: a full clone or a linked clone. We will explain the difference in the next sections.

Full clones

As the name implies, a full clone disk is an exact, full-sized copy of the parent machine. Once the clone has been created, the virtual desktop machine is unique, with its own identity, and has no links back to the parent virtual machine from which it was cloned. It can operate as a fully independent virtual desktop in its own right and is not reliant on its parent virtual machine. However, as it is a full-sized copy, be aware that it will take up the same amount of storage as its parent virtual machine, which leads back to our discussion earlier in this article about storage capacity requirements. Using full clones will require larger amounts of storage capacity and will possibly lead to higher infrastructure costs.

Before you completely dismiss the idea of using full clone virtual desktop machines, there are some use cases that rely on this model. For example, if you use VMware Mirage to deliver a base layer or application layer, it only works today with full clone, dedicated Horizon View virtual desktop machines. If you have software developers, then they probably need to install specialist tools and code onto a desktop, and therefore need to "own" their desktop. Or perhaps the applications that you run in your environment need a dedicated desktop due to the way the applications are licensed.
Linked clones

Having now discussed full clones, we are going to talk about deploying virtual desktop machines with linked clones. In a linked clone deployment, a delta disk is created and then used by the virtual desktop machine to store the data differences between its own operating system and the operating system of its parent virtual desktop machine. Unlike the full clone method, the linked clone is not a full copy of the virtual disk.

The term linked clone refers to the fact that the linked clone will always look to its parent in order to operate, as it continues to read from the replica disk. Basically, the replica is a copy of a snapshot of the parent virtual desktop machine. The linked clone itself could potentially grow to the same size as the replica disk if you allow it to. However, you can set limits on how big it can grow, and should it start to get too big, you can refresh the virtual desktops that are linked to it. This essentially starts the cloning process again from the initial snapshot.

Immediately after a linked clone virtual desktop is deployed, the difference between the parent virtual machine and the newly created virtual desktop machine is extremely small, which reduces the storage capacity requirements compared to those of a full clone. This is how linked clones are more space-efficient than their full clone brothers.

The underlying technology behind linked clones is more like a snapshot than a clone, but with one key difference: View Composer. With View Composer, you can have more than one active snapshot linked to the parent virtual machine disk. This allows you to create multiple virtual desktop images from just one parent.

Best practice would be to deploy an environment with linked clones so as to reduce the storage requirements. However, as we previously mentioned, there are some use cases where you will need to use full clones.
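As a quick illustration of why linked clones matter for capacity, the comparison can be sketched in plain PowerShell. All of the figures below are hypothetical assumptions for illustration; in practice, they would come from your own desktop assessment:

```powershell
# Back-of-the-envelope capacity comparison (all figures are assumptions).
$users     = 1000
$imageGB   = 25         # assumed size of the parent/gold image
$deltaGB   = 2          # assumed average linked clone (delta) disk per desktop
$replicaGB = $imageGB   # one read-only replica (single datastore assumed)

$fullCloneGB   = $users * $imageGB
$linkedCloneGB = $replicaGB + ($users * $deltaGB)

"Full clones:   {0:n0} GB" -f $fullCloneGB
"Linked clones: {0:n0} GB" -f $linkedCloneGB
```

Even with generous delta disk assumptions, the linked clone total is a small fraction of the full clone total, which is the capacity argument made above.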
One thing to be aware of, which still relates to storage, is performance rather than capacity. All linked clone virtual desktops are going to be reading from one replica and will therefore drive a high number of Input/Output Operations Per Second (IOPS) on the storage where the replica lives. Depending on your desktop pool design, you are fairly likely to have more than one replica, as you would typically have more than one datastore. This in turn depends on the number of users, which will drive the design of the solution.

In Horizon View, you are able to choose the location where the replica lives. One of the recommendations is that the replica should sit on fast storage such as a local SSD. Alternative solutions would be to deploy some form of storage acceleration technology to drive the IOPS. Horizon View also has its own integrated solution called View Storage Accelerator (VSA) or Content Based Read Cache (CBRC). This feature allows you to allocate up to 2 GB of memory from the underlying ESXi host server to be used as a cache for the most commonly read blocks. As we are talking about booting up desktop operating systems, the same blocks are required repeatedly; as these can be retrieved from memory, the process is accelerated.

Another solution is View Composer Array Integration (VCAI), which allows the process of building linked clones to be offloaded to the storage array and its native snapshot mechanism, rather than taking CPU cycles from the host server. There are also a number of third-party solutions that address the storage performance bottleneck, such as Atlantis Computing and their ILIO product, and Nutanix, Nimble, and Tintri, to name a few others.

In the next section, we will take a deeper look at how linked clones work.

How do linked clones work?
The first step is to create your master virtual desktop machine image, which should contain not only the operating system, core applications, and settings, but also the Horizon View Agent components. This virtual desktop machine will become your parent VM or your gold image. This image can now be used as a template to create any new subsequent virtual desktop machines. The gold image or parent image cannot be a VM template.

An overview of the linked clone process is shown in the following diagram.

Once you have created the parent virtual desktop or gold image (1), you then take a snapshot (2). When you create your desktop pool, this snapshot is selected, becomes the replica (3), and is set to read-only. Each virtual desktop is linked back to this replica; hence the term linked clone. When you start creating your virtual desktops, you create linked clones that are unique copies for each user.

Try not to create too many snapshots for your parent VM. I would recommend having just a handful; otherwise, this could impact the performance of your desktops and make it a little harder to know which snapshot is which.

What does View Composer build?

During the image building process, and once the replica disk has been created, View Composer creates a number of other virtual disks, including the linked clone (operating system disk) itself. These are described in the following sections.

Linked clone disk

Not wanting to state the obvious, the main disk that gets created is the linked clone disk itself. This linked clone disk is basically an empty virtual disk container that is attached to the virtual desktop machine as the user logs in and the desktop starts up. This disk will start off small in size, but will grow over time, depending on the block changes that are requested from the replica disk by the virtual desktop machine's operating system.
These block changes are stored in the linked clone disk, and this disk is sometimes referred to as the delta disk, or differential disk, due to the fact that it stores all the delta changes that the desktop operating system requests from the parent VM. As mentioned before, the linked clone disk can grow to the maximum size, equal to the parent VM, but, following best practice, you would never let this happen. Typically, you can expect the linked clone disk to only increase to a few hundred MBs. We will cover this in the Understanding the linked clone process section later.

The replica disk is set as read-only and is used as the primary disk. Any writes and/or block changes that are requested by the virtual desktop are written to and read from the linked clone disk. It is a recommended best practice to allocate tier-1 storage, such as local SSD drives, to host the replica, as all virtual desktops in the cluster will be referencing this single read-only VMDK file as their base image. Keeping it high in the stack improves performance by reducing the overall storage IOPS required in a VDI workload.

As we mentioned at the start of this section, storage costs are seen as being expensive for VDI. Linked clones reduce the burden of storage capacity, but they do drive the requirement for a huge number of IOPS from a single LUN.

Persistent disk or user data disk

The persistent disk feature of View Composer allows you to configure a separate disk that contains just the user data and user settings, and not the operating system. This allows any user data to be preserved when you update or make changes to the operating system disk, such as during a recompose action. It's worth noting that the persistent disk is referenced by the VM name and not the username, so bear this in mind if you want to attach the disk to another VM. This disk is also used to store the user's profile.
With this in mind, you need to size it accordingly, ensuring that it is large enough to store user profile data; the results of a virtual desktop assessment can help you size it correctly.

Disposable disk

With the disposable disk option, Horizon View creates what is effectively a temporary disk that gets deleted every time the user powers off their virtual desktop machine. If you think about how the Windows desktop operating system operates and the files it creates, there are several files that are used on a temporary basis. Files such as Temporary Internet Files or the Windows pagefile are two such examples. As these are only temporary files, why would you want to keep them? With Horizon View, these types of files are redirected to the disposable disk and then deleted when the VM is powered off.

Horizon View provides the option to have a disposable disk for each virtual desktop. This disposable disk is used to contain temporary files that will get deleted when the virtual desktop is powered off. These are files that you don't want to store on the main operating system disk, as they would consume unnecessary disk space. For example, files on the disposable disk include the pagefile, Windows system temporary files, and VMware log files.

Note that here we are talking about temporary system files and not user files. A user's temporary files are still stored on the user data disk so that they can be preserved. Many applications use the Windows temp folder to store installation CAB files, which can be referenced post-installation. Having said that, you might want to delete the temporary user data to reduce the desktop image size, in which case you could ensure that the user's temporary files are directed to the disposable disk.

Internal disk

Finally, we have the internal disk.
The internal disk is used to store important configuration information, such as the computer account password, that would be needed to join the virtual desktop machine back to the domain if you refreshed the linked clones. It is also used to store Sysprep and QuickPrep configuration details. In terms of disk space, the internal disk is relatively small, averaging around 20 MB. By default, the user will not see this disk from Windows Explorer, as it contains important configuration information that you wouldn't want them to delete.

Understanding the linked clone process

There are several complex steps performed by View Composer and View Manager that occur when a user launches a virtual desktop session. So, what's the process to build a linked clone desktop, and what goes on behind the scenes? When a user logs into Horizon View and requests a desktop, View Manager, using vCenter and View Composer, will create a virtual desktop machine. This process is described in the following sections.

Creating and provisioning a new desktop

An entry for the virtual desktop machine is created in the Active Directory Application Mode (ADAM) database before it is put into provisioning mode:

1. The linked clone virtual desktop machine is created by View Composer.
2. A machine account is created in AD with a randomly generated password.
3. View Composer checks for a replica disk and creates one if one does not already exist.
4. A linked clone is created by a vCenter Server API call from View Composer.
5. An internal disk is created to store the configuration information and machine account password.

Customizing the desktop

Now that you have a newly created, linked clone virtual desktop machine, the next phase is to customize it. The customization steps are as follows:

1. The virtual desktop machine is switched to customization mode.
2. The virtual desktop machine is customized by vCenter Server using the customizeVM_Task command and is joined to the domain with the information you entered in the View Manager console.
3. The linked clone virtual desktop is powered on.
4. The View Composer Agent on the linked clone virtual desktop machine starts up for the first time and joins the machine to the domain, using the NetJoinDomain command and the machine account password that was created on the internal disk.
5. The linked clone virtual desktop machine is now Sysprep'd. Once complete, View Composer tells View Agent that customization has finished, and View Agent tells View Manager that the customization process has finished.
6. The linked clone virtual desktop machine is powered off and a snapshot is taken.
7. The linked clone virtual desktop machine is marked as provisioned and is now available for use.

When a linked clone virtual desktop machine is powered on with the View Composer Agent running, the agent tracks any changes that are made to the machine account password. Any changes are updated and stored on the internal disk. In many AD environments, the machine account password is changed periodically. If the View Composer Agent detects a password change, it updates the machine account password on the internal disk that was created with the linked clone. This is important, as the linked clone virtual desktop machine is reverted to the snapshot taken after customization during a refresh operation; the agent will then be able to reset the machine account password to the latest one.

The linked clone process is depicted in the following diagram.

Additional features and functions of linked clones

There are a number of other management functions that you can perform on a linked clone disk from View Composer; these are outlined in this section and are needed in order to deliver the ongoing management of the virtual desktop machines.
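Although View Composer creates and manages all of these disks for you, you can inspect the virtual disks attached to any desktop from PowerCLI with the Get-HardDisk cmdlet. A hedged sketch, assuming a connected vCenter session and a hypothetical desktop name:

```powershell
# Hedged sketch: list the virtual disks attached to a linked clone desktop.
# "Desktop-User01" is a placeholder name; run this from an existing
# PowerCLI session connected to the vCenter managing the desktop pool.
Get-HardDisk -VM (Get-VM -Name "Desktop-User01") |
    Select-Object Name, CapacityGB, StorageFormat, Filename
```

The Filename property shows the backing VMDK path, which makes it easy to see which datastore each of the disks described above actually lives on.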
Recomposing a linked clone

Recomposing a linked clone virtual desktop machine or desktop pool allows you to perform updates to the operating system disk, such as updating the image with the latest patches or software updates. You can only perform updates on the same version of an operating system, so you cannot use the recompose feature to migrate from one operating system to another, such as going from Windows XP to Windows 7.

As we covered in the What does View Composer build? section, there are separate disks for items such as the user's data. These disks are not affected during a recompose operation, so all user-specific data on them is preserved. When you initiate the recompose operation, View Composer essentially starts the linked clone building process over again; a new operating system disk is created, which then gets customized, and a snapshot is taken.

During the recompose operation, the MAC address of the network interface and the Windows SID are not preserved. Some management tools and security solutions might not work due to this change. However, the UUID will remain the same.

The recompose process is described in the following steps:

1. View Manager puts the linked clone into maintenance mode.
2. View Manager calls the View Composer resync API for the linked clones being recomposed, directing View Composer to use the new base image and snapshot.
3. If there isn't yet a replica for the base image and snapshot in the target datastore for the linked clone, View Composer creates the replica in the target datastore (unless a separate datastore is being used for replicas, in which case the replica is created in the replica datastore).
4. View Composer destroys the current OS disk for the linked clone and creates a new OS disk linked to the new replica.
5. The rest of the recompose cycle is identical to the customization phase of the provisioning and customization cycles.
The following diagram shows a graphical representation of the recompose process. Before the process begins, the first thing you need to do is update your gold image (1) with the patch updates or new applications you want to deploy to the virtual desktops. As described in the preceding steps, a snapshot is then taken (2) to create the new replica, Replica V2 (3). The existing OS disk is destroyed, but the user data disk (4) is maintained during the recompose process.

Refreshing a linked clone

By carrying out a refresh of a linked clone virtual desktop, you are effectively reverting it to its initial state, when its original snapshot was taken after it had completed the customization phase. This process only applies to the operating system disk; no other disks are affected. An example use case for refresh operations would be refreshing a nonpersistent desktop two hours after logoff, to return it to its original state and make it available for the next user.

The refresh process performs the following tasks:

1. The linked clone virtual desktop is switched into maintenance mode.
2. View Manager reverts the linked clone virtual desktop to the snapshot taken after customization was completed: vdm-initial-checkpoint.
3. The linked clone virtual desktop starts up, and the View Composer Agent detects whether the machine account password needs to be updated. If the password on the internal disk is newer than the one in the registry, the agent will update the machine account password using the one on the internal disk.

One of the reasons why you would perform a refresh operation is if the linked clone OS disk starts to become bloated. As we previously discussed, the OS linked clone disk could grow to the full size of its parent image. This means it would be taking up more disk space than is really necessary, which somewhat defeats the objective of linked clones. The refresh operation effectively resets the linked clone to a small delta between it and its parent image.
The following diagram shows a representation of the refresh operation.

The linked clone on the left-hand side of the diagram (1) has started to grow in size. Refreshing reverts it back to the snapshot, as if it were a new virtual desktop, as shown on the right-hand side of the diagram (2).

Rebalancing operations with View Composer

The rebalance operation in View Composer is used to evenly distribute the linked clone virtual desktop machines across multiple datastores in your environment. You would perform this task in the event that one of your datastores was becoming full while others have ample free space. It might also help with the performance of that particular datastore. For example, if you had 10 virtual desktop machines in one datastore and only two in another, then running a rebalance operation would potentially even this out and leave you with six virtual desktop machines per datastore.

You must use the View Administrator console to initiate the rebalance operation in View Composer. If you simply try to vMotion any of your virtual desktop machines, View Composer will not be able to keep track of them.

On the other hand, if you have six virtual desktop machines on one datastore and seven on another, it is highly likely that initiating a rebalance operation will have no effect, and no virtual desktop machines will be moved, as doing so has no benefit. A virtual desktop machine will only be moved to another datastore if the target datastore has significantly more spare capacity than the source.

The rebalance process is described in the following steps:

1. The linked clone is switched to maintenance mode.
2. Virtual machines to be moved are identified based on the free space in the available datastores.
3. The operating system disk and persistent disk are disconnected from the virtual desktop machine.
4. The detached operating system disk and persistent disk are moved to the target datastore.
5. The virtual desktop machine is moved to the target datastore.
6. The operating system disk and persistent disk are reconnected to the linked clone virtual desktop machine.
7. View Composer resynchronizes the linked clone virtual desktop machines.
8. View Composer checks for the replica disk in the datastore and creates one if one does not already exist, as per the provisioning steps covered earlier in this article.
9. As per the recompose operation, the operating system disk for the linked clone gets deleted, and a new one is created and then customized.

The following diagram shows the rebalance operation.

Summary

In this article, we discussed the Horizon View architecture and the different components that make up the complete solution. We covered the key technologies, such as how linked clones work to optimize storage.
Creating Custom Reports and Notifications for vSphere

Packt
24 Mar 2015
34 min read
In this article by Philip Sellers, the author of PowerCLI Cookbook, you will cover the following topics:

- Getting alerts from a vSphere environment
- Basics of formatting output from PowerShell objects
- Sending output to CSV and HTML
- Reporting VM objects created during a predefined time period from the VI Events object
- Setting custom properties to add useful context to your virtual machines
- Using PowerShell native capabilities to schedule scripts

This article is all about leveraging the information available to you in PowerCLI. As much as any other topic, figuring out how to tap into the data that PowerCLI offers is as important as understanding the cmdlets and syntax of the language. However, once you obtain your data, you will need to alter its formatting and how it's returned to be used. PowerShell, and by extension PowerCLI, offers a big set of ways to control the formatting and display of information returned by its cmdlets and data objects. You will explore all of these topics with this article.

Getting alerts from a vSphere environment

Discovering the data available to you is the most difficult thing that you will learn and adopt in PowerCLI after learning the initial cmdlets and syntax. There is a large amount of data available to you through PowerCLI, but there are techniques to extract the data in a way that you can use. The Get-Member cmdlet is a great tool for discovering the properties that you can use. Sometimes, just listing the data returned by a cmdlet is enough; however, when a property contains other objects, Get-Member can provide context to know that the Alarm property is a Managed Object Reference (MoRef) data type. As your returned objects have properties that contain other objects, you can have multiple layers of data available for you to expose using PowerShell dot notation ($variable.property.property).
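Dot notation itself is plain PowerShell and can be tried without a vCenter connection. The following sketch builds a small nested object that loosely mimics the shape of the data you will see later in this topic; all of the property values are fabricated for illustration:

```powershell
# Plain PowerShell illustration of dot notation on nested objects.
# The structure loosely mimics a VM object; the values are made up.
$vm = [pscustomobject]@{
    Name          = "TestVM"
    ExtensionData = [pscustomobject]@{
        TriggeredAlarmState = [pscustomobject]@{
            OverallStatus = "red"
            Time          = Get-Date
        }
    }
}

# Walk down the layers with dot notation:
$vm.ExtensionData.TriggeredAlarmState.OverallStatus   # returns "red"
```

Piping any level of this object to Get-Member shows which properties hold simple values and which hold further objects you can drill into.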
The ExtensionData property found on most objects has a lot of data and objects related to the primary data. Sometimes, the data found in a property is an object identifier that doesn't mean much to an administrator but represents an object in vSphere. In these cases, the Get-View cmdlet can take that identifier and return human-readable data. This topic will walk you through the methods of accessing data and converting it to usable, human-readable data wherever needed so that you can leverage it in scripts.

To explore these methods, we will take a look at vSphere's built-in alert system. While PowerCLI has native cmdlets to report on the defined alarm states and actions, it doesn't have a native cmdlet to retrieve the triggered alarms on a particular object. To do this, you must get the datacenter, VMhost, VM, and other objects directly and look at data from the ExtensionData property.

Getting ready

To begin this topic, you will need a PowerCLI window and an active connection to vCenter. You should also check the vSphere Web Client or the vSphere Windows Client to see whether you have any active alarms and to know what to expect. If you do not have any active VM alarms, you can simulate an alarm condition using a utility such as HeavyLoad. For more information on generating an alarm, see the There's more... section of this topic.

How to do it…

In order to access data and convert it to usable, human-readable data, perform the following steps:

1. The first step is to retrieve all of the VMs on the system. A simple Get-VM cmdlet will return all VMs on the vCenter you're connected to. Within the VM object returned by Get-VM, one of the properties is ExtensionData. This property is an object that contains many additional properties and objects. One of these properties is TriggeredAlarmState:

   Get-VM | Where {$_.ExtensionData.TriggeredAlarmState -ne $null}

2. To dig into TriggeredAlarmState more, take the output of the previous cmdlet and store it in a variable.
This will allow you to enumerate the properties without having to wait for the Get-VM cmdlet to run. Add a Select -First 1 cmdlet to the command string so that only a single object is returned. This will help you look inside without having to deal with multiple VMs in the variable:

   $alarms = Get-VM | Where {$_.ExtensionData.TriggeredAlarmState -ne $null} | Select -First 1

3. Now that you have extracted an alarm, how do you get useful data about what type of alarm it is and which vSphere object has a problem? In this case, you have VM objects, since you used Get-VM to find the alarms. To see what is in the TriggeredAlarmState property, output the contents of TriggeredAlarmState and pipe it to Get-Member or its shortcut GM:

   $alarms.ExtensionData.TriggeredAlarmState | GM

   The following screenshot shows the output of the preceding command line:

4. List the data in the $alarms variable without the Get-Member cmdlet appended and view the data in a real alarm. The data returned does tell you the time when the alarm was triggered, the OverallStatus property (or severity) of the alarm, and whether the alarm has been acknowledged by an administrator, who acknowledged it, and at what time.

5. You will see that the Entity property contains a reference to a virtual machine. You can use the Get-View cmdlet on a reference to a VM, in this case the Entity property, and return the virtual machine name and other properties. You will also see that Alarm is referred to in a similar way, and we can extract usable information using Get-View also:

   Get-View $alarms.ExtensionData.TriggeredAlarmState.Entity
   Get-View $alarms.ExtensionData.TriggeredAlarmState.Alarm
On the other hand, the data returned by the Alarm view does not show the name or type of the alarm, but it does include an Info property. Since this is the most likely property with additional information, you should list its contents. To do so, enclose the Get-View cmdlet in parenthesis and then use dot notation to access the Info variable: (Get-View $alarms.ExtensionData.TriggeredAlarmState.Alarm).Info In the output from the Info property, you can see that the example alarm in the screenshot is a Virtual Machine CPU usage alarm. Your alarm can be different, but it should appear similar to this. After retrieving PowerShell objects that contain the data that you need, the easiest way to return the data is to use calculated expressions. Since the Get-VM cmdlet was the source for all lookup data, you will need to use this object with the calculated expressions to display the data. To do this, you will append a Select statement after the Get-VM and Where statement. Notice that you use the same Get-View statement, except that you change your variable to $_, which is the current object being passed into Select: Get-VM | Where {$_.ExtensionData.TriggeredAlarmState -ne $null} | Select Name, @{N="AlarmName";E={(Get-View $_.ExtensionData. TriggeredAlarmState.Alarm).Info.Name}}, @{N="AlarmDescription";E={(Get-View $_.ExtensionData. TriggeredAlarmState.Alarm).Info.Description}}, @ {N="TimeTriggered"; E={$_.ExtensionData.TriggeredAlarmState. Time}}, @{N="AlarmOverallStatus"; E={$_.ExtensionData. TriggeredAlarmState. OverallStatus}} How it works When the data you really need is several levels below the top-level properties of a data object, you need to use calculated expressions to return these at the top level. There are other techniques where you can build your own object with only the data you want returned, but in a large environment with thousands of objects in vSphere, the method in this topic will execute faster than looping through many objects to build a custom object. 
Calculated expressions are extremely powerful, since nearly anything can be done with expressions. More than that, you explored techniques to discover the data you want. Data exploration can provide you with incredible new capabilities. The point is that you need to know where the data is and how to pull it back to the top level.

There's more…

It is likely that your test environment has no alarms, and in this case, it might be up to you to create an alarm situation. One of the easiest conditions to control and create is heavy CPU load, using a CPU load-testing tool. JAM Software created a stress-testing tool named HeavyLoad. This utility can be loaded into any Windows VM on your test systems and can consume all of the available CPU that the VM is configured with. To be safe, configure the VM with a single vCPU, and the utility will consume all of the available CPU. Once you install the utility, go to the Test Options menu: uncheck the Stress GPU option, and ensure that Stress CPU and Allocate Memory are checked. The utility also has shortcut buttons on the menu bar that allow you to set these options. Click on the Start button (which looks like a Play button) and the utility begins to stress the VM.

For users who wish to run the same test but utilize Linux, StressLinux is a great option. StressLinux is a minimal distribution designed to create high load on an operating system.

See also

- The HeavyLoad utility, available on the JAM Software page at http://www.jam-software.com/heavyload/
- StressLinux, at http://www.stresslinux.org/sl/

Basics of formatting output from PowerShell objects

Anything that exists in a PowerShell object can be output as a report, e-mail, or editable file. Formatting the output is a simple task in PowerShell. Sometimes, the information you receive in the object is a long decimal number, but to make it more readable, you want to truncate the output to just a couple of decimal places.
In this section, you will take a look at the Format-Table, Format-Wide, and Format-List cmdlets. You will dig into the Format-Custom cmdlet and also take a look at the -f format operator. The truth is that native cmdlets do a great job of returning data using default formatting. When we start changing and adding our own data to the list of properties returned, the formatting can become unoptimized. Even in the values returned by a native cmdlet, some columns might be too narrow to display all of the information.

Getting ready

To begin this topic, you will need the PowerShell ISE.

How to do it…

In order to format the output from PowerShell objects, perform the following steps:

1. Run Add-PSSnapIn VMware.VimAutomation.Core in the PowerShell ISE to initialize a PowerCLI session and bring in the VMware cmdlets. Connect to your vCenter server.

2. Start with a simple object from a Get-VM cmdlet. The default output is in a table format. If you pipe the object to Format-Wide, it will change the default output into a multicolumn listing of a single property, just like running a dir /w command at the Windows Command Prompt. You can also use FW, an alias for Format-Wide:

   Get-VM | Format-Wide
   Get-VM | FW

3. If you take the same object and pipe it to Format-Table or its alias FT, you will receive the same output as the default output for Get-VM:

   Get-VM
   Get-VM | Format-Table

4. However, as soon as you begin to select a different order of properties, the default formatting disappears. Select the following four properties and watch the formatting change:

   Get-VM | Select Name, PowerState, NumCPU, MemoryGB | FT

5. To restore formatting to table output, you have a few choices. You can change the formatting on the data in the object using a Select statement and calculated expressions. You can also pass formatting through the Format-Table cmdlet.
While setting formatting in the Select statement changes the underlying data, using Format-Table doesn't change the data, only its display. The formatting looks essentially like a calculated expression in a Select statement: you provide Label, Expression, and formatting commands:

   Get-VM | Select * | FT Name, PowerState, NumCPU, @{Label="MemoryGB"; Expression={$_.MemoryGB}; FormatString="N2"; Alignment="left"}

6. If you have data in a number data type, you can convert it into a string using the ToString() method on the object. You can try this method on NumCpu:

   Get-VM | Select * | FT Name, PowerState, @{Label="Num CPUs"; Expression={($_.NumCpu).ToString()}; Alignment="left"}, @{Label="MemoryGB"; Expression={$_.MemoryGB}; FormatString="N2"; Alignment="left"}

7. The other method is to format with the -f operator, which is basically a .NET derivative. To better understand the format string, its structure is {<index>[,<alignment>][:<formatString>]}. The index selects which of the values being passed is transformed. The alignment is a numeric value: a positive number will right-align the value within that number of characters, and a negative number will left-align it. The formatString part defines the format to apply. In this example, let's take a datastore and compute the percentage of free disk space. The format string for percent is p:

   Get-Datastore | Select Name, @{N="FreePercent";E={"{0:p}" -f ($_.FreeSpaceGB / $_.CapacityGB)}}

8. To make the FreePercent column 15 characters wide, you change the format string to {0,15:p}:

   Get-Datastore | Select Name, @{N="FreePercent";E={"{0,15:p}" -f ($_.FreeSpaceGB / $_.CapacityGB)}}

How it works…

With the Format-Table, Format-List, and Format-Wide cmdlets, you can change the display of data coming from a PowerCLI object. These cmdlets all apply basic transformations without changing the data in the object. This is important to note because once the data is changed, it can prevent you from making further changes.
For instance, if you take the percentage example, after transforming the FreePercent property, it is stored as a string and no longer as a number, which means that you can't reformat it again. Applying a similar transformation from the Format-Table cmdlet would not alter your data. This doesn't matter when you're running a one-liner, but in a more complex script or routine, where you might need to not only output the data but also reuse it, changing the data in the object is a big deal.

There's more…

This topic only begins to tap the full potential of PowerShell's native -f format operator. There are hundreds of blog posts on this topic, with use cases and examples of how to produce the formatting that you are looking for. The following link also gives you more details about the operator and the formatting strings that you can use in your own code.

See also

For more information, refer to the PowerShell -f Format operator page available at http://ss64.com/ps/syntax-f-operator.html

Sending output to CSV and HTML

On the screen, the output is great, but there are many times when you need to share your results with other people. When sharing information, you want to choose a format that is easy to view and interpret. You might also want a format that is easy to manipulate and change.

Comma Separated Values (CSV) files allow you to take the output you generate and use it easily within spreadsheet software. This gives you the ability to easily compare the results from vSphere against internal tracking databases or other systems to find differences. It can also be useful to compare against service contracts for physical hosts, as an example.

HTML is a great choice for displaying information for reading, but not for manipulation. Since e-mails can be in an HTML format, converting the output from PowerCLI (or PowerShell) into HTML is an easy way to assemble an e-mail to other areas of the business.
What's even better about these cmdlets is their ease of use. If you have a data object in PowerCLI, all that you need to do is pipe that data object into the ConvertTo-CSV or ConvertTo-HTML cmdlets and you instantly get the formatted data. You might not be satisfied with the generated HTML alone, but like any other HTML, you can transform its look and formatting using CSS by adding a header.

In this topic, you will examine the conversion cmdlets with a simple set of Get- cmdlets. You will also take a look at trimming results using Select statements and formatting HTML results with CSS. This topic will pull a list of virtual machines and their basic properties to send to a manager who can reconcile it against internal records or system monitoring. It will export a CSV file that will be attached to the e-mail, and you will use the HTML to format a list in the body of the e-mail to send to the manager.

Getting ready

To begin this topic, you will need to use the PowerShell ISE.

How to do it…

In order to examine the conversion cmdlets using Get- cmdlets, trim results using Select statements, and format HTML results with CSS, perform the following steps:

Open the PowerShell ISE and run Add-PSSnapIn VMware.VimAutomation.Core to initialize a PowerCLI session within the ISE.

Again, you will use the Get-VM cmdlet as the base for this topic. The fields that we care about are the name of the VM, the number of CPUs, the amount of memory, and the description:

$VMs = Get-VM | Select Name, NumCPU, MemoryGB, Description

In addition to the top-level data, you also want to provide the IP address, hostname, and the operating system.
These are all available from the ExtensionData.Guest property:

$VMs = Get-VM | Select Name, NumCPU, MemoryGB, Description, @{N="Hostname";E={$_.ExtensionData.Guest.HostName}}, @{N="IP";E={$_.ExtensionData.Guest.IPAddress}}, @{N="OS";E={$_.ExtensionData.Guest.GuestFullName}}

The next step is to take this data and format it to be sent as an HTML e-mail. Converting the information to HTML is actually easy. Pipe the variable you created with the data into ConvertTo-HTML and store it in a new variable. You will need to reuse the data to convert it to a CSV file to attach:

$HTMLBody = $VMs | ConvertTo-HTML

If you were to output the contents of $HTMLBody, you would see that it is very plain, inheriting the defaults of the browser or e-mail program used to display it. To dress this up, you need to define some basic CSS to add some style for the <body>, <table>, <tr>, <td>, and <th> tags. You can add this by running the ConvertTo-HTML cmdlet again with the -PreContent parameter:

$css = "<style> body { font-family: Verdana, sans-serif; font-size: 14px; color: #666; background: #FFF; } table { width:100%; border-collapse:collapse; } table td, table th { border:1px solid #333; padding: 4px; } table th { text-align:left; padding: 4px; background-color:#BBB; color:#FFF; } </style>"
$HTMLBody = $VMs | ConvertTo-HTML -PreContent $css

It might also be nice to add the date and time generated to the end of the file. You can use the -PostContent parameter to add this:

$HTMLBody = $VMs | ConvertTo-HTML -PreContent $css -PostContent "<div><strong>Generated:</strong> $(Get-Date)</div>"

Now you have the HTML body of your message. To take the same data from $VMs and save it to a CSV file, you will need a writable directory; a good choice is your My Documents folder. You can obtain its path using an environment lookup:

$tempdir = [environment]::getfolderpath("mydocuments")

Now that you have a target directory, you can perform your export.
Pipe $VMs to Export-CSV and specify the path and filename:

$VMs | Export-CSV $tempdir\VM_Inventory.csv

At this point, you are ready to assemble an e-mail and send it along with your attachment. Most of the cmdlets are straightforward. You set up a $msg variable that is a MailMessage object. You create an Attachment object and populate it with your temporary filename, and then create an SMTP server object with the server name:

$msg = New-Object Net.Mail.MailMessage
$attachment = New-Object Net.Mail.Attachment("$tempdir\VM_Inventory.csv")
$smtpServer = New-Object Net.Mail.SmtpClient("hostname")

You set the From, To, and Subject parameters of the message. All of these are set with dot notation on the $msg variable:

$msg.From = "fromaddress@yourcompany.com"
$msg.To.Add("admin@yourcompany.com")
$msg.Subject = "Weekly VM Report"

You set the body you created earlier as $HTMLBody, but you need to run it through Out-String to convert any other data types to a pure string for e-mailing. This prevents an error where System.String[] appears instead of your content in part of the output:

$msg.Body = $HTMLBody | Out-String

You need to take the attachment and add it to the message:

$msg.Attachments.Add($attachment)

You need to set the message to an HTML format; otherwise, the HTML will be sent as plain text and not displayed as an HTML message:

$msg.IsBodyHtml = $true

Finally, you are ready to send the message using the $smtpServer variable that contains the mail server object. Pass the $msg variable to the server object using the Send method and it transmits the message via SMTP to the mail server:

$smtpServer.Send($msg)

Don't forget to clean up the temporary CSV file you generated. To do this, use the PowerShell Remove-Item cmdlet, which will remove the file from the filesystem.
Set the -Confirm parameter to $false to suppress any prompts:

Remove-Item $tempdir\VM_Inventory.csv -Confirm:$false

How it works…

Most of this topic relies on native PowerShell and less on the PowerCLI portions of the language. This is the beauty of PowerCLI: since it is based on PowerShell and is only an extension, you lose none of the functions of PowerShell, a very powerful set of commands in its own right.

The ConvertTo-HTML cmdlet is very easy to use. It requires no parameters to produce HTML, but the HTML it produces isn't the most legible if you display it. However, a bit of CSS goes a long way to improve the look of the output. Add some colors and style to the table and it becomes a really easy and quick way to format a mail message of data to be sent to a manager on a weekly basis.

The Export-CSV cmdlet lets you take the data returned by a cmdlet and convert it into an editable format. You can place the file onto a file share or e-mail it along, as you did in this topic.

This topic took you step by step through the process of creating a mail message, formatting it in HTML, and making sure that it's relayed as an HTML message. You also looked at how to attach a file. To send a mail, you define a mail server as an object and store it in a variable for reuse. You create a message object, store it in a variable, and then set all of the appropriate configuration on the message. For an attachment, you create a third object and define a file to be attached. That is set as a property on the message object, and then finally, the message object is sent using the server object.

There's more…

ConvertTo-HTML is just one of four conversion cmdlets in PowerShell. In addition to ConvertTo-HTML, ConvertTo-XML allows you to convert data objects into XML. ConvertTo-JSON allows you to convert a data object into the JSON format popular with web applications. ConvertTo-CSV is identical to Export-CSV except that it doesn't save the content immediately to a defined file.
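For example, a hedged sketch of using ConvertTo-Csv to alter the text before writing it yourself; -replace and Set-Content are standard PowerShell, and the filename is illustrative:

```powershell
# Convert in memory instead of writing straight to disk
$csvLines = $VMs | ConvertTo-Csv -NoTypeInformation

# Strip the double quotes ConvertTo-Csv places around every field,
# then save the cleaned lines to the file
$csvLines -replace '"', '' | Set-Content "$tempdir\VM_Inventory.csv"
```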
If you had a use case to manipulate the CSV before saving it, such as stripping the double quotes or making other alterations to the contents, you can use ConvertTo-CSV and then save it to a file at a later point in your script.

Reporting VM objects created during a predefined time period from VI Events object

An important auditing tool in your environment can be a report of when virtual machines were created, cloned, or deleted. Unlike snapshots, which store a created date on the snapshot, virtual machines don't have this property associated with them. Instead, you have to rely on the events log in vSphere to tell you when virtual machines were created.

PowerCLI has the Get-VIEvent cmdlet, which retrieves the last 1,000 events on the vCenter by default. The cmdlet can accept a parameter to include more than the last 1,000 events. The cmdlet also allows you to specify a start date, which lets you search for things within the past week or the past month.

At a high level, this topic works the same in both PowerCLI and the vSphere SDK for Perl (VI Perl). They both rely on getting the vSphere events and selecting the specific events that match your criteria. Even though you are looking for VM creation events in this topic, you will see that the code can be easily adapted to look for many other types of events.

Getting ready

To begin this topic, you will need a PowerCLI window and an active connection to a vCenter server.

How to do it…

In order to report VM objects created during a predefined time period from the VI Events object, perform the following steps:

You will use the Get-VIEvent cmdlet to retrieve the VM creation events for this topic. To begin, get a list of the last 50 events from the vCenter host using the -MaxSamples parameter:

Get-VIEvent -MaxSamples 50

If you pipe the output from the preceding cmdlet to Get-Member, you will see that this cmdlet can return a lot of different objects.
However, the type of object alone isn't really what you need to find the VM created events. Looking through the objects, they all include a GetType() method that returns the type of event, and inside the type there is a Name property. If you create a calculated expression using GetType() and then group on this expression, you will get a usable list of events you can search for. This list is also good for tracking the number of events your systems have encountered or generated:

Get-VIEvent -MaxSamples 2000 | Select @{N="Type";E={$_.GetType().Name}} | Group Type

In the preceding screenshot, you will see that VmClonedEvent, VmRemovedEvent, and VmCreatedEvent are listed. All of these have to do with creating or removing virtual machines in vSphere. Since you are looking for created events, VmCreatedEvent and VmClonedEvent are the two needed for this script. Write a Where statement to return only these events. To do this, we can use a regular expression with both event names and the -match PowerShell comparison operator:

Get-VIEvent -MaxSamples 2000 | Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"}

Next, you want to select just the properties that you want in your output. To do this, add a Select statement and reuse the calculated expression from Step 3. The VM name is in a Vm property with the type VMware.Vim.VmEventArgument, so you can create a calculated expression to return the VM name. To round out the output, you can include the FullFormattedMessage, CreatedTime, and UserName properties:

Get-VIEvent -MaxSamples 2000 | Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"} | Select @{N="Type";E={$_.GetType().Name}}, @{N="VMName";E={$_.Vm.Name}}, FullFormattedMessage, CreatedTime, UserName

The last thing you will want to do is go back and add a time frame to the Get-VIEvent cmdlet.
You can do this by specifying the -Start parameter along with (Get-Date).AddMonths(-1) to return the last month's events:

Get-VIEvent -Start (Get-Date).AddMonths(-1) -MaxSamples 2000 | Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"} | Select @{N="Type";E={$_.GetType().Name}}, @{N="VMName";E={$_.Vm.Name}}, FullFormattedMessage, CreatedTime, UserName

How it works…

The Get-VIEvent cmdlet drives the majority of this topic, but this topic only scratched the surface of the information you can unearth with it. As you saw in the screenshot, there are many different types of events that can be reported, queried, and acted upon from the vCenter server. Once you know which events you are looking for, it's a matter of scoping down the results with a Where statement. Last, you use calculated expressions to pull data that is several levels deep in the returned data object.

One of the primary techniques employed here is a regular expression used to search for the types of events you were interested in: VmCreatedEvent and VmClonedEvent. By combining a regular expression with the -match operator, you were able to use a quick and very understandable bit of code to find more than one type of object you needed to return.

There's more…

Regular Expressions (RegEx) are a big topic on their own. These types of searches can match any pattern that you can establish or, as in this topic, a number of defined values that you are searching for. RegEx are beyond the scope of this article, but they can be a big help anytime you have a pattern you need to search for and match, or perhaps more importantly, replace. You can use the -replace operator instead of -match not only to find things that match your pattern, but also to change them.
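Putting the whole recipe together, you could wrap the event report in a small function so the time period and sample count become parameters. A sketch, assuming an existing vCenter connection (the function name is illustrative):

```powershell
function Get-VMCreationEvents {
    param(
        [datetime]$Start = (Get-Date).AddMonths(-1),   # look back one month by default
        [int]$MaxSamples = 2000
    )
    Get-VIEvent -Start $Start -MaxSamples $MaxSamples |
        Where-Object { $_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)" } |
        Select-Object @{N="Type";E={$_.GetType().Name}},
                      @{N="VMName";E={$_.Vm.Name}},
                      FullFormattedMessage, CreatedTime, UserName
}

# Usage: creation and clone events from the past week
Get-VMCreationEvents -Start (Get-Date).AddDays(-7)
```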
See also

For more information on Regular Expressions, refer to http://ss64.com/ps/syntax-regex.html

The PowerShell.com page Text and Regular Expressions is available at http://powershell.com/cs/blogs/ebookv2/archive/2012/03/20/chapter-13-text-and-regular-expressions.aspx

Setting custom properties to add useful context to your virtual machines

Building on the use case for the Get-VIEvent cmdlet, Alan Renouf of VMware's PowerCLI team has a useful script posted on his personal blog (refer to the See also section) that helps you pull the created date and the user who created a virtual machine and populate these into custom attributes. This is a great use for custom attributes on virtual machines and makes useful information available that is not normally visible. This is a process that needs to be run often to pick up details for virtual machines that have been created.

Rather than looking at a specific VM and trying to go back and find its creation date, as Alan's script does, in this topic you will take a different approach, building on the previous topic, and populate the information from the creation events that were found. Maintenance in this form is easier: find the creation events for the last week, run the script weekly, and update the VMs with the data in the object, rather than looking for VMs with missing data and searching through all of the events.

This topic assumes that you are using a Windows system that is joined to AD on the same domain as your vCenter. It also assumes that you have loaded the Remote Server Administration Tools for Windows so that the Active Directory PowerShell modules are available. This is a separate download for Windows 7. The Active Directory Module for PowerShell can be enabled on Windows 7, Windows 8, Windows Server 2008, and Windows Server 2012 in the Programs and Features control panel under Turn Windows features on or off.

Getting ready

To begin this script, you will need the PowerShell ISE.
How to do it…

In order to set custom properties to add useful context to your virtual machines, perform the following steps:

Open the PowerShell ISE and run Add-PSSnapIn VMware.VimAutomation.Core to initialize a PowerCLI session within the ISE.

The first step is to create custom attributes in vCenter for the CreatedBy and CreateDate values:

New-CustomAttribute -TargetType VirtualMachine -Name CreatedBy
New-CustomAttribute -TargetType VirtualMachine -Name CreateDate

Before you begin the scripting, you will need to run ImportSystemModules to bring in the Active Directory cmdlets that you will use later to look up the username and reference it back to a display name:

ImportSystemModules

Next, you need to locate and pull out all of the creation events with the same code as in the Reporting VM objects created during a predefined time period from VI Events object topic. You will assign the events to a variable for processing in a loop; however, you will also want to change the period to 1 week (7 days) instead of 1 month:

$Events = Get-VIEvent -Start (Get-Date).AddDays(-7) -MaxSamples 25000 | Where {$_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)"}

The next step is to begin a ForEach loop to pull the data and populate it into the custom attributes:

ForEach ($Event in $Events) {

The first thing to do in the loop is to look up the VM referenced in the event's Vm parameter by name using Get-VM:

$VM = Get-VM -Name $Event.Vm.Name

Next, you can use the CreatedTime parameter on the event and set this as a custom attribute on the VM using the Set-Annotation cmdlet:

$VM | Set-Annotation -CustomAttribute "CreateDate" -Value $Event.CreatedTime

Next, you can use the Username parameter to look up the display name of the user account that created the VM using the Active Directory cmdlets.
For the Active Directory cmdlets to be available, your client system or server needs to have the Microsoft Remote Server Administration Tools (RSAT) installed. The data coming from $Event.Username is in DOMAIN\username format. You need just the username to perform a lookup with Get-AdUser, so split on the backslash and return only the second item in the resulting array. After the lookup, the display name that you will want to use is in the Name property. You can retrieve it with dot notation:

$User = (($Event.UserName.split("\"))[1])
$DisplayName = (Get-AdUser $User).Name

Then set this display name as a custom attribute on the VM using the Set-Annotation cmdlet:

$VM | Set-Annotation -CustomAttribute "CreatedBy" -Value $DisplayName

Finally, close the ForEach loop:

} <# End ForEach #>

How it works…

This topic works by leveraging the Get-VIEvent cmdlet to search for events in the log from the last number of days. In larger environments, there might be thousands of events per day, and you might need to expand the -MaxSamples value well beyond the number in this example. The topic looks through the log, and the Where statement returns only the creation events. Once you have the object with all of the creation events, you can loop through it and pull out the username of the person who created each virtual machine and the time it was created. Then, you just need to populate the data into the custom attributes created.

There's more…

Combine this script with the next topic and you have a great solution for scheduling this routine to run on a daily basis. Running it daily would certainly cut down on the number of events you need to process through to find and update the virtual machines that have been created. You should absolutely go and read Alan Renouf's blog post on which this topic is based.
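As a recap, the pieces of this recipe assembled into one sketch, assuming the two custom attributes already exist, RSAT provides Get-AdUser, and the vCenter hostname is illustrative:

```powershell
Add-PSSnapIn VMware.VimAutomation.Core
# Connect-VIServer vcenter.yourcompany.com   # hostname is illustrative

# Creation and clone events from the past week
$Events = Get-VIEvent -Start (Get-Date).AddDays(-7) -MaxSamples 25000 |
    Where-Object { $_.GetType().Name -match "(VmCreatedEvent|VmClonedEvent)" }

ForEach ($Event in $Events) {
    $VM = Get-VM -Name $Event.Vm.Name
    $VM | Set-Annotation -CustomAttribute "CreateDate" -Value $Event.CreatedTime

    # UserName arrives as DOMAIN\username; keep only the username part
    $User = ($Event.UserName.Split("\"))[1]
    $DisplayName = (Get-AdUser $User).Name
    $VM | Set-Annotation -CustomAttribute "CreatedBy" -Value $DisplayName
}
```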
The primary difference between this topic and the one Alan presents is the use of the native Windows Active Directory PowerShell lookups in this topic instead of the Quest Active Directory PowerShell cmdlets.

See also

Virtu-Al.net: Who created that VM? is available at http://www.virtu-al.net/2010/02/23/who-created-that-vm/

Using PowerShell native capabilities to schedule scripts

There is potentially a better and easier way to schedule your processes to run from PowerShell and PowerCLI, known as Scheduled Jobs. Scheduled Jobs were introduced in PowerShell 3.0 and distributed as part of the Windows Management Framework 3.0 and higher. While Scheduled Tasks can execute any Windows batch file or executable, Scheduled Jobs are specific to PowerShell and are used to create background jobs that run once or on a specified schedule.

Scheduled Jobs appear in the Windows Task Scheduler and can be managed with the scheduled job cmdlets of PowerShell; the only difference is that the scheduled job cmdlets cannot manage scheduled tasks. These jobs are stored in the Microsoft\Windows\PowerShell\ScheduledJobs path of the Windows Task Scheduler. You can see and edit them through the management console in Windows after creation.

What's even greater about Scheduled Jobs in PowerShell is that you are not forced into creating a .ps1 file for every new job you need to run. If you have a PowerCLI one-liner that provides all of the functionality you need, you can simply include it in a job creation cmdlet without ever needing to save it anywhere.

Getting ready

To begin this topic, you will need a PowerCLI window with an active connection to a vCenter server.

How to do it…

In order to schedule scripts using the native capabilities of PowerShell, perform the following steps:

If you are running PowerCLI on systems older than Windows 8 or Windows Server 2012, there's a chance that you are running PowerShell 2.0, and you will need to upgrade in order to use this.
To check, inspect $PSVersionTable.PSVersion to see which version is installed on your system. If it is less than version 3.0, upgrade before continuing this topic.

Think back to a script you have already written, the one to find and remove snapshots over 30 days old:

Get-Snapshot -VM * | Where {$_.Created -LT (Get-Date).AddDays(-30)} | Remove-Snapshot -Confirm:$false

To schedule a new job, the first thing you need to think about is what triggers your job to run. To define a new trigger, you use the New-JobTrigger cmdlet:

$WeeklySundayAt6AM = New-JobTrigger -Weekly -At "6:00 AM" -DaysOfWeek Sunday -WeeksInterval 1

Like scheduled tasks, there are some options that can be set for a scheduled job. These include whether to wake the system to run:

$Options = New-ScheduledJobOption -WakeToRun -StartIfIdle -MultipleInstancePolicy Queue

Next, you will use the Register-ScheduledJob cmdlet. This cmdlet accepts a parameter named ScriptBlock, and this is where you will specify the script that you have written. This method works best with one-liners, or scripts that execute in a single line of piped cmdlets. Since this is PowerCLI and not just PowerShell, you will need to add the VMware cmdlets and connect to vCenter at the beginning of the script block. You also need to specify the -Trigger and -ScheduledJobOption parameters that were defined in the previous two steps:

Register-ScheduledJob -Name "Cleanup 30 Day Snapshots" -ScriptBlock { Add-PSSnapIn VMware.VimAutomation.Core; Connect-VIServer server; Get-Snapshot -VM * | Where {$_.Created -LT (Get-Date).AddDays(-30)} | Remove-Snapshot -Confirm:$false } -Trigger $WeeklySundayAt6AM -ScheduledJobOption $Options

You are not restricted to only running a script block. If you have a routine in a .ps1 file, you can easily run it from a scheduled job also.
For illustration, if you have a .ps1 file stored in c:\Scripts named 30DaySnaps.ps1, you can use the following cmdlet to register a job:

Register-ScheduledJob -Name "Cleanup 30 Day Snapshots" -FilePath c:\Scripts\30DaySnaps.ps1 -Trigger $WeeklySundayAt6AM -ScheduledJobOption $Options

Rather than defining all of the PowerShell inside the scheduled job, a more maintainable method can be to write a module and then call its function from the scheduled job. One other requirement is that Single Sign-On should be configured so that Connect-VIServer works correctly in the script:

Register-ScheduledJob -Name "Cleanup 30 Day Snapshots" -ScriptBlock { Add-PSSnapIn VMware.VimAutomation.Core; Connect-VIServer server; Import-Module 30DaySnaps; Remove-30DaySnaps -VM * } -Trigger $WeeklySundayAt6AM -ScheduledJobOption $Options

How it works…

This topic leverages the scheduled jobs framework developed specifically for running PowerShell as scheduled tasks. It doesn't require you to configure all of the extra settings that you have seen in previous examples of scheduled tasks. These are PowerShell native cmdlets that know how to implement PowerShell on a schedule. One thing to keep in mind is that these jobs begin with a normal PowerShell session, one that knows nothing about PowerCLI by default. You will need to include Add-PSSnapIn VMware.VimAutomation.Core in each script block or .ps1 file that you use with a scheduled job.

There's more…

There is a full library of cmdlets to implement and maintain scheduled jobs. Set-ScheduledJob allows you to change the settings of a registered scheduled job on a Windows system. You can disable and enable scheduled jobs using the Disable-ScheduledJob and Enable-ScheduledJob cmdlets. This allows you to pause the execution of a job during maintenance, or for other reasons, without needing to remove and re-create the job. This is especially helpful since the script blocks are inside the job and not saved in a separate .ps1 file.
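A short sketch of pausing and resuming the job from this recipe (the job name matches the earlier example):

```powershell
# Pause the snapshot-cleanup job during a maintenance window…
Disable-ScheduledJob -Name "Cleanup 30 Day Snapshots"

# …and resume it afterwards without recreating the trigger or options
Enable-ScheduledJob -Name "Cleanup 30 Day Snapshots"
```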
You can also configure remote scheduled jobs on other systems using the Invoke-Command PowerShell cmdlet. This concept is shown in examples on Microsoft TechNet in the documentation for the Register-ScheduledJob cmdlet.

In addition to scheduling new jobs, you can remove jobs using the Unregister-ScheduledJob cmdlet. This cmdlet requires one of three identifying properties to unschedule a job: you can pass -Name with a string, -ID with the number identifying the job, or an object reference to the scheduled job with -InputObject. You can combine this with the Get-ScheduledJob cmdlet to find the job and pass the object by pipeline.

See also

To read more about the Microsoft TechNet PSScheduledJob cmdlets, refer to http://technet.microsoft.com/en-us/library/hh849778.aspx

Summary

This article was all about leveraging the information and data from PowerCLI, as well as how we can format and display this information.
Packt
04 Mar 2015
21 min read

Working with VMware Infrastructure

In this article by Daniel Langenhan, the author of VMware vRealize Orchestrator Cookbook, we will take a closer look at how Orchestrator interacts with vCenter Server and vRealize Automation (vRA—formerly known as vCloud Automation Center, vCAC). vRA uses Orchestrator to access and automate infrastructure using Orchestrator plugins. We will take a look at how to make Orchestrator workflows available to vRA. We will investigate the following recipes:

Unmounting all the CD-ROMs of all VMs in a cluster
Provisioning a VM from a template
An approval process for VM provisioning

(For more resources related to this topic, see here.)

There are quite a lot of plugins for Orchestrator to interact with VMware infrastructure and programs:

vCenter Server
vCloud Director (vCD)
vRealize Automation (vRA—formerly known as vCloud Automation Center, vCAC)
Site Recovery Manager (SRM)
VMware Auto Deploy
Horizon (View and Virtual Desktops)
vRealize Configuration Manager (earlier known as vCenter Configuration Manager)
vCenter Update Manager
vCenter Operations Manager, vCOps (only example packages)

VMware, as of the writing of this article, is still renaming its products. An overview of all plugins and their names and download links can be found at http://www.vcoteam.info/links/plug-ins.html.

There are quite a lot of plugins, and we will not be able to cover all of them, so we will focus on the one that is most used, vCenter. Sadly, vCloud Director is earmarked by VMware to disappear for everyone but service providers, so there is no real need to show any workflow for it. We will also work with vRA and see how it interacts with Orchestrator.

vSphere automation

The interaction between Orchestrator and vCenter is done using the vCenter API. Here is the explanation of the interaction, which you can refer to in the following figure.
A user starts an Orchestrator workflow (1), either in an interactive way via the vSphere Web Client, the Orchestrator Web Operator, or the Orchestrator Client, or via the API. The workflow in Orchestrator will then send a job (2) to vCenter and receive a task ID back (type VC:Task). vCenter will then start enacting the job (3). Using the vim3WaitTaskEnd action (4), Orchestrator pauses until the task has been completed. If we do not use the wait task, we can't be certain whether the task has ended or failed. It is extremely important to use the vim3WaitTaskEnd action whenever we send a job to vCenter. When the wait task reports that the job has finished, the workflow will be marked as finished.

The vCenter MoRef

The MoRef (Managed Object Reference) is a unique ID for every object inside vCenter. MoRefs are basically strings; some examples are shown here:

VM: vm-301
vSwitch network: network-312
dvPortGroup: dvportgroup-242
Datastore: datastore-101
ESXi host: host-44
Data center: datacenter-21
Cluster: domain-c41

The MoRefs are typically stored in the .id or .key attribute of the Orchestrator API object. For example, the MoRef of a vSwitch network is VC:Network.id. To browse for MoRefs, you can use the Managed Object Browser (MOB), documented at https://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.pg.doc/PG_Appx_Using_MOB.20.1.html.

The vim3WaitTaskEnd action

As already said, vim3WaitTaskEnd is one of the most central actions while interacting with vCenter. The action has the following variables:

IN, vcTask (VC:Task): Carries the reconfiguration task from the script to the wait task
IN, progress (Boolean): Writes the progress of a task, in percent, to the logs
IN, pollRate (Number): How often the action should check for task completion in vCenter
OUT, ActionResult (Any): Returns the task's result

The wait task will check, at regular intervals (pollRate), the status of a task that has been submitted to vCenter.
The task can have the following states:

Queued: The task is queued and will be executed as soon as possible.
Running: The task is currently running. If progress is set to true, the progress in percent will be displayed in the logs.
Success: The task finished successfully.
Error: The task has failed and an error will be thrown.

Other vCenter wait actions

There are actually five waiting tasks that come with the vCenter Server plugin. Here's an overview of the other four:

vim3WaitToolsStarted: Waits until the VMware Tools are started on a VM or until a timeout is reached.
vim3WaitForPrincipalIP: Waits until the VMware Tools report the primary IP of a VM or until a timeout is reached. This typically indicates that the operating system is ready to receive network traffic. The action returns the primary IP.
vim3WaitDnsNameInTools: Waits until the VMware Tools report a given DNS name of a VM or until a timeout is reached. The in-parameter addNumberToName is not used and can be set to Null.
WaitTaskEndOrVMQuestion: Waits until a task is finished or a VM raises a question. A vCenter question requires user interaction.

vRealize Automation (vRA)

Automation has changed since the beginning of Orchestrator. Before tools such as vCloud Director or vCloud Automation Center (vCAC)/vRealize Automation (vRA) existed, Orchestrator was the main tool for automating vCenter resources. With version 6.2 of vCloud Automation Center (vCAC), the product was renamed vRealize Automation. Now vRA is deemed to become the central cornerstone of the VMware automation effort. vRealize Orchestrator (vRO) is used by vRA to interact with and automate VMware and non-VMware products and infrastructure elements.

Throughout the various vCAC/vRA iterations, the role of Orchestrator has changed substantially. Orchestrator started off as an extension to vCAC and became a central part of vRA.
- In vCAC 5.x, Orchestrator was only an extension of the IaaS life cycle; it was tied in using the stubs.
- vCAC 6.0 integrated Orchestrator as an XaaS (Everything as a Service) service using the Advanced Service Designer (ASD).
- In vCAC 6.1, Orchestrator is used to perform all VMware NSX operations (VMware's network virtualization and automation product), meaning that it became even more of a central part of the IaaS services.
- With vCAC 6.2, the Advanced Service Designer (ASD) was enhanced to allow more complex form designs, allowing better leverage of Orchestrator workflows.

As you can see in the following figure, vRA connects to the vCenter Server using an infrastructure endpoint that allows vRA to conduct basic infrastructure actions, such as power operations, cloning, and so on. It doesn't allow any complex interactions with the vSphere infrastructure, such as HA configurations. Using the Advanced Service Endpoints, vRA integrates the Orchestrator (vRO) plugins as additional services, which allows vRA to offer the entire plugin infrastructure as services. The vCenter Server, AD, and PowerShell plugins are typical integrations used with vRA. Using the Advanced Service Designer (ASD), you can create integrations that use Orchestrator workflows. ASD allows you to offer Orchestrator workflows as vRA catalog items, making it possible for tenants to access any IT service that can be configured with Orchestrator via its plugins. The following diagram shows an example using the Active Directory plugin. The Orchestrator plugin provides access to the AD services. By creating a custom resource using the exposed AD infrastructure, we can create a service blueprint and resource actions, both of which are based on Orchestrator workflows that use the AD plugin. The other method of integrating Orchestrator into the IaaS life cycle, which was predominantly used in vCAC 5.x, was to use the stubs.
The build process of a VM has several steps; each step can be assigned a customizable workflow (called a stub). You can configure vRA to run an Orchestrator workflow at these stubs in order to perform customized actions. Such actions could be to change the VM's HA or DRS configuration, or to use the guest integration to install or configure a program on the VM.

Installation

How to install and configure vRA is out of the scope of this article, but take a look at http://www.kendrickcoleman.com/index.php/Tech-Blog/how-to-install-vcloud-automation-center-vcac-60-part-1-identity-appliance.html for more information. If you don't have the hardware or the time to install vRA yourself, you can use the VMware Hands-on Labs, which can be accessed after clicking on Try for Free at http://hol.vmware.com.

The vRA Orchestrator plugin

Due to the renaming, the vRA plugin is called vRealize Orchestrator vRA Plug-in 6.2.0; however, the file you download and use is named o11nplugin-vcac-6.2.0-2287231.vmoapp. The plugin currently creates a workflow folder called vCloud Automation Center.

vRA-integrated Orchestrator

The vRA appliance comes with an installed and configured vRO instance; however, the best practice for a production environment is to use a dedicated Orchestrator installation, and an Orchestrator cluster is even better.

Dynamic Types or XaaS

XaaS means Everything (X) as a Service. The introduction of Dynamic Types in Orchestrator version 5.5.1 does exactly that; it allows you to build your own plugins and interact with infrastructure that has not yet received its own plugin. Take a look at this article by Christophe Decanini, which integrates Twitter with Orchestrator using Dynamic Types: http://www.vcoteam.info/articles/learn-vco/282-dynamic-types-tutorial-implement-your-own-twitter-plug-in-without-any-scripting.html.

Read more…

To read more about Orchestrator integration with vRA, please take a look at the official VMware documentation.
Please note that the official documentation you need to look at is about vRealize Automation, and not about vCloud Automation Center; as of writing this article, the documentation can be found at https://www.vmware.com/support/pubs/vrealize-automation-pubs.html.

- The document called Advanced Service Design deals with vRO and the Advanced Service Designer
- The document called Machine Extensibility discusses customization using stubs

Unmounting all the CD-ROMs of all VMs in a cluster

This is an easy recipe to start with, but one that you can really make work for your existing infrastructure. The workflow will unmount all CD-ROMs from a running VM. A mounted CD-ROM may block a VM from being vMotioned.

Getting ready

We need a VM that can mount a CD-ROM either as an ISO from a host or from the client. Before you start the workflow, make sure that the VM is powered on and has an ISO connected to it.

How to do it...

Create a new workflow with the following variables:

- cluster (VC:ClusterComputeResource, IN): used to input the cluster
- clusterVMs (Array of VC:VirtualMachine, Attribute): used to capture all VMs in the cluster

Add the getAllVMsOfCluster action to the schema and assign the cluster in-parameter to it, with the clusterVMs attribute as its actionResult. Now, add a Foreach element to the schema and assign it the workflow Disconnect all detachable devices from a running virtual machine. Assign clusterVMs as a parameter to the Foreach element. Save and run the workflow.

How it works...

This recipe shows how fast and easily you can design solutions that help you with everyday vCenter problems. The problem is that VMs that have CD-ROMs or floppies mounted may experience problems using vMotion, making it impossible for them to be used with DRS. The reality is that a lot of admins mount CD-ROMs and then forget to disconnect them.
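One simple refinement of this recipe is an exclusion list that removes selected VMs before the Foreach element processes them. The sketch below is plain JavaScript; the function name and the use of VM names rather than VC:VirtualMachine objects are our own illustrative assumptions.

```javascript
// Hypothetical exclusion filter: given the VM names of a cluster and a
// list of names to skip, return only the VMs that should be processed.
function filterExcluded(vmNames, exclusionList) {
  var result = [];
  for (var i = 0; i < vmNames.length; i++) {
    if (exclusionList.indexOf(vmNames[i]) === -1) {
      result.push(vmNames[i]);
    }
  }
  return result;
}

// filterExcluded(["web01", "db01", "iso-builder"], ["iso-builder"])
// returns ["web01", "db01"]
```

In a real workflow, the exclusion list could live in a configuration element so that it can be maintained without editing the workflow itself.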
Scheduling this workflow every evening just before the nighttime backups will make sure that a production cluster is able to make full use of DRS and is therefore better load-balanced. You can improve this workflow by integrating an exclusion list.

See also

Refer to the example workflow, 7.01 UnMount CD-ROM from Cluster.

Provisioning a VM from a template

In this recipe, we will build a deployment workflow for Windows and Linux VMs. We will learn how to create workflows and reduce the number of input variables.

Getting ready

We need a Linux or Windows template that we can clone and provision.

How to do it…

We have split this recipe into two sections. In the first section, we will create a configuration element, and in the second, we will create the workflow.

Creating a configuration

We will use a configuration for all reusable variables. Build a configuration element that contains the following items:

- productId (String): the Windows product ID (the licensing code)
- joinDomain (String): the Windows domain FQDN to join
- domainAdmin (Credential): the credentials used to join the domain
- licenseMode (VC:CustomizationLicenseDataMode): for example, perServer
- licenseUsers (Number): the number of licensed concurrent users
- inTimezone (Enums:MSTimeZone): the time zone
- fullName (String): the full name of the user
- orgName (String): the organization name
- newAdminPassword (String): the new admin password
- dnsServerList (Array of String): the list of DNS servers
- dnsDomain (String): the DNS domain
- gateway (Array of String): the list of gateways

Creating the base workflow

Now we will create the base workflow:

1. Create the workflow as shown in the following figure by adding the given elements:
   - Clone, Windows with single NIC and credential
   - Clone, Linux with single NIC
   - Custom decision
2. Use the Clone, Windows… workflow to create all variables. Link up the ones that you have defined in the configuration as attributes.
The rest are defined as follows:

- vmName (String, IN): the new virtual machine's name
- vm (VC:VirtualMachine, IN): the virtual machine to clone
- folder (VC:VmFolder, IN): the virtual machine folder
- datastore (VC:Datastore, IN): the datastore in which you store the virtual machine
- pool (VC:ResourcePool, IN): the resource pool in which you create the virtual machine
- network (VC:Network, IN): the network to which you attach the virtual network interface
- ipAddress (String, IN): the fixed valid IP address
- subnetMask (String, IN): the subnet mask
- template (Boolean, Attribute): value No; mark the new VM as a template
- powerOn (Boolean, Attribute): value Yes; power on the VM after creation
- doSysprep (Boolean, Attribute): value Yes; run Windows Sysprep
- dhcp (Boolean, Attribute): value No; use DHCP
- newVM (VC:VirtualMachine, OUT): the newly-created VM

The following sub-workflow in-parameters will be set to special values:

Clone, Windows with single NIC and credential:
- host: Null
- joinWorkgroup: Null
- macAddress: Null
- netBIOS: Null
- primaryWINS: Null
- secondaryWINS: Null
- name: vmName
- clientName: vmName

Clone, Linux with single NIC:
- host: Null
- macAddress: Null
- name: vmName
- clientName: vmName

Define the in-parameter vm as input for the Custom decision and add the following script. The script checks whether the name of the guest OS contains the word Microsoft:

guestOS = vm.config.guestFullName;
System.log(guestOS);
if (guestOS.indexOf("Microsoft") >= 0) {
    return true;
} else {
    return false;
}

Save and run the workflow. This workflow will now create a new VM from an existing VM and customize it with a fixed IP.

How it works…

As you can see, creating workflows to automate vCenter deployments is pretty straightforward. Dealing with the various in-parameters of workflows can be quite overwhelming. The best way to deal with this problem is to hide variables away, either by defining them centrally using a configuration or by defining them locally as attributes.
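The custom decision above boils down to a single string check. Extracted as a stand-alone function (the name isWindowsGuest is ours, for illustration; inside the decision element you would keep the return-true/return-false form), it reads:

```javascript
// The decision logic from the recipe as a plain function: a guest OS
// whose full name contains "Microsoft" is treated as a Windows guest.
function isWindowsGuest(guestFullName) {
  return guestFullName.indexOf("Microsoft") >= 0;
}

// isWindowsGuest("Microsoft Windows Server 2012 (64-bit)") is true
// isWindowsGuest("Red Hat Enterprise Linux 6 (64-bit)") is false
```

Testing the logic like this outside Orchestrator makes it easy to verify against the exact guestFullName strings your templates report.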
Using configurations has the advantage that you can create them once and reuse them as needed. You can even push the concept a bit further by defining multiple configurations for multiple purposes, such as different environments.

While creating a new workflow for automation, a typical approach is as follows:

1. Look for a workflow that does what you need.
2. Run the workflow normally to check out what it actually does.
3. Either create a new workflow that uses the original, or duplicate and edit the one you tried, modifying it until it does what you want.

A fast way to deal with a lot of variables is to drag every element you need into the schema and then use the binding to create the variables as needed. You may have noticed that this workflow only lets you select vSwitch networks, not distributed vSwitch networks. You can improve this workflow with the following features:

- Read the existing Sysprep information stored in your vCenter Server
- Generate different predefined configurations (for example, DEV or Prod)

There's more...

We can improve the workflow by implementing the ability to change the vCPU count and the memory of the VM. Follow these steps to implement it:

1. Move the out-parameter newVM to be an attribute.
2. Add the following variables:
   - vCPU (Number, IN): the number of vCPUs
   - Memory (Number, IN): the amount of VM memory
   - vcTask (VC:Task, Attribute): carries the reconfiguration task from the script to the wait task
   - progress (Boolean, Attribute): value No; used by vim3WaitTaskEnd
   - pollRate (Number, Attribute): value 5; used by vim3WaitTaskEnd
   - ActionResult (Any, Attribute): used by vim3WaitTaskEnd
3. Add the following actions and workflows according to the next figure:
   - shutdownVMAndForce
   - changeVMvCPU
   - vim3WaitTaskEnd
   - changeVMRAM
   - Start virtual machine
4. Bind newVM to all the appropriate input parameters of the added actions and workflows.
5. Bind the actionResults (VC:Task) of the change actions to the vim3WaitTaskEnd tasks.
See also

Refer to the example workflows, 7.02.1 Provision VM (Base) and 7.02.2 Provision VM (HW custom), as well as the configuration element, 7 VM provisioning.

An approval process for VM provisioning

In this recipe, we will see how to create a workflow that waits for an approver to approve the VM creation before provisioning it. We will learn how to combine mail and external events in a workflow to make it interact with different users.

Getting ready

For this recipe, we first need the provisioning workflow that we created in the Provisioning a VM from a template recipe. You can use the example workflow, 7.02.1 Provision VM (Base). Additionally, we need a functional e-mail system as well as a workflow to send e-mails. You can use the example workflow, 4.02.1 SendMail, as well as its configuration item, 4.2.1 Working with e-mail.

How to do it…

We will split this recipe into three parts. First, we will create a configuration element; then, we will create the workflow; and lastly, we will use a presentation to make the workflow usable.

Creating a configuration element

We will use a configuration for all reusable variables. Build a configuration element that contains the following items:

- templates (Array/VC:VirtualMachine): all the VMs that serve as templates
- folders (Array/VC:VmFolder): all the VM folders that are targets for VM provisioning
- networks (Array/VC:Network): all VM networks that are targets for VM provisioning
- resourcePools (Array/VC:ResourcePool): all resource pools that are targets for VM provisioning
- datastores (Array/VC:Datastore): all datastores that are targets for VM provisioning
- daysToApproval (Number): the number of days the approval should be available for
- approver (String): the e-mail address of the approver

Please note that you also have to define or use the configuration elements for SendMail, as well as the Provision VM workflows.
You can use the examples contained in the example package.

Creating a workflow

Create a new workflow and add the following variables:

- mailRequester (String, IN): the e-mail address of the requester
- vmName (String, IN): the name of the new virtual machine
- vm (VC:VirtualMachine, IN): the virtual machine to be cloned
- folder (VC:VmFolder, IN): the virtual machine folder
- datastore (VC:Datastore, IN): the datastore in which you store the virtual machine
- pool (VC:ResourcePool, IN): the resource pool in which you create the virtual machine
- network (VC:Network, IN): the network to which you attach the virtual network interface
- ipAddress (String, IN): the fixed valid IP address
- subnetMask (String, IN): the subnet mask
- isExternalEvent (Boolean, Attribute): a value of true defines this event as external
- mailApproverSubject (String, Attribute): the subject line of the mail sent to the approver
- mailApproverContent (String, Attribute): the content of the mail sent to the approver
- mailRequesterSubject (String, Attribute): the subject line of the mail sent to the requester when the VM is provisioned
- mailRequesterContent (String, Attribute): the content of the mail sent to the requester when the VM is provisioned
- mailRequesterDeclinedSubject (String, Attribute): the subject line of the mail sent to the requester when the VM is declined
- mailRequesterDeclinedContent (String, Attribute): the content of the mail sent to the requester when the VM is declined
- eventName (String, Attribute): the name of the external event
- endDate (Date, Attribute): the end date for the wait on the external event
- approvalSuccess (Boolean, Attribute): whether the VM has been approved

Now add all the attributes we defined in the configuration element and link them to the configuration.
Create the workflow as shown in the following figure by adding the given elements:

- Scriptable task
- 4.02.1 SendMail (example workflow)
- Wait for custom event
- Decision
- Provision VM (example workflow)

Edit the scriptable task and bind the following variables to it:

- In: vmName, ipAddress, mailRequester, template, approver, daysToApproval
- Out: mailApproverSubject, mailApproverContent, mailRequesterSubject, mailRequesterContent, mailRequesterDeclinedSubject, mailRequesterDeclinedContent, eventName, endDate

Add the following script to the scriptable task:

//construct event name
eventName = "provision-" + vmName;

//add days to today for approval
var today = new Date();
var endDate = new Date(today);
endDate.setDate(today.getDate() + daysToApproval);

//construct external URL for approval
var myURL = new URL();
myURL = System.customEventUrl(eventName, false);
externalURL = myURL.url;

//mail to approver
mailApproverSubject = "Approval needed: " + vmName;
mailApproverContent = "Dear Approver,\n the user " + mailRequester + " would like to provision a VM from template " + template.name + ".\n To approve please click here: " + externalURL;

//VM provisioned
mailRequesterSubject = "VM ready: " + vmName;
mailRequesterContent = "Dear Requester,\n the VM " + vmName + " has been provisioned and is now available under IP: " + ipAddress;

//declined
mailRequesterDeclinedSubject = "Declined: " + vmName;
mailRequesterDeclinedContent = "Dear Requester,\n the VM " + vmName + " has been declined by " + approver;

Bind the out-parameter of Wait for custom event to approvalSuccess. Configure the Decision element with approvalSuccess as true. Bind all the other variables to the workflow elements.

Improving with the presentation

We will now edit the workflow's presentation in order to make it workable for the requester.
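The deadline computation in the script can be checked in isolation. The following plain-JavaScript function is a stand-alone restatement of the same Date arithmetic (the function name is ours, not a vRO action); note that setDate handles month and year rollover by itself.

```javascript
// Compute the approval deadline by adding daysToApproval days to a
// start date, without mutating the start date that was passed in.
function approvalEndDate(start, daysToApproval) {
  var end = new Date(start);
  end.setDate(start.getDate() + daysToApproval);
  return end;
}

// Adding 5 days to 28 January 2015 rolls over to 2 February 2015.
var deadline = approvalEndDate(new Date(2015, 0, 28), 5);
```

Copying the date first matters: calling setDate directly on the in-parameter would silently change the caller's Date object.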
To do so, follow the given steps:

1. Click on Presentation and follow the steps to alter the presentation, as seen in the following screenshot.
2. Add the following properties to the in-parameters:
   - template: Predefined list of elements, value #templates
   - folder: Predefined list of elements, value #folders
   - datastore: Predefined list of elements, value #datastores
   - pool: Predefined list of elements, value #resourcePools
   - network: Predefined list of elements, value #networks
3. You can now use the General tab of each in-parameter to change the displayed text.
4. Save and close the workflow.

How it works…

This is a very simplified example of an approval workflow for creating VMs. The aim of this recipe is to introduce you to the method and ideas behind building such a workflow. The workflow only gives a requester the choices that are configured in the configuration element, making it quite safe for users who have only limited know-how of the IT environment.

When the requester submits the workflow, an e-mail is sent to the approver. The e-mail contains a link which, when clicked, triggers the external event and approves the VM. If the VM is approved, it is provisioned, and when the provisioning has finished, an e-mail is sent to the requester stating that the VM is now available. If the VM is not approved within a certain timeframe, the requester receives an e-mail stating that the VM was not approved.

To make this workflow fully functional, you can add permissions for a requester group to the workflow and Orchestrator so that users can request a VM via vCenter. Things you can do to improve the workflow are as follows:

- Schedule the provisioning for a future date.
- Use resources for the e-mail content and replace the content.
- Add an error workflow in case the provisioning fails.
- Use AD to read out the current user's e-mail address and full name.
- Create a workflow that lets an approver configure the configuration elements that a requester can choose from.
- Reduce the selections by creating, for instance, a development and a production configuration that contain the correct folders, datastores, networks, and so on.
- Create a decommissioning workflow that is automatically scheduled so that the VM is destroyed after a given period of time.

See also

Refer to the example workflow, 7.03 Approval, and the configuration element, 7 approval.

Summary

In this article, we discussed one of the important aspects of the interaction of Orchestrator with vCenter Server and vRealize Automation, namely VM provisioning.

Resources for Article:

Further resources on this subject:
- Importance of Windows RDS in Horizon View [article]
- Metrics in vRealize Operations [article]
- Designing and Building a Horizon View 6.0 Infrastructure [article]
Speeding Vagrant Development With Docker

Packt
03 Mar 2015
13 min read
In this article by Chad Thompson, author of Vagrant Virtual Development Environment Cookbook, we will see how Vagrant (http://vagrantup.com) can be used together with Docker. Many software developers are familiar with using Vagrant to distribute and maintain development environments. In most cases, Vagrant is used to manage virtual machines running in desktop hypervisor software such as VirtualBox or the VMware desktop products (VMware Fusion for OS X and VMware Workstation for Linux and Windows environments). More recently, Docker (http://docker.io) has become increasingly popular for deploying containers: Linux processes that can run in a single operating system environment yet be isolated from one another. In practice, this means that a container includes the runtime environment for an application, down to the operating system level. While containers have been popular for deploying applications, we can also use them for desktop development.

Vagrant can use Docker in a couple of ways:

- As a target for running a process defined by Vagrant, using the Vagrant Docker provider.
- As a complete development environment for building and testing containers within the context of a virtual machine. This allows you to build a complete production-like container deployment environment with the Vagrant provisioner.

In this example, we'll take a look at how we can use the Docker provider to build and run a web server. Running our web server with Docker will allow us to build and test our web application without the added overhead of booting and provisioning a virtual machine.

(For more resources related to this topic, see here.)

Introducing the Vagrant Docker provider

The Vagrant Docker provider will build and deploy containers to a Docker runtime. There are a couple of cases to consider when using Vagrant with Docker:

On a Linux host machine, Vagrant will use a native (locally installed) Docker environment to deploy containers. Make sure that Docker is installed before using Vagrant.
Docker itself is a technology built on top of Linux Containers (LXC) technology, so Docker requires an operating system with a recent Linux kernel (newer than Linux 3.8, which was released in February 2013). Most recent Linux distributions support the ability to run Docker.

On non-Linux environments (namely OS X and Windows), the provider requires a local Linux runtime to be present for deploying containers. When running the Docker provider in these environments, Vagrant will download and boot a version of the boot2docker (http://boot2docker.io) environment, in this case a repackaging of boot2docker in Vagrant box format. Let's take a look at two scenarios for using the Docker provider. In each of these examples, we'll start from an OS X environment, so we will see some tasks that are required for using the boot2docker environment.

Installing a Docker image from a repository

We'll start with a simple case: installing a Docker container from a repository (a MySQL container) and connecting it to an external tool for development (the MySQL Workbench or a client tool of your choice). We'll need to initialize the boot2docker environment and use some Vagrant tools to interact with the environment and the deployed containers.

Before we can start, we'll need to find a suitable Docker image to launch. One of the unique advantages of using Docker as a development environment is the ability to select a base Docker image and then add successive build steps on top of it. In this simple example, we can find a base MySQL image on the Docker Hub registry (https://registry.hub.docker.com). The MySQL project provides an official Docker image that we can build from. From the repository, we'll note the command for using the image (docker pull mysql) and that the image name is mysql.
Start with a Vagrantfile that defines the Docker provider:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'vmware_fusion'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "database" do |db|
    db.vm.provider "docker" do |d|
      d.image = "mysql"
    end
  end
end

An important thing to note immediately is that when we define the database machine with the Docker provider, we do not specify a box file. The Docker provider will start and launch containers into a boot2docker environment, negating the need for a Vagrant box or virtual machine definition. This will introduce a bit of a complication in interacting with the Vagrant environment in later steps. Also note the mysql image taken from the Docker Hub registry. We'll need to launch the image with a few basic parameters. Add the following to the Docker provider block:

    db.vm.provider "docker" do |d|
      d.image = "mysql"
      d.env = {
        :MYSQL_ROOT_PASSWORD => "root",
        :MYSQL_DATABASE      => "dockertest",
        :MYSQL_USER          => "dockertest",
        :MYSQL_PASSWORD      => "d0cker"
      }
      d.ports = ["3306:3306"]
      d.remains_running = true
    end

The environment variables (d.env) are taken from the documentation on the MySQL Docker image page (https://registry.hub.docker.com/_/mysql/); this is how the image expects to receive certain parameters. In this case, our parameters will set the database root password (for the root user) and create a database with a new user that has full permissions on that database. The d.ports parameter is an array of port listings that will be forwarded from the container (the default MySQL port of 3306) to the host operating system, in this case also 3306. The contained application will thus behave like a natively installed MySQL installation.
The port forwarding here is from the container to the operating system that hosts the container (in this case, the container host is our boot2docker instance). If we are developing and hosting containers natively with Vagrant on a Linux distribution, the port forwarding will be to localhost, but boot2docker introduces something of a wrinkle in doing Docker development on Windows or OS X: we'll either need to refer to our software installation by the IP of the boot2docker instance or configure a second port forwarding configuration that makes a Docker-contained application available to the host operating system as localhost.

The final parameter (d.remains_running = true) is a flag telling Vagrant to mark the Vagrant run as failed if the Docker container exits on start. In the case of software that runs as a daemon process (such as the MySQL database), a Docker container that exits immediately is an error condition.

Start the container using the vagrant up --provider=docker command. A few things will happen here. If this is the first time you have started the project, you'll see some messages about booting a box named mitchellh/boot2docker, a Vagrant-packaged version of the boot2docker project. Once the machine boots, it becomes a host for all Docker containers managed with Vagrant. Keep in mind that boot2docker is necessary only for non-Linux operating systems that run Docker through a virtual machine; on a Linux system running Docker natively, you will not see information about boot2docker.

After the container is booted (or if it is already running), Vagrant will display notifications about rsyncing a folder (if we are using boot2docker) and launching the image. Docker generates unique identifiers for containers and notes any port mapping information. Let's take a look at some details on the containers that are running in the Docker host.
We'll need to find a way to gain access to the Vagrant boot2docker instance (and only if we are using boot2docker and not a native Linux environment), which is not quite as straightforward as a vagrant ssh; we'll need to identify the Vagrant instance to access.

First, identify the Docker Vagrant machine from the global Vagrant status. Vagrant keeps track of running instances that can be accessed from Vagrant itself. In this case, we are only interested in the Vagrant instance named docker-host, which can be found with the vagrant global-status command. In this case, Vagrant identifies the instance as d381331 (a unique value for every Vagrant machine launched). We can access this instance with a vagrant ssh command:

vagrant ssh d381331

This will display an ASCII-art boot2docker logo and a command prompt for the boot2docker instance. Let's take a look at the Docker containers running on the system with the docker ps command. The docker ps command will provide information about the running Docker containers on the system; in this case, the unique ID of the container (output during the Vagrant startup) and other information about the container.

Find the IP address of the boot2docker instance (only if we're using boot2docker) to connect to the MySQL instance. In this case, execute the ifconfig command:

docker@boot2docker:~$ ifconfig

This will output information about the network interfaces on the machine; we are interested in the eth0 entry. In particular, we can note the IP address of the machine on the eth0 interface. Make a note of the IP address listed as the inet addr; in this case, 192.168.30.129.

Connect a MySQL client to the running Docker container. In this case, we'll need to note some information for the connection:

- The IP address of the boot2docker virtual machine (if using boot2docker). In this case, 192.168.30.129.
- The port that the MySQL instance will respond to on the Docker host.
In this case, the Docker container is forwarding port 3306 in the container to port 3306 on the host.

- The username and password of the MySQL instance, as noted in the Vagrantfile.

With this information in hand, we can configure a MySQL client. The MySQL project provides a supported GUI client named MySQL Workbench (http://www.mysql.com/products/workbench/). With the client installed on our host operating system, we can create a new connection in the Workbench client (consult the documentation for your version of Workbench, or use a MySQL client of your choice). In this case, we're connecting to the boot2docker instance. If you are running Docker natively on a Linux instance, the connection should simply forward to localhost. If the connection is successful, the Workbench client will display an empty database.

Once we've connected, we can use the MySQL database as we would any other MySQL instance, this time hosted in a Docker container without having to install and configure the MySQL package itself.

Building a Docker image with Vagrant

While launching packaged Docker applications can be useful (particularly where launching a Docker container is simpler than the native installation steps), Vagrant becomes even more useful when used to launch containers that are being developed. On OS X and Windows machines, the use of Vagrant can make managing container deployment somewhat simpler through the boot2docker instance, while on Linux, using the native Docker tools could be somewhat simpler. In this example, we'll use a simple Dockerfile to modify a base image.

First, start with a simple Vagrantfile. In this case, we'll specify a build directory rather than an image file:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"
ENV['VAGRANT_DEFAULT_PROVIDER'] = 'vmware_fusion'

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.define "nginx" do |nginx|
    nginx.vm.provider "docker" do |d|
      d.build_dir = "build"
      d.ports = ["49153:80"]
    end
  end
end

This Vagrantfile specifies a build directory as well as the ports forwarded from the container to the host. In this case, the standard HTTP port (80) forwards to port 49153 on the host machine, which in this case is the boot2docker instance.

Create our build directory in the same directory as the Vagrantfile. In the build directory, create a Dockerfile. A Dockerfile is a set of instructions on how to build a Docker container; see https://docs.docker.com/reference/builder/ or James Turnbull's The Docker Book for more information on how to construct one. In this example, we'll use a simple Dockerfile to copy a working HTML directory to a base NGINX image:

FROM nginx
COPY content /usr/share/nginx/html

Create a directory in our build directory named content. In the directory, place a simple index.html file that will be served from the new container:

<html>
<body>
   <div style="text-align:center;padding-top:40px;border:dashed 2px;">
     This is an NGINX build.
   </div>
</body>
</html>

Once all the pieces are in place, our working directory will have the following structure:

.
├── Vagrantfile
└── build
    ├── Dockerfile
    └── content
        └── index.html

Start the container in the working directory with the command:

vagrant up nginx --provider=docker

This will start the container build and deploy process. Once the container is launched, the web server can be accessed using the IP address of the boot2docker instance (see the previous section for more information on obtaining this address) and the forwarded port.
One other item to note, especially if you have completed both steps in this section without halting or destroying the Vagrant project, is that when using the Docker provider, containers are deployed to a single shared virtual machine. If the boot2docker instance is accessed and the docker ps command is executed, you can see that the two separate Vagrant projects deploy containers to a single host. Using a single instance has a few effects:

The single virtual machine uses fewer resources on your development workstation
Deploying and rebuilding containers is much faster than booting and shutting down entire operating systems

Docker development with the Docker provider can be a useful technique to create and test Docker containers, although Vagrant might not be of particular help in packaging and distributing Docker containers. If you wish to publish containers, consult the documentation or The Docker Book on getting started with packaging and distributing Docker containers.

See also

Docker: http://docker.io
boot2docker: http://boot2docker.io
The Docker Book: http://www.dockerbook.com
The Docker repository: https://registry.hub.docker.com

Summary

In this article, we learned how to use the Docker provisioner with Vagrant by covering the topics discussed above.

Resources for Article: Further resources on this subject: Going Beyond the Basics [article] Module, Facts, Types and Reporting tools in Puppet [article] Setting Up a Development Environment [article]
Hyper-V Basics

Packt
06 Feb 2015
10 min read
This article by Vinith Menon, the author of Microsoft Hyper-V PowerShell Automation, delves into the basics of Hyper-V, right from installing Hyper-V to resizing virtual hard disks. The Hyper-V PowerShell module includes several significant features that extend its use, improve its usability, and allow you to manage your Hyper-V environment with more granular control. Various organizations have moved on from Hyper-V (V2) to Hyper-V (V3). In Hyper-V (V2), the Hyper-V management shell was not built in and the PowerShell module had to be installed manually. In Hyper-V (V3), Microsoft has provided an exhaustive set of cmdlets that can be used to manage and automate all configuration activities of the Hyper-V environment. The cmdlets are executed across the network using Windows Remote Management. In this article, we will cover:

The basics of setting up a Hyper-V environment using PowerShell
The fundamental concepts of Hyper-V management with the Hyper-V management shell
The updated features in Hyper-V

(For more resources related to this topic, see here.)

Hyper-V in Windows Server 2012 R2 introduces a number of new features.
We will go in depth through the important changes that have come into the Hyper-V PowerShell module with the following features and functions:

Shared virtual hard disks
Resizing the live virtual hard disk

Installing and configuring your Hyper-V environment

Installing and configuring Hyper-V using PowerShell

Before you proceed with the installation and configuration of Hyper-V, there are some prerequisites that need to be taken care of:

The user account that is used to install the Hyper-V role should have administrative privileges on the computer
There should be enough RAM on the server to run newly created virtual machines

Once the prerequisites have been taken care of, let's start with installing the Hyper-V role:

Open a PowerShell prompt in Run as Administrator mode.
Type the following into the PowerShell prompt to install the Hyper-V role along with the management tools; once the installation is complete, the Hyper-V server will reboot and the Hyper-V role will be successfully installed:

Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

Once the server boots up, verify the installation of Hyper-V using the Get-WindowsFeature cmdlet:

Get-WindowsFeature -Name hyper*

You will be able to see that the Hyper-V role, the Hyper-V PowerShell management shell, and the GUI management tools are successfully installed.
PowerShell uses cmdlets that are built using a verb-noun naming system (for more details, refer to Learning Windows PowerShell Names at http://technet.microsoft.com/en-us/library/dd315315.aspx). Type the following command into the PowerShell prompt to get a list of all the cmdlets in the Hyper-V PowerShell module:

Get-Command -Module Hyper-V

Hyper-V in Windows Server 2012 R2 ships with about 178 cmdlets. These cmdlets allow a Hyper-V administrator to handle everything from simple, basic tasks to advanced ones such as setting up a Hyper-V replica for virtual machine disaster recovery. To get the count of all the available Hyper-V cmdlets, type the following command in PowerShell:

Get-Command -Module Hyper-V | Measure-Object

The Hyper-V PowerShell cmdlets follow a very simple approach and are very user friendly. The cmdlet name itself communicates its functionality to the Hyper-V administrator. The following screenshot shows the output of the Get-Command command: For example, in the following screenshot, the name of the Remove-VMSwitch cmdlet itself says that it is used to delete a previously created virtual machine switch: If the administrator is still not sure about the task that can be performed by a cmdlet, he or she can get help with detailed examples using the Get-Help cmdlet. To get help on a cmdlet, type the cmdlet name in the prescribed format. To make sure that the latest version of the help files is installed on the server, run the Update-Help cmdlet before executing the following cmdlet:

Get-Help <Hyper-V cmdlet> -Full

The following screenshot is an example of the Get-Help cmdlet:

Shared virtual hard disks

This new and improved feature in Windows Server 2012 R2 allows an administrator to share a virtual hard disk file (the .vhdx file format) between multiple virtual machines. These .vhdx files can be used as shared storage for a failover cluster created between virtual machines (also known as guest clustering).
A shared virtual hard disk allows you to create data disks and witness disks using .vhdx files, with some advantages:

Shared disks are ideal for SQL database files and file servers
Shared disks can be run on generation 1 and generation 2 virtual machines

This new feature allows you to save on storage costs and use .vhdx files for guest clustering, enabling easier deployment than virtual Fibre Channel or Internet Small Computer System Interface (iSCSI), which are complicated and require storage configuration changes such as zoning and Logical Unit Number (LUN) masking. In Windows Server 2012 R2, virtual SCSI disks (both shared and unshared virtual hard disk files) show up as virtual SAS disks when you add a SCSI hard disk to a virtual machine. Shared virtual hard disk (.vhdx) files can be placed on Cluster Shared Volumes (CSV) or on a Scale-Out File Server cluster.

Let's look at the ways you can automate and manage your shared .vhdx guest clustering configuration using PowerShell. In the following example, we will demonstrate how you can create a two-node file server cluster using the shared VHDX feature. After that, let's set up a testing environment within which we can start learning these new features. The steps are as follows:

We will start by creating two virtual machines, each with a 50 GB OS drive that contains a sysprepped image of Windows Server 2012 R2. Each virtual machine will have 4 GB RAM and four virtual CPUs. D:\vhdbase_1.vhdx and D:\vhdbase_2.vhdx are already existing VHDX files with a sysprepped image of Windows Server 2012 R2. The following code is used to create the two virtual machines:

New-VM -Name "Fileserver_VM1" -MemoryStartupBytes 4GB -NewVHDPath d:\vhdbase_1.vhdx -NewVHDSizeBytes 50GB
New-VM -Name "Fileserver_VM2" -MemoryStartupBytes 4GB -NewVHDPath d:\vhdbase_2.vhdx -NewVHDSizeBytes 50GB

Next, we will install the file server role and configure a failover cluster on both the virtual machines using PowerShell.
You need to enable PowerShell remoting on both the file servers and also have them joined to a domain. The following is the code:

Install-WindowsFeature -ComputerName Fileserver_VM1 File-Services, FS-FileServer, Failover-Clustering
Install-WindowsFeature -ComputerName Fileserver_VM1 RSAT-Clustering -IncludeAllSubFeature
Install-WindowsFeature -ComputerName Fileserver_VM2 File-Services, FS-FileServer, Failover-Clustering
Install-WindowsFeature -ComputerName Fileserver_VM2 RSAT-Clustering -IncludeAllSubFeature

Once we have the virtual machines created and the file server and failover clustering features installed, we will create the failover cluster as per Microsoft's best practices using the following cmdlet:

New-Cluster -Name Cluster1 -Node FileServer_VM1, FileServer_VM2 -StaticAddress 10.0.0.59 -NoStorage -Verbose

You will need to choose a name and IP address that fit your organization. Next, we will create two VHDX files named sharedvhdx_data.vhdx (which will be used as a data disk) and sharedvhdx_quorum.vhdx (which will be used as the quorum or witness disk). To do this, the following commands need to be run on the Hyper-V cluster:

New-VHD -Path c:\ClusterStorage\Volume1\sharedvhdx_data.VHDX -Fixed -SizeBytes 10GB
New-VHD -Path c:\ClusterStorage\Volume1\sharedvhdx_quorum.VHDX -Fixed -SizeBytes 1GB

Once we have created these virtual hard disk files, we will add them as shared .vhdx files.
We will attach these newly created VHDX files to the Fileserver_VM1 and Fileserver_VM2 virtual machines and specify the -ShareVirtualDisk parameter to share the VHDX files for guest clustering:

Add-VMHardDiskDrive -VMName Fileserver_VM1 -Path c:\ClusterStorage\Volume1\sharedvhdx_data.VHDX -ShareVirtualDisk
Add-VMHardDiskDrive -VMName Fileserver_VM2 -Path c:\ClusterStorage\Volume1\sharedvhdx_data.VHDX -ShareVirtualDisk

Finally, we will bring the disks online and add them to the failover cluster using the following command:

Get-ClusterAvailableDisk | Add-ClusterDisk

Once we have executed the preceding steps, we will have a highly available file server infrastructure using shared VHD files.

Live virtual hard disk resizing

With Windows Server 2012 R2, a newly added feature in Hyper-V allows administrators to expand or shrink the size of a virtual hard disk attached to the SCSI controller while the virtual machine is still running. Hyper-V administrators can now perform maintenance operations on a live VHD and avoid downtime by not having to temporarily shut down the virtual machine for these maintenance activities. Prior to Windows Server 2012 R2, to resize a VHD attached to a virtual machine, the virtual machine had to be turned off, leading to costly downtime. Using the GUI controls, the VHD resize can be done only by using the Edit Virtual Hard Disk wizard. Also, note that VHDs that were previously expanded can be shrunk. The Windows PowerShell way of doing a VHD resize is by using the Resize-VirtualDisk cmdlet. Let's look at the ways you can automate a VHD resize using PowerShell. In the next example, we will demonstrate how you can expand and shrink a virtual hard disk connected to a VM's SCSI controller. We will continue using the virtual machine that we created for our previous example. We have a pre-created VHD of 50 GB that is connected to the virtual machine's SCSI controller.
Expanding the virtual hard disk

Let's resize the aforementioned virtual hard disk to 57 GB using the Resize-VirtualDisk cmdlet:

Resize-VirtualDisk -Name "scsidisk" -Size (57GB)

Next, if we open the VM settings and perform an inspect disk operation, we'll be able to see that the VHDX file size has become 57 GB: This can also be verified by logging into the VM, opening disk management, and extending the unused partition. You can see that the disk size has increased to 57 GB:

Shrinking the virtual hard disk

Let's now shrink the VHD back to 50 GB using the Resize-VirtualDisk cmdlet. For this exercise, the primary requirement is to first shrink the disk partition from within the VM using disk management; as you can see in the following screenshot, we're shrinking the partition by 7 GB: Next, click on Shrink. Once you complete this step, you will see that the unallocated space is 7 GB. You can also execute this step using the Resize-Partition PowerShell cmdlet:

Get-Partition -DiskNumber 1 | Resize-Partition -Size 50GB

The following screenshot shows the partition: Next, we will shrink the VHD to 50 GB:

Resize-VirtualDisk -Name "scsidisk" -Size (50GB)

Once the previous steps have been executed successfully, run a disk re-scan using disk management and you will see that the disk size is 50 GB:

Summary

In this article, we went through the basics of setting up a Hyper-V environment using PowerShell. We also explored the fundamental concepts of Hyper-V management with the Hyper-V management shell.

Resources for Article: Further resources on this subject: Hyper-V building blocks for creating your Microsoft virtualization platform [article] The importance of Hyper-V Security [article] Network Access Control Lists [article]
Hyper-V building blocks for creating your Microsoft virtualization platform

Packt
05 Feb 2015
6 min read
In this article by Peter De Tender, the author of Mastering Hyper-V, we will talk about the building blocks for creating your virtualization platform through Hyper-V. We need to clearly define a detailed list of required server hardware, storage hardware, physical and virtual machine operating systems, and anything else we need to be able to build our future virtualization platform. These components are known as the Hyper-V building blocks, and we describe each one of them in the following sections. (For more resources related to this topic, see here.) Physical server hardware One of the first important components when building a virtualization platform is the physical server hardware. One of the key elements to check is the Microsoft certified hardware and software supportability and compatibility list. This list gives a detailed overview of all tested and certified server brands, server types, and their corresponding configuration components. While it is not a requirement to use this kind of machine, we can only recommend it, based on our own experience. Imagine you have a performance issue with one of your applications running inside a VM, being hosted on non-supported hardware, using non-supported physical NICs, and you're not getting decent support from your IT partner or Microsoft on that specific platform, as the hardware is not supported. The landing page for this compatibility list is http://www.windowsservercatalog.com. After checking the compatibility of the server hardware and software, you need to find out which system resources are available for Hyper-V. The following table shows the maximum scaling possibilities for different components of the Hyper-V platform (the original source is Microsoft TechNet Library article at http://technet.microsoft.com/en-us/library/jj680093.aspx.) 
System resource                              Windows 2008 R2    Windows Server 2012 (R2)
Host
  Logical processors on hardware             64                 320
  Physical memory                            1 TB               4 TB
  Virtual processors per host                512                1,024
Virtual machine
  Virtual processors per virtual machine     4                  64
  Memory per virtual machine                 64 GB              1 TB
  Active virtual machines                    384                1,024
  Virtual disk size                          2 TB               64 TB
Cluster
  Nodes                                      16                 64
  Virtual machines                           1,000              4,000

Physical storage hardware

Next to the physical server component, another vital part of the virtualization environment is the storage hardware. In the Hyper-V platform, multiple kinds of storage are supported, namely DAS, NAS, and/or SAN:

Direct Attached Storage (DAS): This is storage directly connected to the server (think of disks located inside the server chassis).
Network Attached Storage (NAS): This is storage provided via the network and presented to the Hyper-V server or virtual machines as file shares. This storage type uses file-based access. Server 2012 and 2012 R2 make use of SMB 3.0 as the file-sharing protocol, which allows us to use plain file shares as a virtual machine storage location.
Storage Area Network (SAN): This is also network-based storage, but it relies on block-based access. The volumes are presented as local disks to the host. Popular protocols within SAN environments are iSCSI and Fibre Channel.

The key point of consideration when sizing your disk infrastructure is providing enough storage, at the best performance available, and preferably with high availability as well. Depending on the virtual machines' required resources, the disk subsystem can be based on high-performance/expensive SSDs (solid-state drives), performant/medium-priced SAS disks (serial attached SCSI), or slower but cheaper SATA (serial ATA) disks, or even a combination of all these types. Although a bit outside of Hyper-V as such, one technology that is configured and used a lot in combination with Hyper-V Server 2012 R2 is Storage Spaces.
Storage Spaces is new as of Server 2012 and can be considered a storage virtualization subsystem. Storage Spaces are disk volumes built on top of physical storage pools, which are in fact just a bunch of physical disks (JBOD). A very important point to note is that the aforementioned network-based SAN and NAS storage solutions cannot be part of Storage Spaces, as it is only configurable for DAS storage. The following schema diagram provides a good overview of the Storage Spaces topology, possibilities, and features:

Physical network devices

It's easy to understand that your virtual platform is dependent on your physical network devices, such as physical (core) switches and the physical NICs in the Hyper-V hosts. When configuring Hyper-V, there are a few considerations to keep in mind.

NIC Teaming

NIC Teaming is the configuration of multiple physical network interface cards into a single team, mainly used for high availability or higher bandwidth. NIC Teaming as such is not a Hyper-V technology, but Hyper-V can make good use of this operating system feature. When configuring a NIC team, the physical network cards are bundled and presented to the host OS as one or more virtual network adapter(s). Within Hyper-V, two basic sets of algorithms exist that you can choose from during the configuration of Hyper-V networking:

Switch-independent mode: In this configuration, the teaming is configured regardless of the switches to which the host is connected. The main advantage of this configuration is that the team can be configured to use multiple switches (for example, two NICs in the host are connected to switch 1 and two NICs are connected to switch 2).
Switch-dependent mode: In this configuration, the underlying switch is part of the teaming configuration; this requires all NICs in the team to be connected to the same switch.
NIC Teaming is managed through the Server Manager / NIC Teaming interface or by using PowerShell cmdlets. Depending on your server hardware and brand, the vendor might provide specific configuration software to achieve the same. For example, the HP ProLiant series of servers allows for HP Team configuration, which is managed using a specific HP Team tool.

Network virtualization

Within Hyper-V 2012 R2, network virtualization refers not only to the virtual networking connections used by the virtual machines but also to the technology that allows for true isolation of the different networks in which virtual machines operate. This feature set is very important for hosting providers, who run different virtual machines for their customers in an isolated network. You have to make sure that no connection is possible between the virtual machines of customer A and the virtual machines of customer B. That's exactly the main purpose of network virtualization. Another possible way of configuring network segmentation is by using VLANs. However, this also requires VLAN configuration on the physical switches, whereas the network virtualization described here runs completely inside the virtual network switch of Hyper-V.

Server editions and licensing

The last component of the Hyper-V building blocks is the server editions and licensing of the physical and virtual machine operating systems.

Summary

In this article, we looked at the various building blocks for creating a virtualization platform using Hyper-V.
Getting Started with XenServer®

Packt
26 Dec 2014
11 min read
This article is written by Martez Reed, the author of Mastering Citrix® XenServer®. One of the most important technologies in the information technology field today is virtualization. Virtualization is beginning to span every area of IT, including but not limited to servers, desktops, applications, networks, and more. Our primary focus is server virtualization, specifically with Citrix XenServer 6.2. There are three major platforms in the server virtualization market: VMware's vSphere, Microsoft's Hyper-V, and Citrix's XenServer. In this article, we will cover the following topics:

XenServer overview
XenServer features
What's new in Citrix XenServer 6.2
Planning and installing Citrix XenServer

(For more resources related to this topic, see here.)

Citrix® XenServer®

Citrix XenServer is a type 1, or bare metal, hypervisor. A bare metal hypervisor does not require an underlying host operating system. Type 1 hypervisors have direct access to the underlying hardware, which provides improved performance and guest compatibility. Citrix XenServer is based on the open source Xen hypervisor, which is widely deployed in various industries and has a proven record of stability and performance.

Citrix® XenCenter®

Citrix XenCenter is a Windows-based application that provides a graphical user interface for managing Citrix XenServer hosts from a single management interface.

Features of Citrix® XenServer®

The following section covers the features offered by Citrix XenServer:

XenMotion/Live VM Migration: The XenMotion feature allows running virtual machines to be migrated from one host to another without any downtime. XenMotion relocates the processor and memory instances of the virtual machine from one host to another, while the actual data and settings reside on the shared storage. This feature is pivotal in providing maximum uptime when performing maintenance or upgrades. This feature requires shared storage among the hosts.
Storage XenMotion / Live Storage Migration: The Storage XenMotion feature provides functionality similar to that of XenMotion, but it is used to move a virtual machine's virtual disk from one storage repository to another without powering off the virtual machine.

High Availability: High Availability automatically restarts virtual machines on another host in the event of a host failure. This feature requires shared storage among the hosts.

Resource pools: Resource pools are a collection of Citrix XenServer hosts grouped together to form a single pool of compute, memory, network, and storage resources that can be managed as a single entity. The resource pool allows the virtual machines to be started on any of the hosts and seamlessly moved between them.

Active Directory integration: Citrix XenServer can be joined to a Windows Active Directory domain to provide centralized authentication for XenServer administrators. This eliminates the need for multiple independent administrator accounts on each XenServer host in a XenServer environment.

Role-based access control (RBAC): RBAC is a feature that takes advantage of the Active Directory integration and allows administrators to define roles that have specific privileges associated with them. This allows administrative permissions to be segregated among different administrators.

Open vSwitch: The default network backend for the Citrix XenServer 6.2 hypervisor is Open vSwitch. Open vSwitch is an open source multilayer virtual switch that brings advanced network functionality to the XenServer platform, such as NetFlow, SPAN, OpenFlow, and enhanced Quality of Service (QoS). The Open vSwitch backend is also an integral component of the platform's support of software-defined networking (SDN).

Dynamic Memory Control: Dynamic Memory Control allows XenServer to maximize physical memory utilization by sharing unused physical memory among the guest virtual machines.
If a virtual machine has been allocated 4 GB of memory and is only using 2 GB, the remaining memory can be shared with the other guest virtual machines. This feature provides a mechanism for memory oversubscription.

IntelliCache: IntelliCache is a feature aimed at improving the performance of Citrix XenDesktop virtual desktops. IntelliCache creates a cache on a XenServer local storage repository, and as the virtual desktops perform read operations, the parent VM's virtual disk is copied to the cache. Write operations are also written to the local cache when nonpersistent or shared desktops are used. This mechanism reduces the load on the storage array by retrieving data for reads from a local source instead of the array. This is particularly beneficial when multiple desktops share the same parent image. This feature is only available with Citrix XenDesktop.

Disaster Recovery: The XenServer Disaster Recovery feature provides a mechanism to recover the virtual machines and vApps in the event of the failure of an entire pool or site.

Distributed Virtual Switch Controller (DVSC): DVSC provides centralized management and visibility of the networking in XenServer.

Thin provisioning: Thin provisioning allows a given amount of disk space to be allocated to virtual machines while only consuming the amount that is actually being used by the guest operating system. This feature provides more efficient use of the underlying storage due to the on-demand consumption.

What's new in Citrix® XenServer® 6.2

Citrix has added a number of new and exciting features in the latest version of XenServer:

Open source
New licensing model
Improved guest support

Open source

Starting with Version 6.2, the Citrix XenServer hypervisor is now open sourced, but continues to be managed by Citrix Systems. The move to an open source model was the result of Citrix Systems' desire to further collaborate and integrate the XenServer product with its partners and the open source community.
New licensing model

The licensing model has changed in Version 6.2: the free version of the XenServer platform now provides full functionality, and the previous advanced, enterprise, and platinum editions have been eliminated. Citrix will offer paid support for the free version of the XenServer hypervisor that will include the ability to install patches/updates using the XenCenter GUI, in addition to Citrix technical support.

Improved guest support

Version 6.2 has added official support for the following guest operating systems:

Microsoft Windows 8 (full support)
Microsoft Windows Server 2012
SUSE Linux Enterprise Server (SLES) 11 SP2 (32/64 bit)
Red Hat Enterprise Linux (RHEL) (32/64 bit) 5.8, 5.9, 6.3, and 6.4
Oracle Enterprise Linux (OEL) (32/64 bit) 5.8, 5.9, 6.3, and 6.4
CentOS (32/64 bit) 5.8, 5.9, 6.3, and 6.4
Debian Wheezy (32/64 bit)

VSS support for Windows Server 2008 R2 has been improved and reintroduced.

Citrix XenServer 6.2 Service Pack 1 adds support for the following operating systems:

Microsoft Windows 8.1
Microsoft Windows Server 2012 R2

Retired features

The following features have been removed from Version 6.2 of Citrix XenServer:

Workload Balancing (WLB)
SCOM integration
Virtual Machine Protection Recovery (VMPR)
Web Self Service
XenConvert (this has been replaced by XenServer Conversion Manager)

Deprecated features

The following features will be removed from future releases of Citrix XenServer. Citrix has reviewed the XenServer market and determined that there are third-party products able to provide this functionality more effectively:

Microsoft System Center Virtual Machine Manager (SCVMM) support
Integrated StorageLink

Planning and Installing Citrix® XenServer®

Installing Citrix XenServer is generally a simple and straightforward process that can be completed in 10 to 15 minutes.
While the actual install is simple, there are several major decisions that need to be made prior to installing Citrix XenServer in order to ensure a successful deployment.

Selecting the server hardware

Typically, the first step is to select the server hardware that will be used. While the thought might be to just pick a server that fits our needs, we should also ensure that the hardware meets the documented system requirements. Checking the hardware against the Hardware Compatibility List (HCL) provided by Citrix Systems is advised to ensure that the system qualifies for Citrix support and that it will properly run Citrix XenServer. The HCL provides a list of server models that have been verified to work with Citrix XenServer. The HCL can be found online at http://www.citrix.com/xenserver/hcl.

Meeting the system requirements

The following sections cover the minimal system requirements for Citrix XenServer 6.2.

Processor requirements

The following list covers the minimum requirements for the processor(s) to install Citrix XenServer 6.2:

One or more 64-bit x86 CPU(s), 1.5 GHz minimum; a 2 GHz or faster multicore CPU is recommended
To support VMs running Windows, an Intel VT or AMD-V 64-bit x86-based system with one or more CPU(s) is required
Virtualization technology needs to be enabled in the BIOS

Virtualization technology is disabled by default on many server platforms and needs to be manually enabled.

Memory requirements

The minimum memory requirement for installing Citrix XenServer 6.2 is 2 GB, with 4 GB or more recommended for production workloads. In addition to the memory usage of the guest virtual machines, the Xen hypervisor on the Control Domain (dom0) consumes memory resources. The amount of resources consumed by the Control Domain (dom0) is based on the amount of physical memory in the host.
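The virtualization-extension requirement above can be checked from any Linux shell (for example, a live environment booted on the target server) before running the installer. A minimal sketch; the vmx flag indicates Intel VT and svm indicates AMD-V, and note that the flags' absence can also mean the feature is simply disabled in the BIOS:

```shell
# Count CPU virtualization flags in /proc/cpuinfo: vmx = Intel VT, svm = AMD-V.
# grep -c prints 0 but exits non-zero when nothing matches, so guard with || true.
count=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)

if [ "$count" -gt 0 ]; then
  echo "virtualization extensions present on $count logical CPU(s)"
else
  echo "no vmx/svm flags found - check the BIOS settings or CPU model"
fi
```

Run this on the physical server itself; inside an existing VM, the flags are normally hidden unless nested virtualization is enabled.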
Hard disk requirements

The following are the minimum requirements for the hard disk(s) to install Citrix XenServer 6.2:

16 GB of free disk space minimum; 60 GB of free disk space is recommended
Direct attached storage in the form of SATA, SAS, SCSI, or PATA interfaces is supported
XenServer can be installed on a LUN presented from a storage area network (SAN) via a host bus adapter (HBA) in the XenServer host

A physical HBA is required to boot XenServer from a SAN.

Network card requirements

A 100 Mbps or faster NIC is required for installing Citrix XenServer. One or more gigabit NICs are recommended for faster P2V, export/import data transfers, and VM live migrations.

Installing Citrix® XenServer® 6.2

The following sections cover the installation of Citrix XenServer 6.2.

Installation methods

The Citrix XenServer 6.2 installer can be launched via two methods, as listed:

CD/DVD
PXE or network boot

Installation source

There are several options for where the Citrix XenServer installation files can be stored, and depending on the scenario, one would be preferred over another. Typically, the HTTP, FTP, or NFS option would be used when the installer is booted over the network via PXE or when a scripted installation is being performed. The installation sources are as follows:

Local media (CD/DVD)
HTTP or FTP
NFS

Supplemental packs

Supplemental packs provide additional functionality to the XenServer platform through features such as enhanced hardware monitoring and third-party management software integration. The supplemental packs are typically downloaded from the vendor's website and are installed when prompted during the XenServer installation.

XenServer® installation

The following steps cover installing Citrix XenServer 6.2 from a CD:

Boot the server from the Citrix XenServer 6.2 installation media and press Enter when prompted to start the Citrix XenServer 6.2 installer.
Select the desired key mapping and select Ok to proceed.
3. Press F9 if additional drivers need to be installed, or select Ok to continue.
4. Accept the EULA.
5. Select the hard drive for the Citrix XenServer installation and choose Ok to proceed.
6. Select the hard drive(s) to be used for storing the guest virtual machines and choose Ok to continue. You need to select the Enable thin provisioning (Optimized storage for XenDesktop) option to make use of the IntelliCache feature.
7. Select the installation media source and select Ok to continue.
8. Install supplemental packs if necessary; otherwise, choose No to proceed.
9. Select Verify installation source and select Ok to begin the verification. The installation media should be verified at least once to ensure that none of the installation files are corrupt.
10. Choose Ok to continue after the verification has successfully completed.
11. Provide and confirm a password for the root account and select Ok to proceed.
12. Select the network interface to be used as the primary management interface and choose Ok to continue.
13. Select the Static configuration option and provide the requested information. Choose Ok to continue.
14. Enter the desired hostname and DNS server information. Select Ok to proceed.
15. Select the appropriate geographical area to configure the time zone and select Ok to continue.
16. Select the appropriate city or area to configure the time zone and select Ok to proceed.
17. Select Using NTP or Manual time entry for the server to determine the local time and choose Ok to continue. Using NTP to synchronize the time of XenServer hosts in a pool is recommended to ensure that the time on all the hosts in the pool is synchronized.
18. Enter the IP address or hostname of the desired NTP server(s) and select Ok to proceed.
19. Select Install XenServer to start the installation.
20. Click on Ok to restart the server after the installation has completed. The following screen should be presented after the reboot:

Summary

In this article, we covered an overview of Citrix XenServer along with the features that are available.
We also looked at the new features that were added in XenServer 6.2 and then examined installing XenServer.

Resources for Article:

Further resources on this subject:
- Understanding Citrix® Provisioning Services 7.0 [article]
- Designing a XenDesktop® Site [article]
- Installation and Deployment of Citrix Systems®' CPSM [article]
Packt
26 Dec 2014
25 min read

Metrics in vRealize Operations

In this article by Iwan 'e1' Rahabok, author of VMware vRealize Operations Performance and Capacity Management, we will learn that vSphere 5.5 comes with many counters, many more than what a physical server provides. There are new counters that have no physical equivalent, such as memory ballooning, CPU latency, and vSphere replication. In addition, some counters have the same name as their physical-world counterparts but behave differently in vSphere. Memory usage is a common one, resulting in confusion among system administrators. For those counters that are similar to their physical-world counterparts, vSphere may use different units, such as milliseconds.

As a result, experienced IT administrators find it hard to master vSphere counters by building on their existing knowledge. Instead of trying to relate each counter to its physical equivalent, I find it useful to group counters according to their purpose.

Virtualization formalizes the relationship between the infrastructure team and the application team. The infrastructure team changes from system builder to service provider. The application team no longer owns the physical infrastructure; it becomes a consumer of a shared service, the virtual platform. Depending on the Service Level Agreement (SLA), the application team can be served as if it has dedicated access to the infrastructure, or it can take a performance hit in exchange for a lower price. For SLAs where performance matters, a VM running in the cluster should not be impacted by any other VM; its performance must be as good as if it were the only VM running on the ESXi host.

Because there are two different counter users, there are two different purposes. The application team (developers and the VM owner) only cares about their own VM.
The infrastructure team has to care about both the VM and the infrastructure, especially when they need to show that the shared infrastructure is not a bottleneck. One set of counters monitors the VM; the other set monitors the infrastructure. The following diagram shows the two different purposes and what we should check for each. By knowing what matters at each layer, we can better manage the virtual environment.

The two-tier IT organization

At the VM layer, we care whether the VM is being served well by the platform. Other VMs are irrelevant from the VM owner's point of view. A VM owner only wants to make sure his or her VM is not contending for a resource. So the key counter here is contention. Only when we are satisfied that there is no contention can we proceed to check whether the VM is sized correctly or not. Most people check for utilization first because that is what they are used to monitoring in the physical infrastructure. In a virtual environment, we should check for contention first.

At the infrastructure layer, we care whether it serves everyone well. We need to make sure that there is no contention for resources among all the VMs on the platform. Only when the infrastructure is clear of contention can we troubleshoot a particular VM. If the infrastructure is having a hard time serving the majority of the VMs, there is no point troubleshooting a particular VM.

This two-layer concept is also implemented by vSphere in its compute and storage architectures. For example, there are two distinct layers of memory in vSphere: the individual VM memory provided by the hypervisor and the physical memory at the host level. For an individual VM, we care whether the VM is getting enough memory. At the host level, we care whether the host has enough memory for everyone. Because of the difference in goals, we look for a different set of counters.
In the previous diagram, there are two numbers shown in a large font, indicating that there are two main steps in monitoring. Each step applies to each layer (the VM layer and the infrastructure layer), so there are two numbers for each step.

Step 1 is used for performance management. It is useful during troubleshooting or when checking whether we are meeting performance SLAs. Step 2 is used for capacity management, as part of long-term capacity planning. The time period for step 2 is typically 3 months, as we are checking for overall utilization and not a one-off spike.

With the preceding concept in mind, we are ready to dive into more detail. Let's cover compute, network, and storage.

Compute

The following diagram shows how a VM gets its resources from ESXi. It is a pretty complex diagram, so let me walk you through it.

The tall rectangular area represents a VM. Say this VM is given 8 GB of virtual RAM. The bottom line represents 0 GB and the top line represents 8 GB. The VM is configured with 8 GB RAM; we call this Provisioned. This is what the Guest OS sees, so if it is running Windows, you will see 8 GB RAM when you log into Windows.

Unlike on a physical server, you can configure a Limit and a Reservation. This is done outside the Guest OS, so Windows or Linux does not know about it. You should minimize the use of Limit and Reservation, as they make operations more complex.

Entitlement means what the VM is entitled to. In this example, the hypervisor entitles the VM to a certain amount of memory. I did not show a solid line and used an italic font style to mark that Entitlement is not a fixed value, but a dynamic value determined by the hypervisor. It varies every minute, determined by the Limit, Shares, and Reservation of the VM itself and by contention with other VMs running on the same host. Obviously, a VM can only use what it is entitled to at any given point in time, so the Usage counter does not go higher than the Entitlement counter.
The green line shows that Usage ranges from 0 to the Entitlement value. In a healthy environment, the ESXi host has enough resources to meet the demands of all the VMs on it with sufficient overhead. In this case, you will see that the Entitlement, Usage, and Demand counters are similar to one another when the VM is highly utilized. This is shown by the green line, where Demand stops at Usage, and Usage stops at Entitlement. The numerical values may not be identical, because vCenter reports Usage as a percentage averaged over the sample period, reports Entitlement in MHz taking the latest value in the sample period, and reports Demand in MHz averaged over the sample period. This also explains why you may see Usage a bit higher than Entitlement on a highly utilized vCPU. If the VM has low utilization, you will see that the Entitlement counter is much higher than Usage.

An environment in which the ESXi host is resource constrained is unhealthy. It cannot give every VM the resources it asks for. The VMs demand more than they are entitled to use, so the Usage and Entitlement counters will be lower than the Demand counter. The Demand counter can naturally go higher than Limit. For example, if a VM is limited to 2 GB of RAM and it wants to use 14 GB, then Demand will exceed Limit. Obviously, Demand cannot exceed Provisioned; this is why the red line stops at Provisioned, because that is as high as it can go.

The difference between what the VM demands and what it gets to use is the Contention counter: Contention is Demand minus Usage. So if Contention is 0, the VM can use everything it demands. This is the ultimate goal, as performance will then match the physical world. The Contention value is useful to demonstrate that the infrastructure provides a good service to the application team.
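The counter relationships above can be captured in a tiny model. This is an illustrative sketch only (in reality the hypervisor computes Entitlement dynamically), but it shows how Demand is capped by Provisioned, Usage is capped by Entitlement, and Contention falls out as Demand minus Usage:

```python
def usage_and_contention(demand, entitlement, provisioned):
    """Toy model of the counter relationships described above.

    Demand can never exceed what is provisioned to the VM, and Usage can
    never exceed what the hypervisor entitles the VM to use.
    """
    demand = min(demand, provisioned)   # the red line stops at Provisioned
    usage = min(demand, entitlement)    # Usage stops at Entitlement
    contention = demand - usage         # Contention = Demand - Usage
    return usage, contention

# Healthy host: entitlement covers demand, so contention is zero
print(usage_and_contention(demand=6, entitlement=8, provisioned=8))   # (6, 0)

# Constrained host: the VM wants 6 GB but is only entitled to 4 GB
print(usage_and_contention(demand=6, entitlement=4, provisioned=8))   # (4, 2)
```

In the second case the VM and the host may both report modest utilization, yet the VM still experiences 2 GB of contention, which is exactly why tracking utilization alone is not sufficient.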
If a VM owner comes to see you and says that your shared infrastructure is unable to serve his or her VM well, both of you can check the Contention counter. The Contention counter should become a part of your SLA or Key Performance Indicator (KPI). It is not sufficient to track utilization alone. When there is contention, it is possible that both your VM and ESXi host have low utilization, and yet your customers (VMs running on that host) perform poorly. This typically happens when the VMs are relatively large compared to the ESXi host.

Let me give you a simple example to illustrate this. The ESXi host has two sockets and 20 cores. Hyper-threading is not enabled, to keep this example simple. You run just 2 VMs, but each VM has 11 vCPUs. As a result, they will not be able to run concurrently. The hypervisor will schedule them sequentially, as there are only 20 physical cores to serve 22 vCPUs. Here, both VMs will experience high contention.

Hold on! You might say, "There is no Contention counter in vSphere and no memory Demand counter either." This is where vRealize Operations comes in. It does not just regurgitate the values in vCenter. It has implicit knowledge of vSphere and a set of derived counters with formulae that leverage that knowledge.

You need to have an understanding of how the vSphere CPU scheduler works. The following diagram shows the various states that a VM can be in:

The preceding diagram is taken from The CPU Scheduler in VMware vSphere® 5.1: Performance Study (you can find it at http://www.vmware.com/resources/techresources/10345). This is a whitepaper that documents the CPU scheduler with a good amount of depth for VMware administrators. I highly recommend you read this paper, as it will help you explain to your customers (the application team) how your shared infrastructure juggles all those VMs at the same time. It will also help you pick the right counters when you create your custom dashboards in vRealize Operations.
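The 20-core example above can be sketched as a toy model. This deliberately assumes strict co-scheduling (all of a VM's vCPUs must run together) and ignores hyper-threading; real ESXi uses relaxed co-scheduling, so actual contention is lower, but the shape of the problem is the same:

```python
# Toy model of the 20-core / two 11-vCPU-VM example above.
# Assumes strict co-scheduling and no hyper-threading (a simplification).
physical_cores = 20
vm_vcpus = [11, 11]

# Can both VMs have all their vCPUs scheduled at the same time?
can_run_together = sum(vm_vcpus) <= physical_cores
print(can_run_together)  # False: 22 vCPUs cannot fit on 20 cores

# If the VMs must take turns, each one runs at most half the time,
# yet the host's core utilization stays modest while it runs.
run_fraction = 1 / len(vm_vcpus)                        # time slice per VM
utilization = max(vm_vcpus) / physical_cores * run_fraction
print(run_fraction, utilization)  # each VM runnable ~50% of the time
```

This is the scenario where both the host and the VMs show low utilization while the VMs still perform poorly: the bottleneck shows up in contention, not in utilization.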
Storage

If you look at the ESXi and VM metric groups for storage in the vCenter performance chart, it is not clear how they relate to one another at first glance. You have the storage network, storage adapter, storage path, datastore, and disk metric groups that you need to check. How do they impact one another? I have created the following diagram to explain the relationship. The beige boxes are what you are likely to be familiar with: your ESXi host, which can have NFS Datastore, VMFS Datastore, or RDM objects. The blue colored boxes represent the metric groups.

From ESXi to disk

NFS and VMFS datastores differ drastically in terms of counters, as NFS is file-based while VMFS is block-based. NFS uses the vmnic, so the adapter type (FC, FCoE, or iSCSI) is not applicable, and multipathing is handled by the network, so you don't see it in the storage layer. For VMFS or RDM, you have more detailed visibility of the storage. To start off, each ESXi adapter is visible and you can check the counters for each of them. In terms of relationships, one adapter can have many devices (disk or CD-ROM). One device is typically accessed via two storage adapters (for availability and load balancing), and via two paths per adapter, with the paths diverging at the storage switch. A single path, which comes from a specific adapter, naturally connects one adapter to one device. The following diagram shows the four paths:

Paths from ESXi to storage

A storage path takes data from ESXi to the LUN (the term used by vSphere is Disk), not to the datastore. So if the datastore has multiple extents, there are four paths per extent. This is one reason why I do not use more than one extent, as each extent adds four paths.
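The path arithmetic above is simple multiplication. The sketch below assumes the typical layout described here (two adapters per device and two paths per adapter); the function name is illustrative, not a vSphere API:

```python
# Path count for a datastore, assuming the typical configuration described
# above: each LUN (extent) is reached through 2 storage adapters, with
# 2 paths per adapter.
def path_count(extents, adapters_per_device=2, paths_per_adapter=2):
    return extents * adapters_per_device * paths_per_adapter

print(path_count(1))  # 4 paths for a single-extent datastore
print(path_count(3))  # 12 paths once a datastore spans 3 extents
```

This is why keeping a 1:1 datastore-to-LUN mapping keeps path management tractable: every extra extent multiplies the number of paths you have to manage.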
If you are not familiar with extents, Cormac Hogan explains them well in this blog post: http://blogs.vmware.com/vsphere/2012/02/vmfs-extents-are-they-bad-or-simply-misunderstood.html

For VMFS, you can see the same counters at both the Datastore level and the Disk level. Their values will be identical if you follow the recommended configuration of creating a 1:1 relationship between a datastore and a LUN, that is, presenting an entire LUN to a datastore (using all of its capacity).

The following screenshot shows how we manage ESXi storage. Click on the ESXi host you need to manage, select the Manage tab, and then the Storage subtab. In this subtab, we can see the adapters, devices, and the host cache. The screen shows an ESXi host with the list of its adapters. I have selected vmhba2, which is an FC HBA. Notice that it is connected to 5 devices. Each device has 4 paths, so I have 20 paths in total.

ESXi adapter

Let's move on to the Storage Devices tab. The following screenshot shows the list of devices. Because NFS is not a disk, it does not appear here. I have selected one of the devices to show its properties.

ESXi device

If you click on the Paths tab, you will be presented with the information shown in the next screenshot, including whether a path is active. Note that not all paths carry I/O; it depends on your configuration and multipathing software. Because each LUN typically has four paths, path management can be complicated if you have many LUNs.

ESXi paths

The story is quite different at the VM layer. A VM does not see the underlying shared storage; it sees local disks only. So regardless of whether the underlying storage is NFS, VMFS, or RDM, the VM sees all of them as virtual disks. You lose visibility into the physical adapter (for example, you cannot tell how many IOPS on vmhba2 are coming from a particular VM) and the physical paths (for example, how many disk commands travelling on a given path are coming from a particular VM).
You can, however, see the impact at the Datastore level and the physical Disk level. The Datastore counter is especially useful. For example, if you notice that your IOPS is higher at the Datastore level than at the virtual Disk level, it means you have a snapshot. The snapshot I/O is not visible at the virtual Disk level, as the snapshot is stored on a different virtual disk.

From VM to disk

Counters in vCenter and vRealize Operations

We compared the metric groups between vCenter and vRealize Operations. We know that vRealize Operations provides a lot more detail, especially for larger objects such as vCenter, data center, and cluster. It also provides information about the distributed switch, which is not displayed in vCenter at all. This makes it useful for big-picture analysis. We will now look at individual counters. To give us a two-dimensional analysis, I will not approach this from the vSphere objects' point of view. Instead, we will examine the four key types of metrics (CPU, RAM, network, and storage). For each type, I will provide my personal take on what I think is good guidance for their values. For example, I will give guidance on a good value for CPU contention based on what I have seen in the field. This is not an official VMware recommendation; I will state the official or popular recommendation where I am aware of it.

You should spend time understanding vCenter counters and esxtop counters. This section of the article is not meant to replace the manual. I encourage you to read the vSphere documentation on this topic, as it gives you the required foundation while working with vRealize Operations. The following are the links to this topic:

- The link for vSphere 5.5 is http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.vsphere.monitoring.doc/GUID-12B1493A-5657-4BB3-8935-44B6B8E8B67C.html.
If this link does not work, visit https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html and then navigate to ESXi and vCenter Server 5.5 Documentation | vSphere Monitoring and Performance | Monitoring Inventory Objects with Performance Charts.

- The counters are documented in the vSphere API. You can find them at http://pubs.vmware.com/vsphere-55/index.jsp#com.vmware.wssdk.apiref.doc/vim.PerformanceManager.html. If this link has changed and no longer works, open the vSphere online documentation and navigate to vSphere API/SDK Documentation | vSphere Management SDK | vSphere Web Services SDK Documentation | VMware vSphere API Reference | Managed Object Types | P. Here, choose Performance Manager from the list under the letter P.
- The esxtop manual provides good information on the counters. You can find it at https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html.

You should also be familiar with the architecture of ESXi, especially how the scheduler works.

vCenter has a different collection interval (sampling period) depending upon the timeline you are looking at. Most of the time you are looking at the real-time statistics, as the other timelines do not have enough counters. You will notice right away that most of the counters become unavailable once you choose a longer timeline. In the real-time chart, each data point holds 20 seconds' worth of data. That is as accurate as it gets in vCenter. Because all other performance management tools (including vRealize Operations) get their data from vCenter, they do not get anything more granular than this. As mentioned previously, esxtop allows you to sample down to a minimum of 2 seconds.

Speaking of esxtop, you should be aware that not all counters are exposed in vCenter. For example, if you turn on 3D graphics, a separate SVGA thread is created for that VM. This can consume CPU and it will not show up in vCenter.
The Mouse, Keyboard, Screen (MKS) threads, which give you the console, also do not show up in vCenter.

The next screenshot shows how you lose most of your counters if you choose a timespan other than real time. In the case of CPU, you are basically left with two counters, as Usage and Usage in MHz cover the same thing. You also lose the ability to monitor per core, as the target objects now only list the host and not the individual cores.

Counters are lost beyond 1 hour

Because the real-time timespan only lasts for 1 hour, performance troubleshooting has to be done at the present moment. If the performance issue cannot be recreated, there is no way to troubleshoot it in vCenter. This is where vRealize Operations comes in, as it keeps your data for a much longer period. I was able to perform troubleshooting for a client on a problem that occurred more than a month earlier!

vRealize Operations takes data every 5 minutes. This means it is not suitable for troubleshooting performance problems that do not last 5 minutes. In fact, if a performance issue lasts only 5 minutes, you may not get any alert, because the collection may happen exactly in the middle of those 5 minutes. For example, let's assume the CPU is idle from 08:00:00 to 08:02:30, spikes from 08:02:30 to 08:07:30, and then is idle again from 08:07:30 to 08:10:00. If vRealize Operations is collecting at exactly 08:00, 08:05, and 08:10, you will not see the spike, as it is spread over two data points. This means that for vRealize Operations to pick up a spike in its entirety without any idle data, the spike has to last for 10 minutes or more.

In some metrics, the unit is actually 20 seconds. vRealize Operations averages a set of 20-second data points into a single 5-minute data point.

The Rollups column is important. Average means the average over 5 minutes in the case of vRealize Operations. The Summation value is actually an average for those counters where accumulation makes more sense. An example is CPU Ready time.
It gets accumulated over the sampling period. Over a period of 20 seconds, a VM may accumulate 200 milliseconds of CPU ready time. This translates into 1 percent, which is why I said it is similar to an average, as you lose the peak.

Latest, on the other hand, is different. It takes the last value of the sampling period. For example, when sampling over 20 seconds, it takes the value between the 19th and 20th seconds. This value can be lower or higher than the average of the entire 20-second period. What is missing here is the peak of the sampling period. In the 5-minute period, vRealize Operations does not collect low, average, and high values from vCenter; it takes the average only.

Let's talk about the Units column now. Some common units are milliseconds, MHz, percent, KBps, and KB. Some counters are shown in MHz, which means you need to know your ESXi physical CPU frequency. This can be difficult due to CPU power saving features, which lower the CPU frequency when demand is low. In large environments, this can be operationally difficult, as you have ESXi hosts from different generations (which are likely to sport different GHz ratings). This is also the reason why I state that the cluster is the smallest logical building block. If your cluster has ESXi hosts with different frequencies, these MHz-based counters can be difficult to use, as the VMs get vMotion-ed between hosts by DRS.

vRealize Operations versus vCenter

I mentioned earlier that vRealize Operations does not simply regurgitate what vCenter has. Some of the vSphere-specific characteristics are not properly understood by traditional management tools, and partial understanding can lead to misunderstanding. vRealize Operations starts by fully understanding the unique behavior of vSphere, then simplifies it by consolidating and standardizing the counters. For example, vRealize Operations creates derived counters such as Contention and Workload, then applies them to CPU, RAM, disk, and network.
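Going back to the CPU Ready arithmetic above (200 ms accumulated over a 20-second sample equals 1 percent), the conversion is worth having as a small helper; this is a minimal sketch of that formula:

```python
def ready_ms_to_percent(ready_ms, sample_seconds=20):
    """Convert an accumulated CPU Ready value (in milliseconds) to a
    percentage of the sampling period. 200 ms over 20 s -> 1 percent."""
    return ready_ms / (sample_seconds * 1000) * 100

print(ready_ms_to_percent(200))    # 1.0
print(ready_ms_to_percent(2000))   # 10.0
```

The same conversion works for any accumulated-time counter reported in milliseconds, as long as you know the sampling period it was accumulated over.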
Let's take a look at one example of how partial information can be misleading in a troubleshooting scenario. It is common for customers to invest in ESXi hosts with plenty of RAM; I've seen hosts with 256 to 512 GB of RAM. One reason behind this is the way vCenter displays information. In the following screenshot, vCenter is giving me an alert: the host is running at high memory utilization. I'm not showing the other host, but you can see that it has a warning, as its utilization is high too.

The screenshots are all from vCenter 5.0 and vCenter Operations 5.7, but the behavior is still the same in vCenter 5.5 Update 2 and vRealize Operations 6.0.

vSphere 5.0 – Memory alarm

I'm using vSphere 5.0 and vCenter Operations 5.x for the screenshots as I want to provide an example of the point I stated earlier, which is the rapid change of vCloud Suite. The first step is to check whether someone has modified the alarm by reducing the threshold. The next screenshot shows that utilization above 95 percent will trigger an alert, while utilization above 90 percent will trigger a warning. The threshold has to be breached for at least 5 minutes. The alarm is set to a suitably high configuration, so we will assume the alert genuinely indicates high utilization on the host.

vSphere 5.0 – Alarm settings

Let's verify the memory utilization. I'm checking both hosts, as there are two of them in the cluster. Both are indeed high. The utilization for vmsgesxi006 has gone down in the time taken to review the Alarm Settings tab and move to this view, so both hosts are now in the Warning status.

vSphere 5.0 – Hosts tab

Now we will look at the vmsgesxi006 specification. From the following screenshot, we can see it has 32 GB of physical RAM, and RAM usage is 30,747 MB, which is 93.8 percent utilization.
vSphere – Host's summary page

Since all the numbers shown in the preceding screenshot are refreshed within minutes, we need to check a longer timeline to make sure this is not a one-time spike. So let's check the last 24 hours. The next screenshot shows that the utilization was indeed consistently high: for the entire 24-hour period it stayed above 92.5 percent, and it hit 95 percent several times. So this ESXi host did indeed appear to need more RAM.

Deciding whether to add more RAM is complex; there are many factors to be considered. There will be downtime on the host, and you need to do it for every host in the cluster, since you need to maintain a consistent build cluster-wide. Because the ESXi host is highly utilized, I should increase the RAM significantly so that I can support more VMs or larger VMs. Buying bigger DIMMs may mean throwing away the existing DIMMs, as there are rules restricting the mixing of DIMMs, and mixing DIMMs also increases management complexity. The new DIMMs may require a BIOS update, which may trigger a change request. Alternatively, the larger DIMMs may not be compatible with the existing host, in which case I have to buy a new box. So a RAM upgrade may trigger a host upgrade, which is a larger project.

Before jumping into a procurement cycle to buy more RAM, let's double-check our findings. It is important to ask what the host is used for and who is using it. In this example scenario, we examined a lab environment, the VMware ASEAN lab. Let's check the memory utilization again, this time with that context in mind. The preceding graph shows high memory utilization over a 24-hour period, yet no one was using the lab in the early hours of the morning! I am aware of this as I am the lab administrator.

We will now turn to vCenter Operations for an alternative view. The following screenshot from vCenter Operations 5 tells a different story: CPU, RAM, disk, and network are all in the healthy range.
Specifically for RAM, the host shows 97 percent utilization but only 32 percent demand. Note that the Memory chart is divided into two parts. The upper part is at the ESXi level, while the lower part shows the individual VMs on that host. The upper part is in turn split into two: a green rectangle (Demand) sits on top of a grey rectangle (Usage). The green rectangle shows a healthy figure at around 10 GB, while the grey rectangle is much longer, almost filling the entire area. The lower part shows the hypervisor's and the VMs' memory utilization; each little green box represents one VM.

On the bottom left, note the KEY METRICS section. vCenter Operations 5 shows that Memory | Contention is 0 percent. This means none of the VMs running on the host is contending for memory. They are all being served well!

vCenter Operations 5 – Host's details page

I shared earlier that the behavior remains the same in vCenter 5.5, so let's take a look at how memory utilization is shown there. The next screenshot shows the counters provided by vCenter 5.5. This is from a different ESXi host, as I want to provide you with a second example. Notice that ballooning is 0, so there is no memory pressure on this host. The host has 48 GB of RAM. About 26 GB has been mapped to VMs or the VMkernel, which is shown by the Consumed counter (the highest line in the chart; notice that its value is almost constant). The Usage counter shows 52 percent because it is taken from Consumed. The active memory is a lot lower, as you can see from the line at the bottom; notice that it is not a simple straight line, as its value goes up and down. This confirms that the Usage counter is based on the Consumed counter.

vCenter 5.5 Update 1 memory counters

At this point, some readers might wonder whether this is a bug in vCenter. No, it is not. There are situations in which you want to use consumed memory and not active memory. In fact, some applications may not run properly if you use active memory.
Also, technically, it is not a bug, as the data it gives is correct. It is just that additional data would give a more complete picture, since we are looking at the ESXi level and not the VM level. vRealize Operations distinguishes between active memory and consumed memory and provides both types of data.

vCenter uses the Consumed counter for the utilization of an ESXi host. As you will see later in this article, vCenter uses the Active counter for the utilization of a VM. So the Usage counter has a different formula in vCenter depending upon the object. This makes sense, as they are at different levels. vRealize Operations uses the Active counter for utilization. Just because a physical DIMM on the motherboard is mapped to a virtual DIMM in the VM, it does not mean it is actively used (read or written). You can use that DIMM for other VMs and you will not incur (for practical purposes) performance degradation. It is common for Microsoft Windows to initialize pages with zeroes upon boot, but never use them subsequently.

For further information on this topic, I recommend reviewing Kit Colbert's presentation on memory in vSphere at VMworld 2012; the content is still relevant for vSphere 5.x. The title is Understanding Virtualized Memory Performance Management and the session ID is INF-VSP1729. You can find it at http://www.vmworld.com/docs/DOC-6292. If the link has changed, the full list of VMworld 2012 sessions is at http://www.vmworld.com/community/sessions/2012/.

Not all performance management tools understand this vCenter-specific characteristic. They would have given you a recommendation to buy more RAM.

Summary

In this article, we covered the world of counters in vCenter and vRealize Operations. The counters were analyzed based on their four main groupings (CPU, RAM, disk, and network). We also covered each of the metric groups, which map to the corresponding objects in vCenter. For the counters, we also shared how they are related and how they differ.
Resources for Article:

Further resources on this subject:
- Backups in the VMware View Infrastructure [Article]
- VMware vCenter Operations Manager Essentials - Introduction to vCenter Operations Manager [Article]
- An Introduction to VMware Horizon Mirage [Article]
Packt
24 Dec 2014
3 min read

Introduction to Veeam® ONE™ Business View

In this article, by Kevin L. Sapp, author of the book Managing Virtual Infrastructure with Veeam® ONE™, we will have a look at how Veeam® ONE™ Business View allows you to group and manage your virtual infrastructure in business containers. This is helpful for splitting machines by function, priority, or any other descriptive category you would like. Veeam® ONE™ Business View displays the categorized information about VMs, clusters, and hosts in business terms. This perspective allows you to plan, control, and analyze the changes in the virtual environment. We will also have a look at data collection.

Data collection

The data required to create the business topology is periodically collected from the connected virtual infrastructure servers. The data collection usually runs at a scheduled interval, but you can also run it manually. By default, after a virtual infrastructure server is connected to Veeam® ONE™, the collection is scheduled to run on weekdays at 2 a.m. If required, you can adjust the data collection schedule or switch to the manual collection mode to start each data collection session manually.

Scheduling the data collection

The best way to automate the collection of data is by creating a schedule for a specific VM server. To change the collection mode to Scheduled and to specify the time settings, use the following steps:

1. Open the Veeam® ONE™ Business View web application by either double-clicking on the desktop icon or connecting to the Veeam® ONE™ server using a browser (the URL is http://servername:1340 by default).
2. Click on the Configuration link located in the upper-right corner of the screen.
3. Click on the VI Management Servers menu option located on the left-hand side of the screen.
4. Select the Run mode option for the server whose schedule you would like to change.
While scheduling the data collection for the VM server, perform the following steps:

1. Select the Periodically every option if you plan to run the data collection at a desired interval.
2. Select the Daily at this time option if you plan to run the data collection at a specific time of the day or week.
3. Once the schedule has been created, click on OK.

Collecting data manually

Use the following steps to perform a manual collection of the virtual environment data:

1. Click on the Session History menu item on the left-hand side of the screen.
2. Click on the Run Now button for the server you wish to collect data from manually. The data collection normally takes a few minutes to run, but this can vary based on the size and complexity of your infrastructure.
3. View the details of the session data by clicking on the server in the list shown in Session History.

Summary

In this article, we introduced Veeam® ONE™ Business View and discussed the steps needed to plan, control, and analyze the changes in the virtual environment.

Network Access Control Lists

Packt
27 Nov 2014
6 min read
In this article by Ryan Boud, author of Hyper-V Network Virtualization Cookbook, we will learn to lock down a VM for security access.

Locking down a VM for security access

This article will show you how to apply ACLs to VMs to protect them from unauthorized access.

Getting ready

You will need to start two VMs in the Tenant A VM Network: in this case, Tenant A – VM 10 (used to test the gateway, and as such it should have IIS installed) and Tenant A – VM 11.

How to do it...

Perform the following steps to lock down a VM:

1. In the VMM console, click on the Home tab in the ribbon bar and click on the PowerShell button. This will launch PowerShell with the VMM module already loaded and the console connected to the current VMM instance.
2. To obtain the Virtual Subnet IDs for all subnets in the Tenant A VM Network, enter the following PowerShell:

$VMNetworkName = "Tenant A"
$VMNetwork = Get-SCVMNetwork | Where-Object -Property Name -EQ $VMNetworkName
Get-SCVMSubnet -VMNetwork $VMNetwork | Select-Object VMNetwork,Name,SubnetVlans,VMSubnetID

3. You will be presented with the list of subnets and the VMSubnetID for each. The VMSubnetID will be used later in this article; in this case, the VMSubnetID is 4490741, as shown in the following screenshot. Your VMSubnetID value may differ from the one obtained here; this is normal behavior.
4. In the PowerShell console, run the following PowerShell to get the IP addresses of Tenant A – VM 10 and Tenant A – VM 11:

$VMs = @()
$VMs += Get-SCVirtualMachine -Name "Tenant A - VM 10"
$VMs += Get-SCVirtualMachine -Name "Tenant A - VM 11"
ForEach($VM in $VMs){
    Write-Output "$($VM.Name): $($VM.VirtualNetworkAdapters.IPv4Addresses)"
    Write-Output "Host name: $($VM.HostName)"
}

5. You will be presented with the IPv4 addresses for the two VMs, as shown in the following screenshot. Please leave this PowerShell console open.
Your IP addresses and host names may differ from those shown here; this is normal behavior.

6. In the VMM console, open the VMs and Services workspace and navigate to All Hosts | Hosts | hypvclus01.
7. Right-click on Tenant A – VM 11, navigate to Connect or View, and then click on Connect via Console.
8. Log in to the VM via the Remote Console.
9. Open Internet Explorer and go to the URL http://10.0.0.14, where 10.0.0.14 is the IP address of Tenant A – VM 10, as we discussed in step 4. You will be greeted with the default IIS page. This shows that there are currently no ACLs preventing Tenant A – VM 11 from accessing Tenant A – VM 10, either within Hyper-V or within the Windows Firewall.
10. Open a PowerShell console on Tenant A – VM 11 and enter the following command, where 10.0.0.14 is the IP address of Tenant A – VM 10. This will run a continuous ping against Tenant A – VM 10:

ping 10.0.0.14 -t

11. In the PowerShell console left open in step 5, enter the following PowerShell, where HYPVCH1.ad.demo.com is the name of the host where Tenant A – VM 10 is running, as obtained in step 4, and the IsolationID is the VMSubnetID obtained in step 2:

Invoke-Command -ComputerName HYPVCH1.ad.demo.com -ScriptBlock{
    Add-VMNetworkAdapterExtendedAcl -Action Deny -Direction Inbound -VMName "Tenant A - VM 10" -Weight 1 -IsolationID 4490741
}

Please leave this PowerShell console open. When adding base rules such as a Deny All, it is suggested to apply a weight of 1 to allow other rules to override it if appropriate.

12. Return to the PowerShell console left open on Tenant A – VM 11 in step 10. You will see that Tenant A – VM 10 has stopped responding to pings. This is because you have created a Hyper-V Port ACL that denies all inbound traffic to Tenant A – VM 10.
13. In the same PowerShell console, enter the following PowerShell, where 10.0.0.14 is the IP address of Tenant A – VM 10:

Test-NetConnection -CommonTCPPort HTTP -ComputerName 10.0.0.14 -InformationLevel Detailed
This shows that you cannot access the IIS website on Tenant A – VM 10.

14. Return to the PowerShell console left open on the VMM console in step 11 and enter the following PowerShell cmdlets, where HYPVCH1.ad.demo.com is the name of the host where Tenant A – VM 10 is running, as obtained in step 4, and the IsolationID is the VMSubnetID obtained in step 2:

Invoke-Command -ComputerName HYPVCH1.ad.demo.com -ScriptBlock{
    Add-VMNetworkAdapterExtendedAcl -Action Allow -Direction Inbound -VMName "Tenant A - VM 10" -Weight 10 -IsolationID 4490741 -LocalPort 80
}

Please leave this PowerShell console open. When adding rules, it is suggested to use weight increments of 10 to allow other rules to be inserted between existing rules if necessary.

15. On Tenant A – VM 11, repeat step 13. You will see that TCPTestSucceeded has changed to True.
16. Return to the PowerShell console left open on the VMM console in step 14, and enter the following PowerShell cmdlets, where HYPVCH1.ad.demo.com is the name of the host where Tenant A – VM 10 is running, as obtained in step 4, and the IsolationID is the VMSubnetID obtained in step 2:

Invoke-Command -ComputerName HYPVCH1.ad.demo.com -ScriptBlock{
    Add-VMNetworkAdapterExtendedAcl -Action Deny -Direction Outbound -VMName "Tenant A - VM 10" -Weight 1 -IsolationID 4490741
}

Please leave this PowerShell console open. When adding base rules such as a Deny All, it is suggested to apply a weight of 1 to allow other rules to override it if appropriate.

17. On Tenant A – VM 11, repeat step 13. You will see that TCPTestSucceeded has changed to False. This is because all outbound connections have been denied.
18. Return to the PowerShell console left open on the VMM console in step 16, and enter the following PowerShell cmdlets. This removes the inbound rule for port 80:

Invoke-Command -ComputerName HYPVCH1.ad.demo.com -ScriptBlock{
    Remove-VMNetworkAdapterExtendedAcl -Direction Inbound -VMName "Tenant A - VM 10" -Weight 10
}

19. In the same PowerShell console, enter the following cmdlets. This adds a stateful ACL rule, which ensures that the switch dynamically creates an outbound rule to allow the traffic to return to the requestor:

Invoke-Command -ComputerName HYPVCH1.ad.demo.com -ScriptBlock{
    Add-VMNetworkAdapterExtendedAcl -Action Allow -Direction Inbound -VMName "Tenant A - VM 10" -Weight 10 -IsolationID 4490741 -LocalPort 80 -Stateful $True -Protocol TCP
}

20. On Tenant A – VM 11, repeat step 13. You will see that TCPTestSucceeded has changed to True. This is because the stateful ACL is now in place.

How it works...

Extended ACLs are applied as traffic ingresses and egresses the VM through the Hyper-V switch. As the ACLs are VM-specific, they are stored in the VM's configuration file. This ensures that the ACLs move with the VM, providing continuity of the ACLs. For the complete range of options, it is advisable to review the TechNet article at http://technet.microsoft.com/en-us/library/dn464289.aspx.

Summary

In this article, we learned how to lock down a VM for security access.
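As an aside to the How it works... section above, the interplay between a low-weight Deny All rule and a higher-weight Allow rule can be modeled in a few lines. The following Python sketch is an illustrative addition (not from the original article, and not Hyper-V's actual implementation): among the rules that match a packet, the highest-weight rule decides the action.

```python
# Conceptual model of extended port ACL evaluation: of all rules matching a
# packet's direction and port, the rule with the highest weight wins.

def evaluate(rules, direction, port):
    """Return the action of the highest-weight matching rule, or None."""
    matching = [r for r in rules
                if r["direction"] == direction
                and r.get("local_port") in (None, port)]
    if not matching:
        return None
    return max(matching, key=lambda r: r["weight"])["action"]

rules = [
    # Base Deny All inbound rule, weight 1, matches any local port
    {"action": "Deny",  "direction": "Inbound", "weight": 1,  "local_port": None},
    # Allow HTTP inbound, weight 10, so it outweighs the base rule
    {"action": "Allow", "direction": "Inbound", "weight": 10, "local_port": 80},
]

print(evaluate(rules, "Inbound", 80))    # Allow: the port 80 rule outweighs Deny All
print(evaluate(rules, "Inbound", 3389))  # Deny: only the base rule matches
```

This is why the article applies a weight of 1 to Deny All base rules and increments of 10 to specific Allow rules: any later rule can override the base without renumbering.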

High Availability Scenarios

Packt
26 Nov 2014
14 min read
"Live Migration between hosts in a Hyper-V cluster is very straightforward and requires no specific configuration, apart from type and amount of simultaneous Live Migrations. If you add multiple clusters and standalone Hyper-V hosts into the mix, I strongly advise you to configure Kerberos Constrained Delegation for all hosts and clusters involved."
– Hans Vredevoort, MVP Hyper-V

This article, written by Benedict Berger, the author of Hyper-V Best Practices, will guide you through the installation of Hyper-V clusters and their best practice configuration. After installing the first Hyper-V host, it may be necessary to add another layer of availability to your virtualization services. With Failover Clusters, you gain independence from hardware failures and are protected from planned or unplanned service outages. This article covers the prerequisites for and implementation of Failover Clusters.

Preparing for High Availability

Like every project, a High Availability (HA) scenario starts with a planning phase. Virtualization projects often raise the question of additional availability for the first time in an environment. In traditional data centers with physical server systems and local storage, an outage of a hardware component affects only one server hosting one service. The source of the outage can be localized very quickly and the affected parts can be replaced in a short amount of time. Server virtualization comes with great benefits, such as improved operating efficiency and reduced hardware dependencies; however, a single component failure can impact a lot of virtualized systems at once. By adding redundant systems, these single points of failure can be avoided.

Planning a HA environment

The most important factor in deciding whether you need a HA environment is your business requirements.
You need to find out how often and for how long an IT-related production service can be interrupted, planned or unplanned, without causing a serious problem to your business. Those requirements are defined in a company's central IT strategy as well as in IT-driven process definitions. They include the Service Level Agreements of critical business services run in the various departments of your company. If those definitions do not exist or are unavailable, talk to the process owners to find out the level of availability needed. High Availability is structured in different classes, measured by the total uptime in a defined timespan, for example 99.999 percent in a year. Every nine in this figure adds a huge amount of complexity and money needed to ensure this availability, so take time to find out the real availability needed by your services and resist the temptation to run every service on multi-redundant, geo-spread cluster systems, as it may not fit in the budget. Be sure to plan for additional capacity in a HA environment, so you can lose hardware components without sacrificing application performance.

Overview of the Failover Cluster

A Hyper-V Failover Cluster consists of two or more Hyper-V Server compute nodes. Technically, it's possible to use a Failover Cluster with just one compute node; however, it will not provide any availability advantages over a standalone host and is typically only used for migration scenarios. A Failover Cluster hosts roles such as Hyper-V virtual machines on its compute nodes. If one node fails due to a hardware problem, it no longer answers cluster heartbeat communication, so the service interruption is detected almost instantly. The virtual machines running on that node are powered off immediately due to the hardware failure on their compute node.
The remaining cluster nodes then immediately take over these VMs in an unplanned failover process and start them on their own hardware. The virtual machines will be back up and running after a successful boot of their operating systems and applications in just a few minutes. Hyper-V Failover Clusters work under the condition that all compute nodes have access to a shared storage instance holding the virtual machine configuration data and its virtual hard disks. In case of a planned failover, for example when patching compute nodes, it's possible to move running virtual machines from one cluster node to another without interrupting the VM. All cluster nodes can run virtual machines at the same time, as long as there is enough failover capacity to run all services when a node goes down. Even though a Hyper-V cluster is still called a Failover Cluster (utilizing the Windows Server Failover Clustering feature), it is indeed capable of running as an Active/Active cluster. To ensure that all these capabilities of a Failover Cluster actually work, an accurate planning and implementation process is required.

Failover Cluster prerequisites

To successfully implement a Hyper-V Failover Cluster, we need suitable hardware, software, permissions, and network and storage infrastructure, as outlined in the following sections.

Hardware

The hardware used in a Failover Cluster environment needs to be validated against the Windows Server Catalog. Microsoft will only support Hyper-V clusters when all components are certified for Windows Server 2012 R2. The servers used to run our HA virtual machines should ideally consist of identical hardware models with identical components. It is possible, and supported, to run servers with different hardware components in the same cluster, for example different amounts of RAM; however, due to the higher level of complexity this introduces, it is not best practice. Special planning considerations are needed to address the CPU requirements of a cluster.
To ensure maximum compatibility, all CPUs in a cluster should be exactly the same model. While it's technically possible to mix Intel and AMD CPUs in the same cluster, their different architectures mean you will lose core cluster capabilities such as Live Migration. Choosing a single vendor for your CPUs is not enough; even when using different CPU models from the same vendor, your cluster nodes may support different CPU instruction set extensions, and with different instruction sets, Live Migration won't work either. There is a compatibility mode that disables most of the extended instruction sets on all CPUs on all cluster nodes; however, this has a negative impact on performance and should be avoided. A better approach to this problem is to create another cluster from the legacy CPUs, running smaller or non-production workloads without affecting your high-performance production workloads. If you want to extend your cluster after some time, you will often find that the exact same hardware is no longer available to purchase. Choose the current revision of the model or product line you are already using in your cluster and manually compare the CPU instruction sets at http://ark.intel.com/ and http://products.amd.com/, respectively. Choose the current CPU model that best fits the original CPU features of your cluster and have this design validated by your hardware partner. Ensure that your servers are equipped with compatible CPUs, the same amount of RAM, and the same network cards and storage controllers.

The network design

Mixing different vendors of network cards in a single server is fine and best practice for availability, but make sure all your Hyper-V hosts are using an identical hardware setup. A network adapter should be used exclusively for either LAN traffic or storage traffic. Do not mix these two types of communication in any basic scenario.
There are some more advanced scenarios involving converged networking that can enable mixed traffic, but in most cases, this is not a good idea. A Hyper-V Failover Cluster requires multiple layers of communication between its nodes and storage systems. Hyper-V networking and storage options have changed dramatically through the different releases of Hyper-V, and with Windows Server 2012 R2, the network design options are nearly endless. In this article, we will work with a typical basic network design. We have at least six Network Interface Cards (NICs) available in our servers, each with a bandwidth of 1 Gb/s. If you have more than five interface cards available per server, use NIC Teaming to ensure the availability of the network, or even use converged networking. Converged networking will also be your choice if you have fewer than five network adapters available.

The first NIC will be used exclusively for host communication to our Hyper-V host and will not be involved in VM network traffic or cluster communication at any time; it will carry Active Directory and management traffic to our Management OS. The second NIC will handle Live Migration of virtual machines between our cluster nodes. The third NIC will be used for VM traffic; our virtual machines will be connected to the various production and lab networks through this NIC. The fourth NIC will be used for internal cluster communication. The first four NICs can either be teamed through Windows Server NIC Teaming or abstracted from the physical hardware through Windows Server network virtualization and a converged fabric design. The fifth NIC will be reserved for storage communication; as advised, we will be isolating storage and production LAN communication from each other. If you do not use iSCSI or SMB3 storage communication, this NIC will not be necessary. If you use Fibre Channel SAN technology, use an FC HBA instead.
If you leverage Direct Attached Storage (DAS), use the appropriate connector for storage communication. The sixth NIC will also be used for storage communication, as a redundancy; the redundancy will be established via MPIO and not via NIC Teaming. There is no need for a dedicated heartbeat network as in older revisions of Windows Server with Hyper-V; all cluster networks are automatically used for sending heartbeat signals to the other cluster members. If you don't have 1 Gb/s interfaces available, or if you use 10 GbE adapters, it's best practice to implement a converged networking solution.

Storage design

All cluster nodes must have access to the virtual machines residing on a centrally shared storage medium. This could be a classic setup with a SAN or a NAS, or a more modern concept with Windows Scale-Out File Servers hosting virtual machine files on SMB3 file shares. In this article, we will use a NetApp SAN system that's capable of providing a classic SAN approach with LUNs mapped to our hosts as well as utilizing SMB3 file shares, but any other Windows Server 2012 R2 validated SAN will fulfill the requirements. In our first setup, we will utilize Cluster Shared Volumes (CSVs) to store several virtual machines on the same storage volume. These days it is not good practice to create a single volume per virtual machine, due to the massive management overhead. A good rule of thumb is to create one CSV per cluster node; in larger environments with more than eight hosts, create one CSV per two to four cluster nodes. To utilize CSVs, follow these steps:

1. Ensure that all components (SAN, firmware, HBAs, and so on) are validated for Windows Server 2012 R2 and are up to date.
2. Connect your SAN physically to all your Hyper-V hosts via iSCSI or Fibre Channel connections.
3. Create two LUNs on your SAN for hosting virtual machines.
4. Activate Hyper-V performance options for these LUNs if possible (that is, on a NetApp, by setting the LUN type to Hyper-V).
5. Size the LUNs with enough capacity to host all your virtual hard disks.
6. Label the LUNs CSV01 and CSV02 with appropriate LUN IDs.
7. Create another small LUN, 1 GB in size, and label it Quorum.
8. Make the LUNs available to all Hyper-V hosts in this specific cluster by mapping them on the storage device. Do not make these LUNs available to any other hosts or clusters.
9. Prepare storage DSMs and drivers (that is, MPIO) for Hyper-V host installation.
10. Refresh the disk configuration on the hosts, install drivers and DSMs, and format the volumes as NTFS (quick).
11. Install Microsoft Multipath I/O when using redundant storage paths:

Install-WindowsFeature -Name Multipath-IO -ComputerName ElanityHV01, ElanityHV02

In this example, I added the MPIO feature to two Hyper-V hosts with the computer names ElanityHV01 and ElanityHV02. SANs are typically equipped with two storage controllers for redundancy reasons; make sure to disperse your workloads over both controllers for optimal availability and performance.

If you leverage file servers providing SMB3 shares, the preceding steps do not apply to you. Perform the following steps instead:

1. Create a storage space with the desired disk types; use storage tiering if possible.
2. Create a new SMB3 file share for applications.
3. Customize the permissions to include all Hyper-V servers from the planned cluster, as well as the Hyper-V cluster object itself, with full control.

Server and software requirements

To create a Failover Cluster, you need to install a second Hyper-V host. Use the same unattended file, but change the IP address and the hostname. Join both Hyper-V hosts to your Active Directory domain if you have not done this yet. Hyper-V can be clustered without leveraging Active Directory, but it then lacks several key capabilities, such as Live Migration, and shouldn't be done on purpose.
The ability to successfully boot up a domain-joined Hyper-V cluster without any Active Directory domain controller being present at boot time is the major benefit of the Active Directory independence of Failover Clusters. Ensure that you create a Hyper-V virtual switch with the same name on both hosts to ensure cluster compatibility, and that both nodes are installed with all updates. If you have System Center 2012 R2 in place, use System Center Virtual Machine Manager to create the Hyper-V cluster.

Implementing Failover Clusters

After preparing our Hyper-V hosts, we will now create a Failover Cluster using PowerShell. I'm assuming your hosts are installed, storage and network connections are prepared, and the Hyper-V role is already active, with up-to-date drivers and firmware on your hardware. First, we need to ensure that the server name, date, and time of our hosts are correct. Time and time zone configuration should occur via Group Policy. For automatic network configuration later on, it's important to rename the network connections from their defaults to their designated roles using PowerShell, as seen in the following commands:

Rename-NetAdapter -Name "Ethernet" -NewName "Host"
Rename-NetAdapter -Name "Ethernet 2" -NewName "LiveMig"
Rename-NetAdapter -Name "Ethernet 3" -NewName "VMs"
Rename-NetAdapter -Name "Ethernet 4" -NewName "Cluster"
Rename-NetAdapter -Name "Ethernet 5" -NewName "Storage"

The Network Connections window should look like the following screenshot:

Hyper-V host Network Connections

Next comes the IP configuration of the network adapters. If you are not using DHCP for your servers, manually set the IP configuration (different subnets) of the specified network cards.
Here is a great blog post on how to automate this step: http://bit.ly/Upa5bJ

Next, we need to activate the necessary Failover Clustering features on both of our Hyper-V hosts:

Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName ElanityHV01, ElanityHV02

Before actually creating the cluster, we launch a cluster validation via PowerShell:

Test-Cluster ElanityHV01, ElanityHV02

Open the generated .mht file for more details, as shown in the following screenshot:

Cluster validation

As you can see, there are some warnings that should be investigated. However, as long as there are no errors, the configuration is ready for clustering and fully supported by Microsoft. Still, check the warnings to be sure you won't run into problems in the long run. After you have fixed potential errors and warnings listed in the Cluster Validation Report, you can finally create the cluster as follows:

New-Cluster -Name "CN=ElanityClu1,OU=Servers,DC=cloud,DC=local" -Node ElanityHV01, ElanityHV02 -StaticAddress 192.168.1.49

This creates a new cluster named ElanityClu1, consisting of the nodes ElanityHV01 and ElanityHV02 and using the cluster IP address 192.168.1.49. The cmdlet creates the cluster and the corresponding Active Directory object in the specified OU. Moving the cluster object to a different OU later on is no problem at all; even renaming is possible when done the right way. After creating the cluster, open the Failover Cluster Manager console and you should be able to connect to your cluster:

Failover Cluster Manager

You will see that all your cluster nodes and Cluster Core Resources are online. Rerun the validation report and copy the generated .mht files to a secure location in case you need them for support queries. Keep in mind that you have to rerun this wizard if any hardware or configuration changes occur to the cluster components, including any of its nodes.
The initial cluster setup is now complete, and we can continue with post-creation tasks.

Summary

With the knowledge from this article, you are now able to design and implement Hyper-V Failover Clusters as well as guest clusters. You are aware of the basic concepts of High Availability and the storage and networking options necessary to achieve it, and you have seen real-world proven configurations that ensure a stable operating environment.
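As a footnote to the planning section above, the availability classes it mentions translate directly into allowed downtime. The following Python sketch is an illustrative addition (not from the original article) that converts an uptime percentage into the maximum downtime per year, showing why every extra nine is so expensive:

```python
# Maximum allowed downtime per year for common availability classes.
# "Five nines" (99.999%) leaves barely over five minutes of downtime a year,
# which is why each additional nine adds so much complexity and cost.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525600, ignoring leap years

def downtime_minutes_per_year(uptime_percent: float) -> float:
    return MINUTES_PER_YEAR * (1 - uptime_percent / 100)

for nines in ["99", "99.9", "99.99", "99.999"]:
    minutes = downtime_minutes_per_year(float(nines))
    print(f"{nines:>7}% uptime -> {minutes:8.1f} minutes downtime per year")
```

Running this for 99.9 percent yields roughly 8.8 hours of allowed downtime per year, while 99.999 percent allows only about 5.3 minutes.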

OpenVZ Container Administration

Packt
11 Nov 2014
11 min read
In this article by Mark Furman, the author of OpenVZ Essentials, we will go over the various aspects of OpenVZ administration. Some of the things we are going to cover in this article are as follows:

- Listing the containers that are running on the server
- Starting, stopping, suspending, and resuming containers
- Destroying, mounting, and unmounting containers
- Turning quotas on and off
- Creating snapshots of containers in order to back up and restore a container to another server

Using vzlist

The vzlist command is used to list the containers on a node. When you run vzlist on its own, without any options, it will only list the containers that are currently running on the system:

vzlist

In the previous example, we used the vzlist command to list the containers that are currently running on the server.

Listing all the containers on the server

If you want to list all the containers on the server instead of just the containers that are currently running, you will need to add -a after vzlist. This tells vzlist to include all of the containers that have been created on the node in its output:

vzlist -a

In the previous example, we used the vzlist command with the -a flag to tell vzlist that we want to list all of the containers that have been created on the server.

The vzctl command

The next command that we are going to cover is the vzctl command. This is the primary command that you are going to use when you want to perform tasks with the containers on the node. The initial functions of the vzctl command that we will go over are how to start, stop, and restart a container.

Starting a container

We use vzctl to start a container on the node.
To start a container, run the following command:

vzctl start 101
Starting container ...
Setup slm memory limit
Setup slm subgroup (default)
Setting devperms 20002 dev 0x7d00
Adding IP address(es) to pool:
Adding IP address(es): 192.168.2.101
Hostname for Container set: gotham.example.com
Container start in progress...

In the previous example, we used the vzctl command with the start option to start container 101.

Stopping a container

To stop a container, run the following command:

vzctl stop 101
Stopping container ...
Container was stopped
Container is unmounted

In the previous example, we used the vzctl command with the stop option to stop container 101.

Restarting a container

To restart a container, run the following command:

vzctl restart 101
Stopping container ...
Container was stopped
Container is unmounted
Starting container ...

In the previous example, we used the vzctl command with the restart option to restart container 101.

Using vzctl to suspend and resume a container

The following set of commands use vzctl to suspend and resume a container. When you use vzctl to suspend a container, it creates a save point of the container in a dump file. You can then use vzctl to resume the container to the saved state it was in before the container was suspended.

Suspending a container

To suspend a container, run the following command:

vzctl suspend 101

In the previous example, we used the vzctl command with the suspend option to suspend container 101.

Resuming a container

To resume a container, run the following command:

vzctl resume 101

In the previous example, we used the vzctl command with the resume option to resume operations on container 101.

In order to get suspend or resume to work, you may need to enable several kernel modules by running the following:

modprobe vzcpt
modprobe vzrst

Destroying a container

You can destroy a container that you created by using the destroy argument with vzctl.
This will remove all the files, including the configuration file and the directories created by the container. In order to destroy a container, you must first stop the container from running. To destroy a container, run the following command:

vzctl destroy 101
Destroying container private area: /vz/private/101
Container private area was destroyed.

In the previous example, we used the vzctl command with the destroy option to destroy container 101.

Using vzctl to mount and unmount a container

You are able to mount and unmount a container's private area, located at /vz/root/ctid, which provides the container with the root filesystem that exists on the server. Mounting and unmounting containers comes in handy when you have trouble accessing the filesystem for your container.

Mounting a container

To mount a container, run the following command:

vzctl mount 101

In the previous example, we used the vzctl command with the mount option to mount the private area for container 101.

Unmounting a container

To unmount a container, run the following command:

vzctl umount 101

In the previous example, we used the vzctl command with the umount option to unmount the private area for container 101.

Disk quotas

Disk quotas allow you to define special limits for your container, including the size of the filesystem or the number of inodes that are available for use.

Setting quotaon and quotaoff for a container

You can manually start and stop the container's disk quota by using the quotaon and quotaoff arguments with vzctl.

Turning on disk quota for a container

To turn on disk quota for a container, run the following command:

vzctl quotaon 101

In the previous example, we used the vzctl command with the quotaon option to turn on disk quota for container 101.
Turning off disk quota for a container

To turn off disk quota for a container, run the following command:

vzctl quotaoff 101

In the previous example, we used the vzctl command with the quotaoff option to turn off disk quota for the container 101.

Setting disk quotas with vzctl set

You are able to set the disk quotas for your containers using the vzctl set command. With this command, you can set the disk space, disk inodes, and the quota time.

To set the disk space for container 101 to 2 GB, use the following command:

vzctl set 101 --diskspace 2000000:2200000 --save

In the previous example, we used the vzctl set command to set the disk space quota to 2 GB with a 2.2 GB barrier. The two values separated by the : symbol are the soft limit and the hard limit. The soft limit in the example is 2000000 and the hard limit is 2200000. The soft limit can be exceeded up to the value of the hard limit; the hard limit can never be exceeded. OpenVZ refers to soft limits as barriers and hard limits as limits.

To set the disk inodes for container 101 to 1 million inodes, use the following command:

vzctl set 101 --diskinodes 1000000:1100000 --save

In the previous example, we used the vzctl set command to set the disk inode limits to a soft limit (barrier) of 1 million inodes and a hard limit (limit) of 1.1 million inodes.

To set the quota time, which is the period of time in seconds that the container is allowed to exceed the soft limit values of the disk and inode quotas, use the following command:

vzctl set 101 --quotatime 900 --save

In the previous example, we used the vzctl command to set the quota time to 900 seconds, or 15 minutes. This means that once the container's soft limit is exceeded, the container can stay over quota, up to the value of the hard limit, for 15 minutes before it reports that the value is over quota.
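The arithmetic behind the 2000000:2200000 example can be sketched as follows. This assumes unsuffixed --diskspace values are interpreted as 1 KB blocks (worth confirming against your vzctl man page), so 2 GB is roughly 2000000 blocks, and the hard limit here adds 10% headroom over the soft limit to match the example.

```shell
#!/bin/sh
# Sketch of the soft/hard limit arithmetic for --diskspace.
# Assumption: unsuffixed values are 1 KB blocks, so 2 GB ~= 2000000.
size_gb=2
soft=$((size_gb * 1000000))   # soft limit (barrier) in 1 KB blocks
hard=$((soft + soft / 10))    # hard limit (limit): soft limit + 10%
echo "vzctl set 101 --diskspace ${soft}:${hard} --save"
```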
Further use of vzctl set

The vzctl set command allows you to make modifications to the container's config file without the need to edit the file manually. We are going to go over a few of the options that are essential to administering the node.

--onboot

The --onboot flag allows you to set whether or not the container will be booted when the node is booted. To set the onboot option, use the following command:

vzctl set 101 --onboot yes --save

In the previous example, we used the vzctl command with the set option and the --onboot flag to enable the container to boot automatically when the server is rebooted, and then saved the change to the container's configuration file.

--bootorder

The --bootorder flag allows you to change the boot order priority of the container. The higher the value given, the sooner the container will start when the node is booted. To set the bootorder option, use the following command:

vzctl set 101 --bootorder 9 --save

In the previous example, we used the vzctl command with the set option and the --bootorder flag to change the priority of the order in which the container is booted, and then saved the option to the container's configuration file.

--userpasswd

The --userpasswd flag allows you to change the password of a user that belongs to the container. If the user does not exist, the user will be created. To set the userpasswd option, use the following command:

vzctl set 101 --userpasswd admin:changeme

In the previous example, we used the vzctl command with the set option and the --userpasswd flag to change the password for the admin user to changeme.

--name

The --name flag allows you to give the container a name that, once assigned, can be used in place of the CTID value when using vzctl. This gives you an easier way to identify your containers: instead of remembering the container ID, you only need to remember the container name to access the container.
To set the name option, use the following command:

vzctl set 101 --name gotham --save

In the previous example, we used the vzctl command with the set option to set our container 101 to use the name gotham and then saved the change to the container's configuration file.

--description

The --description flag allows you to add a description for the container to give an idea of what the container is for. To use the description option, use the following command:

vzctl set 101 --description "Web Development Test Server" --save

In the previous example, we used the vzctl command with the set option and the --description flag to add the description "Web Development Test Server" to the container.

--ipadd

The --ipadd flag allows you to add an IP address to the specified container. To use the ipadd option, use the following command:

vzctl set 101 --ipadd 192.168.2.103 --save

In the previous example, we used the vzctl command with the set option and the --ipadd flag to add the IP address 192.168.2.103 to container 101 and then saved the change to the container's configuration file.

--ipdel

The --ipdel flag allows you to remove an IP address from the specified container. To use the ipdel option, use the following command:

vzctl set 101 --ipdel 192.168.2.103 --save

In the previous example, we used the vzctl command with the set option and the --ipdel flag to remove the IP address 192.168.2.103 from the container 101 and then saved the change to the container's configuration file.

--hostname

The --hostname flag allows you to set or change the hostname for your container. To use the hostname option, use the following command:

vzctl set 101 --hostname gotham.example.com --save

In the previous example, we used the vzctl command with the set option and the --hostname flag to change the hostname of the container to gotham.example.com.

--disable

The --disable flag allows you to disable a container's startup.
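The vzctl set flags above are often combined when provisioning a new container. The following hypothetical sketch (the function name and all values are illustrative, not from the text) prints the sequence of commands one might run to name a container, set its hostname, and add an IP address.

```shell
#!/bin/sh
# Hypothetical provisioning sketch: print the vzctl set commands that
# would name a container, set its hostname, and add an IP address.
# The CTID, name, hostname, and IP below are example values only.
provision_ct() {
    ctid=$1 name=$2 host=$3 ip=$4
    echo "vzctl set $ctid --name $name --save"
    echo "vzctl set $ctid --hostname $host --save"
    echo "vzctl set $ctid --ipadd $ip --save"
}

provision_ct 101 gotham gotham.example.com 192.168.2.103
```

Dropping the echo prefixes would turn the printed commands into live ones.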
When this option is in place, you will not be able to start the container until the option is removed. To use the disable option, use the following command:

vzctl set 101 --disable yes --save

In the preceding example, we used the vzctl command with the set option and the --disable flag to prevent the container 101 from starting, and then saved the change to the container's configuration file.

--ram

The --ram flag allows you to set the physical page limit of the container, which helps to regulate the amount of memory that is available to the container. To use the ram option, use the following command:

vzctl set 101 --ram 2G --save

In the previous example, we set the physical page limit to 2 GB using the --ram flag.

--swap

The --swap flag allows you to set the amount of swap memory that is available to the container. To use the swap option, use the following command:

vzctl set 101 --swap 1G --save

In the preceding example, we set the swap memory limit for the container to 1 GB using the --swap flag.

Summary

In this article, we learned to administer the containers created on the node by using the vzctl command, and to list the containers on the server with the vzlist command. The vzctl command accepts a broad range of flags that allow you to perform many actions on a container: you can start, stop, restart, create, and destroy a container; suspend and resume the current state of the container; mount and unmount a container; and issue changes to the container's config file by using vzctl set.

Resources for Article:

Further resources on this subject:

Basic Concepts of Proxmox Virtual Environment [article]

A Virtual Machine for a Virtual World [article]

Backups in the VMware View Infrastructure [article]