Wednesday, 15 April 2015

Speed Up vMotion

Nowadays, most servers have been virtualized, and I can’t imagine working without vMotion in my datacenter. Using vMotion, I can move (migrate) VMs between ESXi hosts without interruption.

As the amount of RAM configured on VMs grows, vMotion takes longer to complete. So in this article we discuss how to speed up vMotion.

Best Practices for vMotion Networking:

Dedicate at least one gigabit ethernet adapter for vMotion
Provision at least one additional physical NIC as a failover NIC
vMotion traffic should not be routed, as this may cause latency issues

We can speed up vMotion in several ways:

10Gbit network

The easiest way to accelerate vMotion is to use a 10Gbit connection. This provides not only more bandwidth but also more concurrent vMotions: with a 1Gbit line speed, four concurrent vMotions per host are possible, while a 10Gbit link speed allows eight concurrent vMotions per host. Dedicated 10Gbit links just for vMotion are not a common scenario, so if you have a Distributed Switch, you can use Network I/O Control (NetIOC) to prioritize vMotion traffic.
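
As a quick sanity check, you can confirm the negotiated link speed of the uplinks carrying vMotion from the ESXi shell (the vmnic names in the output will of course depend on your host):

~ # esxcli network nic list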

Distributed Switch and LACPv2

If you administer a VMware infrastructure with an Enterprise Plus license, you are lucky. Enterprise Plus lets you use all features, such as the Distributed Switch (VMware or Cisco Nexus 1000v). Why is it useful for vMotion? The Distributed Switch supports the Link Aggregation Control Protocol (LACP) since version 5.1 and LACPv2 since version 5.5. LACPv2 lets you use the following load-balancing algorithms:

Destination IP address
Destination IP address and TCP/UDP port
Destination IP address and VLAN
Destination IP address, TCP/UDP port and VLAN
Destination MAC address
Destination TCP/UDP port
Source IP address
Source IP address and TCP/UDP port
Source IP address and VLAN
Source IP address, TCP/UDP port and VLAN
Source MAC address
Source TCP/UDP port
Source and destination IP address
Source and destination IP address and TCP/UDP port
Source and destination IP address and VLAN
Source and destination IP address, TCP/UDP port and VLAN
Source and destination MAC address
Source and destination TCP/UDP port
Source port ID
VLAN
Distributed Switch in vSphere 5.1 supports only one algorithm:

Source and destination IP address
Pros:

Even with a two-host VMware cluster and a Distributed Switch 5.5 (LACPv2) configured, two or more vMotions running at the same time between the two ESXi hosts should use two or more uplinks (one uplink per vMotion).
Cons:

Requires an Enterprise Plus license
LACPv2 must be supported by the physical switch
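
If you go this route, here is a hedged sketch for verifying the LAG from the ESXi shell on vSphere 5.5 (it assumes the host is already attached to a vDS with LACP enabled):

~ # esxcli network vswitch dvs vmware lacp config get
~ # esxcli network vswitch dvs vmware lacp status get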
Standard Switch (vSwitch) and Etherchannel

If your VMware hosts are licensed below Enterprise Plus, you can speed up vMotion by using a static EtherChannel. It provides fault tolerance and high-speed links between switches and servers by grouping two to eight physical Ethernet links into one logical Ethernet link with additional failover links.

Pros:

Any ESXi license is supported
Easy to configure
Cons:

It supports only the source and destination IP address hash. For example, two vMotions between ESXi A and ESXi B would use the same single uplink, because the source and destination IPs are the same.
Multi-NIC vMotion

Multi-NIC vMotion was introduced in vSphere 5.0. This feature load balances vMotion network traffic over multiple network adapters, which means that even a single vMotion session is balanced across all available vMotion vmknics.


Let’s assume that we have two ESXi hosts and we want to migrate a VM configured with 512 GB of RAM. To show exactly how Multi-NIC vMotion works, consider the two scenarios below:

Scenario A: Host A with 1 x 10Gbit NIC and Host B with 4 x 1Gbit NICs

Scenario B: Host A with 2 x 10Gbit NICs and Host B with 1 x 10Gbit and 3 x 1Gbit NICs

When a migration is initiated, VMkernel will pair source and destination vMotion NICs based on link speed, pairing multiple NICs to a single NIC as needed to fully utilize the link. VMkernel opens a TCP connection per network adapter pair and transparently load balances the migration traffic over all the connections.

In scenario A, VMkernel will pair the 10GbE NIC on the ESXi A host with the four 1GbE NICs on the ESXi B host, resulting in a total of four TCP connections.

In scenario B, VMkernel will pair the first 10GbE NIC on the ESXi A host with the sole 10GbE NIC on the ESXi B host, then pair the second 10GbE NIC on the ESXi A host with the three 1GbE NICs on the ESXi B host, resulting in a total of four TCP connections.

Pros:

No special license required
Easy to configure
Cons:

None :)
To configure Multi-NIC vMotion, please follow the steps in the post here.
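
For reference, here is a minimal sketch of adding a second vMotion vmknic from the ESXi shell. The port group name and IP addressing below are assumptions for illustration only; the active/standby uplink order of each vMotion port group is still set in the vSphere Client:

~ # esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-02
~ # esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.50.12 --netmask=255.255.255.0 --type=static
~ # vim-cmd hostsvc/vmotion/vnic_set vmk2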

Even if you have a Distributed Switch, I recommend using Multi-NIC vMotion. If you use both a vSwitch and a vDS, one VMkernel port group may be on the standard switch and the other on the distributed switch.

Configuring Multi-NIC vMotion with the Cisco Nexus 1000v requires a few additional steps, described in another of my articles here.

Conclusion


I used to configure a static EtherChannel to improve vMotion until the release of vSphere 5.0 and the introduction of Multi-NIC vMotion. Nowadays, almost all of my VMware designs are based on Multi-NIC vMotion. It’s simple to configure and works perfectly, with no additional hardware requirements or costs.

Where is my vCenter VM?


If you have a big VMware infrastructure and need to troubleshoot the vCenter VM itself, it is not uncommon to have difficulty knowing where (on which ESXi host, exactly) the VM is located.
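
Until one of the options below is in place, a quick way to hunt for it is to SSH to each likely ESXi host and list its registered VMs (a sketch, assuming the VM name contains "vcenter"):

~ # vim-cmd vmsvc/getallvms | grep -i vcenter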

How can you mitigate that problem? Generally, depending on the size of the infrastructure, I recommend one of two options:

Dedicated Management Cluster

The management cluster hosts all of the critical vCloud infrastructure components (e.g. vCenter, VUM, SRM, SSO, AD, etc.). Separating infrastructure components from production resources improves manageability of the vCloud infrastructure (for example, finding the vCenter VM ;)

DRS rule for vCenter VM

If your infrastructure is not big enough to justify a management cluster, you can use a VM-Host affinity rule. It allows you to pin vCenter to a set of ESXi hosts (I recommend 2 hosts) and prevent DRS from automatically migrating the virtual machine to other ESXi hosts.

To create an affinity rule in vCenter, follow the steps below:

Right click the Cluster > Edit Settings.
Enable DRS, if it is not already enabled.
Click Rules > Add.
Click the DRS Groups Manager tab.
Click Add under Host DRS Groups to create a new Host DRS Group containing the hosts in the cluster that you want to tie the VMs to.
Click Add under Virtual Machine DRS Groups to create a Virtual Machine DRS Group for all of the virtual machines that you want to tie to the hosts listed in the host group created above.
Click the Rule tab, give the new rule a name and from the Type drop-down menu, click Virtual Machines to Hosts.
Under Cluster VM Group, select the newly created VM group.
Select "Must run on hosts in group".
Under Cluster Host Group, select the newly created host group and click OK.

That's all, you're done!
Happy learning 

Tuesday, 6 January 2015

How to Create a Virtual SSD for vSphere 5.5 vFlash

Using a virtual solid-state drive can be a handy way to save time and money for certain vSphere tasks.

One of my favorite upgrades in VMware's vSphere 5.5 is Flash Read Cache, or vFlash. It's integrated with vCenter 5.5, High Availability (HA), the Distributed Resource Scheduler (DRS) and vMotion.
vFlash uses a portion of a local physical solid-state drive (SSD) in a vSphere infrastructure to provide a high-performance read cache layer for the ESXi host. The other very nice benefit is that vFlash offloads I/O from your SAN to the local physical SSD. As a result, the ESXi host provides lower application latency to its hosted virtual machines (VMs) and their applications.


I needed to speed up the performance of the applications running on some of my VMs in a proof-of-concept environment, so I decided to use vFlash. However, I didn't have a physical SSD, so I created a virtual one by tricking vSphere into accepting a virtual SSD in lieu of a physical one.

The process of creating a virtual SSD is straightforward. While it's not a permanent substitute for a physical SSD, it works for testing in an ESXi lab environment. Virtual SSDs save money on hardware without a big impact on performance. Here are the steps for creating the virtual SSD:

Create a local virtual disk on the ESXi host(s) on which you want to enable vFlash. Ensure that the size of the local virtual SSD doesn't exceed the size of the ESXi host's local virtual disk.
Locate the ESXi host's local virtual disk path (e.g., mpx.vmhba1:C0:T0:L0).
Open a Secure Shell (SSH) session to each ESXi host you'll be configuring with a local virtual SSD.
Convert the local virtual disk to a local virtual SSD, using the following esxcli command strings.
The Storage Array Type Plugin (SATP) rule allows the host to handle storage I/O for this new virtual disk properly while tagging it as an SSD. Here's the command that creates a SATP rule and enables the SSD option:
~ # esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d  mpx.vmhba1:C0:T0:L0 -o enable_ssd
Next, verify the SATP rule creation:
~ # esxcli storage nmp satp rule list | grep enable_ssd
Output:
VMW_SATP_LOCAL    mpx.vmhba1:C0:T0:L0    enable_ssd    user

Next comes reclamation of the new virtual SSD, to enable application of the SATP rule:

~ # esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T0:L0

Finally, confirm that the new virtual SSD has been created:
~ # esxcli storage core device list -d mpx.vmhba1:C0:T0:L0

Output:
mpx.vmhba1:C0:T0:L0
   Display Name: Original VM Disk (mpx.vmhba1:C0:T0:L0)
   Has Settable Display Name: false
   Size: 5120
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/mpx.vmhba1:C0:T0:L0
   Vendor: VMware
   Model: Virtual disk
   Revision: 1.0
   SCSI Level: 2
   Is Pseudo: false
   Status: on
   Is RDM Capable: false
   Is Local: true
   Is Removable: false
   Is SSD: true
   Is Offline: false
   Is Perennially Reserved: false
   Thin Provisioning Status: unknown
   Attached Filters:
   VAAI Status: unsupported
   Other UIDs: vml.0000000000577d274761343a323a47

Once you verify that the "Is SSD" entry is true, the local disk is now presented as an SSD. You can use the GUI or the following command to refresh vSphere's storage:

~ # vim-cmd hostsvc/storage/refresh
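
If you ever want to revert the disk to its original non-SSD presentation, a hedged sketch (assuming the same device identifier) is to remove the rule and reclaim the device again:

~ # esxcli storage nmp satp rule remove -s VMW_SATP_LOCAL -d mpx.vmhba1:C0:T0:L0 -o enable_ssd
~ # esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T0:L0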

Now that your virtual SSD is created and verified, you can add it to your ESXi host(s) and start using the vFlash feature. vFlash is easy to set up – just configure it in the vSphere Web Client.

Setting Up vFlash:
1. In the vSphere Web Client, navigate to the host.
2. Click the Manage tab and click Settings.
3. Under Virtual Flash, select Virtual Flash Resource Management, then click Add Capacity.
4. From the list of available SSD devices, select the newly-created local virtual SSD drive to use for virtual flash and click OK.
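
To double-check from the ESXi shell that the device has been claimed as virtual flash capacity, you can also list the vFlash devices (this namespace is available on ESXi 5.5):

~ # esxcli storage vflash device list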

A Few Warnings
Now that you have vFlash configured and operational, you can start enjoying its benefits. Remember to ensure that your VMs are at virtual hardware version 10, or they will not benefit from vFlash. I also recommend configuring vFlash on every ESXi host; I learned this lesson through trial and error while conducting a vMotion. A VM will fail to vMotion if the destination ESXi host doesn't have vFlash configured.
At this point, vFlash will start providing increased storage read performance and reduced storage I/O contention. The best part is that you're using a virtual SSD in lieu of a physical one, while keeping the vFlash benefits, saving on hardware and running your lab environment within budget.

Sunday, 4 January 2015

vSphere Web Client HTTP Status 404: The requested resource is not available



It is often standard practice to install the server operating system on C: and then applications and data on additional drives such as E:, F: and so on.
However, for the vSphere Web Client, from its 5.0 release through 5.1 and including the latest version at the time of writing (5.5 U2), if you install the vSphere Web Client to any directory other than the default installation directory, you will get the following error when browsing to the vSphere Web Client page.

"HTTP Status 404

The requested resource is not available"



Choose whether to enable VM monitoring via VMware Tools, and select the sensitivity for monitoring if enabled.
Click Next.

Choose "Disable EVC".
Click Next.

For the swapfile policy, choose "Store the swapfile in the same directory as the virtual machine" (this will speed up vMotion).
Click Next.

Review the cluster configuration.
Click Finish.

Right-click the cluster and select "Add Host".

Enter the hostname/IP address of the ESX/ESXi host.
Enter a username for the host (root).
Enter a password for the host.
Click Next.

Review the host details and the list of virtual machines currently running on the host (if any).
Click Next.

If running in evaluation mode, click Next. Otherwise, enter the license for the correct edition of vSphere.

Choose whether to enable Lockdown Mode.
Click Next.

Choose where to put any existing virtual machines on the host, either:
In the root resource pool.
In a new resource pool named after the host.
Click Next.

Review the actions and click Finish.

The ESX/ESXi host is added to the HA/DRS cluster, including any existing virtual machines.

How to Reset the Password for admin@System-Domain vCenter SSO 5.1 (Single Sign On)

If you are in the unfortunate position where you or someone else has forgotten the vCenter SSO 5.1 admin@System-Domain password, then you may have a problem, particularly if no other users have been delegated as SSO administrators.
The aim of this blog post is to help you reset the admin@System-Domain password in SSO 5.1 only (it is much easier in 5.5).

First and foremost, it's worth pointing out that this is completely unsupported by VMware. VMware's advice, and the supported method, is to reinstall SSO.
However, you do have two other possible options, presented below.

The first option is simply to check the password for the SSO database, which is stored in clear text and may be the same as the SSO admin user password.
The second is to update the admin user's password hash in the SSO SQL database, essentially changing it to the hash of a password we know, which we will change again later.


Option A - If you're lucky, you might be able to find the password this way.

1. Check this file to see if the password used for the SSO SQL database user is the same as the password used for "admin@System-Domain":
C:\Program Files\VMware\Infrastructure\SSOServer\webapps\lookupservice\WEB-INF\classes\config.properties
Note: You will need to change the drive letter if you installed vCenter SSO somewhere other than C:.

2. The password used for the SQL Server database is on the "db.pass=" line:

## Jdbc Url
db.url=jdbc:jtds:sqlserver://;serverName=;portNumber=1433;databaseName=sqldb1sso
## DB Username
db.user=svcvc1sso
## DB password
db.pass=Password123
## DB type
db.type=Mssql
## DB host
db.host=sqldb1.vmadmin.co.uk



Option B - This should work if you do not know the SSO master password for "admin@System-Domain" and wish to reset it.

1. Open SQL Server Management Studio and connect to the SQL Server hosting the SSO (RSA) database.

2. Back up the SSO RSA database so you can restore it if there is a problem.

3. Run the following SQL script against the SSO RSA database to set the "admin" user's password hash to the hash of "VMware1234!".
Note: You can change the password later; for now we set it to the above password to save re-installing SSO.

UPDATE
[dbo].[IMS_PRINCIPAL]
SET
[PASSWORD] = '{SSHA256}KGOnPYya2qwhF9w4xK157EZZ/RqIxParohltZWU7h2T/VGjNRA=='
WHERE
LOGINUID = 'admin'
AND
PRINCIPAL_IS_DESCRIPTION = 'Admin';


4. If you try to log in to the vSphere Web Client at this point, you may get the following message about your password having expired:

"Associated users password is expired"

5. Open an elevated command prompt and run the command:
SET JAVA_HOME=C:\Program Files\VMware\Infrastructure\jre
Note: Do not put quotes around the path, and change the directory to the path you installed vCenter to.

6. Navigate to the ssolscli directory (change to the directory you installed vCenter SSO to):
cd "C:\Program Files\VMware\Infrastructure\SSOServer\ssolscli"

7. Run the ssopass command to remove the password expiry:
ssopass -d https://vcenter1.rootzones.net:7444/lookupservice/sdk admin
Note: This has to be the FQDN the certificate was generated for; localhost will not work.

8. Type your current password, even if it is expired.

9. Type the new password, and then type it again to confirm.

10. Now you can log on to the vSphere Web Client with the following credentials:
admin@System-Domain
VMware1234!

11. Change the password for the account and keep a record of it!

12. It would also be advantageous to add a domain user or group to the SSO administrators group.


Features Removed from Windows Server 2012

This post covers the Windows legacy features that have been removed from Windows Server 2012.


Cluster.exe
Good old cluster.exe is replaced by the failover cluster PowerShell cmdlets. Cluster.exe is not installed by default, but it is available as an optional component. 32-bit DLL resources are also no longer supported.

XDDM
Hardware driver support for XDDM has been removed in Windows Server 2012. You may still use the WDDM basic display-only driver that is included in this OS.

Hyper-V TCP Offload
The TCP offload feature for Hyper-V VMs has been removed. Guest OSes will not be able to use TCP Chimney.

Token Ring
Token Ring network support is removed in Windows Server 2012 (who needs it anyway?).

SMB.sys
This file has been removed; the OS now uses the Winsock Kernel (WSK) to provide the same service.

NDIS 5.x
The NDIS 5.0, 5.1 and 5.2 APIs are removed. NDIS 6 is supported.

VM Import/Export
In Hyper-V, the import/export concept for transporting VMs is replaced by the "Register / Unregister" method.

SMTP
SMTP and the associated management tools are deprecated; you should begin using System.Net.Smtp. With this API you will not be able to insert a message into a file for pickup; instead, configure web apps to connect on port 25 to another server using SMTP.

WMI Namespaces
The SNMP service and its WMI components will be removed, and the Win32_ServerFeature namespace is removed.

The wmic command-line tool is replaced by Get-WmiObject.

New Features in Windows Server 2012

User interface
Server Manager has been redesigned with an emphasis on easing management of multiple servers. The operating system, like Windows 8, uses the Metro UI unless installed in Server Core mode. Windows PowerShell in this version has over 2,300 cmdlets, compared with around 200 in Windows Server 2008 R2, and there is also command auto-completion.

Task Manager
Windows 8 and Windows Server 2012 include a new version of Windows Task Manager alongside the old version. In the new version, the tabs are hidden by default, showing only applications. In the new Processes tab, the processes are displayed in various shades of yellow, with darker shades representing heavier resource use. It lists application names, application status, and overall utilization data for CPU, memory, hard disk, and network resources, moving the process information found in the older Task Manager to the new Details tab. The Performance tab is split into CPU, memory (RAM), disk, Ethernet, and, if applicable, wireless network sections, with graphs for each. The CPU tab no longer displays individual graphs for every logical processor on the system by default; instead, it can display data for each NUMA node. When displaying data for each logical processor on machines with more than 64 logical processors, the CPU tab now displays simple utilization percentages on heat-mapping tiles. The color used for these heat maps is blue, with darker shades again indicating heavier utilization. Hovering the cursor over any logical processor's data now shows the NUMA node of that processor and its ID, if applicable. Additionally, a new Startup tab has been added that lists startup applications. The new Task Manager recognizes when a WinRT application has the "Suspended" status.

Installation options
Unlike its predecessor, Windows Server 2012 can switch between the Server Core and GUI (full) installation options without a full reinstallation. There is also a new third installation option that allows MMC and Server Manager to run, but without Windows Explorer or the other parts of the normal GUI shell.

IP address management (IPAM)
Windows Server 2012 has an IPAM role for discovering, monitoring, auditing, and managing the IP address space used on a corporate network. IPAM provides for administration and monitoring of servers running Dynamic Host Configuration Protocol (DHCP) and Domain Name Service (DNS). IPAM includes components for:

Automatic IP address infrastructure discovery: IPAM discovers domain controllers, DHCP servers, and DNS servers in the domains you choose. You can enable or disable management of these servers by IPAM.
Custom IP address space display, reporting, and management: The display of IP addresses is highly customizable, and detailed tracking and utilization data is available. IPv4 and IPv6 address space is organized into IP address blocks, IP address ranges, and individual IP addresses. IP addresses are assigned built-in or user-defined fields that can be used to further organize IP address space into hierarchical, logical groups.
Audit of server configuration changes and tracking of IP address usage: Operational events are displayed for the IPAM server and managed DHCP servers. IPAM also enables IP address tracking using DHCP lease events and user logon events collected from Network Policy Server (NPS), domain controllers, and DHCP servers. Tracking is available by IP address, client ID, host name, or user name.
Monitoring and management of DHCP and DNS services: IPAM enables automated service availability monitoring for Microsoft DHCP and DNS servers across the forest. DNS zone health is displayed, and detailed DHCP server and scope management is available using the IPAM console. Both IPv4 and IPv6 are fully supported.

Active Directory
Windows Server 2012 has a number of changes to Active Directory from the version shipped with Windows Server 2008 R2. The Active Directory Domain Services installation wizard has been replaced by a new section in Server Manager, and the Active Directory Administrative Center has been enhanced. A GUI has been added to the Active Directory Recycle Bin. Password policies can differ more easily within the same domain. Active Directory in Windows Server 2012 is now aware of any changes resulting from virtualization, and virtualized domain controllers can be safely cloned. Upgrades of the domain functional level to Windows Server 2012 are simplified; they can be performed entirely in Server Manager. Active Directory Federation Services no longer needs to be downloaded when installed as a role, and claims which can be used by Active Directory Federation Services have been introduced into the Kerberos token. Windows PowerShell commands used by the Active Directory Administrative Center can be viewed in a "PowerShell History Viewer".

Hyper-V
Windows Server 2012, along with Windows 8, includes a new version of Hyper-V, as presented at the Microsoft Build event. Many new features have been added to Hyper-V, including network virtualization, multi-tenancy, storage resource pools, cross-premise connectivity, and cloud backup. Additionally, many of the former restrictions on resource consumption have been greatly lifted. Each virtual machine in this version of Hyper-V can access up to 32 virtual processors, up to 512 gigabytes of random-access memory, and up to 16 terabytes of virtual disk space per virtual hard disk (using a new .vhdx format). Up to 1,024 virtual machines can be active per host, and up to 4,000 can be active per failover cluster. The version of Hyper-V shipped with the client version of Windows 8 requires a processor that supports SLAT and for SLAT to be turned on, while the version in Windows Server 2012 only requires it if the RemoteFX role is installed.

ReFS
ReFS (Resilient File System, originally codenamed "Protogon") is a new file system, initially intended for file servers, that improves on NTFS in Windows Server 2012. Major new features of ReFS include:

Improved reliability for on-disk structures
ReFS uses B+ trees for all on-disk structures, including metadata and file data. The file size, total volume size, number of files in a directory and number of directories in a volume are limited by 64-bit numbers, which translates to a maximum file size of 16 exbibytes and a maximum volume size of 1 yobibyte (with 64 KB clusters), allowing large scalability with no practical limits on file and directory size (hardware restrictions still apply). Metadata and file data are organized into tables similar to a relational database. Free space is counted by a hierarchical allocator which includes three separate tables for large, medium, and small chunks. File names and file paths are each limited to a 32 KB Unicode text string.

Built-in resiliency
ReFS employs an allocation-on-write update strategy for metadata, which allocates new chunks for every update transaction and uses large IO batches. All ReFS metadata has built-in 64-bit checksums which are stored independently. The file data can have an optional checksum in a separate "integrity stream", in which case the file update strategy also implements allocation-on-write; this is controlled by a new "integrity" attribute applicable to both files and directories. If file data or metadata nevertheless becomes corrupt, the file can be deleted and restored from backup without taking the whole volume offline for maintenance. As a result of the built-in resiliency, administrators do not need to periodically run error-checking tools such as CHKDSK when using ReFS.

Compatibility with existing APIs and technologies
ReFS does not require new system APIs, and most file system filters continue to work with ReFS volumes. ReFS supports many existing Windows and NTFS features such as BitLocker encryption, Access Control Lists, the USN Journal, change notifications, symbolic links, junction points, mount points, reparse points, volume snapshots, file IDs, and oplocks. ReFS integrates with Storage Spaces, a storage virtualization layer that allows data mirroring and striping, as well as sharing storage pools between machines. ReFS resiliency features enhance the mirroring feature provided by Storage Spaces and can detect whether any mirrored copies of files become corrupt using a background data scrubbing process, which periodically reads all mirror copies, verifies their checksums and replaces bad copies with good ones. Some NTFS features are not supported in ReFS, including named streams, object IDs, short names, file compression, file-level encryption (EFS), user data transactions, sparse files, hard links, extended attributes, and disk quotas. ReFS does not itself offer data deduplication. Dynamic disks with mirrored or striped volumes are replaced with mirrored or striped storage pools provided by Storage Spaces. However, in Windows Server 2012, automated error correction is only supported on mirrored spaces, and booting from ReFS is not supported either. ReFS was first shown in screenshots from leaked build 6.2.7955, where it went by the code name "Protogon". Support for ReFS is absent in the Developer Preview (build 8102). ReFS is not readable by Windows 7 or earlier.

HA Slots Calculation in VMware

This post will help you understand how HA slots are calculated.


What is a slot?
As per VMware's definition:
"A slot is a logical representation of the memory and CPU resources that satisfy the requirements for any powered-on virtual machine in the cluster."
If you have configured reservations at the VM level, they influence the HA slot calculation. The highest memory reservation and the highest CPU reservation among the VMs in your cluster determine the slot size for the cluster.
Here is an example: if the VM with the highest memory reservation in the cluster has 8192 MB (8 GB) reserved, and the VM with the highest CPU reservation has 4096 MHz reserved, then the memory slot size is 8192 MB and the CPU slot size is 4096 MHz for the cluster.
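
To show how those slot sizes translate into a slot count, here is a rough worked example (the host capacities below are assumed for illustration and ignore virtualization overhead). For a host with 48,000 MHz of CPU and 65,536 MB of RAM available:

CPU slots = floor(48,000 MHz / 4,096 MHz) = 11
Memory slots = floor(65,536 MB / 8,192 MB) = 8
Slots per host = min(11, 8) = 8

Roughly speaking, HA then compares the total slots remaining in the cluster after removing the largest host(s) covered by the configured host failures with the number of powered-on VMs to determine failover capacity.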