Thursday, 19 October 2017

Cloud Computing with Amazon Webservices

It was a rainy evening in Bangalore, the sky full of clouds. Hmmm, cloud….!!!!   A visible mass of condensed water vapour floating in the atmosphere, typically high above the general level of the ground, is called a “cloud”. Nah.....!! That’s not my definition of cloud.

To me, the cloud is an arrangement of computing infrastructure, including computer networks, storage, servers, applications and services, which is hosted somewhere on the internet and can be accessed globally.

Cloud computing is built on virtualization, which is the creation of a virtual version of something, say a computer network, a storage device or a platform. To enable virtualization, we need a piece of software called a hypervisor. A hypervisor is a thin layer of software which builds the relationship between the hardware and the user, much like an operating system.
There are 2 types of hypervisors: -
1.       Type 1 hypervisors run directly on the system hardware. They are often referred to as "native" or "bare metal" hypervisors, e.g. VMware ESXi, Xen, Hyper-V.
2.       Type 2 hypervisors run on top of an operating system and are called hosted hypervisors, e.g. VMware Player, Microsoft Virtual PC, Oracle VirtualBox.
Amazon Web Services (AWS) strikes the top of my mind when I think about cloud computing. AWS is the market leader in the public cloud business and one of the most technologically advanced companies in the world today, winning over Microsoft Azure, Google Cloud Platform and others.
Now, the question is what is AWS?
Amazon Web Services (AWS), a company that is part of Amazon.com, is a cloud computing service provider offering Infrastructure-as-a-Service, Platform-as-a-Service and Software-as-a-Service. It is highly reliable, scalable and cost effective, and offers a pay-as-you-go model.
AWS uses Xen paravirtualization (PV) and Xen full virtualization (HVM) as the hypervisor for its different AMIs (Amazon Machine Images). AMIs are pre-defined operating system templates, often with certain software pre-installed, mainly to reduce administration time.
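As a small illustration, assuming the AWS Tools for PowerShell are installed and credentials are configured (the AMI ID and key pair name below are placeholders), launching an instance from an AMI is a single call:

# Launch one EC2 instance from a pre-baked AMI (placeholder IDs)
New-EC2Instance -ImageId "ami-0123456789abcdef0" -InstanceType "t2.micro" -KeyName "my-keypair" -MinCount 1 -MaxCount 1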
AWS offers services in the following areas: -
1.       Compute
2.       Networking & Content Delivery
3.       Database
4.       Analytics
5.       Storage
6.       Security, Identity & Compliance
7.       Developer Tools
And many more…

To conclude, I would say that AWS services can perform various functions: simple mail services, complex analysis of live social streaming data, AI, machine learning, image recognition... you name it, they have it.

Wednesday, 15 April 2015

Speed Up vMotion

Nowadays most servers have been virtualized, and I can’t imagine working without vMotion in my datacenter. Using vMotion, I can move (migrate) VMs between ESXi hosts without interruption.

Increasing the configured RAM of VMs means vMotion needs more time to complete. So in this article we discuss how to speed up vMotion.

Best Practices for vMotion Networking:

Dedicate at least one Gigabit Ethernet adapter to vMotion
Provision at least one additional physical NIC as a failover NIC
vMotion traffic should not be routed, as this may cause latency issues
We can speed up vMotion in various ways:

10Gbit network

The easiest way to accelerate vMotion is to use a 10Gbit connection. This provides not only more bandwidth but also more concurrent vMotions: with a 1Gbit link speed, four concurrent vMotions per host are possible, while a 10Gbit link speed allows eight. Dedicated 10Gbit links just for vMotion are not a common scenario, so if you have a Distributed Switch, you can use Network I/O Control (NetIOC) to prioritize vMotion traffic.

Distributed Switch and LACPv2

If you are the administrator of a VMware infrastructure with an Enterprise Plus license installed, you are lucky. Enterprise Plus lets you use all features, such as the Distributed Switch (VMware or Cisco Nexus 1000v). Why is it useful for vMotion? The Distributed Switch supports the Link Aggregation Control Protocol (LACP) since version 5.1 and LACPv2 since version 5.5. LACPv2 lets you use the following load-balancing algorithms:

Destination IP address
Destination IP address and TCP/UDP port
Destination IP address and VLAN
Destination IP address, TCP/UDP port and VLAN
Destination MAC address
Destination TCP/UDP port
Source IP address
Source IP address and TCP/UDP port
Source IP address and VLAN
Source IP address, TCP/UDP port and VLAN
Source MAC address
Source TCP/UDP port
Source and destination IP address
Source and destination IP address and TCP/UDP port
Source and destination IP address and VLAN
Source and destination IP address, TCP/UDP port and VLAN
Source and destination MAC address
Source and destination TCP/UDP port
Source port ID
VLAN
The Distributed Switch in vSphere 5.1 supports only one algorithm:

Source and destination IP address
Pros:

Even in a two-host VMware cluster with Distributed Switch 5.5 configured (LACPv2), two or more vMotions running at the same time between the two ESXi hosts should use two or more uplinks (one uplink per vMotion).
Cons:

Requires an Enterprise Plus license
LACPv2 must be supported by the physical switch
Standard Switch (vSwitch) and Etherchannel

If your VMware hosts are licensed lower than Enterprise Plus, you can speed up vMotion by using a static Etherchannel. It provides fault tolerance and high-speed links between switches and servers by grouping two to eight physical Ethernet links into one logical Ethernet link with additional failover links.

Pros:

Any ESXi license is supported
Easy to configure
Cons:

It supports only the source and destination IP address hash. For example, two vMotions between ESXi A and ESXi B would use the same single uplink, because the source and destination IPs are the same (the sketch below illustrates why).
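To see why, plug numbers into the hash. This is a simplified PowerShell illustration of the documented "source XOR destination, modulo uplink count" scheme, not the exact ESXi implementation:

# Simplified illustration of IP-hash uplink selection (src XOR dst, mod uplink count)
$src = [BitConverter]::ToUInt32(([IPAddress]"192.168.50.11").GetAddressBytes(), 0)
$dst = [BitConverter]::ToUInt32(([IPAddress]"192.168.50.12").GetAddressBytes(), 0)
$uplinks = 2
($src -bxor $dst) % $uplinks   # identical endpoints always hash to the same uplink

Whatever uplink that pair hashes to, every vMotion between the same two endpoints lands on it.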
Multi-NIC vMotion

Multi-NIC vMotion was introduced in vSphere 5.0. This feature load balances the vMotion network traffic over multiple network adapters, which means that a single vMotion session is balanced across all available vmknics.


Let’s assume that we have two ESXi hosts and we want to migrate a VM configured with 512GB of RAM. To show exactly how Multi-NIC vMotion works, consider the two scenarios below:

Scenario A: Host A with 1 x 10Gbit NIC and Host B with 4 x 1Gbit NICs
Scenario B: Host A with 2 x 10Gbit NICs and Host B with 1 x 10Gbit and 3 x 1Gbit NICs

When a migration is initiated, the VMkernel pairs source and destination vMotion NICs based on link speed, pairing multiple NICs to a single NIC as needed to fully utilize the link. The VMkernel opens a TCP connection per network adapter pair and transparently load balances the migration traffic over all the connections.

In scenario A, the VMkernel will pair the single 10GbE NIC on host ESXi A with the four 1GbE NICs on host ESXi B, resulting in a total of four TCP connections.

In scenario B, the VMkernel will pair the first 10GbE NIC on host ESXi A with the sole 10GbE NIC on host ESXi B, then pair the second 10GbE NIC on ESXi A with the three 1GbE NICs on ESXi B, resulting in a total of four TCP connections.

Pros:

No special license required
Easy to configure
Cons:

None :)
To configure Multi-NIC vMotion, please follow the steps mentioned in the post here; a rough PowerCLI outline is also shown below.
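For reference, this is what the standard-switch variant might look like in PowerCLI. It is a minimal sketch with placeholder host, uplink and IP names: two port groups with mirrored active/standby uplinks, each carrying one vMotion-enabled VMkernel adapter.

# Connect to vCenter and pick the host (placeholder names throughout)
Connect-VIServer vcenter.lab.local
$esx = Get-VMHost "esxi-a.lab.local"
$vsw = Get-VirtualSwitch -VMHost $esx -Name "vSwitch0"

# Two port groups whose active/standby uplinks are mirrored
$pg1 = New-VirtualPortGroup -VirtualSwitch $vsw -Name "vMotion-01"
$pg2 = New-VirtualPortGroup -VirtualSwitch $vsw -Name "vMotion-02"
Get-NicTeamingPolicy -VirtualPortGroup $pg1 | Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic2
Get-NicTeamingPolicy -VirtualPortGroup $pg2 | Set-NicTeamingPolicy -MakeNicActive vmnic2 -MakeNicStandby vmnic1

# One vMotion-enabled VMkernel adapter per port group
New-VMHostNetworkAdapter -VMHost $esx -PortGroup "vMotion-01" -VirtualSwitch $vsw -IP 192.168.50.11 -SubnetMask 255.255.255.0 -VMotionEnabled:$true
New-VMHostNetworkAdapter -VMHost $esx -PortGroup "vMotion-02" -VirtualSwitch $vsw -IP 192.168.50.12 -SubnetMask 255.255.255.0 -VMotionEnabled:$true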

Even if you have a Distributed Switch, I recommend using Multi-NIC vMotion. If you use both a vSwitch and a vDS, one VMkernel port group may be on the standard switch and the other on the distributed switch.

Configuring Multi-NIC vMotion with the Cisco Nexus 1000v requires some extra steps, mentioned in another of my articles here.

Conclusion


I used to configure a static Etherchannel to improve vMotion until the release of vSphere 5.0 and the introduction of Multi-NIC vMotion. Nowadays almost all my VMware designs are based on Multi-NIC vMotion. It’s simple to configure and works perfectly, without additional hardware requirements and costs.

Where is my vCenter VM

If you have a big VMware infrastructure and need to solve a problem with the vCenter VM itself, it is not uncommon to find it difficult to know where (exactly, on which ESXi host) the VM is located.
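Of course, when vCenter itself is still responding, a quick PowerCLI one-liner answers the question directly (the VM name is a placeholder); the options below are for when it is not:

# Ask vCenter which ESXi host is currently running the vCenter VM
Get-VM "vcenter01" | Select-Object Name, VMHost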

How to mitigate that problem? Generally, depending on the size of the infrastructure, I recommend two options:

Dedicated Management Cluster

The management cluster hosts all of the critical vCloud infrastructure components (e.g. vCenter, VUM, SRM, SSO, AD). Separating infrastructure components from production resources improves manageability of the vCloud infrastructure (e.g. finding the vCenter VM ;)

DRS rule for vCenter VM

If your infrastructure is not big enough to have a management cluster, you can use a VM-Host affinity rule. It allows you to pin vCenter to a set of ESXi hosts (I recommend 2 hosts) and prevents DRS from auto-migrating the virtual machine to other ESXi hosts.

To create the affinity rule in vCenter, follow the steps below (a PowerCLI sketch follows the list):

Right click the Cluster > Edit Settings.
Enable DRS, if it is not already enabled.
Click Rules > Add.
Click the DRS Groups Manager tab.
Click Add under Host DRS Groups to create a new Host DRS Group containing the hosts in the cluster that you want to tie the VMs to.
Click Add under Virtual Machine DRS Groups to create a Virtual Machine DRS Group for all of the virtual machines that you want to tie to the hosts listed in the Host group created above.
Click the Rule tab, give the new rule a name and from the Type drop-down menu, click Virtual Machines to Hosts.
Under Cluster VM Group select the newly created VM group.
Select Must run on hosts in group.

Under Cluster Host Group select the newly created Cluster Host Group and click OK.
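If you prefer PowerCLI, here is a minimal sketch of the same rule. It assumes a PowerCLI release that includes the DRS group cmdlets (New-DrsClusterGroup and New-DrsVMHostRule) and uses placeholder cluster, VM and host names:

# Group the vCenter VM and the two preferred hosts, then pin them together
$cluster = Get-Cluster "Prod-Cluster"
New-DrsClusterGroup -Name "vCenterVM" -Cluster $cluster -VM (Get-VM "vcenter01")
New-DrsClusterGroup -Name "vCenterHosts" -Cluster $cluster -VMHost (Get-VMHost "esxi01.lab.local","esxi02.lab.local")
New-DrsVMHostRule -Name "Pin-vCenter" -Cluster $cluster -VMGroup "vCenterVM" -VMHostGroup "vCenterHosts" -Type "MustRunOn"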

That's all, you're done!
Happy learning!

Tuesday, 6 January 2015

How to Create a Virtual SSD for vSphere 5.5 vFlash

Using a virtual solid-state drive can be a handy way to save time and money for certain vSphere tasks.

One of my favorite upgrades in VMware's vSphere 5.5 is the Flash Read Cache, or vFlash. It's integrated with vCenter 5.5, High Availability (HA), Distributed Resource Scheduler (DRS) and vMotion.
vFlash uses a portion of a local physical solid-state drive (SSD) in a vSphere infrastructure to provide a high-performance read cache layer for the ESXi host. The other very nice benefit is that vFlash offloads I/O from your SAN to the local physical SSD. As a result, the ESXi host provides lower application latency to its hosted virtual machines (VMs) and their applications.


I needed to speed up the performance of the applications running on some of my VMs in a proof-of-concept environment, so I decided to use vFlash. However, I didn't have a physical SSD, so I created a virtual one by tricking vSphere into accepting a virtual SSD in lieu of a physical one.





The process of creating a virtual SSD is straightforward. While it's not a permanent substitute for a physical SSD, it works for testing in an ESXi lab environment. Virtual SSDs save money on hardware, without causing a big impact on performance. Here are the steps for creating the virtual SSD:

Create a local virtual disk on the ESXi host(s) on which you want to enable vFlash. Ensure that the virtual SSD size doesn't exceed the size of the ESXi host's local virtual disk.
Locate the ESXi host's local virtual disk path (e.g., mpx.vmhba1:C0:T0:L0).
Open a Secure Shell (SSH) session to each ESXi host you'll be configuring with a local virtual SSD.
Convert the local virtual disk to a local virtual SSD, using the esxcli command strings below.
The Storage Array Type Plugin (SATP) will allow your storage I/O to be load balanced properly by vCenter while using this new virtual disk. Here's the command that creates a SATP rule and enables the SSD flag:
~ # esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d  mpx.vmhba1:C0:T0:L0 -o enable_ssd
Next, verify the SATP rule creation:
~ # esxcli storage nmp satp rule list | grep enable_ssd
Output:
VMW_SATP_LOCAL         mpx.vmhba1:C0:T0:L0                    enable_ssd             user

Next comes reclamation of the new virtual SSD, to enable application of the SATP rule:

~ # esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T0:L0

Finally, confirm that the new virtual SSD has been created:
~ # esxcli storage core device list -d mpx.vmhba1:C0:T0:L0

Output:
mpx.vmhba1:C0:T0:L0
   Display Name: Original VM Disk (mpx.vmhba1:C0:T0:L0)
   Has Settable Display Name: false
   Size: 5120
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/mpx.vmhba1:C0:T0:L0
   Vendor: VMware
   Model: Virtual disk
   Revision: 1.0
   SCSI Level: 2
   Is Pseudo: false
   Status: on
   Is RDM Capable: false
   Is Local: true
   Is Removable: false
   Is SSD: true
   Is Offline: false
   Is Perennially Reserved: false
   Thin Provisioning Status: unknown
   Attached Filters:
   VAAI Status: unsupported
   Other UIDs: vml.0000000000577d274761343a323a47

Once you verify that the Is SSD entry is true, the local virtual disk is tagged as an SSD. You can use the GUI or the following command to refresh vSphere's storage view:

~ # vim-cmd hostsvc/storage/refresh

Now that your virtual SSD is created and verified, you can add it to your ESXi host(s) and start using the vFlash feature. vFlash is easy to set up – just configure it in the vCenter Web client.

Setting Up vFlash:
1. In the vSphere Web Client, navigate to the host.
2. Click the Manage tab and click Settings.
3. Under Virtual Flash, select Virtual Flash Resource Management, then click Add Capacity.
4. From the list of available SSD devices, select the newly-created local virtual SSD drive to use for virtual flash and click OK.

A Few Warnings
Now that you have vFlash configured and operational, you can start enjoying its benefits. Remember to ensure that your VMs are at virtual hardware version 10, or they will not benefit from vFlash. I also recommend configuring vFlash on each ESXi host. I learned this lesson through initial trial and error while conducting a vMotion session: a VM will fail during vMotion if each ESXi host doesn't have vFlash configured.
At this point, your vFlash will start providing increased storage read speeds and reduced storage I/O contention. The best part is that you're utilizing a virtual SSD in lieu of a physical SSD while maintaining the vFlash benefits, saving on hardware and operating your lab environment within a budget.

Sunday, 4 January 2015

vSphere Web Client HTTP Status 404: The requested resource is not available



It is often standard practice to install the server operating system on C: and then applications and data on additional drives such as E:, F: and so on.
However, with vSphere Web Client from its 5.0 release, through 5.1, and including the latest version at the time of writing (5.5 U2), if you install vSphere Web Client to any directory other than the default installation directory, you will get the following error when browsing to the vSphere Web Client page:

"HTTP Status 404

The requested requested resource is not available"



Creating an HA/DRS Cluster and Adding a Host

Choose whether to enable VM monitoring via VMware Tools and, if enabled, select the monitoring sensitivity.
Click Next.



Chose "Disable EVC".
Click Next.


For the swapfile policy, choose "Store the swapfile in the same directory as the virtual machine" (this will speed up vMotion).
Click Next.



Review the cluster configuration.
Click Finish.



Right click the cluster and select "Add Host".



Enter the hostname/IP address of the ESX/ESXi Server host.
Enter a username for the host (root).
Enter a password for the host.
Click Next.



Review the host details and a list of virtual machines currently running on the host (if any).
Click Next.



If running in evaluation mode, click Next. Otherwise enter the license key for the correct edition of vSphere.



Choose whether to enable Lockdown Mode.
Click Next.


Choose where to put any existing virtual machines on the host: either in the root resource pool, or in a new resource pool named after the host.
Click Next.






Review the actions and click Finish.






The ESX/ESXi host is added to the HA/DRS cluster, including any existing virtual machines.
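Incidentally, the same result can be scripted. Here is a minimal PowerCLI sketch with placeholder names and credentials:

# Create an HA/DRS cluster and add a host to it (placeholder names/credentials)
$dc = Get-Datacenter "Lab-DC"
$cluster = New-Cluster -Name "Cluster01" -Location $dc -HAEnabled -DrsEnabled
Add-VMHost -Name "esxi01.lab.local" -Location $cluster -User root -Password 'S3cret!' -Force   # -Force accepts the host certificate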





How to Reset the Password for admin@System-Domain vCenter SSO 5.1 (Single Sign On)

If you are in the unfortunate position where you or someone else has forgotten the vCenter SSO 5.1 admin@System-Domain password, then you may have a problem, particularly if there are no other users delegated as SSO administrators.
However, the aim of this blog post is to help you reset the admin@System-Domain password in SSO 5.1 only (it is much easier in 5.5)!

First and foremost, it's worth pointing out that this is completely unsupported by VMware. VMware's advice, and the supported method, is to reinstall SSO.
However, you do have 2 other possible options, which I have presented below.

The first option is to simply check the password for the SSO DB, which is stored in clear text and may be the same as the SSO admin user password.
The second is to update the admin user's password hash in the SSO SQL database, essentially changing it to the hash of a password we know, which we will change later.


Option A - If you're lucky, you might be able to find the password this way...

1. Check this file to see if the password used for the SSO SQL database user is the same as the password used for "admin@System-Domain":
C:\Program Files\VMware\Infrastructure\SSOServer\webapps\lookupservice\WEB-INF\classes\config.properties
Note: You will need to change the drive letter if you installed vCenter SSO to a different drive than C:

2. The password used for the SQL Server database is on the "db.pass=" line:

## Jdbc Url
db.url=jdbc:jtds:sqlserver://;serverName=;portNumber=1433;databaseName=sqldb1sso
## DB Username
db.user=svcvc1sso
## DB password
db.pass=Password123
## DB type
db.type=Mssql
## DB host
db.host=sqldb1.vmadmin.co.uk



Option B - This should work if you do not know the SSO master password for "admin@System-Domain" and wish to reset it..

1. Open SQL Server Management Studio and connect to the SQL Server instance hosting the SSO (RSA) database.

2. Back up the SSO RSA database so you can restore it if there is a problem.

3. Run the following SQL script against the SSO RSA database to set the "admin" user's password hash to that of "VMware1234!".
Note: You can change the password later; for now we set it to the above password to save re-installing SSO.

UPDATE [dbo].[IMS_PRINCIPAL]
SET [PASSWORD] = '{SSHA256}KGOnPYya2qwhF9w4xK157EZZ/RqIxParohltZWU7h2T/VGjNRA=='
WHERE LOGINUID = 'admin'
  AND PRINCIPAL_IS_DESCRIPTION = 'Admin';


If you try to log in to the vSphere Web Client at this point, you may get the following message telling you that your password has expired:

"Associated users password is expired"

4. Open an elevated command prompt and run the command:
SET JAVA_HOME=C:\Program Files\VMware\Infrastructure\jre
Note: Do not put quotes around the path, and change the directory to the path where you installed vCenter.

5. Navigate to the ssolscli directory (change to the directory you installed vCenter SSO to)
cd "C:\Program Files\VMware\Infrastructure\SSOServer\ssolscli"

6. Run the ssopass command to remove the password expiry:
ssopass -d https://vcenter1.rootzones.net:7444/lookupservice/sdk admin
Note: This has to be the FQDN the certificate was generated for; localhost will not work.

7. Type your current password, even if it is expired.

8. Type the new password, and then type it again to confirm.

9. Now you can logon to the vSphere Web Client with the following credentials:
admin@System-Domain
VMware1234!

10. Change the password for the account and keep a record of it!

11. It would also be advantageous to add a domain user or group to the SSO administrators group.


Features Removed from Windows Server 2012

This post covers the legacy Windows features that have been removed from Windows Server 2012.


Cluster.exe
Good old cluster.exe is replaced by the failover cluster PowerShell cmdlets. Cluster.exe won't be installed by default, but it is available as an optional component. 32-bit DLL resources are no longer supported either. A couple of the replacement cmdlets are shown below.
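As a quick pointer (assuming the Failover Clustering feature and its PowerShell module are installed), the old cluster.exe queries map to cmdlets like these:

# PowerShell replacements for common cluster.exe queries
Import-Module FailoverClusters
Get-Cluster               # roughly: cluster.exe /list
Get-ClusterResource       # roughly: cluster.exe resource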

XDDM
Hardware driver support for XDDM has been removed in Windows Server 2012. You may still use the WDDM basic display-only driver that is included in this OS.

Hyper-V TCP Offload
The TCP offload feature for Hyper-V VMs has been removed. The guest OS will not be able to use TCP Chimney.
Token Ring
Token Ring network support is removed in Windows Server 2012 (who needs it anyway?).

SMB.sys
This file has been removed; the OS now uses WSK, the Winsock Kernel, to provide the same service.
NDIS 5.x
The NDIS 5.0, 5.1 and 5.2 APIs are removed. NDIS 6 is supported.
VM Import/Export
In Hyper-V, the import/export concept of transporting VMs is replaced by the "Register / Unregister" method.

SMTP
SMTP and the associated management tools are deprecated; you should begin using System.Net.Smtp. With this API you will not be able to insert a message into a file for pickup; instead, configure web apps to connect on port 25 to another server using SMTP, as sketched below.
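For instance, relaying through another SMTP server from PowerShell (the server and addresses are placeholders) looks like this:

# Relay mail via a remote SMTP server on port 25 instead of local pickup
Send-MailMessage -SmtpServer "smtp.example.com" -Port 25 -From "app@example.com" -To "ops@example.com" -Subject "Test message" -Body "Relayed via SMTP instead of the local pickup directory"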

WMI Namespaces
The SNMP service and its WMI components are removed. The Win32_ServerFeature namespace is removed.

The wmic command-line tool is now replaced by Get-WmiObject; a quick example follows.
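For example, where you might previously have run "wmic os get Caption, Version", the PowerShell equivalent is:

# PowerShell replacement for the old wmic query
Get-WmiObject -Class Win32_OperatingSystem | Select-Object Caption, Version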