Sunday, February 26, 2012

What is beacon probing?
Beacon probing is a network failover detection mechanism that sends out and listens for beacon probes on all NICs in the team and uses this information, along with link status, to determine link failure. Beacon probing detects failures, such as cable pulls and physical switch power failures, on the immediate physical switch and also on downstream switches.
How does beacon probing work?
ESX periodically broadcasts beacon packets from all uplinks in a team. The physical switch is expected to forward all packets to other ports on the same broadcast domain. Therefore, a team member is expected to see beacon packets from other team members. If an uplink fails to receive three consecutive beacon packets, it is marked as bad. The failure can be due to the immediate link or a downstream link.

Beaconing is most useful with three or more uplinks in a team, because ESX can then detect the failure of a single uplink. When there are only two NICs in service and one of them loses connectivity, it is unclear which NIC needs to be taken out of service: neither receives beacons, and as a result packets are sent to both uplinks. Using at least three NICs in such a team allows for n-2 failures (where n is the number of NICs in the team) before reaching an ambiguous situation.
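The detection logic described above can be sketched in a few lines of Python. This is only an illustrative model of the behavior, not VMware's implementation; the names (Uplink, BEACON_MISS_LIMIT) are invented for the example.

```python
# Illustrative sketch of beacon-probing failure detection.
# Names (Uplink, BEACON_MISS_LIMIT) are hypothetical, not VMware's code.

BEACON_MISS_LIMIT = 3  # consecutive missed beacons before an uplink is marked bad

class Uplink:
    def __init__(self, name):
        self.name = name
        self.missed = 0
        self.bad = False

    def beacon_interval(self, received_beacon):
        """Record one probe interval: did this uplink hear a beacon
        from any other team member?"""
        if received_beacon:
            self.missed = 0
        else:
            self.missed += 1
            if self.missed >= BEACON_MISS_LIMIT:
                self.bad = True

def failed_uplinks(team):
    return [u.name for u in team if u.bad]

# With three uplinks, a single failed link is unambiguous: only the
# uplink behind the failed path stops hearing beacons from its teammates.
team = [Uplink("vmnic0"), Uplink("vmnic1"), Uplink("vmnic2")]
for _ in range(3):                       # three probe intervals
    team[0].beacon_interval(False)       # vmnic0 hears nothing
    team[1].beacon_interval(True)        # vmnic1 still hears vmnic2
    team[2].beacon_interval(True)        # vmnic2 still hears vmnic1
print(failed_uplinks(team))              # → ['vmnic0']
```

With only two uplinks, both would report missed beacons in this model, which is exactly the ambiguity the paragraph above describes.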

(For more details, see the KB from VMware.)

Multipathing policies in ESX/ESXi 4.x and ESXi 5.x

(KB article from VMware)

These pathing policies can be used with VMware ESX/ESXi 4.x and ESXi 5.x:

  • Most Recently Used (MRU) — Selects the first working path discovered at system boot time. If this path becomes unavailable, the ESX/ESXi host switches to an alternative path and continues to use the new path while it is available. This is the default policy for Logical Unit Numbers (LUNs) presented from an Active/Passive array. ESX/ESXi does not return to the previous path if, or when, it returns; it remains on the working path until it fails for any reason.

    Note: The preferred flag, while sometimes visible, is not applicable to the MRU pathing policy and can be disregarded.
  • Fixed (Fixed) — Uses the designated preferred path flag, if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the ESX/ESXi host cannot use the preferred path or it becomes unavailable, ESX/ESXi selects an alternative available path. The host automatically returns to the previously-defined preferred path as soon as it becomes available again. This is the default policy for LUNs presented from an Active/Active storage array.
  • Round Robin (RR) — Uses an automatic path selection rotating through all available paths, enabling the distribution of load across the configured paths.
    • For Active/Passive storage arrays, only the paths to the active controller will be used in the Round Robin policy.
    • For Active/Active storage arrays, all paths will be used in the Round Robin policy.

    Note: This policy is not currently supported for Logical Units that are part of a Microsoft Cluster Service (MSCS) virtual machine.
  • Fixed path with Array Preference — The VMW_PSP_FIXED_AP policy was introduced in ESX/ESXi 4.1. It works for both Active/Active and Active/Passive storage arrays that support ALUA. This policy queries the storage array for the preferred path based on the array's preference. If no preferred path is specified by the user, the storage array selects the preferred path based on specific criteria.

    Note: The VMW_PSP_FIXED_AP policy has been removed from the ESXi 5.0 release, and VMW_PSP_MRU became the default PSP for all ALUA devices.
  • These pathing policies apply to VMware's Native Multipathing (NMP) Path Selection Plugins (PSP). Third party PSPs have their own restrictions.
  • Switching to Round Robin from MRU or Fixed is safe and supported for all arrays. However, check with your vendor for the supported multipathing policies for your storage array; switching to an unsupported pathing policy can cause an outage.
Warning: VMware does not recommend changing the LUN policy from Fixed to MRU, as the automatic selection of the pathing policy is based on the array that has been detected by the NMP PSP.
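The key behavioral difference between MRU and Fixed, failback, can be illustrated with a small Python sketch. The functions and path names below are hypothetical models of the policies described above, not VMware's NMP code.

```python
# Illustrative sketch of MRU vs. Fixed path selection; not VMware's NMP code.
# A "path" is just a string; 'available' is the set of currently live paths.

def select_mru(current, available, discovered_order):
    """MRU: keep the current working path; only switch when it fails.
    There is no failback to a previously failed path."""
    if current in available:
        return current
    # fail over to the first available path in discovery order
    return next(p for p in discovered_order if p in available)

def select_fixed(preferred, current, available, discovered_order):
    """Fixed: always return to the preferred path when it is available."""
    if preferred in available:
        return preferred
    if current in available:
        return current
    return next(p for p in discovered_order if p in available)

paths = ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"]

# Fixed: the preferred path fails, then recovers -> the host fails back.
cur = select_fixed(paths[0], paths[0], {paths[1]}, paths)   # failover
assert cur == paths[1]
cur = select_fixed(paths[0], cur, set(paths), paths)        # failback
assert cur == paths[0]

# MRU: after the same failure and recovery, the host stays on the alternate.
cur = select_mru(paths[0], {paths[1]}, paths)               # failover
cur = select_mru(cur, set(paths), paths)                    # no failback
print(cur)  # → vmhba2:C0:T0:L0
```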

Additional Information

The Round Robin (RR) multipathing policy has configurable options that can be modified at the command-line interface. Some of these options include:
  • Number of bytes to send along one path for this device before the PSP switches to the next path.
  • Number of I/O operations to send along one path for this device before the PSP switches to the next path.
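The I/O-operations option can be modeled with a short sketch. The RoundRobin class below is hypothetical; it only demonstrates the switching behavior (rotate to the next path after a fixed number of I/Os), not the real PSP.

```python
# Illustrative model of Round Robin switching after a fixed number of
# I/O operations per path; class and parameter names are hypothetical.

class RoundRobin:
    def __init__(self, paths, iops_per_path=1000):
        self.paths = paths
        self.iops_per_path = iops_per_path  # I/Os sent before switching paths
        self.index = 0
        self.count = 0

    def next_io_path(self):
        """Return the path for the next I/O, rotating when the limit is hit."""
        path = self.paths[self.index]
        self.count += 1
        if self.count >= self.iops_per_path:
            self.count = 0
            self.index = (self.index + 1) % len(self.paths)
        return path

rr = RoundRobin(["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"], iops_per_path=2)
print([rr.next_io_path() for _ in range(5)])
# two I/Os go down the first path, two down the second, then back to the first
```

A byte-count limit works the same way, with bytes transferred substituted for the I/O counter.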

Thursday, February 23, 2012

Changed Block Tracking (CBT) on virtual machines

Changed Block Tracking (CBT) is a VMware feature that helps perform incremental backups. VMware Data Recovery uses this technology, and so can developers of backup and recovery software.

Virtual machines running on ESX/ESXi hosts can track disk sectors that have changed. This feature is called Changed Block Tracking (CBT). On many file systems, CBT identifies the disk sectors altered between two change set IDs. On VMFS partitions, CBT can also identify all the disk sectors that are in use.

Virtual disk block changes are tracked from outside virtual machines, in the virtualization layer. When software performs a backup, it can request transmission of only the blocks that changed since the last backup, or the blocks in use.
The CBT feature can be accessed by third-party applications as part of the vSphere APIs for Data Protection (VADP). Applications call VADP to request that the VMkernel return blocks of data that have changed on a virtual disk since the last backup snapshot.
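The change-ID workflow can be modeled roughly as follows. The ChangeTracker class and its method names are invented for illustration; they are not the VADP API, though the real vSphere call for this query is VirtualMachine.QueryChangedDiskAreas.

```python
# Illustrative model of CBT-style incremental backup: each snapshot yields
# a change ID, and later queries return only blocks modified since that ID.
# The ChangeTracker class is hypothetical, not the VADP API.

class ChangeTracker:
    def __init__(self):
        self.change_id = 0
        self.history = {0: set()}   # change ID -> blocks changed since that ID

    def write_block(self, block):
        """A guest write marks the block changed relative to every change ID."""
        for changed in self.history.values():
            changed.add(block)

    def take_snapshot(self):
        """Taking a backup snapshot returns a new change ID."""
        self.change_id += 1
        self.history[self.change_id] = set()
        return self.change_id

    def query_changed_areas(self, since_change_id):
        """Blocks modified since the given change ID."""
        return sorted(self.history[since_change_id])

disk = ChangeTracker()
disk.write_block(3)
cid = disk.take_snapshot()           # full backup taken here
disk.write_block(7)
disk.write_block(9)
print(disk.query_changed_areas(cid)) # only these blocks need backing up
```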
For CBT to identify altered disk sectors since the last change ID, the following items are required:
  • The host must be ESX/ESXi 4.0 or later.
  • The virtual machine owning the disks to be tracked must be hardware version 7 or later.
  • I/O operations must go through the ESX/ESXi storage stack. So NFS is supported, as is RDM in virtual compatibility mode, but not RDM in physical compatibility mode. Of course VMFS is supported, whether backed by SAN, iSCSI, or local disk.
  • CBT must be enabled for the virtual machine (see below).
  • Virtual machine storage must not be an independent disk (persistent or non-persistent), as independent disks are unaffected by snapshots.
For CBT to identify disk sectors in use with the special "*" change ID, the following items are required:
  • The virtual disk must be located on a VMFS volume, backed by SAN, iSCSI, or local disk. RDM is not VMFS.
  • The virtual machine must have zero (0) snapshots when CBT is enabled, for a clean start.
In some cases, such as a power failure or hard shutdown while virtual machines are powered on, CBT might reset and lose track of incremental changes. Similarly, offline Storage vMotion (but not online Storage vMotion) can reset CBT, though it does not disable it.
To check if a virtual disk has CBT enabled, open the vSphere Client, select a powered-off virtual machine, and click Edit Settings > Options > Advanced/General > Configuration Parameters.
  • The virtual machine's configuration (.vmx) file contains the entry: ctkEnabled = "TRUE"

    Note: Set the value to "FALSE" to disable CBT. For more information, see Enabling Changed Block Tracking (CBT) on virtual machines (1031873).
  • For each virtual disk, the .vmx file contains the entry: scsix:x.ctkEnabled = "TRUE"
  • For each virtual disk and snapshot disk there is a .ctk file. For example: vmname-ctk.vmdk
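As a quick sanity check, the .vmx entries above can be verified with a short script. The parser below is deliberately simplified (real .vmx files are quoted key = "value" pairs, which this handles, but nothing more), and the helper names are invented for the example.

```python
# Illustrative check of the CBT-related .vmx entries described above.
# parse_vmx and cbt_enabled are hypothetical helpers, not a VMware tool.

def parse_vmx(text):
    """Parse simple key = "value" lines into a dict."""
    settings = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            settings[key.strip()] = value.strip().strip('"')
    return settings

def cbt_enabled(settings, disk="scsi0:0"):
    """CBT requires both the VM-level flag and the per-disk flag."""
    return (settings.get("ctkEnabled", "").upper() == "TRUE"
            and settings.get(disk + ".ctkEnabled", "").upper() == "TRUE")

vmx = '''\
ctkEnabled = "TRUE"
scsi0:0.ctkEnabled = "TRUE"
scsi0:0.fileName = "vmname.vmdk"
'''
print(cbt_enabled(parse_vmx(vmx)))   # → True
```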

(KB from VMware)

Wednesday, February 22, 2012

About Virtual Disk Provisioning Policies

When you perform certain virtual machine management operations, such as creating a virtual disk, cloning a virtual machine to a template, or migrating a virtual machine, you can specify a provisioning policy for the virtual disk file.

NFS datastores with Hardware Acceleration and VMFS datastores support the following disk provisioning policies. On NFS datastores that do not support Hardware Acceleration, only thin format is available.

You can use Storage vMotion to transform virtual disks from one format to another.

Thick Provision Lazy Zeroed
Creates a virtual disk in a default thick format. Space required for the virtual disk is allocated when the virtual disk is created. Data remaining on the physical device is not erased during creation, but is zeroed out on demand at a later time on first write from the virtual machine.
Using the default flat virtual disk format does not zero out or eliminate the possibility of recovering deleted files or restoring old data that might be present on this allocated space. You cannot convert a flat disk to a thin disk.

Thick Provision Eager Zeroed
A type of thick virtual disk that supports clustering features such as Fault Tolerance. Space required for the virtual disk is allocated at creation time. In contrast to the flat format, the data remaining on the physical device is zeroed out when the virtual disk is created. It might take much longer to create disks in this format than to create other types of disks.

Thin Provision
Use this format to save storage space. For the thin disk, you provision as much datastore space as the disk would require based on the value that you enter for the disk size. However, the thin disk starts small and at first, uses only as much datastore space as the disk needs for its initial operations.
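The practical differences between the three policies, how much datastore space is consumed at creation time and whether stale data on the device is zeroed, can be summarized in a small sketch. The function names and the 100 GB figure are illustrative only.

```python
# Illustrative summary of the three provisioning policies described above;
# function names and sizes are made up for the example.

def allocated_at_creation(policy, provisioned_gb):
    """Datastore space consumed immediately after the disk is created."""
    if policy in ("thick-lazy", "thick-eager"):
        return provisioned_gb          # full size reserved up front
    if policy == "thin":
        return 0                       # grows only as the guest writes data
    raise ValueError(policy)

def zeroed_at_creation(policy):
    """Is old data on the physical device wiped at creation time?"""
    return policy == "thick-eager"     # lazy-zeroed wipes on first write instead

for policy in ("thick-lazy", "thick-eager", "thin"):
    print(policy, allocated_at_creation(policy, 100), zeroed_at_creation(policy))
```

This is why eager-zeroed disks take longest to create, and why thin disks save space but can grow unexpectedly on the datastore.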

(Source: VMware vSphere 5 Documentation Center)