Saturday, September 5, 2015

Understanding OSPF Routing

OSPF is a fast-converging, link-state IGP used by millions.

OSPF forms adjacencies with neighbors and shares routing information via Link State Advertisements; on multi-access networks, flooding is coordinated through the DR and BDR.

Areas in OSPF are used to limit LSA flooding and summarize routes. Every area connects to area zero, the backbone.

Open Shortest Path First is a robust link-state interior gateway protocol (IGP). People use OSPF when they discover that RIP just isn’t going to work for their larger network, or when they need very fast convergence.

OSPF is the most widely used IGP. When we discuss IGPs, we’re talking about one routing domain, or Autonomous System (AS). Imagine a medium-sized company with multiple buildings and departments, all connected together and sharing two redundant Internet links. All of the buildings on-site are part of the same AS. But with OSPF we also have the concept of an Area, which allows further segmentation, perhaps by department in each building.

To understand the design needs for areas in OSPF, let’s start by discussing how OSPF works. There’s some terminology you may not have encountered before, including:

Router ID: In OSPF this is a unique 32-bit number assigned to each router. By default it is chosen as the highest IP address on the router; it can be pinned by configuring an address on a loopback interface, which takes precedence over physical interface addresses.

Neighbor Routers: two routers with a common link that can talk to each other.
Adjacency: a two-way relationship between two neighbor routers. Neighbors don’t always form adjacencies.

LSA: Link State Advertisements are flooded throughout an area; each one describes a router’s links and the state of those links.

Hello Protocol: this is how routers on a network discover their neighbors and maintain adjacencies.

Area: a level of the OSPF hierarchy; a set of routers that exchange LSAs with one another. Areas limit LSA flooding and encourage aggregated (summarized) routes.

ABR: An Area Border Router is a router that sits in area zero and in one or more other areas.

DR, BDR: The Designated Router maintains the LSA database for its multi-access segment and sends and receives updates (via multicast) from the other routers on the same network. The Backup Designated Router stands ready to take over if the DR fails.

ASBR: An Autonomous System Boundary Router connects the OSPF domain to one or more other autonomous systems and exchanges routes with them. The ASBR’s purpose is to redistribute routes from another AS into its own AS.
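To make the link-state idea concrete, here is a small sketch of the SPF (Dijkstra) computation every OSPF router runs over its link-state database. The topology and costs below are hypothetical; real OSPF builds the database from flooded LSAs rather than a hand-written dictionary.

```python
import heapq

# Hypothetical link-state database: the cost of each link between routers.
# Every router in an area converges on the same database, then runs
# shortest-path-first rooted at itself.
LSDB = {
    "R1": {"R2": 10, "R3": 5},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 5, "R4": 20},
    "R4": {"R2": 1, "R3": 20},
}

def spf(root):
    """Return the lowest total cost from `root` to every other router."""
    dist = {root: 0}
    heap = [(0, root)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale heap entry; a cheaper path was already found
        for neighbor, link_cost in LSDB[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist

print(spf("R1"))  # {'R1': 0, 'R2': 10, 'R3': 5, 'R4': 11}
```

Note how R1 reaches R4 at cost 11 via R2 rather than cost 25 via R3, exactly the kind of decision OSPF makes from link costs.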

VXLAN (Virtual eXtensible Local Area Network)

VXLAN is an overlay network technology. An overlay network is any logical network created on top of an existing physical network. VXLAN creates Layer 2 logical networks on top of an IP network. The following two are key traits of an overlay technology:
– It encapsulates original packets in a new header. For example, IPsec VPN, an overlay technology, encapsulates the original IP packet in another IP header.
– Communication is typically established between two tunnel endpoints. For example, in an IPsec-based VPN running over the public Internet, tunnels are established between two sites.

Different components of the VMware’s VXLAN implementation

When you apply these overlay traits to VXLAN, you will see that VXLAN encapsulates original MAC frames in a UDP header, and all vSphere hosts participating in VXLAN act as tunnel endpoints. They are called Virtual Tunnel Endpoints (VTEPs).

VTEPs are the nodes that perform the encapsulation and de-encapsulation. When we go through the detailed packet flows, it will become clear how VTEPs encapsulate and de-encapsulate traffic from any virtual machine connected to a VXLAN-based Layer 2 logical network, or virtual wire. The virtual tunnel endpoint (VTEP) configured on every vSphere host consists of the following three modules:
1) VMware Installation Bundle (VIB) or vmkernel module – VTEP functionality is part of the VDS and is installed as a VMware Installation Bundle (VIB). This module is responsible for VXLAN data path processing, which includes maintenance of forwarding tables and encapsulation and de-encapsulation of packets.
2) vmknic virtual adapter – This adapter is used to carry control traffic, which includes response to multicast join, DHCP, and ARP requests. As with any vmknic, a unique IP address is assigned per host. The IP address is used as the VTEP IP while establishing host-to-host tunnels to carry VXLAN traffic.
3) VXLAN port group – This is configured during the initial VXLAN configuration process. It includes physical NICs, VLAN information, teaming policy, and so on. These port group parameters dictate how VXLAN traffic is carried in and out of the host VTEP through the physical NICs.
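The encapsulation the VTEP performs can be sketched in a few lines. The 8-byte VXLAN header layout below follows RFC 7348 (flags byte 0x08 with the VNI-valid bit set, then the 24-bit VXLAN Network Identifier); the inner frame bytes are a stand-in, and in a real deployment the VTEP additionally wraps this in outer UDP (port 4789), IP, and Ethernet headers.

```python
import struct

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend an 8-byte VXLAN header to an inner Ethernet frame.

    Layout per RFC 7348: flags byte 0x08 (I bit set, meaning the VNI is
    valid), 3 reserved bytes, 24-bit VNI, 1 reserved byte.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    header = struct.pack("!II", 0x08000000, vni << 8)
    return header + inner_frame

frame = b"\x00" * 60                      # stand-in for an inner L2 frame
packet = vxlan_encapsulate(frame, vni=5001)
assert len(packet) == 8 + 60              # header adds exactly 8 bytes
assert packet[0] == 0x08                  # I flag set, VNI valid
```

The 24-bit VNI is what gives VXLAN its ~16 million logical segments, compared with 4,094 usable VLAN IDs.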

Friday, September 4, 2015

NSX (Network Virtualization Platform)

NSX is a network virtualization platform that you can use to build a rich set of logical networking services.

Logical Switching: Layer 2 over Layer 3, decoupled from the physical network
Logical Routing: Routing between virtual networks without exiting the software container
Logical Firewall: Distributed Firewall, kernel integrated, high performance
Logical Load Balancer: Application load balancing in software
Logical VPN: Site-to-site and remote access VPN in software
NSX API: REST API for integration into any cloud management platform
Robust Partner Ecosystem: Additional features and use cases supported
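The NSX API bullet can be made concrete with a short sketch of what a REST call against NSX Manager looks like. The manager hostname and credentials below are hypothetical; the endpoint shown follows the NSX-v style of listing transport zones, so treat this as an illustration of the request shape rather than a definitive integration guide.

```python
import base64
import urllib.request

NSX_MANAGER = "nsxmgr.example.local"   # hypothetical NSX Manager address
ENDPOINT = "/api/2.0/vdn/scopes"       # NSX-v style endpoint (transport zones)

def build_nsx_request(user: str, password: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated GET against the NSX REST API.

    NSX Manager uses HTTP basic authentication; NSX-v responses are XML.
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(f"https://{NSX_MANAGER}{ENDPOINT}")
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Accept", "application/xml")
    return req

req = build_nsx_request("admin", "secret")
print(req.full_url)  # https://nsxmgr.example.local/api/2.0/vdn/scopes
```

Any cloud management platform can drive NSX the same way, which is the point of the REST API bullet above.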

Sunday, May 31, 2015

What is vSphere Virtual Volumes?

Virtual Volumes is a SAN/NAS management and integration framework that exposes virtual disks as native storage objects and enables array-based operations at the virtual disk level. Virtual Volumes transforms the data plane of SAN/NAS devices by aligning storage consumptions and operations with the VM. In other words, Virtual Volumes makes SAN/NAS devices VM-aware and unlocks the ability to leverage array based data services with a VM-centric approach at the granularity of a single virtual disk.
Virtual Volumes is an industry-wide initiative that will allow customers to leverage the unique capabilities of their current storage investments and transition without disruption to a simpler and more efficient operational model optimized for virtual environments that works across all storage types.


Wednesday, May 27, 2015

vSphere 6.0 Content Library

Get Simple and Effective Management of VM-Related Content for vSphere Admins

VMware vSphere® Content Library empowers vSphere Admins to effectively manage VM templates, vApps, ISO images and scripts with ease.
Store and manage content from a central location
Share content across boundaries of vCenter Servers
Deploy VM templates from the Content Library directly onto a host or cluster for usage

Once a library is created and published, content can be shared between various vCenter servers across your infrastructure. Sharing content from a single source gives IT admins the visibility and control that they need to enforce strict VM and application configuration policies within their organizations. 

On the subscriber vCenter, you can download all the contents of the published library at once or only when you need them (“On-Demand”). Once downloaded, this content will remain in sync with its source in the published library. 

Additionally, host-to-host bit transfer reduces the load on vCenter Server while achieving high synchronization speed. The Content Library:
Delivers operational ease at scale, reducing the effort required to re-create and import/export content in environments with multiple vCenter Servers
Enables consistency in content across vCenter Servers
Provides advanced controls that let admins manage capabilities such as the synchronization window between publisher and subscriber libraries and the network bandwidth allocated to synchronization operations

Wednesday, May 6, 2015

VMware vSphere Fault Tolerance Enhancements (vSphere 6.0)

VMware vSphere Fault Tolerance (vSphere FT) provides continuous availability for applications in the event of physical server failures by creating a live shadow instance of a virtual machine that is always up to date with the primary virtual machine. In the event of a hardware outage, vSphere FT automatically triggers failover, ensuring zero downtime and preventing data loss. vSphere FT is easy to set up and configure and does not require any OS-specific or application-specific agents or configuration. It is tightly integrated with vSphere and is managed using vSphere Web Client.

Previous versions of vSphere FT supported only a single vCPU. Through the use of a completely new fast-checkpointing technology, vSphere FT now supports protection of virtual machines with up to four vCPUs and 64GB of memory. This means that the vast majority of mission-critical customer workloads can now be protected regardless of application or OS.

VMware vSphere Storage APIs – VMware vSphere Data Protection™ can now be used with virtual machines protected by vSphere FT. Previous versions of vSphere FT required an in-guest agent for backup. vSphere FT 6.0 empowers vSphere administrators to use VMware Snapshot–based tools to back up virtual machines protected by vSphere FT, enabling easier backup administration, enhanced data protection, and reduced risk.

There have also been enhancements in how vSphere FT handles storage. It now creates a complete copy of the entire virtual machine, resulting in total protection for virtual machine storage in addition to compute and memory. It also increases the options for storage by enabling the files of the primary and secondary virtual machines to be stored on shared as well as local storage. This results in increased protection, reduced risk, and improved flexibility.

In addition, improvements have been made to vSphere FT virtual disk support and host compatibility requirements. Prior versions required a very specific virtual disk type: eager-zeroed thick. They also had very limiting host compatibility requirements. vSphere FT now supports all virtual disk formats: eager-zeroed thick, thick, and thin. Host compatibility for vSphere FT is now the same as for vSphere vMotion. This makes it much easier to use vSphere FT.


vSphere High Availability Enhancements (vSphere 6.0)

vSphere HA delivers the availability required by most applications running in virtual machines, independent of the OS and application running in it. It provides uniform, cost-effective failover protection against hardware and OS outages within a virtualized IT environment. It does this by monitoring vSphere hosts and virtual machines to detect hardware and guest OS failures. It restarts virtual machines on other vSphere hosts in the cluster without manual intervention when a server outage is detected, and it reduces application downtime by automatically restarting virtual machines upon detection of an OS failure.

With the growth in size and complexity of vSphere environments, the ability to prevent and recover from storage issues is more important than ever. vSphere HA now includes Virtual Machine Component Protection (VMCP), which provides enhanced protection from All Paths Down (APD) and Permanent Device Loss (PDL) conditions for block (FC, iSCSI, FCoE) and file storage (NFS).

Prior to vSphere 6.0, vSphere HA could not detect APD conditions and had limited ability to detect and remediate PDL conditions. When those conditions occurred, applications were impacted or unavailable longer and administrators had to help resolve the issue. vSphere VMCP detects APD and PDL conditions on connected storage, generates vCenter alarms, and automatically restarts impacted virtual machines on fully functional hosts. By doing this, it greatly improves the availability of virtual machines and applications without requiring more effort from administrators. 

vSphere HA can now protect as many as 64 ESXi hosts and 8,000 virtual machines—up from 32 and 4,000— which greatly increases the scale of vSphere HA supported environments. It also is fully compatible with VMware Virtual Volumes, VMware vSphere Network I/O Control, IPv6, VMware NSX™, and cross vCenter Server vSphere vMotion. vSphere HA can now be used in more and larger environments and with less concern for feature compatibility.


Friday, April 24, 2015

vCenter Architectural Change vSphere 6.0 (PSC)

vCenter Server 6 has some fundamental architectural changes compared to vCenter Server 5.5. The multitude of components that existed in vCenter Server 5.x has been consolidated in vCenter Server 6 into just two components: the vCenter Management Server and the Platform Services Controller, formerly vCenter Single Sign-On.
The Platform Services Controller (PSC) provides a set of common infrastructure services encompassing
  • Single Sign-On (SSO)
  • Licensing
  • Certificate Authority
The vCenter Management Server consolidates all the other components, such as the Inventory Service and Web Client services, along with its traditional management components. vCenter Server can be deployed with either an embedded or an external PSC. Care should be taken to understand the critical differences between the two deployment models: once deployed, you cannot move from one mode to the other in this version.

Sunday, April 19, 2015

vSphere 6: Scalability Improvements

vSphere 6 has greater capacity than vSphere 5.5.

vCenter Server has the following configuration maximums:

Embedded database:
vCenter Server on Windows: 20 hosts or 200 virtual machines
vCenter Server Appliance: 1,000 hosts or 10,000 virtual machines

A cluster has the following configuration maximums:
64 hosts in a cluster
8,000 virtual machines per cluster

A host has the following configuration maximums:
480 physical CPUs per ESXi host
6 TB of RAM per ESXi host (12 TB of RAM on specific OEM platforms)
1,024 virtual machines per ESXi host
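One practical use for these numbers is sanity-checking a host design before deployment. The helper below encodes the per-host maximums quoted above; the planned design values are hypothetical.

```python
# Per-host configuration maximums for vSphere 6, as listed above.
HOST_MAXIMUMS = {
    "physical_cpus": 480,
    "ram_tb": 6,               # 12 TB on specific OEM platforms
    "virtual_machines": 1024,
}

def host_maximum_violations(design: dict) -> list:
    """Return (resource, requested, maximum) tuples where a planned host
    design exceeds a vSphere 6 per-host configuration maximum."""
    return [
        (key, design[key], limit)
        for key, limit in HOST_MAXIMUMS.items()
        if design.get(key, 0) > limit
    ]

# A hypothetical planned host: fine on CPUs and VMs, over on RAM.
plan = {"physical_cpus": 64, "ram_tb": 8, "virtual_machines": 500}
print(host_maximum_violations(plan))  # [('ram_tb', 8, 6)]
```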

Wednesday, February 25, 2015

Save 50% off Your Network Virtualization Certification Exam

The future of networking is virtual. Keep your skills relevant and future-proof your career by earning your VMware Certified Professional – Network Virtualization (VCP-NV) certification for half price through June 30, 2015.
Plus, if you hold certain Cisco certifications*, we will waive the course requirement in recognition of your previous certification through January 31, 2016. Visit the VCP-NV certification requirements page for complete details.
Whether you are earning your first VMware certification or seeking recertification this is a terrific opportunity to discover cutting-edge NSX technology and save on your exam.
Act fast! You must complete your exam by June 30, 2015 to save 50%.

Terms & Conditions
1.     To claim your 50% discount enter the relevant discount code when you complete the payment section for your exam registration at
2.     Exam must be registered, scheduled, and completed before June 30, 2015 5:00 PM PST.
3.     The 50% off discount exam voucher can only be used on the VCP-NV exam listed and cannot be used for non-proctored/online exams.
4.     The 50% off discount exam code cannot be combined with any other promotions or discounts.
5.     You must meet all relevant certification requirements in order to receive certification.
6.     VMware reserves the right to change or cancel these offers at any time.
7.     You must request authorization from VMware for the exam you wish to take before registering with PearsonVue. Authorization can be requested on the relevant certification web page (VCP-NV)

* Valid CCNA Data Center, CCNA Routing & Switching, CCNP Data Center, CCNP Routing & Switching, CCIE Data Center, or CCIE Routing & Switching certification can be applied towards VCP-NV certification. See the certification program page for complete details.

Saturday, February 21, 2015

VMware Certified Professional

Earning a VCP certification is a great achievement. But staying up to date in the expertise gained and proven by your certification is equally vital. If your skills are not current, your certification loses value. To ensure that all VCP holders maintain their proficiency, VMware is instituting a recertification policy effective March 10, 2014.
Learn more about this policy at

Tuesday, February 17, 2015

Opportunity to learn from Expert (VMWare vSphere: ICM 5.5 Training)

During the weekends of March 2015 I am going to deliver VMware vSphere: Install, Configure, Manage [V5.5] @ Koenig Solutions Pvt. Ltd. Learn with experts — those who are interested in joining and want to know more details, please contact me.

Thursday, February 5, 2015

vSphere 6 released... time to explore the product...

VMware vSphere 6 to Deliver More Than 650 New Features and Innovations – New VMware OpenStack Distribution Delivers Simple Path to Building OpenStack Clouds.

VMware vSphere® 6.0, the industry-leading virtualization platform, empowers users to virtualize any application with confidence, redefines availability, and simplifies the virtual data center. The result is a highly available, resilient, on-demand infrastructure that is the ideal foundation for any cloud environment. This blockbuster release contains the following new features and enhancements, many of which are industry-first features.
• Increased Scalability – Increased configuration maximums: virtual machines will support up to 128 virtual CPUs (vCPUs) and 4TB of virtual RAM (vRAM). Hosts will support up to 480 physical CPUs, 12TB of RAM, 2,048 virtual machines per host, and 64 nodes per cluster.
• Expanded Support – Expanded support for the latest x86 chip sets, devices, drivers, and guest operating systems. For a complete list of supported guest operating systems, see the VMware Compatibility Guide.
• Amazing Graphics – NVIDIA GRID™ vGPU™ delivers the full benefits of NVIDIA hardware-accelerated graphics to virtualized solutions.
• Instant Clone* – Technology, built into vSphere 6.0, that lays the foundation to rapidly clone and deploy virtual machines, as much as 10x faster than currently possible.
• Transform Storage for Your Virtual Machines – vSphere Virtual Volumes* enables your external storage arrays to become VM-aware. Storage Policy-Based Management (SPBM) allows common management across storage tiers and dynamic storage class-of-service automation. Together they enable exact combinations of data services (such as clones and snapshots) to be instantiated more efficiently on a per-VM basis.
• Network I/O Control – New support for per-VM Distributed vSwitch bandwidth reservations to guarantee isolation and enforce limits on bandwidth.
• Multicast Snooping – Supports IGMP snooping for IPv4 packets and MLD snooping for IPv6 packets in VDS, improving performance and scale with multicast traffic.
• Multiple TCP/IP Stacks for vMotion – Gives vMotion traffic a dedicated networking stack, simplifying IP address management with a dedicated default gateway for vMotion traffic.
• vMotion Enhancements – Perform non-disruptive live migration of workloads across distributed switches and vCenter Servers and over distances of up to 100ms RTT. The astonishing 10x increase in RTT offered in long-distance vMotion now makes it possible for data centers physically located in New York and London to migrate live workloads between one another.
• Replication-Assisted vMotion* – Enables customers, with active-active replication set up between two sites, to perform a more efficient vMotion resulting in huge time and resource savings – as much as 95 percent more efficient depending on the size of the data.
• Fault Tolerance (up to 4-vCPUs) – Expanded support for software- based fault tolerance for workloads with up to 4 virtual CPUs.
• Content Library – Centralized repository that provides simple and effective management for content including virtual machine templates, ISO images and scripts. With vSphere Content Library, it is now possible to store and manage content from a central location and share through a publish/subscribe model.
• Cross-vCenter Clone and Migration* – Copy and move virtual machines between hosts on different vCenter Servers in a single action.
• Enhanced User Interface – Web Client is more responsive, more intuitive, and more streamlined than ever before.
Learn More
For information on upgrading to vSphere 6.0, visit the vSphere Upgrade Center at upgrade-center/overview.html.
vSphere is available standalone or as a part of vSphere with Operations Management or vCloud Suite. For more information, visit management/ or


Wednesday, January 21, 2015

Free Learning for VMware Virtual SAN Fundamentals [V5.5]

VMware has made a really great resource available for professionals who want to build their skills in VMware Virtual SAN (vSAN): a free one-hour e-learning course. You just need an account on the VMware portal (create one if you don't have it). Use the link below, click the Add to My Enrollment button, and then log in with your VMware portal account.

Thursday, January 8, 2015

VMware Virtual SAN Requirements

1.    vSphere Requirements
2.    Storage Requirements
3.    Network Requirements

1 vSphere Requirements

1.1 vCenter Server
Virtual SAN requires VMware vCenter Server™ 5.5 Update 1. Both the Microsoft Windows version of vCenter Server and the VMware vCenter Server Appliance™ can manage Virtual SAN. Virtual SAN can be configured and monitored only from the VMware vSphere Web Client.
1.2 vSphere
Virtual SAN requires three or more vSphere hosts to form a supported cluster in which each host contributes local storage. The minimum three-host configuration enables the cluster to meet the lowest availability requirement of tolerating at least one host, disk, or network failure. The vSphere hosts require vSphere version 5.5 or later.
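The three-host minimum follows from how Virtual SAN places mirrored copies of data: to tolerate n failures it keeps n+1 replicas plus witness components, each on a distinct host, for 2n+1 hosts in total. A tiny helper makes the arithmetic explicit:

```python
def min_hosts_for_ftt(failures_to_tolerate: int) -> int:
    """Minimum vSphere hosts in a Virtual SAN cluster to tolerate the
    given number of host failures with mirroring: n+1 data replicas plus
    n witness components must each land on a distinct host, i.e. 2n+1."""
    if failures_to_tolerate < 0:
        raise ValueError("failures_to_tolerate must be non-negative")
    return 2 * failures_to_tolerate + 1

print(min_hosts_for_ftt(1))  # 3 — matches the minimum supported cluster
print(min_hosts_for_ftt(2))  # 5
```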

2 Storage Requirements

2.1 Disk Controllers
Each vSphere host that contributes storage to the Virtual SAN cluster requires a disk controller. This can be a SAS or SATA host bus adapter (HBA) or a RAID controller. However, the RAID controller must function in one of two modes:
• Pass-through mode
• RAID 0 mode
Pass-through mode, commonly referred to as JBOD or HBA mode, is the preferred configuration for Virtual SAN because it enables Virtual SAN to manage the RAID configuration settings for storage policy attributes based on availability and performance requirements that are defined on a virtual machine. For a list of the latest Virtual SAN certified hardware and supported controllers, check the VMware Compatibility Guide for the latest information.
2.2 Hard Disk Drives
Each vSphere host must have at least one SAS, near-line SAS (NL-SAS), or SATA magnetic hard-disk drive (HDD) to participate in the Virtual SAN cluster. HDDs account for the storage capacity of the Virtual SAN shared datastore. Additional magnetic disks increase the overall capacity and can also improve virtual machine performance, because the virtual machine storage objects might be striped across multiple spindles. 
2.3 Flash-Based Devices
Each vSphere host must have at least one flash-based device—SAS, SATA, or PCI Express SSD—to participate in the Virtual SAN cluster. Flash-based devices provide both a write buffer and a read cache. The larger the flash based device capacity per host, the larger the number of I/Os that can be cached and the greater the performance results that can be achieved.
NOTE: Flash-based devices do not contribute to the overall size of the distributed Virtual SAN shared datastore.
They count only toward the capacity of the Virtual SAN caching tier.

3 Network Requirements

3.1 Network Interface Cards
Each vSphere host must have at least one network adapter. It must be 1Gb Ethernet or 10Gb Ethernet capable, but VMware recommends 10Gb. For redundancy, a team of network adapters can be configured on a per-host basis. VMware considers this to be a best practice but not necessary in building a fully functional Virtual SAN cluster.
3.2 Supported Virtual Switch Types
Virtual SAN is supported on both the VMware vSphere Distributed Switch™ (VDS) and the vSphere standard switch (VSS). No other virtual switch types are supported in the initial release.
3.3 VMkernel Network
On each vSphere host, a VMkernel port for Virtual SAN communication must be created. A new VMkernel virtual adapter type has been added to vSphere 5.5 for Virtual SAN. The VMkernel port is labeled Virtual SAN traffic.
Figure 1. Virtual SAN VMkernel Adapter Type

This new interface is used for intra-cluster host communication, as well as for read and write operations whenever a vSphere host in the cluster is the owner of a particular virtual machine but the actual data blocks making up that virtual machine’s objects are located on a remote host in the cluster. In this case, I/O must traverse the network configured between the hosts in the cluster. If this interface is created on a VDS, the VMware vSphere Network I/O Control feature can be used to set shares or reservations for the Virtual SAN traffic.
