Before and after VXI

Recent studies have revealed that over 60% of enterprise companies plan to deploy desktop virtualization in some way over the next 3 to 4 years. From a TCO point of view, the advantages of desktop virtualization are compelling. As we move further into the so-called “post-PC era”, the ability to “port over” the virtual desktop environment to devices and locations beyond the traditional office desk brings unprecedented flexibility and mobility. Think of our Cius business tablet, which offers you a full desktop environment in the office while keeping access to the virtual desktop over Wi-Fi or 3G/4G connectivity while on the go.

Desktop virtualization, however, proves to be a poor solution when it comes to integrating real-time audio and video. Using a softphone or video client over a display protocol such as Citrix ICA or VMware PCoIP simply doesn’t scale. “Hair-pinning” all the real-time traffic back and forth to the data center where the virtual desktop resides introduces delay and jitter and puts a heavy burden on data center resources, not to mention possible bandwidth exhaustion.

Thanks to our Virtualization Experience Infrastructure, or simply VXI, we are able to separate real-time traffic from the VDI display protocol, routing voice and video traffic directly between endpoints and bypassing the data center.

Please take a moment to view a short video on our VXI solutions, showing how separating voice and video traffic from the display protocol enhances the user experience. You will first see what you get without VXI. They say that seeing is believing; this video really speaks for itself.

To find out more about our VXI offering and VXC clients, please visit the link below, and see how we effectively bring the best of our borderless networking, virtualization and collaboration technologies together.

http://www.cisco.com/go/vxi

Specifications-Based Hardware Support – VMware Considerations

For some time now we have supported UC virtualization on Cisco, HP and IBM servers in addition to the Cisco validated UCS configurations (Tested Reference Configurations, or TRCs). This is referred to as “specs-based” hardware support. For these configurations we do not provide sizing guidelines as we do with the TRCs. The configuration is supported as long as the requirements in terms of CPU (vCPU and CPU type), memory, and storage capacity and performance are respected. Although this looks very interesting, there are some considerations you should be aware of when choosing a specs-based deployment over a Cisco TRC-based installation. To support such a deployment, which has not been thoroughly tested, TAC needs to be able to use advanced VMware management tools to debug and analyze the virtual environment. This requires VMware vCenter, which is therefore mandatory for specs-based systems, and that has an important influence on the cost of the VMware licenses. If in any doubt about the sizing of the VMware hosts and the number of applications they can run, or whenever the pricing of the required VMware licenses is a potential issue, we recommend using the Cisco validated UCS TRCs. For TRCs you are even allowed to use the free edition of vSphere as the hypervisor. For more information on UC on UCS please visit www.cisco.com/go/uc-virtualized.

Scalable Cloud Network with Cisco Nexus 1000V Series Switches and VXLAN

Many customers are building private or public clouds. Intrinsic to cloud computing is having multiple tenants with numerous applications using the cloud infrastructure. Each of these tenants and applications needs to be logically isolated from the others, even at the networking level. For example, a three-tier application can have multiple virtual machines requiring logically isolated networks between the virtual machines. Traditional network isolation techniques such as IEEE 802.1Q VLAN provide 4096 LAN segments (via a 12-bit VLAN identifier) and may not provide enough segments for large cloud deployments. Cisco and a group of industry vendors are working together to address the new requirements of scalable LAN segmentation and of transporting virtual machines across a broader network diameter. The underlying technology, referred to as Virtual eXtensible LAN (VXLAN), defines a 24-bit LAN segment identifier to provide segmentation at cloud scale. In addition, VXLAN provides an architecture for customers to grow their cloud deployments with repeatable pods in different subnets. VXLAN can also enable virtual machines to be migrated between servers in different subnets. With Cisco Nexus® 1000V Series Switches supporting VXLAN, customers can quickly and confidently deploy their applications to the cloud.
 

Cloud Computing Demands More Logical Networks

Traditional servers have unique network addresses to help ensure proper communication. Network isolation techniques, such as VLANs, typically are used to isolate different logical parts of the network, such as a management VLAN, production VLAN, or DMZ VLAN.

In a cloud environment, each tenant requires a logical network isolated from all other tenants. Furthermore, each application from a tenant demands its own logical network, to isolate itself from other applications. To provide instant provisioning, cloud management tools, such as VMware vCloud Director, even duplicate the application’s virtual machines, including the virtual machines’ network addresses, with the result that a logical network is required for each instance of the application.
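To see how quickly that multiplies, here is a back-of-the-envelope sketch; the tenant, application, and instance counts are invented purely for illustration:

```python
# Hypothetical cloud: every tenant/application/instance combination needs its own
# logical network once management tools clone application VMs wholesale.
tenants = 500
apps_per_tenant = 10
instances_per_app = 4   # clones created for instant provisioning

needed = tenants * apps_per_tenant * instances_per_app
print(f"Logical networks needed: {needed:,}")   # 20,000
print(f"802.1Q VLANs available:  {2**12:,}")    # 4,096 -> not nearly enough
```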

Challenges with Existing Network Isolation Techniques

The VLAN has been the traditional mechanism for providing logical network isolation. Because of the ubiquity of the IEEE 802.1Q standard, numerous switches and tools provide robust network troubleshooting and monitoring capabilities, enabling mission-critical applications to depend on the network. Unfortunately, the IEEE 802.1Q standard specifies a 12-bit VLAN identifier, which hinders the scalability of cloud networks beyond 4K VLANs. Some in the industry have proposed incorporating a longer logical network identifier in a MAC-in-MAC or MAC in Generic Route Encapsulation (MAC-in-GRE) encapsulation as a way to scale. Unfortunately, these techniques cannot make use of all the links in a port channel, which is often found in the data center network, and in some cases they do not behave well with Network Address Translation (NAT). In addition, because of the encapsulation, monitoring capabilities are lost, preventing troubleshooting and monitoring. As a result, customers are no longer confident deploying Tier 1 applications, or applications requiring regulatory compliance, in the cloud.
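To put those identifier widths in numbers, a minimal sketch (the bit widths come from the 802.1Q standard and the VXLAN proposal; the rest is just illustration):

```python
# Logical segment capacity: 12-bit 802.1Q VLAN ID vs. 24-bit VXLAN segment ID
vlan_bits, vxlan_bits = 12, 24

print(f"802.1Q VLANs:   {2**vlan_bits:,}")    # 4,096
print(f"VXLAN segments: {2**vxlan_bits:,}")   # 16,777,216
print(f"Scale factor:   {2**(vxlan_bits - vlan_bits):,}x")  # 4,096x
```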

VXLAN Solution

VXLAN solves these challenges with a MAC in User Datagram Protocol (MAC-in-UDP) encapsulation technique. VXLAN uses a 24-bit segment identifier to scale (Figure 1). In addition, the UDP encapsulation enables the logical network to be extended across different subnets and helps ensure high utilization of port channel links (Figure 2). Instead of broadcasting a frame, as in the case of unknown unicast, the UDP packet is multicast to the set of servers that have virtual machines on the same segment. Within each segment traditional switching takes place, and VXLAN can therefore provide a much larger number of logical networks.
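As a rough sketch of that encapsulation, the snippet below packs the 8-byte VXLAN header that sits between the outer UDP header and the inner Ethernet frame. The layout (an 8-bit flags field, reserved bits, and a 24-bit segment identifier) follows the VXLAN proposal; the UDP port constant and the example segment ID are illustrative assumptions:

```python
import struct

VXLAN_UDP_PORT = 4789  # assumption: the port later assigned by IANA; early implementations varied

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header placed in front of the inner Ethernet frame."""
    assert 0 <= vni < 2 ** 24, "the VNI is a 24-bit segment identifier"
    flags = 0x08  # 'I' flag set: a valid VNI is present; all other bits are reserved
    return (
        struct.pack("!B3x", flags)      # flags byte + 3 reserved bytes
        + vni.to_bytes(3, "big")        # 24-bit VXLAN Network Identifier
        + b"\x00"                       # final reserved byte
    )

# On the wire: outer Ethernet / outer IP / outer UDP / VXLAN header / inner Ethernet frame
print(vxlan_header(vni=5000).hex())  # 0800000000138800 -> flags=0x08, VNI=0x001388
```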
 
Figure 1: VXLAN Format

As shown in Figures 1 and 2, the Cisco® VXLAN solution enables:

• Logical networks to be extended among virtual machines placed in different subnets

• Flexible, scalable cloud architecture in which new servers can be added in different subnets

• Migration of virtual machines between servers in different subnets

Figure 2: Scalability with VXLAN

 
In conclusion, cloud computing requires significantly more logical networks than traditional models, and traditional network isolation techniques such as the VLAN cannot scale adequately for the cloud. VXLAN resolves these challenges with a MAC-in-UDP approach and a 24-bit segment identifier. This solution enables a scalable cloud architecture with replicated server pods in different subnets. Because of the Layer 3 approach of UDP, virtual machine migration extends even to different subnets. The Cisco Nexus 1000V Series switch with VXLAN support provides numerous advantages, enabling customers to use LAN segments in a robust and customizable way without disrupting existing operational models. The unique capabilities of the Cisco Nexus 1000V Series with VXLAN help ensure that customers can deploy mission-critical applications in the cloud with confidence.

Cisco First in the Industry – New VMmark 2.0 Cloud Benchmark

Cisco brings you the latest VMmark 2.0 results.
This brand-new VMmark benchmark is based not only on application workload performance; for the first time, VM performance is also evaluated on:

how well a server, network, and storage support virtual machine movement within a given network, storage migration, and VM provisioning.

This is of course exactly what the UCS platform was developed for.

It’s about bringing together network, compute, and storage access in one system, managed and provisioned from a single pane of glass.

This new, demanding virtualization benchmark has the following properties:

Multi-host to model realistic datacenter deployments
Virtualization infrastructure workloads to more accurately capture overall platform performance
Heavier workloads than VMmark 1.x to reflect heavier customer usage patterns enabled by the increased capabilities of the virtualization and hardware layers
Multi-tier workloads driving both VM-to-VM and external network traffic
Workload burstiness to ensure robust performance under variable high loads

For more information on the results, please go to:

http://www.cisco.com/en/US/prod/collateral/ps10265/ps10280/LE_367602_PB_FirstVMmark2.pdf

The Power of Collaboration Extended to Virtualized Desktops

Cisco Virtualization Experience Infrastructure (VXI) is the first solution to combine Cisco Collaboration, Cisco Borderless Networks, and Cisco’s data center architectures, creating an unsurpassed offering that eliminates the feature gaps of existing virtualization solutions.

Cisco Virtualization Experience Client endpoints are a critical element of this new solution.

Workers demand access to data, applications, and services anywhere, at any time, and across a diversity of operating systems, device form factors, networking environments, and work preferences. At the same time, workers expect an uncompromised and unencumbered user experience, with rich media and collaboration services.

Cisco delivers on these requirements with Cisco Virtualization Experience Clients.

Picture 1: A typical Cisco 2100 endpoint in white, integrated with a high-end IP phone

Picture 2: A typical Cisco 2200 endpoint in white

Picture 3: A typical Cisco Cius video endpoint

These endpoints provide workers with secure, real-time access to business applications and content — anytime, anywhere — without compromise of the rich collaborative user experience for which Cisco is known.

Citrix™ RDP/ICA as well as VMware™ PCoIP are supported on VXC clients.

You can read more on Cius in one of our previous posts.

More info at http://j.mp/hLAFl9

Consolidating Data Center Infrastructure with 10Gb Ethernet

Data centers and server rooms are increasingly (over)loaded with IT infrastructure such as servers, storage systems, and networking equipment. This leads to a multitude of concerns in the areas of physical space usage, power and cooling, cabling and patching, and so on. The cost of keeping these data centers running is going through the roof and is becoming difficult to justify. As a consequence, data center managers are looking into ways to consolidate IT infrastructure and are in search of technologies that can reduce capital and operational spending.
IT technology providers are responding to these challenges with different sorts of solutions.
In the domain of data storage and backup, companies like EMC, NetApp, and others provide solutions to pool storage resources in storage networks and apply specialized data management techniques, such as data deduplication, to reduce the ever-growing demand for storage capacity.
CPU and server vendors are building ever more powerful systems, which in combination with server virtualization technology allow customers to attain drastic server consolidation ratios. We have seen ratios from 5:1 to 30:1 and beyond. Traditionally, virtualization servers are equipped with a number of I/O interfaces for LAN, storage, backup, and management. These different traffic types are allocated to dedicated interfaces to provide traffic segregation and guaranteed quality of service. Some of these interfaces are replicated for redundancy or to provide higher throughput. In some situations this results in server I/O configurations with 2 to 4 SAN Fibre Channel interfaces and 4 to 8 LAN Gigabit Ethernet interfaces. Servers with a total of 6 to 12 I/O interfaces put a lot of stress on the data center LAN and SAN networks.
To address this issue and further consolidate the data center infrastructure, we could consider connecting servers with 10Gb Ethernet instead of 1Gb Ethernet. To do this, servers can be equipped with 10Gb Ethernet NICs; some of the newest generations of servers even come with two 10Gb Ethernet LAN on Motherboard (LoM) interfaces, providing an aggregate I/O throughput of 20 Gbps, which is more than enough to satisfy the requirements of the most I/O-hungry applications. VLAN technology and network QoS provide the same level of traffic segmentation and guaranteed performance for the different types of traffic that used to run on dedicated interfaces.
Technologies that transport storage traffic over LAN-type interfaces create the so-called “Unified Data Center Network Fabric”, consolidating LAN and storage traffic on a single data center network. 10Gb Ethernet provides the ideal network foundation for building a Unified Fabric. Protocols like iSCSI, NFS/CIFS, and FCoE are used to layer the storage-related traffic on top of 10Gb Ethernet.
Using 10Gb Ethernet to connect servers and creating a Unified Fabric has the potential to dramatically reduce the number of I/O interfaces per server, in some cases from 12 to 2, which represents significant capital and operational savings; a rough illustration follows below.
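The sketch below compares the example configuration from this post (4 Fibre Channel plus 8 Gigabit Ethernet interfaces) with two 10Gb Ethernet ports; the 4Gbps FC speed is an assumption for the sake of the arithmetic, not a statement about any particular server:

```python
# Illustrative per-server I/O consolidation: interface count and aggregate bandwidth
before = {"4G FC SAN": (4, 4), "GbE LAN": (8, 1)}   # name: (ports, Gbps per port)
after  = {"10GbE unified fabric": (2, 10)}

def summarize(config):
    ports = sum(n for n, _ in config.values())
    gbps = sum(n * speed for n, speed in config.values())
    return ports, gbps

for label, config in (("Before", before), ("After ", after)):
    ports, gbps = summarize(config)
    print(f"{label}: {ports:2d} interfaces, {gbps} Gbps aggregate")
# Before: 12 interfaces, 24 Gbps aggregate
# After :  2 interfaces, 20 Gbps aggregate
# LAN, SAN (FCoE/iSCSI), and management traffic are then segregated with VLANs
# and QoS instead of dedicated NICs and HBAs.
```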
Cisco considers 10Gb Ethernet and Unified Fabric as strategic network technology evolutions and incorporates all the technology required to build such networks in its Nexus Family of Data Center Network switches.
To illustrate this evolution I want to cross-reference the VMware networking blog that describes vSphere ESX/ESXi server configurations based on 10Gb Ethernet.
To complete this solution, VMware and Cisco have co-developed the Nexus 1000V, a virtual switch for vSphere ESX/ESXi servers that introduces VN-Link technology. With the Nexus 1000V, Cisco aims to provide the same type of policy-based network and security services for virtual machines as it already does for physical servers.