Some very good thoughts about Why should you care about Cisco UCS

Folks,

I wanted to gather a few very good articles and web documents from friends into one condensed, updated post about all the great things Cisco UCS can provide to your organization:

Latest performance update on our solution:

In the three years since its introduction, Cisco Unified Computing System™ (Cisco UCS™), powered by Intel® Xeon® processors, has captured 63 world performance records. Please check this out: 63 World-Record Performance Results

In response to what skeptics were saying 3 years ago :

Cisco as a server vendor! Ha!

Remember back when the company first unveiled its “Unified Computing System” (UCS)? At the time, the thought of Cisco being in the server market seemed almost laughable. But this was a journey we had seen before. Similar guffawing was heard when Cisco jumped into the voice market. Way back in the day, when I was in internal IT, Cisco acquired its way into the VoIP market and rode the IP wave to market leadership in only about a decade. When you think about how difficult voice market share has historically been to gain, the fact that Cisco managed to grab as much share as it did, as fast as it did, was remarkable…

More to read here…

Some good tips to bear in mind if you want to do an apples-to-apples cost comparison:

…therefore evaluate the real cost of implementing Cisco UCS solutions

The Service-Profile concept :

In this post, Marcel explains what a service profile template is within Cisco UCS. To start with the basics, let’s begin with the service profile itself. I assume you are aware of the Cisco UCS Emulator; if not, you can download it here using your CCO account: http://developer.cisco.com/web/unifiedcomputing/home

So what is a service profile within Cisco UCS?
A service profile defines a single server and its storage and networking characteristics, and it is stored in the Cisco UCS Fabric Interconnects. Each server connected to the Fabric Interconnects is specified by a service profile. The main advantage of service profiles is the automation of your physical hardware configuration: BIOS settings, firmware levels, network interface cards (NICs), host bus adapters (HBAs), and so on.
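To make the idea concrete, here is a minimal sketch in plain Python of what a service profile abstracts. It is illustrative only: the names, values, and functions below are hypothetical, not the UCS Manager API.

```python
# Hypothetical sketch of what a UCS service profile captures: the server's
# identity (UUID, MACs, WWPNs) and policies (BIOS, firmware, boot order)
# live in the Fabric Interconnects, not on the blade itself.

def make_service_profile(name, uuid, macs, wwpns, bios_policy, firmware, boot_order):
    """Bundle a server's identity and configuration into one object."""
    return {
        "name": name,
        "uuid": uuid,                # overrides the blade's burned-in identity
        "vnic_macs": macs,           # one MAC per virtual NIC
        "vhba_wwpns": wwpns,         # one WWPN per virtual HBA
        "bios_policy": bios_policy,  # e.g. a virtualization-tuned BIOS policy
        "firmware_package": firmware,
        "boot_order": boot_order,    # e.g. SAN boot for stateless servers
    }

def associate(profile, blade):
    """'Apply' the profile: the blade assumes the profile's identity."""
    blade = dict(blade)              # leave the original record untouched
    blade.update(profile)
    blade["associated"] = True
    return blade

profile = make_service_profile(
    "esx-host-01", "0000-0001",
    macs=["00:25:b5:00:00:0a"], wwpns=["20:00:00:25:b5:00:00:0a"],
    bios_policy="vm-optimized", firmware="2.0(1m)", boot_order=["san", "lan"],
)
spare = {"slot": 3, "model": "B200 M2", "associated": False}
print(associate(profile, spare)["uuid"])  # prints 0000-0001
```

The point of the abstraction is the last two lines: because identity travels with the profile, a spare blade can take over a failed server's persona without touching cables or storage zoning.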

Cisco Unified Computing System Ethernet Switching Modes

Great paper for understanding end-host and switch modes and when to use the most appropriate option.

What You Will Learn

In Cisco Unified Computing System™ (Cisco UCS™) environments, two Ethernet switching modes determine the way that the fabric interconnects behave as switching devices between the servers and the network. In end-host mode, the fabric interconnects appear to the upstream devices as end hosts with multiple links. In end-host mode, the switch does not run Spanning Tree Protocol and avoids loops by following a set of rules for traffic forwarding. In switch mode, the switch runs Spanning Tree Protocol to avoid loops, and broadcast and multicast packets are handled in the traditional way. This document describes these two switching modes and discusses how and when to implement each mode.
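The rules that let end-host mode avoid loops without Spanning Tree can be sketched as a toy forwarding function. This is plain Python, illustrative only, and deliberately simplified compared to real fabric interconnect behavior:

```python
# Toy model of end-host mode forwarding (not NX-OS code). Loops are avoided
# without Spanning Tree by two rules:
#   1. each server port is pinned to exactly one uplink, and
#   2. frames arriving on an uplink are never re-forwarded to another uplink.

PINNING = {"server1": "uplink_a", "server2": "uplink_b"}  # static pin groups
BROADCAST_LISTENER = "uplink_a"  # only one uplink accepts upstream broadcasts

def forward(ingress_port, is_broadcast=False):
    """Return the egress port(s) for a frame, or [] if it is dropped."""
    if ingress_port in PINNING:          # from a server: use its pinned uplink
        return [PINNING[ingress_port]]
    # From an uplink: never forward uplink-to-uplink (rule 2, so no loops),
    # and accept broadcasts on the designated listener only (no duplicates).
    if is_broadcast and ingress_port != BROADCAST_LISTENER:
        return []
    return list(PINNING)                 # deliver toward the server ports

print(forward("server1"))                      # ['uplink_a']
print(forward("uplink_b", is_broadcast=True))  # [] -- dropped, loop avoided
```

Because no frame ever crosses from one uplink to another, the upstream network sees the fabric interconnect as a host with several NICs rather than as a switch, which is exactly why it can skip Spanning Tree.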

 Cisco UCS Manager Configuration Common Practices and Quick-Start Guide

The introduction of the Cisco Unified Computing System™ (Cisco UCS™) in June 2009 presented a new paradigm for data center and server management. Today Cisco UCS is used by more than 10,000 unique customers. While the paradigm is no longer new, many customers are deploying Cisco UCS for the first time. This guide provides a concise overview of Cisco UCS essentials and common practices. This guide also covers the most direct path to working in a stateless-server SAN boot environment, upon which much of the Cisco UCS core value is predicated. In support of a utility or cloud computing model, this guide presents a number of concepts and elements within the Cisco UCS Management Model that will hopefully help data center operators increase responsiveness and efficiency by improving data center automation.
 
read more here
 
Cisco UCS and Storage Connectivity Options and Best Practices with NetApp Storage:
This paper provides an overview of the various storage features, connectivity options, and best practices when using the Unified Computing System (UCS) with NetApp storage. It focuses on storage in detail, covering both block and file protocols and the best practices for using these features as exposed in UCS with NetApp storage. There is no application-specific or use-case focus to this paper. Existing Cisco Validated Designs should be referenced for a deeper understanding of how to configure UCS and NetApp systems in detail for various application-centric use cases. Those documents treat the combination of UCS and NetApp from a more holistic, end-to-end approach and include the design details and options for the various elements of UCS and NetApp systems. The reader is encouraged to review the documents referenced below.
 
So what does it mean for you, as proven by real business cases? There you go!
 
Citrix and Cisco virtualizing your workspace through a Cisco VXI implementation:
Awesome 8-minute demo
 
KPIT deploys VDI/VXI for 800 users and saves 75% on desktop management and 60% on desktop energy thanks to VCE and its Vblock architecture:
read the details here
 
Banco Azteca deploys a 500-user VDI/VXI pilot in just 3 weeks using VCE and its Vblock architecture:
read the details here
 
Novis sees a 25% increase in SAP applications per blade with Cisco UCS and Nexus architecture:
Read this article about how Cisco helped deliver cloud services for less
 
Hierro Barquisimeto expects a 70% reduction in hardware, power, cooling, and space with its Cisco UCS implementation:
read the details here
 
NTT Data reduces TCO and provisioning time by 50% and CO2 emissions by 79% with the Cisco Unified Computing System:
read the details here
 
Training institute reduces infrastructure costs by up to 50%, energy consumption by 18%, and provisioning time by 90% with Cisco UCS:
read the details here
 
I hope this article, through the original writers’ posts of course, helps you access as much relevant info as possible. Please reach out and let me know what sort of info and which topics you would like to see more of on this blog.
 
Happy reading, and I sincerely hope I helped you, just a bit, to gain more confidence in our Fabric Computing solutions.
 
cheers,
Michael

Nexus 1000V Product Family Public Webcast Series for Customers & Partners

Dear customers and partners,

I’m delighted to invite you to the next wave of webinars related to our Virtual Data Center product line. Feel free to register for any relevant sessions, and contact me at mneefs@cisco.com if you need more details about any of them.

Date | Technical Track Topic | Webcast
2/14/12 | Virtual Security Gateway (VSG) v1.3 Technical Deep Dive | Register
2/22/12 | Nexus 1000V v1.5 Technical Deep Dive | Register
2/29/12 | Nexus 1010-X v1.4 Technical Deep Dive | Register
3/7/12 | vWAAS and Nexus 1000V Technical Deep Dive | Register
3/14/12 | FlexPod & Nexus 1000V/1010 | Register
3/21/12 | QoS for multimedia traffic in the Virtualized DC (w/ Nexus 1000V) | Register
3/28/12 | Vblock & Nexus 1000V / VSG / vWAAS | Register
4/4/12 | vCloud Director, Nexus 1000V, and VXLAN Technical Deep Dive | Register
4/11/12 | Cisco’s CloudLab Deep Dive: Hands-on labs for N1KV, VSG & VXLAN | Register

The above table is also posted at http://www.cisco.com/go/1000vcommunity

The presentation and Q&A will be posted at this link after each webcast.

Resources

Best regards,

Michael

UCS and Nexus 1000V Network Architectures and Best Practices

UCS and Nexus 1000V Network Architectures and Best Practices Forum Invitation!

 

The Cisco Data Center Server Access and Virtualization Technology Group would like to personally invite you to the UCS and Nexus 1000V Network Architecture and Best Practices Forum. This event will equip you with the latest information on Cisco Data Center network virtualization solutions and products. This one-day event will feature Nexus 1000V and Nexus 1010 deployment and integration best practices in a UCS environment, including best practices for deploying the Nexus 1000V in Vblock and FlexPod. The discussion will also cover the Nexus hardware platform in various redundant topologies, such as vPC with the Nexus 1000V.

 

What

• UCS Overview

• Nexus 1000V/1010 Overview

• Nexus 1010 Best Practice Network Options

• Nexus 1000V Deployment Best Practices in a Vblock

• Nexus 1000V Deployment Best Practices in a FlexPod

 

Who: Network, Server, and Virtualization Engineers/Managers
When: Various dates/times – see link under “Registration” below
Where: Various locations – see link under “Registration” below
Registration: Click Here to Register

 

 

Agenda

8:30am – 9:30am: UCS Overview. This session focuses on Unified Computing System (UCS) architecture and the relevant features and technologies that affect the Nexus 1000V. Configuration considerations for UCS service profiles and the UCS operational mode will be discussed as they pertain to the Nexus 1000V deployment.

9:30am – 10:30am: Nexus 1000V and Nexus 1010 Overview. This session gives a general overview of the Nexus 1000V and Nexus 1010 architecture, describing the components that make up the Nexus 1000V and the communication between the VSM, the VEM, and VMware’s vCenter database.

10:30am – 12:00pm: Nexus 1010 Best Practice Network Options. This session dives into the details of the Nexus 1010 network options and best-practice designs for those options, including deploying L2/L3 communication for the VSM. It also describes the various “virtual service blades” currently supported and use cases for upcoming virtual service blades that could be deployed on the Nexus 1010.

12:00pm – 1:00pm: Lunch provided by Cisco Systems, Inc.

1:00pm – 2:00pm: Nexus 1000V Deployment Best Practices in a Vblock

2:00pm – 3:00pm: Nexus 1000V Deployment Best Practices in a FlexPod

 

Before and after VXI

Recent studies have revealed that over 60% of enterprise companies plan to deploy desktop virtualization in some way over the next 3 to 4 years. From a TCO point of view, the advantages of desktop virtualization are simply amazing. As we move further into the so-called “post-PC era”, the ability to “port over” the virtual desktop environment to devices or locations other than the traditional office desk brings unprecedented flexibility and mobility. Think of our Cius business tablet, which offers you a full desktop environment in the office while keeping access to the virtual desktop over Wi-Fi or 3G/4G connectivity on the go.

Desktop virtualization, however, just doesn’t prove to be that good a solution when it comes to integrating real-time audio and video. Using a softphone or video client over a display protocol such as Citrix ICA or VMware PCoIP simply doesn’t scale. “Hair-pinning” all the real-time traffic back and forth to the data center where the virtual desktop resides causes delay and jitter and puts a heavy burden on data center resources, not to mention possible bandwidth exhaustion…
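A quick back-of-envelope calculation shows the shape of the problem. The call count and per-call bitrate below are hypothetical, chosen only for illustration:

```python
# Rough arithmetic on why hair-pinning real-time media strains the WAN.
# Illustrative numbers, not measured figures.
calls = 200          # concurrent video calls at a branch site
kbps_per_call = 768  # an assumed business-quality video stream

# With media routed directly between endpoints, the branch-to-data-center
# link carries roughly zero real-time traffic. Hair-pinned through the
# virtual desktop, every stream crosses that link twice
# (endpoint -> data center -> endpoint).
hairpinned_mbps = calls * kbps_per_call * 2 / 1000
print(hairpinned_mbps, "Mbps of avoidable WAN load")  # prints 307.2
```

Even at these modest assumptions, hundreds of megabits per second of jitter-sensitive traffic land on the WAN link, which is exactly the load that separating media from the display protocol removes.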

Thanks to our Virtual Experience Infrastructure, or simply VXI, we are able to separate real-time traffic from the VDI display protocol, routing voice and video traffic directly between endpoints and bypassing the data center.

Please take a moment to view a short video on our VXI solutions, showing how separating voice and video traffic from the display protocol enhances the user experience. To start, you will first see what you get without VXI. They say that seeing is believing; well, this video really speaks for itself.

To find out more about our VXI offering and VXC clients, please visit the link below, and see how we effectively bring the best of our borderless networking, virtualization and collaboration technologies together.

http://www.cisco.com/go/vxi

Re-defining Fabric Scale: Thinking Beyond the “Box”

Today we are making a significant announcement with several new innovations across our data center and switching portfolio that showcase how our customers can build large scale-up and scale-out data center networks.  While the press release does a great job (thanks Lee!) of highlighting all the innovations across the Nexus Unified Fabric portfolio and the new ASA 1000v, two aspects of the announcement stand out quite prominently:

  1. Cisco is delivering the highest density 10GbE modular switching platform in the industry
  2. Cisco is delivering the most scalable fabric in the industry and, by extension – on the planet! (we’re told planet sounds much cooler)

No. 1 above is fairly straightforward. With our new second-generation F2 line card and Fabric 2 module, at 768 line-rate 10GbE switching ports running NX-OS, the flagship Nexus 7018 in a fully loaded configuration is simply the epitome of switch scale.
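A quick sanity check of the arithmetic behind that figure, assuming each of the Nexus 7018 chassis’s 16 I/O slots holds a 48-port F2 line card:

```python
# Back-of-envelope check of the fully loaded Nexus 7018 port count:
# 16 payload (I/O) slots, each with a 48-port line-rate 10GbE F2 card.
io_slots = 16
ports_per_f2_card = 48
total_ports = io_slots * ports_per_f2_card
print(total_ports)                    # 768 line-rate 10GbE ports
print(total_ports * 10, "Gbps")       # aggregate port bandwidth, one direction
```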

No. 2 is where things get interesting, because we’re no longer thinking about just the “box” but rather about how we can weave different elements across the data center into a holistic “fabric”. This systems-based approach focuses on multi-dimensional scale, transcending the box and even the data center LAN to span between data centers, while providing feature-rich fabric capabilities. With 12,000+ 10GbE nodes supported as part of one FabricPath-enabled system, and with the ability to support Fabric Extender (FEX) technology (plus L2 and L3 capabilities), this approach redefines fabric scalability at twice the scale and half the cost of the next best claim in the industry. More importantly, it achieves this in an evolutionary manner for our 19,000+ NX-OS customers, offering investment protection for brownfield deployments while raising the bar for greenfield environments!

The Nexus platforms have been around for 3+ years, and over 500 customers have deployed FabricPath on the Nexus 7000 alone since its introduction about a year ago. It is a proven technology. With FabricPath now coming to the Nexus 5500 platforms, the momentum is likely to spike with a mix of both size and scale. Like I said, things get interesting.

To make it more fun, our technical experts from the product teams have taken a data-driven approach and compared Cisco’s new innovations and our box and system-scale with others in the industry.

They looked at a couple of representative examples: first, what it would take any other vendor to build a non-blocking 768-port 10GbE “switch” with capabilities similar to what the Nexus 7000 can provide in a single chassis; second, what it takes to build a “fabric”, with Cisco leveraging its Nexus portfolio and NX-OS to do so.

Take a look and let us know what you think. It is useful to note that most vendors in the industry today have no fabric capabilities to speak of, and the few that are attempting a systems approach have limited to no customer traction thus far. Our customers and key analysts tell us that Cisco has a multi-year innovation lead in this space, even as Cisco continues to focus on bringing network, compute, storage, and application services together with integrated management to drive productivity and efficiency across traditional IT and organizational silos.

 

We often get asked: “So, who needs this kind of scale?” Today, we’re seeing a variety of customers asking for L2 or L3 scale across diverse deployment scenarios. Baidu is one such example: they chose the Nexus 7000 platform with its second-generation capabilities for their global search business. Check out what Rackforce has to say. Where possible, we will continue to highlight several others that have chosen the Nexus portfolio and are reaping the benefits of fabric scale and fabric extensibility.

Ultimately, customers desire architectural flexibility. The underlying infrastructure they invest in has to be adaptable enough to accommodate heterogeneous workloads and diverse business requirements – whether they’re focused on traditional enterprise, Web 2.0, Big Data, Cloud/Service Provider or specialized applications like high-frequency trading or high-performance computing. Upgrading or re-purposing the infrastructure every time they encounter a new workload or a new business requirement is not an option. It is also fair to state that most customers won’t overhaul their entire infrastructure at one go, but rather selectively choose which clusters they want to optimize in an evolutionary way.

Delivering architectural flexibility has been the hallmark of Cisco’s Unified Fabric approach, and the latest innovations are yet another example of giving our customers the power to choose. Fortunately, with this we have been able to gain both mind share and market share, and frankly, the results speak for themselves.

So, with sincere gratitude, a big THANK YOU to our customers, channel partners and our ecosystem partners for the tremendous success they have helped drive in the 3+ years since the Nexus portfolio was introduced.

This space will continue to be both disruptive and interesting, and we promise to keep you abreast of the happenings. There will be more blogs to follow on fabric extensibility, security, and services. So stay tuned, and don’t forget to mark your calendars and register for our webinar “Evolutionary Fabric, Revolutionary Scale” on October 25th, 2011. You’ll hear from customers, partners, analysts, and some of Cisco’s top technology executives on how all this comes together. Register here now.

And what if, after all, the Unified Computing System (a.k.a. UCS) were as simple as that to deploy…

Ever wondered what managing Cisco’s architectural breakthrough, the Unified Computing System, would look and feel like?

You’re just a few clicks away from making it happen…

1/ Download our UCSM emulator and follow the installation procedure at: http://developer.cisco.com/web/unifiedcomputing/ucsemulatordownload

2/ Take a look at these short videos, which walk you sequentially through the process of setting up your UCS system.

  1. UCS Initial CLI setup : http://www.youtube.com/user/scollora#p/u/5/86H_4lOeXfA
  2. UCS initial UCS-M setup: http://www.youtube.com/user/scollora#p/u/6/gXDJYmbR6WI
  3. UCS IP KVM setup: http://www.youtube.com/user/scollora#p/u/3/d0KTYItdU6g
  4. UCS LAN setup: http://www.youtube.com/user/scollora#p/u/2/bpDFU46F5NY
  5. UCS SAN setup: http://www.youtube.com/user/scollora#p/u/1/zytwxmRW1es

3/ You’re now all set to play and challenge your virtual system as much as you want 🙂

4/ It’s good, right? Convinced…? It’s probably time to contact your Cisco sales rep and learn more about all the benefits UCS can bring to your organization.

Michael

Scalable Cloud Network with Cisco Nexus 1000V Series Switches and VXLAN

Many customers are building private or public clouds. Intrinsic to cloud computing is having multiple tenants with numerous applications using the cloud infrastructure. Each of these tenants and applications needs to be logically isolated from the others, even at the networking level. For example, a three-tier application can have multiple virtual machines requiring logically isolated networks between the virtual machines. Traditional network isolation techniques such as IEEE 802.1Q VLAN provide 4096 LAN segments (via a 12-bit VLAN identifier) and may not provide enough segments for large cloud deployments. Cisco and a group of industry vendors are working together to address the new requirements of scalable LAN segmentation as well as transporting virtual machines across a broader diameter. The underlying technology, referred to as Virtual Extensible LAN (VXLAN), defines a 24-bit LAN segment identifier to provide segmentation at cloud scale. In addition, VXLAN provides an architecture for customers to grow their cloud deployments with repeatable pods in different subnets. VXLAN can also enable virtual machines to be migrated between servers in different subnets. With Cisco Nexus® 1000V Series Switches supporting VXLAN, customers can quickly and confidently deploy their applications to the cloud.
 

Cloud Computing Demands More Logical Networks

Traditional servers have unique network addresses to help ensure proper communication. Network isolation techniques, such as VLANs, typically are used to isolate different logical parts of the network, such as a management VLAN, production VLAN, or DMZ VLAN.

In a cloud environment, each tenant requires a logical network isolated from all other tenants. Furthermore, each application from a tenant demands its own logical network, to isolate itself from other applications. To provide instant provisioning, cloud management tools, such as VMware vCloud Director, even duplicate the application’s virtual machines, including the virtual machines’ network addresses, with the result that a logical network is required for each instance of the application.

Challenges with Existing Network Isolation Techniques

The VLAN has been the traditional mechanism for providing logical network isolation. Because of the ubiquity of the IEEE 802.1Q standard, numerous switches and tools provide robust network troubleshooting and monitoring capabilities, enabling mission-critical applications to depend on the network. Unfortunately, the IEEE 802.1Q standard specifies a 12-bit VLAN identifier, which hinders the scalability of cloud networks beyond 4K VLANs. Some in the industry have proposed incorporating a longer logical network identifier in a MAC-in-MAC or MAC-in-Generic-Routing-Encapsulation (MAC-in-GRE) encapsulation as a way to scale. Unfortunately, these techniques cannot make use of all the links in a port channel, which is often found in the data center network, and in some cases they do not behave well with Network Address Translation (NAT). In addition, because of the encapsulation, monitoring capabilities are lost, preventing troubleshooting and monitoring. Hence, customers are not confident deploying Tier 1 applications or applications requiring regulatory compliance in the cloud.
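The identifier-space gap is easy to quantify. The tenant and application counts below are hypothetical, chosen only to show how quickly 4K segments run out:

```python
# The scaling gap in one line of arithmetic: a 12-bit VLAN ID versus
# VXLAN's 24-bit segment ID.
vlan_segments = 2 ** 12    # 4,096 (a few IDs are reserved in practice)
vxlan_segments = 2 ** 24   # 16,777,216
print(vlan_segments, vxlan_segments, vxlan_segments // vlan_segments)

# Why 4K runs out fast in a cloud: tenants x applications x duplicated
# instances each need their own isolated segment (illustrative numbers).
tenants, apps_per_tenant, instances_per_app = 200, 10, 3
segments_needed = tenants * apps_per_tenant * instances_per_app
print(segments_needed)     # 6,000 segments, already past the 4,096 ceiling
```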

VXLAN Solution

VXLAN solves these challenges with a MAC-in-User-Datagram-Protocol (MAC-in-UDP) encapsulation technique. VXLAN uses a 24-bit segment identifier to scale (Figure 1). In addition, the UDP encapsulation enables the logical network to be extended to different subnets and helps ensure high utilization of port-channel links (Figure 2). Instead of broadcasting a frame, as in the case of an unknown unicast, the UDP packet is multicast to the set of servers that have virtual machines on the same segment. Within each segment, traditional switching takes place, so the solution can provide a much larger number of logical networks.
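As a sketch of the MAC-in-UDP idea, the following Python builds the 8-byte VXLAN header that carries the 24-bit segment ID, using the layout later standardized in RFC 7348. The helper names are my own, and the outer UDP/IP wrap is omitted for brevity:

```python
import struct

# The VXLAN header (per RFC 7348) is 8 bytes: a flags byte with the 'I' bit
# set (VNI valid), 3 reserved bytes, the 24-bit VNI, and 1 reserved byte.
# It is prepended to the original Ethernet frame, then carried over UDP/IP.

def vxlan_header(vni):
    """Build the 8-byte VXLAN header for a given 24-bit segment ID."""
    assert 0 <= vni < 2 ** 24, "VNI must fit in 24 bits"
    flags_word = 0x08000000            # 'I' flag set, reserved bits zero
    return struct.pack("!II", flags_word, vni << 8)  # VNI in bytes 4..6

def encapsulate(inner_ethernet_frame, vni):
    """MAC-in-UDP sketch: VXLAN header + original frame (UDP/IP omitted)."""
    return vxlan_header(vni) + inner_ethernet_frame

packet = encapsulate(b"\xff" * 64, vni=5000)
print(len(packet))       # 72: 8-byte VXLAN header + 64-byte inner frame
print(packet[:8].hex())  # 0800000000138800 -> flags, then VNI 5000 = 0x001388
```

Because the inner Ethernet frame rides inside a routable UDP packet, the segment can cross subnet boundaries, which is what makes the pod-per-subnet growth model and cross-subnet VM migration described above possible.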
 
VXLAN Format

As shown in the figures, the Cisco® VXLAN solution enables:

• Logical networks to be extended among virtual machines placed in different subnets

• Flexible, scalable cloud architecture in which new servers can be added in different subnets

• Migration of virtual machines between servers in different subnets

 Scalability with VXLAN

 
In conclusion, cloud computing requires significantly more logical networks than traditional models, and traditional network isolation techniques such as the VLAN cannot scale adequately for the cloud. VXLAN resolves these challenges with a MAC-in-UDP approach and a 24-bit segment identifier. This solution enables a scalable cloud architecture with replicated server pods in different subnets. Because of the Layer 3 approach of UDP, virtual machine migration extends even across subnets. The Cisco Nexus 1000V Series Switches with VXLAN support provide numerous advantages, enabling customers to use LAN segments in a robust and customizable way without disrupting existing operational models. The unique capabilities of the Cisco Nexus 1000V Series with VXLAN help ensure that customers can deploy mission-critical applications in the cloud with confidence.