Some thoughts on why you should care about Cisco UCS

Folks,

I wanted to share a few very good articles and web documents from friends, condensed into one updated post about all the great things Cisco UCS can provide to your organization:

Latest performance update on our solution:

In the three years since its introduction, Cisco Unified Computing System™ (Cisco UCS™), powered by Intel® Xeon® processors, has captured 63 world performance records. Please check this out: 63 World-Record Performance Results

In response to what skeptics were saying three years ago:

Cisco as a server vendor! Ha!

Remember back when the company first unveiled its “Unified Computing System” (UCS)? At the time, the thought of Cisco being in the server market seemed almost laughable. But this was a journey we had seen before. Similar guffawing was heard when Cisco jumped into the voice market. Way back in the day, when I was in internal IT, Cisco acquired its way into the VoIP market and rode the IP wave to market leadership in only about a decade. When you think about how extremely difficult voice market share has historically been to gain, the fact that Cisco managed to grab as much share as it did, as fast as it did, was remarkable…

more to read, here…

Some good tips to bear in mind if you want to do an apples-to-apples cost comparison:

…therefore evaluate the real cost of implementing Cisco UCS solutions

The Service Profile Concept:

In this post, Marcel explains what a service profile template is within Cisco UCS. To start with the basics, let’s begin with a service profile. I assume you are aware of the Cisco UCS Emulator; if not, you can download it here using your CCO account: http://developer.cisco.com/web/unifiedcomputing/home

So what is a service profile within Cisco UCS?
A service profile defines a single server and its storage and networking characteristics, and is stored in the Cisco UCS Fabric Interconnects. Each server connected to the Fabric Interconnects is specified by a service profile. The main advantage of service profiles is the automation of your physical hardware configuration: BIOS settings, firmware levels, network interface cards (NICs), host bus adapters (HBAs), and so on.
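As a purely conceptual illustration (this is not the actual UCS Manager API, and all field names here are hypothetical), a service profile can be thought of as a data structure that carries a server's identity and hardware policy, so that identity moves with the profile rather than living in the blade:

```python
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    # Identity the blade inherits when the profile is associated with it
    name: str
    uuid: str
    # Networking and storage identity (hypothetical illustrative fields)
    vnic_macs: list = field(default_factory=list)   # one MAC per vNIC
    vhba_wwpns: list = field(default_factory=list)  # one WWPN per vHBA
    # Hardware policy applied automatically on association
    bios_policy: str = "default"
    firmware_package: str = "default"

# Associating the profile with a different blade moves the server's
# entire identity (MACs, WWPNs, BIOS settings, firmware level) with it --
# the basis of stateless computing in UCS.
web01 = ServiceProfile(
    name="web01",
    uuid="0000-0001",
    vnic_macs=["00:25:B5:00:00:01"],
    vhba_wwpns=["20:00:00:25:B5:00:00:01"],
    bios_policy="vm-host",
    firmware_package="2.0(1a)",
)
print(web01.name, web01.vnic_macs[0])
```

Because the profile, not the hardware, owns the identity, a failed blade can be replaced and the profile re-associated without touching SAN zoning or the network configuration.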

Cisco Unified Computing System Ethernet Switching Modes

Great paper to understand end-host & switch modes and when to use the most appropriate option.

What You Will Learn

In Cisco Unified Computing System™ (Cisco UCS™) environments, two Ethernet switching modes determine the way that the fabric interconnects behave as switching devices between the servers and the network. In end-host mode, the fabric interconnects appear to the upstream devices as end hosts with multiple links. In end-host mode, the switch does not run Spanning Tree Protocol and avoids loops by following a set of rules for traffic forwarding. In switch mode, the switch runs Spanning Tree Protocol to avoid loops, and broadcast and multicast packets are handled in the traditional way. This document describes these two switching modes and discusses how and when to implement each mode.
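To make the end-host-mode loop-avoidance rules concrete, here is a toy model (illustrative only; port names and the pinning table are invented for this sketch): each server port is pinned to one uplink, and traffic arriving on an uplink is never forwarded back out another uplink, so no Spanning Tree Protocol is needed:

```python
# Toy model of end-host-mode forwarding: server ports are pinned to a
# single uplink, and uplink-to-uplink forwarding is never allowed,
# which prevents loops without running Spanning Tree Protocol.
UPLINKS = ["uplink1", "uplink2"]
SERVER_PINNING = {"server1": "uplink1", "server2": "uplink2"}

def forward(in_port, dst_server=None):
    """Return the egress port for a frame, or None if it is dropped."""
    if in_port in UPLINKS:
        # Frames from the network may only go to local server ports,
        # never back out another uplink (the loop-prevention rule).
        return dst_server if dst_server in SERVER_PINNING else None
    # Frames from a server always go out that server's pinned uplink.
    return SERVER_PINNING.get(in_port)

print(forward("server1"))              # server traffic -> its pinned uplink
print(forward("uplink1", "server2"))   # network traffic -> a server port
print(forward("uplink1"))              # would-be uplink-to-uplink: dropped
```

In switch mode, by contrast, the fabric interconnect behaves as a normal Ethernet switch, so it must run Spanning Tree Protocol to break the loops this pinning rule avoids.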

Cisco UCS Manager Configuration Common Practices and Quick-Start Guide

The introduction of the Cisco Unified Computing System™ (Cisco UCS™) in June 2009 presented a new paradigm for data center and server management. Today Cisco UCS is used by more than 10,000 unique customers. While the paradigm is no longer new, many customers are deploying Cisco UCS for the first time. This guide provides a concise overview of Cisco UCS essentials and common practices. This guide also covers the most direct path to working in a stateless-server SAN boot environment, upon which much of the Cisco UCS core value is predicated. In support of a utility or cloud computing model, this guide presents a number of concepts and elements within the Cisco UCS Management Model that will hopefully help data center operators increase responsiveness and efficiency by improving data center automation.
 
read more here
 
Cisco UCS and Storage Connectivity Options and Best Practices with NetApp Storage:
This paper provides an overview of the storage features, connectivity options, and best practices when using the Unified Computing System (UCS) with NetApp storage. It focuses on storage in detail, covering both block and file protocols and the best practices for using these UCS features with NetApp storage. There is no application or specific use-case focus in this paper. Existing Cisco Validated Designs should be referenced for a deeper understanding of how to configure UCS and NetApp systems for various application-centric use cases. Those documents treat the combination of UCS and NetApp from a more holistic, end-to-end approach and include the design details and options for the various elements of UCS and NetApp systems. The reader is encouraged to review these documents, which are referenced below.
 
So what does it mean for you, proven by real business cases? There you go!
 
Citrix and Cisco virtualizing your workspace through a Cisco VXI implementation:
Awesome 8-minute demo
 
KPIT deploys VDI/VXI for 800 users and cuts desktop management costs by 75% and desktop energy consumption by 60% thanks to VCE and its Vblock architecture:
read the details here
 
Banco Azteca deploys a 500-user VDI/VXI pilot in just 3 weeks using VCE and its Vblock architecture:
read the details here
 
Novis sees a 25% increase in SAP applications per blade with Cisco UCS and Nexus architecture:
Read this article about how Cisco helped deliver cloud services for less
 
Hierro Barquisimeto expects a 70% reduction in hardware, power, cooling, and space with its Cisco UCS implementation:
read the details here
 
NTT Data reduces TCO and provisioning time by 50%, and CO2 emissions by 79%, with Cisco Unified Computing System:
read the details here
 
A training institute reduces infrastructure costs by up to 50%, energy consumption by 18%, and provisioning time by 90% with Cisco UCS:
read the details here
 
Hoping this post, through the original authors’ articles of course, helps you access as much relevant information as possible, I invite you to reach out and let me know what topics you would like to see more of on this blog.
 
Happy reading, and I sincerely hope I have helped you, just a bit, to gain more confidence in our fabric computing solutions.
 
cheers,
Michael

Consolidating Data Center Infrastructure with 10Gb Ethernet.

Data centers and server rooms are increasingly (over)loaded with IT infrastructure such as servers, storage systems, and networking equipment. This leads to a multitude of concerns in the areas of physical space usage, power and cooling, cabling and patching, and so on. The cost of keeping these data centers running is going through the roof and becomes difficult to justify. As a consequence, data center managers are looking for ways to consolidate IT infrastructure and are in search of technologies that can reduce capital and operational spending.
IT technology providers are responding to these challenges with different sorts of solutions.
In the domain of data storage and backup, companies like EMC, NetApp, and others provide solutions to pool storage resources in storage networks and apply specialized data management techniques, such as data deduplication, to reduce the ever-growing demand for storage capacity.
CPU and server vendors are building ever more powerful systems, which, in combination with server virtualization technology, allow customers to attain drastic server consolidation ratios. We have seen ratios going from 5:1 to even 30:1 and beyond. Traditionally, virtualization servers are equipped with a number of I/O interfaces for LAN, storage, backup, and management. These different traffic types are allocated to dedicated interfaces to provide traffic segregation and guaranteed quality of service. Some of these interfaces are replicated for redundancy or to provide higher throughput. In some situations this results in server I/O configurations with 2 to 4 SAN Fibre Channel interfaces and 4 to 8 LAN Gigabit Ethernet interfaces. Servers with a total of 6 to 12 I/O interfaces put a lot of stress on the data center LAN and SAN networks.
To address this issue and further consolidate the data center infrastructure, we could consider connecting servers with 10Gb Ethernet instead of 1Gb Ethernet. Servers can be equipped with 10Gb Ethernet NICs, and some of the newest generations of servers come with two 10Gb Ethernet LAN on Motherboard (LoM) interfaces, providing an aggregate I/O throughput of 20 Gbps, which is more than enough to satisfy the requirements of the most I/O-hungry applications. VLAN technology and network QoS provide the same level of traffic segmentation and guaranteed performance for the different traffic types that used to run on dedicated interfaces.
Technologies that transport storage traffic over LAN-type interfaces create the so-called “Unified Data Center Network Fabric,” consolidating LAN and storage traffic on a single data center network. 10Gb Ethernet provides the ideal network foundation for building a Unified Fabric. Protocols like iSCSI, NFS/CIFS, and FCoE are used to layer storage-related traffic on top of 10Gb Ethernet.
Using 10Gb Ethernet to connect servers and create a Unified Fabric has the potential to dramatically reduce the number of I/O interfaces per server, in some cases from 12 to 2, which represents important capital and operational savings.
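The consolidation arithmetic above can be sketched as a back-of-the-envelope calculation (port counts are taken from the text; the cost impact would of course vary per deployment):

```python
# Back-of-the-envelope: per-server interface count before and after
# consolidating onto a unified 10Gb Ethernet fabric (figures from the text).
before = {"fc_san": 4, "gige_lan": 8}   # 4 FC + 8 GigE = 12 interfaces
after = {"ten_gige": 2}                 # 2 x 10GbE carrying all traffic types

ports_before = sum(before.values())
ports_after = sum(after.values())

# Aggregate LAN bandwidth actually increases despite far fewer ports:
bw_before_gbps = before["gige_lan"] * 1   # 8 Gbps of LAN capacity
bw_after_gbps = after["ten_gige"] * 10    # 20 Gbps of unified capacity

print(f"{ports_before} -> {ports_after} interfaces per server")
print(f"LAN bandwidth: {bw_before_gbps} -> {bw_after_gbps} Gbps")
```

Fewer ports also means fewer cables, fewer upstream switch ports, and fewer adapters to power and manage, which is where most of the operational savings come from.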
Cisco considers 10Gb Ethernet and Unified Fabric as strategic network technology evolutions and incorporates all the technology required to build such networks in its Nexus Family of Data Center Network switches.
To illustrate this evolution, I want to cross-reference the VMware networking blog, which describes vSphere ESX/ESXi server configurations based on 10Gb Ethernet.
To further complete this solution, VMware and Cisco have co-developed the Nexus 1000V, a virtual switch for vSphere ESX/ESXi servers that introduces VN-Link technology. With the Nexus 1000V, Cisco aims to provide the same type of policy-based network and security services for virtual machines as it already does for physical servers.