Consolidating Data Center Infrastructure with 10Gb Ethernet

Data Centers and server rooms are increasingly (over)loaded with IT infrastructure such as servers, storage systems, and networking equipment. This leads to a multitude of concerns in the areas of physical space usage, power and cooling, cabling and patching, and so on. The cost of keeping these Data Centers running is going through the roof and is becoming difficult to justify. As a consequence, Data Center managers are looking for ways to consolidate IT infrastructure and are searching for technologies that can reduce capital and operational spending.
IT technology providers are responding to these challenges with different sorts of solutions.
In the domain of data storage and backup, companies like EMC, NetApp, and others provide solutions that pool storage resources into storage networks and apply very specialized data management techniques, such as data deduplication, to reduce the ever-growing demand for storage capacity.
CPU and server vendors are building ever more powerful systems, which, in combination with server virtualisation technology, allow customers to attain drastic server consolidation ratios. We have seen ratios going from 5:1 to 30:1 and beyond. Traditionally, virtualisation servers are equipped with a number of I/O interfaces for LAN, storage, backup, and management. These different traffic types are allocated to dedicated interfaces to provide traffic segregation and guaranteed quality of service. Some of these interfaces are replicated for redundancy or to provide higher throughput. In some situations this results in server I/O configurations with 2 to 4 SAN Fibre Channel interfaces and 4 to 8 LAN Gigabit Ethernet interfaces. Servers with a total of 6 to 12 I/O interfaces put a lot of stress on the Data Center LAN and SAN networks.
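To make the stress on the network concrete, here is a quick back-of-the-envelope tally of switch-port demand for a rack of such servers. The rack density is a hypothetical assumption; the per-server interface counts are taken from the high end of the ranges above.

```python
# Hypothetical tally of switch ports consumed by one rack of
# virtualisation hosts when every traffic type gets dedicated interfaces.
FC_PORTS_PER_SERVER = 4    # SAN Fibre Channel interfaces (high end of 2-4)
GBE_PORTS_PER_SERVER = 8   # LAN Gigabit Ethernet interfaces (high end of 4-8)
SERVERS_PER_RACK = 20      # assumed rack density (illustrative)

fc_ports = FC_PORTS_PER_SERVER * SERVERS_PER_RACK
lan_ports = GBE_PORTS_PER_SERVER * SERVERS_PER_RACK

print(f"SAN switch ports needed: {fc_ports}")        # 80
print(f"LAN switch ports needed: {lan_ports}")       # 160
print(f"Cables to patch per rack: {fc_ports + lan_ports}")  # 240
```

Even at these modest assumptions, a single rack consumes hundreds of switch ports and cables, which is exactly the pressure that consolidation aims to relieve.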
To address this issue and further consolidate the Data Center infrastructure, we could consider connecting servers with 10Gb Ethernet instead of 1Gb Ethernet. Servers can be equipped with 10Gb Ethernet NICs, and some of the newest generations of servers ship with two 10Gb Ethernet LAN on Motherboard (LOM) interfaces, providing an aggregate I/O throughput of 20 Gbps, which is more than enough to satisfy the requirements of even the most I/O-hungry applications. VLAN technology and network QoS provide the same level of traffic segmentation and guaranteed performance for the different traffic types that used to run on dedicated interfaces.
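The idea of replacing dedicated interfaces with guaranteed shares of one link can be sketched as follows. This is a minimal illustration only: the traffic-class names and share percentages are assumptions for the example, not vendor defaults or recommendations.

```python
# Illustrative sketch: carving a single 10 Gbps link into guaranteed
# minimum-bandwidth shares per traffic class, replacing the dedicated
# interfaces each class used to get. Class names and percentages are
# assumptions for illustration only.
LINK_GBPS = 10.0

# minimum guaranteed share of the link per traffic class
shares = {
    "vm_lan": 0.40,      # virtual machine LAN traffic
    "storage": 0.30,     # iSCSI / NFS storage traffic
    "backup": 0.20,      # backup traffic
    "management": 0.10,  # management traffic
}

# the guarantees must not oversubscribe the physical link
assert abs(sum(shares.values()) - 1.0) < 1e-9

for traffic_class, share in shares.items():
    print(f"{traffic_class:<12} guaranteed {share * LINK_GBPS:.1f} Gbps")
```

Note that each class's minimum here already exceeds the 1 Gbps it would get from a dedicated Gigabit Ethernet interface, while unused headroom can be borrowed by the other classes.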
Technologies that transport storage traffic over LAN-type interfaces create the so-called “Unified Data Center Network Fabric”, consolidating LAN and storage traffic on a single Data Center network. 10Gb Ethernet provides the ideal network foundation for building a Unified Fabric. Protocols like iSCSI, NFS/CIFS, and FCoE are used to layer the storage-related traffic on top of 10Gb Ethernet.
Using 10Gb Ethernet to connect servers and creating a Unified Fabric can dramatically reduce the number of I/O interfaces per server, in some cases from 12 to 2, which represents significant capital and operational savings.
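A rough calculation illustrates the scale of that reduction. The server count and the before/after interface counts below are hypothetical figures based on the 12-to-2 example above:

```python
# Back-of-the-envelope savings from I/O consolidation, using the
# 12 -> 2 interface reduction mentioned above. The server count is a
# hypothetical placeholder.
SERVERS = 100
BEFORE_PORTS = 12  # e.g. 4 FC + 8 GbE interfaces per server
AFTER_PORTS = 2    # 2 x 10Gb Ethernet interfaces per server

ports_saved = (BEFORE_PORTS - AFTER_PORTS) * SERVERS
print(f"Switch ports and cables eliminated: {ports_saved}")  # 1000

reduction = 1 - AFTER_PORTS / BEFORE_PORTS
print(f"Interface count reduced by {reduction:.0%}")  # 83%
```

Fewer interfaces means fewer switch ports, fewer cables to patch, and less power drawn per server, which is where the capital and operational savings come from.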
Cisco considers 10Gb Ethernet and Unified Fabric strategic network technology evolutions and incorporates all the technology required to build such networks into its Nexus family of Data Center network switches.
To illustrate this evolution, I want to cross-reference the VMware networking blog, which describes vSphere ESX/ESXi server configurations based on 10Gb Ethernet.
To further complete this solution, VMware and Cisco have co-developed the Nexus 1000V, a virtual switch for vSphere ESX/ESXi servers that introduces VN-Link technology. With the Nexus 1000V, Cisco aims to provide the same type of policy-based network and security services for virtual machines as it already does for physical servers.