ToR & EoR Data Center Designs

This post is my summary of Brad Hedlund's article about ToR and EoR designs, which is accessible via the following link:
http://bradhedlund.com/2009/04/05/top-of-rack-vs-end-of-row-data-center-designs/

Top of Rack Design

ToR is sometimes called In-Rack design.

Benefits:
  • All copper cabling for servers stays within the rack as relatively short patch cables
  • There is no need for a large copper cabling infrastructure
  • Any network upgrades or issues with the rack switches will generally only affect the servers within that rack
  • Any future support of 40 or 100 Gigabit on twisted pair will likely have very short distance limitations (in-rack distances). This, too, is a key factor in why Top of Rack would be selected over End of Row.


OSPFv2 in NX-OS

When you configure a summary address, Cisco NX-OS automatically configures a discard route for the summary address to prevent routing black holes and route loops.
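
As a minimal sketch of this behavior (the instance tag 201 matches the example later in this post, while the area and the prefix 10.1.0.0/16 are placeholders chosen only for illustration), inter-area summarization on an ABR is configured with the area range command:

nexus7009(config)# router ospf 201
nexus7009(config-router)# area 0.0.0.15 range 10.1.0.0/16

For external routes on an ASBR, the equivalent command is summary-address under the same router ospf sub-mode. In both cases the automatically created discard route shows up in the routing table as the summary prefix pointing to Null0.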

OSPFv2 has the following configuration guidelines and limitations:

  • You can have up to four instances of OSPFv2 in a VDC.
  • Cisco NX-OS displays areas in dotted decimal notation regardless of whether you enter the area in decimal or dotted decimal notation.
  • All OSPFv2 routers must operate in the same RFC compatibility mode. OSPFv2 for Cisco NX-OS complies with RFC 2328. Use the rfc1583compatibility command in router configuration mode if your network includes routers that support only RFC 1583.
  • You must configure RFC 1583 compatibility on any VRF that connects to routers running only RFC 1583-compatible OSPF.
The default reference bandwidth for link cost calculation is 40 Gb/s.
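
A hedged sketch of both knobs mentioned above (the instance tag 201 is again only a placeholder); they are configured under the OSPF instance, and the auto-cost line shows where the 40 Gb/s default could be changed if needed. For a non-default VRF, rfc1583compatibility goes under the corresponding vrf sub-mode:

nexus7009(config)# router ospf 201
nexus7009(config-router)# rfc1583compatibility
nexus7009(config-router)# auto-cost reference-bandwidth 40 Gbps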
Product License Requirement
Cisco NX-OS OSPFv2 requires an Enterprise Services license. For a complete explanation of the Cisco NX-OS licensing scheme and how to obtain and apply licenses, see the Cisco NX-OS Licensing Guide.
nexus7009(config)# feature ospf
nexus7009(config-if)# ip router ospf 201 area 0.0.0.15
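
The two lines above assume the OSPF instance already exists and an interface is already selected; a fuller sketch, with Ethernet 2/1 as a purely hypothetical interface, would look like this:

nexus7009(config)# feature ospf
nexus7009(config)# router ospf 201
nexus7009(config)# interface ethernet 2/1
nexus7009(config-if)# ip router ospf 201 area 0.0.0.15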

From http://www.cisco.com/en/US/docs/switches/datacenter/sw/5_x/nx-os/unicast/configuration/guide/l3_ospf.html


Cisco DCI Design & Implementation

Being involved in different data center design projects requires knowing how to interconnect data centers. Below you’ll find my notes from the Cisco Data Center Design & Implementation Guide, System Release 1.0.

DCI Business Drivers

HA Clusters/Geoclusters
  • Microsoft MSCS
  • Veritas Cluster Server (Local)
  • Solaris Sun Cluster Enterprise
  • VMware Cluster (Local)
  • VMware VMotion
  • Oracle Real Application Cluster (RAC)
  • IBM HACMP
  • EMC/Legato Automated Availability Manager
  • NetApp Metro Cluster
  • HP Metrocluster
Active/Standby Migration, Move/Consolidate Servers
  • VMware Site Recovery Manager (SRM)
  • Microsoft Server 2008 Layer 3 Clustering
  • VMware VMotion

The applications above drive the business and operational requirements for extending the Layer 2 domain across geographically dispersed data centers. Extending Layer 2 domains across data centers presents challenges including, but not limited to:

  • Spanning tree isolation across data centers (see the sketch after this list)
  • Achieving high availability
  • Full utilization of cross-sectional bandwidth across the Layer 2 domain
  • Network loop avoidance, given redundant links and devices without spanning tree
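
As one way to address the spanning-tree isolation point above (a sketch of a common approach in NX-OS syntax, not the guide's exact configuration; Ethernet 1/1 is a hypothetical DCI edge interface), BPDUs can be filtered on the interconnect port so that each data center keeps an independent spanning-tree domain:

switch(config)# interface ethernet 1/1
switch(config-if)# spanning-tree port type edge trunk
switch(config-if)# spanning-tree bpdufilter enable

Since spanning tree then no longer spans the sites, loop avoidance over the redundant interconnect links has to come from the DCI design itself, which is exactly the last point in the list above.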


DCI L2 Extension Between Remote DCs

***Most of this document is the same as “Cisco DCI Design & Implementation”, so those parts are omitted here.***

DCI Considerations

Figure 1 shows the main considerations when deploying a DCI solution:

  • Layer 3 interconnect (typically over an existing enterprise IP core)
  • Layer 2 interconnect
  • SAN interconnect