FabricPath is a technology that combines the benefits of a routing protocol, in this case Intermediate System to Intermediate System (IS-IS), with those of a Layer 2 Ethernet environment.
To list some of FabricPath's advantages:
- MAC address scalability through conversational learning
- No more Spanning Tree, hurray! Each switch has its own view of the Layer 2 topology and computes it with an SPF calculation.
- Equal-cost multipath forwarding for unicast Layer 2 traffic!
- Makes any kind of topology possible!
- Configuration and administration are no longer a hassle
- Loop prevention/mitigation through a TTL field in the frame header
We can refer to FabricPath as “routing MAC addresses” or “Layer 2 over Layer 3”, but that doesn’t mean FabricPath ports have an IP address! In a FabricPath topology, each device is dynamically assigned a switch-id via the Dynamic Resource Allocation Protocol (DRAP), and the L2 forwarding table is populated based on reachability to each switch-id.
Function types in FabricPath
- Leaf: This is where Classic Ethernet devices connect, and it’s the point of MAC-to-switch-id mapping. Traffic is looked up in the L2 forwarding table and then encapsulated into a MAC-in-MAC frame whose destination switch-id is that of the switch the destination host is connected to. As the edge (access) device in a FabricPath topology, FabricPath is only supported on the Cisco Nexus 5500 with NX-OS 5.1(3)N1(1) and higher.
- Spine: The Cisco Nexus 7000 is supported as the aggregation device in a FabricPath topology with NX-OS 5.1(1) and higher, but only with F1 series line cards. Layer 3 forwarding can be added with M1 series cards.
Keep in mind that when M1 and F1 cards are used in the same Virtual Device Context (VDC), routing is offloaded to the M1 cards.
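To make the leaf's job more concrete, here is a toy Python sketch (not Cisco code; all names, table contents, and field layouts are made up for illustration) of what happens on ingress at an edge switch: look up the destination MAC, find the switch-id it lives behind, and wrap the original frame MAC-in-MAC.

```python
# Conceptual model of FabricPath leaf encapsulation. Field names and the
# TTL value are illustrative assumptions, not the real header layout.

def leaf_encapsulate(frame, mac_table, my_switch_id):
    """Return an outer (FabricPath-style) header for a known unicast frame."""
    dest_switch_id = mac_table.get(frame["dst_mac"])
    if dest_switch_id is None:
        # Unknown unicast is flooded along a multidestination tree instead.
        return None
    return {
        "outer_dst": dest_switch_id,   # switch-id hosting the destination MAC
        "outer_src": my_switch_id,     # this leaf's own switch-id
        "ttl": 32,                     # decremented per hop to stop loops
        "payload": frame,              # original Classic Ethernet frame
    }

# Example: a host MAC behind switch-id 20, seen from leaf switch-id 10
table = {"00:1b:54:c2:00:02": 20}
hdr = leaf_encapsulate(
    {"dst_mac": "00:1b:54:c2:00:02", "src_mac": "00:1b:54:c2:00:01"},
    table, my_switch_id=10)
print(hdr["outer_dst"])  # 20
```

The remote leaf with switch-id 20 then strips the outer header and delivers the inner Classic Ethernet frame on the correct edge port.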
As mentioned, any imaginable topology is possible with FabricPath (as long as it makes sense!). For example, you might consider connecting edge nodes directly to each other.
Regarding Layer 3 routing, the easy option would be to put the Layer 3 boundary at the aggregation layer (the spines), so the spines keep switching Layer 2 frames based on switch-id and simultaneously route Layer 3 traffic.
The other option would be to connect the routers to the spines as edge devices. This obviously makes life easier for the spines: there won’t be as many configuration changes on them, and they will only take care of Layer 2 switching based on switch-id.
Spanning-tree? Not anymore!
It’s a fabric, so all the links are in forwarding state. As I pointed out earlier, FabricPath multipath forwarding is achieved by a Layer 2 control protocol based on IS-IS, so each device has a full view of the network. Unlike in a legacy Spanning Tree network, each switch has multiple active paths to every other switch. I don’t want to go into a full STP-versus-FabricPath comparison, but I suggest you take a look at the image below and imagine it running STP:
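The multipath idea can be shown with a toy SPF run (this is an illustration, not the actual IS-IS implementation): run shortest-path-first from one switch and keep *every* equal-cost next hop, instead of blocking links the way STP would.

```python
# Dijkstra SPF that records all equal-cost next hops from the root.
# Topology and names below are invented for illustration.
import heapq
from collections import defaultdict

def spf_ecmp(links, root):
    """links: {(a, b): cost} for bidirectional links.
    Returns {switch: set of equal-cost next hops as seen from root}."""
    adj = defaultdict(list)
    for (a, b), cost in links.items():
        adj[a].append((b, cost))
        adj[b].append((a, cost))
    dist = {root: 0}
    nhops = defaultdict(set)
    pq = [(0, root, None)]
    while pq:
        d, node, via = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue
        for nbr, cost in adj[node]:
            nd = d + cost
            # The first hop out of the root identifies the path's next hop.
            hop = nbr if node == root else via
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                nhops[nbr] = {hop}
                heapq.heappush(pq, (nd, nbr, hop))
            elif nd == dist[nbr] and hop not in nhops[nbr]:
                nhops[nbr].add(hop)      # equal-cost path: keep it too
                heapq.heappush(pq, (nd, nbr, hop))
    return nhops

# Two spines (S1, S2) between leaves L1 and L2, all links cost 1:
links = {("L1", "S1"): 1, ("L1", "S2"): 1, ("S1", "L2"): 1, ("S2", "L2"): 1}
print(sorted(spf_ecmp(links, "L1")["L2"]))  # ['S1', 'S2'] - both paths active
```

With STP, one of the two spine paths would be blocked; here both next hops stay usable and traffic can be load-balanced across them.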
FabricPath Data Plane
FabricPath achieves the benefits mentioned above by introducing its own data plane implementation:
- A brand-new frame header format, consisting of the following fields:
  - A unique addressing scheme called the switch-id
  - A TTL field in the frame header to prevent loops
- An RPF check on each frame entering a fabric port, which is another loop prevention mechanism.
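Both loop-prevention checks can be sketched in a few lines of Python (again, a conceptual model rather than Cisco code; the table layout and port names are assumptions): decrement the TTL carried in the outer header, and drop a frame whose source switch-id arrives on an interface other than the one the RPF table expects.

```python
# Toy model of per-frame loop prevention on a FabricPath core port.

def accept_frame(frame, in_port, rpf_table):
    """Return the frame with decremented TTL, or None if it must be dropped."""
    if frame["ttl"] <= 1:
        return None                      # TTL expired: a loop is suspected
    if rpf_table.get(frame["outer_src"]) != in_port:
        return None                      # RPF failed: wrong incoming interface
    return {**frame, "ttl": frame["ttl"] - 1}

rpf = {10: "eth1/1"}                     # switch-id 10 is expected on eth1/1
print(accept_frame({"outer_src": 10, "ttl": 32}, "eth1/1", rpf))  # ttl now 31
print(accept_frame({"outer_src": 10, "ttl": 32}, "eth1/2", rpf))  # None
```

Even if a transient loop forms, the TTL guarantees the frame dies after a bounded number of hops, and the RPF check stops multidestination frames from being reflected back into the fabric.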
FabricPath addresses one of the key scaling issues in today’s data centers: the growing number of MAC addresses! To get an idea, just consider a 48-port ToR switch connected to 46 servers, each server hosting 20 VMs… that’s 920 MAC addresses! Now multiply that by the number of ToR switches in even a small data center.
As mentioned, FabricPath keeps the L2 adjacency of Classic Ethernet, so it still depends on MAC learning to forward frames. But a FabricPath switch doesn’t learn every MAC address it sees on the wire; it only learns the ones that are actively conversing with a MAC address already present in its forwarding table.
So, in this case, the switch won’t learn the source MAC address of a broadcast frame, such as an ARP “who-has” request. Keep in mind that a FabricPath switch will still learn all source MAC addresses on its Classic Ethernet ports.
This mechanism eases the pressure on the switch’s TCAM by not making it learn every MAC address in the entire domain.
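Conversational learning is easy to model (a toy sketch, not Cisco code; class and method names are mine): a remote source MAC arriving from the fabric is learned only when the frame's *destination* MAC is already known locally, i.e. when an actual conversation is taking place.

```python
# Toy model of conversational MAC learning on a FabricPath edge switch.

class EdgeSwitch:
    def __init__(self):
        self.mac_table = {}            # mac -> local port, or remote switch-id

    def learn_local(self, mac, port):
        self.mac_table[mac] = port     # Classic Ethernet ports learn unconditionally

    def frame_from_fabric(self, src_mac, dst_mac, src_switch_id, broadcast=False):
        # Learn the remote source only for unicast frames addressed to a
        # host this switch already knows about.
        if not broadcast and dst_mac in self.mac_table:
            self.mac_table[src_mac] = src_switch_id

sw = EdgeSwitch()
sw.learn_local("AA", "eth1/1")
# Broadcast ARP who-has from remote MAC "BB": source is NOT learned.
sw.frame_from_fabric("BB", "ff:ff:ff:ff:ff:ff", 20, broadcast=True)
print("BB" in sw.mac_table)  # False
# Unicast reply from "BB" to the known local host "AA": now it IS learned.
sw.frame_from_fabric("BB", "AA", 20)
print(sw.mac_table["BB"])    # 20
```

So the MAC table of each edge switch only grows with the hosts its own hosts actually talk to, not with every VM in the fabric.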