mVPN Fun in the Lab: Connect a Customer – Part 2 of 6

In our last blog (MPLS Fun in the Lab: Building the MPLS Cloud – Part 1) we built our MPLS cloud.  Now… we are ready to connect a customer!

Connect the Customer

We will connect the customer in 5 steps and (again) have some additional fun along the way looking at show commands and sniffer traces.

  1. Create a VRF in each PE.
  2. Apply the VRF and IP addresses on the interfaces in each PE towards the CEs.
  3. Create the BGP neighbors in the PEs towards the CEs.
  4. Ping from HQ to Site 11.
  5. Look at the sniffer trace of the above Ping.

Want some snippets of the configs?

MPLS Part2 Zip

Configs for Brie (RR), Cheddar (PE1), Feta (PE3) and Rogue (P).  Brie, Cheddar and Feta are all ASR1Ks running IOS XE (3.16.3).  Rogue is an ASR9001 running IOS XR (5.3.3).

For those of you who already looked at the configs associated with the first blog… you know that those configs already included the configuration for adding a customer.  So you might wonder: why, then, did I not just point to those configs?  Why an MPLS part 2 zip file?

The reason is that I find it easier to first explain labels with explicit-null enabled.  So in the configurations above, both PE1 (cheddar) and PE3 (feta) have mpls ldp explicit-null enabled.
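For reference, that is a single global command on the two ASR1Ks (IOS XE):

mpls ldp explicit-null

With it, Cheddar and Feta advertise explicit-null (label 0) instead of implicit-null for their local routes, so the penultimate hop does not pop the label early and the labels stay visible in the sniffer traces we will look at.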

Step 1: Create a VRF in each PE

Create a VRF named customerA in all the PE devices.

Route-Target Import and Export will each be set to 100:100 inside of each PE. The RD for each PE, however, will be unique.  Why?  Oh that is a whole different blog.  🙂  Basically, if we suddenly decide we want Site 11 connected to both PE3 and PE4, I want PE1 to get both paths.  If the RDs were all the same, the two advertisements would be the same vpnv4 prefix, and the vpnv4 RR, by default, would pick one best path and send PE1 just that one path.  Not both.  Unique RDs keep the advertisements distinct, so the RR reflects both.

For PE1 (Cheddar) this looks like

[screenshot: cheddar_vrf]
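If you don't have the zip handy, a rough sketch of that VRF definition on IOS XE is below.  The RD value 100:1 is just my stand-in here; the point is only that each PE gets its own unique RD while the route-targets stay 100:100.

! RD below is an assumed value - each PE just needs a unique one (real value is in the zip)
vrf definition customerA
 rd 100:1
 address-family ipv4
  ! RT import/export is 100:100 on every PE
  route-target export 100:100
  route-target import 100:100
 exit-address-family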

For PE3 (Feta) this looks like

[screenshot: feta_vrf]
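Same sketch for Feta, with only the RD changing (100:3 is again just my placeholder):

! Same VRF on Feta - only the RD differs (100:3 is a placeholder value)
vrf definition customerA
 rd 100:3
 address-family ipv4
  route-target export 100:100
  route-target import 100:100
 exit-address-family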

Step 2: Apply the VRF and IP addresses on the interfaces in each PE towards the CEs

Configure physical interfaces towards Headquarters (Mozzarella) and Site 11 (String) to be in vrf customerA and to have IP addresses.

For PE1 (Cheddar) this looks like
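(Sketch only. The interface number and the PE-CE /30 below are my assumptions for illustration; the real values are in the zip.)

! Hypothetical interface and addressing toward the HQ CE
interface GigabitEthernet0/0/1
 description PE-CE link to Mozzarella (Headquarters)
 vrf forwarding customerA
 ip address 10.1.255.1 255.255.255.252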

For PE3 (Feta) this looks like
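(Again a sketch with assumed interface and addressing; check the zip for the real values.)

! Hypothetical interface and addressing toward the Site 11 CE
interface GigabitEthernet0/0/1
 description PE-CE link to String (Site 11)
 vrf forwarding customerA
 ip address 10.11.255.1 255.255.255.252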

Step 3: Create the BGP neighbors in the PEs towards the CEs

In the BGP IPv4 VRF address family for vrf customerA, configure PE1 (Cheddar) to neighbor with Mozzarella and PE3 (Feta) to neighbor with String. (Note: Mozzarella and String already have their BGP peerings configured and are advertising 10.1/16 (Headquarters/Mozzarella) and 10.11/16 (Site 11/String).)

For PE1 (Cheddar) this looks like the below.  The blue outlined area, under address-family ipv4 vrf customerA, is the BGP neighbor peering to Headquarters/Mozzarella.  The grey outlined area is the vpnv4 peering with Brie (the vpnv4 RR) from the previous blog.
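As a text sketch (the provider AS 100, the CE AS 65001 and the neighbor address are my assumptions; the real values are in the zip), the new piece is the ipv4 vrf address-family.  The vpnv4 peering with Brie stays exactly as it was in Part 1.

router bgp 100
 ! the vpnv4 neighbor to Brie from the last blog stays as-is (the grey outlined area)
 address-family ipv4 vrf customerA
  ! hypothetical CE address and AS - this is the blue outlined area
  neighbor 10.1.255.2 remote-as 65001
  neighbor 10.1.255.2 activate
 exit-address-family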

For PE3 (Feta) this looks like the below.  The blue outlined area, under address-family ipv4 vrf customerA, is the BGP neighbor peering to Site 11/String.  The grey outlined area is the vpnv4 peering with Brie (the vpnv4 RR) from the previous blog.
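And the matching sketch for Feta (same caveats on the assumed AS numbers and neighbor address):

router bgp 100
 address-family ipv4 vrf customerA
  ! hypothetical CE address and AS for String/Site 11
  neighbor 10.11.255.2 remote-as 65011
  neighbor 10.11.255.2 activate
 exit-address-family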

Step 4: Ping from HQ to Site 11

Below is the successful ping from 10.1.101.101 to 10.11.101.1.  The source of the ping (10.1.101.101) is a Spirent Test Center port I have hanging off the customer core in Headquarters.  The destination (10.11.101.1) is the LAN interface of the customer Site 11 router, String.

 

Looks good.  Successful.
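If you want to sanity-check this from the PE side too, a few show commands on Cheddar are worth a look (I'm not pasting the output here, since it depends on your lab values):

show bgp vpnv4 unicast vrf customerA
show ip route vrf customerA 10.11.0.0 255.255.0.0
show ip cef vrf customerA 10.11.101.1 detail
ping vrf customerA 10.11.101.1

The cef entry in particular is a nice teaser for the next blog, since it shows the labels Cheddar imposes towards Feta.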

But.. how do we know that the above ping went through the MPLS cloud?  🙂  I had a sniffer running in the cloud.

Step 5: Look at the sniffer trace of the above Ping

[screenshot: sniffer_ping]

Yup.  There it is.  Clearly the ping went thru the MPLS cloud because it was caught by my sniffer.

We see the successful ICMP echo request making it from 10.1.101.101 to 10.11.101.1. We also see the successful ICMP echo reply back from 10.11.101.1 to 10.1.101.101.

So we know it went thru the MPLS cloud. Which means it HAD to go through Rogue.  But if we look at the routing table for Rogue we see NO customer routes in it at all.  Just the 14/8 addressing used inside the MPLS cloud itself.  How is this the case?

That’s because while Rogue was involved in forwarding both the ICMP echo request and the ICMP echo reply… it didn’t look up the destination IP addresses in its routing table to do that.  So how did it all work?  I’m not going to go into all the detail of MPLS here, so you might want to go look up some support docs.  But basically Rogue did it by switching labels.
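If you want to poke at that on Rogue yourself, a few IOS XR commands are a good place to start (no output pasted here, since your label values will differ):

show route
show mpls forwarding
show mpls ldp bindings

show route confirms Rogue really has no customer prefixes, while show mpls forwarding shows the local-label to outgoing-label swaps Rogue performs for the PE loopbacks.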

NEXT Blog

In the next blog we will look at following the labels.

mVPN Fun in the Lab: Following the Labels – Part 3 of 6

[screenshot: sniffer_ping_with_labels]


