mVPN Fun in the Lab: Add Multicast in the Cloud – Part 5 of 6

Time for the next blog in this “MPLS Fun in the Lab” blog series.  This time we will focus on how I configured the SP cloud so that it could support the customer’s multicast the way that it did.

With MVPN (Multicast VPN) a key concept to understand and visualize is that you are looking at 2 distinct multicast environments: the multicast that runs in the customer network and the multicast that runs over the MPLS core.  Just because one is ASM (Any Source Multicast) doesn’t mean the other needs to also be ASM. They aren’t “related” that way.

This seems to often confuse people.  But if we stop to think about it… the unicast is quite separate also.  Remember in MPLS Fun in the Lab: Following the Labels – Part 3 when Rogue didn’t have the IP address of the customer in its routing table? For the customer unicast to properly get across the Service Provider’s MPLS cloud we needed BGP VPNv4 and also MPLS LDP.  Right?

For MVPN there are multiple options. Instead of writing all those options up myself… I think I’ll just reference my friend Daniel Dib’s blog written just last year. He did a great write-up based on Cisco Live session BRKIPM-3017 mVPN Deployment Models by Ijsbrand Wijnands and Luc De Ghein.

CCDE – Next Generation Multicast – NG-MVPN

All that said? I’m going with simple Rosen Draft.  Here is a screen capture from Daniel’s blog above that nets out the Rosen draft.

rosen draft

So let’s look more closely at the SP cloud and the devices in there again.

multicast mvpn

So what do we need to do to get the MPLS cloud above to offer MVPN with the Rosen draft? Most of it is straightforward in terms of what configs you put on.  There are two design decisions we need to make before we configure.

  1. Question: ASM or SSM in the SP IGP core? I personally like SSM in the core. I see no reason to run ASM.  So we will go SSM.
  2. Stay on the default MDT or create a data MDT? For this we will just stay on the default MDT.  Typically when I build a cloud for an enterprise customer that doesn’t own the cloud, I stick with staying on the default MDT. When I build a MVPN cloud for a CPOC where the customer is either an SP or a large enterprise running their own MVPN SP cloud core… then they typically would like to see a data MDT.
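To see why decision #2 is such a small knob to turn, here is a hedged IOS XE sketch of the difference.  The VRF name and the default MDT group are from this lab; the data MDT group range and threshold are purely example values, shown commented out since we are choosing NOT to use a data MDT here.

```
vrf definition customerA
 address-family ipv4
  ! default MDT: all PEs in the VRF join this one shared tree
  mdt default 232.100.100.1
  ! data MDT (NOT used in this lab): flows exceeding the threshold
  ! get moved to their own tree from this pool (example values)
  ! mdt data 232.100.200.0 0.0.0.255 threshold 100
```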

Okay… so our 2 decisions are made

  • SSM in the core
  • stay on the default MDT

Next? Time to configure!!

Configure MVPN Rosen Draft in the Cloud

There are 2 pieces that need to be added to the already existing environment we created in the past 4 blogs.

  1. Multicast-Routing/PIM enabled in the Core to Pass the Multicast
  2. BGP IPv4 MDT for the Rosen Draft MVPN

Multicast-Routing/PIM enabled in the Core to Pass the Multicast

The following is everything that needs to be done on the routers that will actively be involved with passing multicast traffic in the core.  I say this to emphasize that Brie, as the soon-to-be BGP IPv4 MDT RR, is only involved with all of this at that level.  Brie will not be actively forwarding multicast packets.  So it does NOT need the following configured on it.  Only Cheddar, Rogue, and Feta do.

  1. Multicast-Routing needs to be enabled in the default VRF
  2. Multicast address range 232/8 needs to be identified as SSM
  3. PIM needs to be enabled in the core for all router interfaces that the SSM will go over

Let’s start there first.  Let’s just prep everything.

First let’s talk about what this looks like in the IOS XE boxes Cheddar and Feta.

  1. Multicast-Routing needs to be enabled in the default VRF – both Cheddar and Feta will be configured with the global command ip multicast-routing distributed
  2. Multicast address range 232/8 needs to be identified as SSM – both Cheddar and Feta will be configured with the global command ip pim ssm default
  3. PIM needs to be enabled in the core for all router interfaces that the SSM will go over – both Cheddar and Feta will have their interfaces toward Rogue configured with ip pim sparse-mode

Okay… so that is all done. Cheddar and Feta are now configured to pass multicast traffic in the IANA-assigned SSM range (232/8).
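Netting out those three steps as one IOS XE config sketch (the interface name is a placeholder I’m assuming for illustration – use whichever interfaces on Cheddar and Feta face Rogue):

```
! Cheddar and Feta (IOS XE) – global commands
ip multicast-routing distributed
ip pim ssm default          ! treats the IANA SSM range 232/8 as SSM
!
! core-facing interface toward Rogue (interface name is an assumption)
interface GigabitEthernet0/0/0
 ip pim sparse-mode
```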

For Rogue as an IOS-XR box what do we do?

  1. Multicast-Routing needs to be enabled in the default VRF –  IOS XR doesn’t have that global command. For IOS-XR you need the mcast pie installed and active… AND you need to enable multicast-routing on the individual interfaces you want multicast-routing on.
    1. PIE –  I already have this done.  Using  — disk0:asr9k-mcast-px-5.3.3. Admittedly I also have it committed.
    2. Multicast-Routing enabled on individual interfaces rogue_mcast
  2. Multicast address range 232/8 needs to be identified as SSM – actually, IOS-XR has the IANA assigned SSM range (232/8) already as SSM by default.  So we are already good there.
  3. PIM needs to be enabled in the core for all router interfaces that the SSM will go over rogue_pim
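Since rogue_mcast and rogue_pim above are screen captures, here is roughly what that looks like as an IOS XR config sketch.  The interface names are placeholders I’m assuming for illustration – use Rogue’s actual interfaces toward Cheddar and Feta.

```
! Rogue (IOS XR) – interface names are assumptions
multicast-routing
 address-family ipv4
  interface GigabitEthernet0/0/0/0
   enable
  interface GigabitEthernet0/0/0/1
   enable
!
router pim
 address-family ipv4
  interface GigabitEthernet0/0/0/0
   enable
  interface GigabitEthernet0/0/0/1
   enable
```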

Okay… so the core is ready to pass multicast traffic for the SSM multicast range. Cool.  Next step.

BGP IPv4 MDT for the Rosen Draft MVPN

The following is everything that needs to be done on the PEs and the RR to support Rosen Draft MVPN.  Again… I’m only calling out what I needed to add in the core on top of the already existing configs from the previous 4 blogs.  Also… since Rogue is not involved with the BGP, it is not touched in this part.  So just as the previous section (configuring the routers in the SP cloud that were PASSING the multicast traffic) did NOT involve Brie, the BGP RR for the PEs… this section will NOT involve Rogue, as we are only focusing on the BGP IPv4 MDT.

(One small little side note.  Um… when I say “everything” I’m not being completely honest. I’m leaving something out that people commonly forget, so we can troubleshoot together in the next (and final) blog. That way this “sinks in” a little more and hopefully y’all won’t forget it.)

Let’s first focus on the 2 PEs – Cheddar and Feta.

  1. Multicast-Routing Enabled for the customer VRF – via command ip multicast-routing vrf customerA distributed
  2. PIM enabled on Customer facing VRF interface via interface command ip pim sparse-mode
  3. RP Address defined for customer’s vrf. The customer’s multicast environment is ASM. Which means it uses an RP.  Cheddar and Feta both have an interface in the customer’s environment – their interface in the customerA vrf. So Cheddar and Feta are participants in this multicast environment and therefore need to play in that world. Which means they must know who the RP is in that world.  Both Cheddar and Feta will be configured with the global command ip pim vrf customerA rp-address 10.1.1.1 override
  4. Assign a default MDT to be used for VRF customerA. We will assign 232.100.100.1 as the MDT to be used for the customerA VRF. cheddar_feta
  5. Configure a BGP IPv4 MDT neighbor with Brie.  bgpmdt_PEs
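Since cheddar_feta and bgpmdt_PEs above are screen captures, here is a hedged IOS XE sketch of those five steps on the PEs.  The customer-facing interface name, the BGP ASN, and Brie’s peering address are placeholders I’m assuming for illustration – and remember, per my side note above, something people commonly forget is deliberately left out here.

```
! Cheddar and Feta (IOS XE) – customerA VRF pieces
ip multicast-routing vrf customerA distributed
ip pim vrf customerA rp-address 10.1.1.1 override
!
interface GigabitEthernet0/0/1      ! customer-facing VRF interface (name assumed)
 ip pim sparse-mode
!
vrf definition customerA
 address-family ipv4
  mdt default 232.100.100.1
!
router bgp 65000                    ! ASN assumed
 address-family ipv4 mdt
  neighbor <Brie-loopback> activate ! peering address assumed
```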

Okay… Looking good. Now let’s just go over to Brie as the BGP RR for all the PEs and add the corresponding BGP IPv4 MDT peering back to Cheddar and Feta.  And… while we are at it also for Colby and Gouda.

brie_mdt
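Since brie_mdt above is a screen capture, here is a hedged sketch of Brie’s side of it.  The ASN and the PE loopback addresses are placeholders I’m assuming for illustration; the shape of it is just the IPv4 MDT address-family with each PE activated as a route-reflector client.

```
! Brie (BGP RR) – IPv4 MDT peering back to the PEs (ASN and addresses assumed)
router bgp 65000
 address-family ipv4 mdt
  neighbor <Cheddar-loopback> activate
  neighbor <Cheddar-loopback> route-reflector-client
  neighbor <Feta-loopback> activate
  neighbor <Feta-loopback> route-reflector-client
  neighbor <Colby-loopback> activate
  neighbor <Colby-loopback> route-reflector-client
  neighbor <Gouda-loopback> activate
  neighbor <Gouda-loopback> route-reflector-client
```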

Okay… so… are we done? Let’s start the multicast traffic and see. Hmmmm…. Traffic generator SAYS it is sending the multicast traffic… but my customer receiver isn’t seeing it.

Time to play Network Detective together and solve the case of the missing multicast traffic!

Gather the facts… collect the clues… follow the evidence… interview the witnesses… question the suspects… and figure out “who done it”.

broken mvpn



Categories: Fun in the Lab

