mVPN Fun in the Lab: Troubleshooting the MVPN Cloud – Part 6 of 6

In MPLS Fun in the Lab: Add the Multicast in the Cloud – Part 5 we added support in the cloud for multicast and for MVPN. As I mentioned in that blog, I intentionally left something out so we could have fun troubleshooting together. Do know that what I left out is also, most decidedly, a very common thing forgotten by people new to configuring a cloud with MVPN/Rosen draft.

Network Detective Time!

Gather the facts… collect the clues… follow the evidence… interview the witnesses… question the suspects and figure out “who done it”.

[Image: broken mvpn]

I’ll start you off with 1 fact.

FACT #1

Fact – the missing configs are in the MPLS cloud offering MVPN.  There is nothing missing from the configs in the customer multicast environment. 

Knowledge and book learning really are the key to faster troubleshooting.  I always begin here.  So we have a “who done it”. Right? I mean the multicast packets are not getting where they are supposed to go.

Before we look at what IS happening, we first really should have the knowledge of what is supposed to happen. So what is supposed to happen?

Knowledge is Key: What SHOULD Be Happening?

First, the MDTs (multicast distribution trees) need to be built.

For multicast to flow from the Tx to the Rx… it needs the MDTs to be built.

[Image: build of multicast distribution trees]

I already gave you the fact that the missing configs are in the MPLS Cloud offering MVPN and not in the customer environment.  So let’s focus on the *,G between the Rendezvous Point (RP) and the Receiver (Rx).

What needs to happen for that *,G to get built between RP and Rx?

  • Receiver (Rx) in the customer environment sends an IGMP Membership Report indicating it is interested in receiving multicast for group 239.1.1.1
  • String then goes thru what I call “the checklist” prior to sending out a PIM (*,G) join. The checklist is “Who is the root?”, “Where is the root?”, “What is the PIM RPF neighbor toward the root?” This is a *,G so the root is the RP – which is Mozzarella at 10.1.1.1
  • String then sends the PIM (*,G) Join “up the tree” towards the root. Which, in this case, is the SP cloud offering MVPN services.
  • A PIM (*,G) join needs to make it to Mozzarella from the MVPN cloud.  Remember this is NOT dense mode. Mozzarella is not going to just push multicast out.  PIM Sparse Mode is a “pull” not a “push”.
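(If you want to spot-check the customer side yourself – even though Fact #1 already clears it – a quick “interview” of String might look something like the commands below. The prompts are just illustrative and the exact output varies by platform.)

    String# show ip igmp groups 239.1.1.1     ! did the Rx's membership report make it in?
    String# show ip pim rp mapping            ! who (and where) does String think the RP is?
    String# show ip mroute 239.1.1.1          ! is there a *,G with a sane RPF neighbor?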

Time for our next fact.

FACT #2

Fact – Mozzarella is not receiving a PIM (*,G) Join from the cloud.

Well that gives us a place to start. If Mozzarella is not receiving the PIM *,G join for 239.1.1.1, then it is Cheddar that is not sending it.  Hmmmm… where to start – Cheddar, to see if it has an OIL?  Or Feta, to see if it has received the PIM *,G Join from site 11?

[Image: mvpn cloud]

Let’s start on the right – over in Feta. Specifically over in Feta on the VRF customerA portion.

Remember, every device in our network, when troubleshooting, is a witness to the ‘who done it.’  So let’s go interview Feta and see what facts and clues we can get.

“Interviewing” Feta

Let’s start by “asking” Feta my checklist of 3 questions for troubleshooting multicast.

  • “Who is the root?”
  • “Where is the root?”
  • “What is the PIM RPF neighbor towards the root?”

Which MDT are we looking at?  We are looking at the *,G… also known as the RP tree… also known as the shared tree.  So… who is the root of the *,G?  The Rendezvous Point is.

[Image: show ip mroute vrf]

Who is the Root?

The MDT we are trying to build is the *,G of (*, 239.1.1.1).  Looks like Feta, inside of vrf customerA, knows who the root is for this MDT: the RP at 10.1.1.1.  However, we can also see that the PIM RPF neighbor towards the root is not known – 0.0.0.0.
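(For anyone following along in their own lab, here is the command behind that screenshot and a rough sketch of the broken state. The outgoing interface name and the timers are placeholders from my notes and will differ in your lab.)

    Feta# show ip mroute vrf customerA 239.1.1.1

    (*, 239.1.1.1), 00:05:40/00:02:50, RP 10.1.1.1, flags: S
      Incoming interface: Null, RPF nbr 0.0.0.0
      Outgoing interface list:
        GigabitEthernet0/2, Forward/Sparse, 00:05:40/00:02:50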

Where is the Root?

Let’s check if we know where 10.1.1.1 is.  Also… while we are at it, let me show you one other command to see the who.

[Image: troubleshooting mvpn]
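(My guess at the commands behind that screenshot, for the folks playing along – one for the where, and one more way to see the who:)

    Feta# show ip route vrf customerA 10.1.1.1    ! the where - do we have a route to the root?
    Feta# show ip pim vrf customerA rp mapping    ! the who - which RP is mapped to 239.1.1.1?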

Okay… so we know where.  Well how can we know where but not what the PIM RPF neighbor is towards the root?

What is the PIM RPF Neighbor Towards the Root?

The simplest way for this to happen is to not have a PIM neighbor.

The “where” is just showing that you know the routing to get there – where it is. But remember, PIM *,G joins are sent hop by hop from PIM neighbor to PIM neighbor.  So we need a PIM neighbor IP address that is our RPF (reverse path forwarding) neighbor towards the root.

I do like a lot of Cisco commands. But there is one that annoys me.  🙂  In IOS and IOS XE, in order to see who my PIM RPF neighbor is, I have to use a command that doesn’t have PIM in it.  Yeah… I’m not keen on that.

[Image: show ip rpf vrf]
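(Here is that command for reference. In the broken state the RPF neighbor comes back as 0.0.0.0, which lines up with what the mroute already told us.)

    Feta# show ip rpf vrf customerA 10.1.1.1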

Alrighty then. So in VRF customerA on Feta we know who the root is, we know where the root is… but we do not know what the PIM RPF neighbor is towards the root.

But actually… thinking about it for a second… who should that RPF neighbor be? I mean… we are looking in Feta over in vrf customerA.  The next “portion” of the network that is “towards” the RP is actually the Service Provider portion of Feta – which is in the SP’s network and not in the VRF.

Who SHOULD Be the PIM RPF Neighbor?

Well that magic is what was supposed to have gotten accomplished by doing everything we did in the Part 5 blog.

[Image: rosen draft mvpn troubleshooting]

What did we do again in that blog?

  • Enabled Multicast Routing
  • Assigned 232/8 to be SSM
  • PIM neighbored between Cheddar and Rogue
  • PIM neighbored between Rogue and Feta
  • BGP IPv4 mdt peered Cheddar to Brie
  • BGP IPv4 mdt peered Feta to Brie

So I check everything above and it looks perfect.
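(For anyone rebuilding Part 5 from scratch, those bullets boil down to roughly the sketch below on the PEs, in IOS/IOS XE syntax. The AS number, the core-facing interface name, and Brie’s peering address are placeholders; the mdt default group of 232.100.100.1 under the VRF is the default MDT we have been talking about.)

    ip multicast-routing                   ! global (IOS XE wants the "distributed" keyword)
    ip multicast-routing vrf customerA
    ip pim ssm default                     ! 232.0.0.0/8 reserved for SSM
    !
    interface GigabitEthernet0/0           ! core-facing link toward Rogue (placeholder name)
     ip pim sparse-mode
    !
    vrf definition customerA
     address-family ipv4
      mdt default 232.100.100.1            ! the default MDT group for the emulated LAN
    !
    router bgp 100                         ! placeholder AS number
     neighbor 14.100.100.1 remote-as 100   ! Brie (the RR) - placeholder address
     neighbor 14.100.100.1 update-source Loopback0
     address-family ipv4 mdt
      neighbor 14.100.100.1 activate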

What did Daniel say again in his blog “CCDE – Next Generation Multicast – NG-MVPN”?

[Image: rosen draft]

  • Draft Rosen uses GRE as an overlay protocol. That means that all multicast packets will be encapsulated inside GRE.

  • A virtual LAN is emulated by having all PE routers in the VPN join a multicast group.

  • The default MDT is used for PIM hello’s and other PIM signaling but also for data traffic.

Hmmmmm…… wait. PIM hellos over the default MDT?  But why would I need PIM hellos over the 232.100.100.1? Oh… wait. “A virtual LAN is emulated by having all the PE routers in the VPN join a multicast group.” 

…A Virtual LAN is Emulated

So a virtual LAN is built in the SP core over 232.100.100.1 in order to “stitch” together the customer’s multicast environment.

  • So Cheddar and Feta are going to talk to each other over 232.100.100.1?  Well, when they talk to each other… um… what source IP address do they use when sending a PIM hello to 232.100.100.1?
  • But wait… this is SSM we are talking about. There is no RP to act as “dating service” between the sources and the receivers. So another question is how does Feta learn about the source IP address that Cheddar will use to send to 232.100.100.1 so it can send an S,G join to join it?

Enter the BGP IPv4 MDT.

BGP IPv4 MDT

Question – what IP address is Cheddar using for its BGP IPv4 MDT peering with Brie?  It’s using 14.100.100.101.

Question – what IP address is Feta using for its BGP IPv4 MDT peering with Brie? It’s using 14.100.100.201.

So what if I told you that these are the IP addresses that will be used?  Feta will learn the fact that Cheddar will use IP address 14.100.100.101 and group 232.100.100.1 via BGP IPv4 MDT.  So then Feta knows to send a PIM (S,G) join to Rogue for this.  Specifically, Feta will send a PIM (14.100.100.101, 232.100.100.1) join to Rogue.  Rogue will complete the MDT by sending a PIM (S,G) join to Cheddar.

Conversely, Cheddar will learn via BGP IPv4 MDT that it needs to send a PIM (S,G) join… specifically a PIM (14.100.100.201, 232.100.100.1) join to Rogue. Rogue will complete the MDT by sending a PIM (S,G) join to Feta.
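(If you want to see what BGP has handed over to PIM, there is a command for exactly that. Just a sketch – the exact columns vary by release:)

    Feta# show ip pim mdt bgp    ! MDT source/group info learned via the BGP IPv4 MDT address-family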

But… riddle me this.

How do Cheddar and Feta do this?  I mean… we didn’t enable PIM on the Loopback0s on these two devices. How can the loopbacks create the “pull” – aka the join?  Also… the loopbacks will be the source of the GRE-encapsulated multicast traffic – how did that happen?

So… what is the very common thing people forget to do?  🙂  What part of the “stitching” do they miss?

Forgetting about the loopbacks that will be used as senders and receivers on the PEs.

Yup… that’s it.  That simple.  And that common an oversight.  Yes… absolutely everything else was configured properly.

Add pim sparse-mode to the loopback 0 on Cheddar and Feta and voila!
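(In config terms, that really is all it is – on both Cheddar and Feta:)

    interface Loopback0
     ip pim sparse-mode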

Before, on Feta in the VRF, the *,G for 239.1.1.1 showed an IIF (incoming interface) of Null and an RPF nbr of 0.0.0.0.  Now?  🙂  Now we have Tunnel1 as the incoming interface and 14.100.100.101 (Cheddar) as the RPF neighbor.

[Image: feta_vrf]

So the above is the mroute in Feta but on the vrf customerA side.  What does the mroute in Feta look like for the SP side?

[Image: feta_global]
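(On the SP/global side you should now find two SSM entries for the default MDT – one rooted at Cheddar’s loopback, 14.100.100.101, and one rooted at Feta’s own loopback, 14.100.100.201. The command to look at is simply:)

    Feta# show ip mroute 232.100.100.1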

A whole new world if you’ve never played with this stuff before.  But… at the same time… there really are only a few moving parts to it.  So I always use MVPN/Rosen draft when tossing together an SP cloud providing MVPN.

This blog, part 6, is the last of this blog series. In case you didn’t get your geek on enough 🙂 here is a sniffer trace capturing the creation of the PIM 232.100.100.1 multicast distribution tree… as well as the GRE-encapsulated multicast traffic that goes over it.

[Packet capture: mcast-in-cloud.pcap]

Hope you had great fun playing in the lab together!



Categories: MPLS, Multicast
