Inter-AS Option AB: Fun in the Lab


Inter-AS Option AB

where the

data traffic uses the VRF interfaces (or sub-interfaces) and the

control plane (BGP VPNv4) uses the global interfaces (or sub-interfaces).


 

[Image: interAS_big – Inter-AS Option AB lab topology]

Inter-AS Option AB has been around for a while. What are the benefits of it? Let’s just quote from a URL already out there.


 “Benefits of MPLS VPN—Inter-AS Option AB

The MPLS VPN—Inter-AS Option AB feature provides the following benefits for service providers:

  • Network configuration can be simplified because only one BGP session is configured for each VRF on the ASBR.
  • One BGP session reduces CPU utilization.
  • Networks can be scaled because a single MP-BGP session, which is enabled globally on the router, reduces the number of sessions required by multiple VPNs, while continuing to keep VPNs isolated and secured from each other.
  • IP QoS functions between ASBR peers are maintained for customer SLAs.
  • Dataplane traffic is isolated on a per-VRF basis for security purposes”

http://www.cisco.com/c/en/us/td/docs/ios/mpls/configuration/guide/15_0s/mp_15_0s_book/mp_vpn_ias_optab.html


 

 

 

 

 

 

 

 

FUN IN THE LAB

  1. Create the VRFs (HR and PC) in Cowbird and Whiteduck
  2. Configure Whiteduck’s Trunk to the Traffic Generator
  3. Configure Cowbird’s Trunk to the Traffic Generator
  4. Configure Whiteduck’s Trunk to Cowbird
  5. Configure Cowbird’s Trunk to Whiteduck
  6. Configure BGP VPNv4 between Whiteduck and Cowbird
  7. Look at Control Plane and Data (Forwarding) Plane

 

1) Create the VRFs (HR and PC) in Cowbird and Whiteduck

[Image: cowbird_vrfs – VRF definitions (HR and PC) on Cowbird]

Above are the VRF definitions as configured on Cowbird – an ASR920.  For those familiar with BGP VPNv4 and VRFs, the RD, RTs, and address-family IPv4 being defined are “standard”.  The thing that is different with Inter-AS Option AB is the “inter-as-hybrid next-hop”.  As you can see from the configs above and the diagram, Cowbird’s VRF definition for HR points to Whiteduck’s VRF HR IP address, 101.101.101.1.  Similarly, Cowbird’s VRF definition for PC points to Whiteduck’s VRF PC IP address, 102.102.102.1.
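In case the screenshot doesn’t come through, here is a rough sketch of what that looks like on Cowbird. The RD and route-target values are placeholders I made up (the real ones are in the image above); the inter-as-hybrid next-hop addresses are the ones called out in the text.

! RD / route-target values below are placeholders
vrf definition HR
 rd 65000:11
 route-target export 65000:11
 route-target import 65000:11
 address-family ipv4
  ! the Option AB piece: point the hybrid next-hop at Whiteduck's VRF HR address
  inter-as-hybrid next-hop 101.101.101.1
 exit-address-family
!
vrf definition PC
 rd 65000:12
 route-target export 65000:12
 route-target import 65000:12
 address-family ipv4
  ! and at Whiteduck's VRF PC address
  inter-as-hybrid next-hop 102.102.102.1
 exit-address-family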

Given this, it should not be a surprise if I tell you that Whiteduck’s VRF definitions for HR and PC are similar.

[Image: whiteduck_vrfs – VRF definitions (HR and PC) on Whiteduck]

2) Configure Whiteduck’s Trunk to the Traffic Generator

[Image: whiteduck_tg – Whiteduck’s connection to the traffic generator]

Whiteduck’s Gig0/0/2 is connected to a Spirent TestCenter.  Using sub-interfaces, trunk this port to have vlan 11 in VRF HR and vlan 12 in VRF PC.
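In sketch form (the actual config is in the screenshot below), it’s something like this; the IP addresses here are placeholders, not the lab’s real addressing:

interface GigabitEthernet0/0/2.11
 description VRF HR toward the Spirent (vlan 11)
 encapsulation dot1Q 11
 vrf forwarding HR
 ! placeholder address
 ip address 192.0.2.1 255.255.255.0
!
interface GigabitEthernet0/0/2.12
 description VRF PC toward the Spirent (vlan 12)
 encapsulation dot1Q 12
 vrf forwarding PC
 ! placeholder address
 ip address 198.51.100.1 255.255.255.0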

 

[Image: whiteduck_002_tg – Whiteduck Gig0/0/2 sub-interface configuration]

3) Configure Cowbird’s Trunk to the Traffic Generator

[Image: cowbird_tg – Cowbird’s connection to the traffic generator]

Cowbird’s Gig0/0/11 is connected to a Spirent TestCenter.  Trunk this port to have vlan 11 in VRF HR and vlan 12 in VRF PC.

[Image: cowbird_0011_tg – Cowbird Gig0/0/11 EVC configuration]

What you see above is called an Ethernet Virtual Circuit (EVC) configuration.  This is actually not a new thing and has been around for quite some time now.  My first experience with these was years ago when the 7600 ES+ line cards came out.
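If the screenshot doesn’t render, here is roughly what an EVC-style version of Cowbird’s Gig0/0/11 might look like. The service-instance / bridge-domain / BDI pattern is the point; I’m assuming the BDI addresses sit in the 9.22.11.0 and 9.22.12.0 networks we chase later, so the host addresses and masks below are my assumption:

interface GigabitEthernet0/0/11
 service instance 11 ethernet
  encapsulation dot1q 11
  rewrite ingress tag pop 1 symmetric
  bridge-domain 11
 !
 service instance 12 ethernet
  encapsulation dot1q 12
  rewrite ingress tag pop 1 symmetric
  bridge-domain 12
!
interface BDI11
 vrf forwarding HR
 ip address 9.22.11.1 255.255.255.0
!
interface BDI12
 vrf forwarding PC
 ip address 9.22.12.1 255.255.255.0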


 

What is an EVC?

“Ethernet virtual circuits (EVCs) define a Layer 2 bridging architecture that supports Ethernet services. An EVC is defined by the Metro-Ethernet Forum (MEF) as an association between two or more user network interfaces that identifies a point-to-point or multipoint-to-multipoint path within the service provider network. An EVC is a conceptual service pipe within the service provider network. A bridge domain is a local broadcast domain that exists separately from VLANs.”

http://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst6500/ios/12-2SY/configuration/guide/sy_swcg/ethernet_virtual_connection.html#wp1002578


 

4) Configure Whiteduck’s Trunk to Cowbird

[Image: middle – the link between Whiteduck and Cowbird]

On Whiteduck we are going to make the physical interface toward Cowbird a trunk, again by sub-interfacing it, this time with vlan 10 in the global (default VRF) routing table, vlan 101 in VRF HR, and vlan 102 in VRF PC.

[Image: whiteduck_middle – Whiteduck’s sub-interface configuration toward Cowbird]

Looks all pretty normal.  Just like the sub-interfaces on Whiteduck towards the Spirent Traffic Generator.  Well… except for that “mpls bgp forwarding” command.  Truth is I didn’t actually type that in.  It got configured automatically later, when I was on Whiteduck and configured Cowbird’s 10.32.10.2 IP address as an “inter-as-hybrid” BGP VPNv4 peer.  So it protected me from leaving an incomplete configuration that wouldn’t work.  Yes… Cowbird did the same thing for me, as we shall see in its configuration below.
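For reference, a rough sketch of Whiteduck’s side of that trunk. The physical interface number and the subnet masks are my assumptions (and I’m assuming Whiteduck is 10.32.10.1 since Cowbird is 10.32.10.2); the VRF addresses 101.101.101.1 and 102.102.102.1 and the auto-installed “mpls bgp forwarding” come from the configuration above:

! interface number is a placeholder for Whiteduck's actual link to Cowbird
interface GigabitEthernet0/0/1.10
 description global (default VRF) - vlan 10
 encapsulation dot1Q 10
 ip address 10.32.10.1 255.255.255.0
 ! installed automatically when the inter-as-hybrid VPNv4 peer was configured
 mpls bgp forwarding
!
interface GigabitEthernet0/0/1.101
 description VRF HR - vlan 101
 encapsulation dot1Q 101
 vrf forwarding HR
 ip address 101.101.101.1 255.255.255.0
!
interface GigabitEthernet0/0/1.102
 description VRF PC - vlan 102
 encapsulation dot1Q 102
 vrf forwarding PC
 ip address 102.102.102.1 255.255.255.0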

5) Configure Cowbird’s Trunk to Whiteduck

[Image: middle – the link between Whiteduck and Cowbird]

[Image: cowbird_middle2 – Cowbird’s EVC/BDI configuration toward Whiteduck]

As we can see, EVCs are used again on Cowbird’s interface to make its physical connection with Whiteduck a trunk port.  We can also see, again, the “mpls bgp forwarding” that got put on the layer 3 portion of the global (default VRF) interface.  Again, I had not actually typed this in.  It was added when I configured the BGP VPNv4 peer with Whiteduck on Cowbird as an “inter-as hybrid” peer.
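Sketched out, Cowbird’s side might look something like the below. The physical port number, the masks, and Cowbird’s VRF-side addresses (I’m guessing 101.101.101.2 and 102.102.102.2 to pair with Whiteduck) are assumptions; the 10.32.10.2 global address and the auto-installed “mpls bgp forwarding” are from the article:

! physical port is a placeholder for Cowbird's actual link to Whiteduck
interface GigabitEthernet0/0/0
 service instance 10 ethernet
  encapsulation dot1q 10
  rewrite ingress tag pop 1 symmetric
  bridge-domain 10
 !
 service instance 101 ethernet
  encapsulation dot1q 101
  rewrite ingress tag pop 1 symmetric
  bridge-domain 101
 !
 service instance 102 ethernet
  encapsulation dot1q 102
  rewrite ingress tag pop 1 symmetric
  bridge-domain 102
!
interface BDI10
 ! global (default VRF) side of the link
 ip address 10.32.10.2 255.255.255.0
 ! installed automatically by the inter-as-hybrid VPNv4 peering
 mpls bgp forwarding
!
interface BDI101
 vrf forwarding HR
 ! assumed address, paired with Whiteduck's 101.101.101.1
 ip address 101.101.101.2 255.255.255.0
!
interface BDI102
 vrf forwarding PC
 ! assumed address, paired with Whiteduck's 102.102.102.1
 ip address 102.102.102.2 255.255.255.0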

 

6) Configure BGP VPNv4 between Whiteduck and Cowbird

So what exactly does the BGP look like?

Looking at the two BGP configs below, we see:

  • No BGP peering between the two routers over the HR VRF
  • No BGP peering between the two routers over the PC VRF
  • “Typical” BGP VPNv4 commands (activate & send-community extended)
  • The “inter-as hybrid” command on the neighbor statement.  This is the new thing.  This is what “triggers” the install of the “mpls bgp forwarding” command on the global (default VRF) layer 3 interface between these two routers (a rough sketch of Whiteduck’s side follows this list).
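Here is a minimal sketch of Whiteduck’s side of that peering. The AS numbers are placeholders (the real ones are in the screenshots below); the neighbor address and the inter-as-hybrid knob are the parts that matter:

! AS numbers are placeholders
router bgp 65001
 neighbor 10.32.10.2 remote-as 65002
 !
 address-family vpnv4
  neighbor 10.32.10.2 activate
  neighbor 10.32.10.2 send-community extended
  ! the Option AB piece; this is also what triggers "mpls bgp forwarding"
  ! on the global interface toward the peer
  neighbor 10.32.10.2 inter-as-hybrid
 exit-address-family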

[Image: whiteduck_bgp – Whiteduck’s BGP configuration]

Cowbird’s BGP is pretty similar to Whiteduck’s.

[Image: cowbird_bgp – Cowbird’s BGP configuration]

7) Look at Control Plane and Data (Forwarding) Plane

Since you might be more familiar with sub-interfaces than with EVCs and BDIs, we will go to Whiteduck to look at what all this looks like.

Let’s focus specifically on the control plane and the forwarding plane on Whiteduck as they relate to getting to Cowbird’s LAN interfaces for the VRFs (the show commands used are sketched just after the list below).

  • VRF HR:
    • 9.22.11.0
  • VRF PC:
    • 9.22.12.0
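For anyone following along at home, these are the sort of commands the next few screenshots come from (the exact variants may differ a bit by platform):

show bgp vpnv4 unicast all
show bgp vpnv4 unicast all labels
show ip cef vrf HR 9.22.11.0
show ip cef vrf PC 9.22.12.0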

 

CONTROL PLANE

First let’s look at the control-plane on Whiteduck.

[Image: whiteduck_bgpva – BGP VPNv4 output on Whiteduck]

Well, that seems kinda odd.  The next hop for 9.22.11.0 is 10.32.10.2, which is in the global (default VRF).  Look… the next hop for 9.22.12.0 is also 10.32.10.2.

If this were “just” normal BGP VPNv4, this would be correct.  The routes would be learned via the control plane over the global (default VRF) through BGP VPNv4, and the traffic would also forward over that same physical interface with labels.  So if this were NOT Inter-AS Option AB, our next step would probably be to go look at the BGP labels.

[Image: whiteduck_bgpva_labels – BGP VPNv4 labels output on Whiteduck]

Again… IF this were “traditional” BGP VPNv4 and NOT Inter-AS Option AB, we would read the above and still assume that to get to 9.22.11.0 or 9.22.12.0 we need to (according to the control plane) slap a label on the packet and send it over the global interface.  What we see above are the BGP label instructions that Cowbird is sending Whiteduck in the control plane:

label 25 with next-hop 10.32.10.2 to get to 9.22.11.0

label 23 with next-hop 10.32.10.2 to get to 9.22.12.0
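Rendered roughly the way “show bgp vpnv4 unicast all labels” prints it (the RDs here are placeholders, and the in-label column depends on what Whiteduck allocates locally; only the next hop and the out-labels 25 and 23 come from the output above):

   Network          Next Hop        In label/Out label
Route Distinguisher: 65000:11 (HR)
   9.22.11.0        10.32.10.2      nolabel/25
Route Distinguisher: 65000:12 (PC)
   9.22.12.0        10.32.10.2      nolabel/23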

But this is Inter-AS Option AB.


 

Inter-AS Option AB

where the

data traffic uses the VRF interfaces (or sub-interfaces) and the

control plane (BGP VPNv4) uses the global interfaces (or sub-interfaces).


 

DATA PLANE

So let’s look at the forwarding plane.

[Image: whiteduck_cef – CEF forwarding output on Whiteduck]

Looks great!

Hope you had “fun in the lab” with me.  🙂

  • Gabor Adam

    Hi Fish, the “mpls bgp forwarding” command is triggered by your VPNv4 eBGP connection over a non-MPLS-enabled link.
    “mpls ip” is not enabled on the inter-provider interfaces, but the ASBRs won’t drop the received MPLS-encapsulated packets (EtherType 0x8847) on the global inter-provider link, thanks to the command “mpls bgp forwarding”. This feature is needed for CsC, Option B, Option C, and Option AB Shared Interface Forwarding.

    • I just think it is kinda interesting ’cause it is not triggered if I use iBGP VPNv4. The packet itself should look the same regardless of whether the label is sent from an iBGP VPNv4 peer or an eBGP VPNv4 peer. I’ve used this option with CsC before. And… admittedly… this is traffic running over DMVPN… over MPLS. So the true internet carrier will never see the label. Just the DMVPN crypto fun. The DMVPN hub and spoke are both also the BGP VPNv4 peers. I just kinda found it more of a surprise. 🙂 Love fun in the lab.

      • gadam1

        There are some crazy designs with DMVPN :)) I saw the following solution by a customer last year: MPLS over encrypted DMVPN over the MPLS backbone of the SP with CsC.
        It is not a problem, if you don’t have to find the cause of a QoS marking issue 🙂

        “mpls bgp forwarding”:
        It is only triggered if we use eBGP VPNv4, or labeled eBGP unicast (SAFI 4) for label exchange over a non-MPLS enabled interface.
        If I use an eBGP VPNv4 connection, the system assumes that the 2 ASes (connected by this VPNv4 peering) have their own separate IGP/LDP setup and the network admins don’t want to run IGP and LDP on the Inter-AS link.
        –> Without “mpls ip”, the mpls protocol won’t be enabled/allowed on the Inter-AS link –> so the command “mpls bgp forwarding” will be needed.

  • Nage

    Hi Fish,

    You explained this technology in a simple manner; I appreciate it. I have worked on / done INE labs on other options, CsC and U-MPLS. Now, EVC is not my strong suit 🙁 Do you have any good material on that particular topic?

    Thanks Again