Wednesday, August 20, 2014

DMVPN -- MPLS over DMVPN? Oh yeah.


Get your nerd hats on! We're freaking pushing labels over our DMVPN network, like a boss. As you might have gathered by now, I'm a little excited. The only downer here is that your label switch path has to be hub-to-spoke, so no more spoke-to-spoke tunnels. If you want labels between your spokes, per Cisco documentation, traffic flow absolutely has to be spoke-hub-spoke. Calm down, dry those tears, sunshine... because this is still awesome. I can hear you all now: "But Jon! One of the best things about DMVPN is building dynamic tunnels between spokes!" Shut up, Debbie Downer. We do lose dynamic tunnels, but we gain full-blown PEs connected only via DMVPN.


Ok, enough build up. How does this work? Surprisingly easy, if you've configured MPLS before... this isn't going to be super exciting. First things first, here's our topology:



All spokes are connected back to the hub via serial links in the 192.168.zy.x/30 space (where z = lower router number and y = higher router number). For example, the link between R1-Hub and R2-Spoke is 192.168.12.0/30. Then we have Loopback0 configured on each router in the 192.168.x.x/32 space; this is our tunnel source. All traffic supporting the DMVPN backhaul is routed via OSPF. Finally, for routing within the DMVPN cloud we're using good old reliable EIGRP. Here are our base DMVPN configurations.


R1-Hub
interface Tunnel100
 ip address 10.10.100.1 255.255.255.0
 no ip redirects
 no ip split-horizon eigrp 100
 ip nhrp map multicast dynamic
 ip nhrp network-id 100
 mpls ip
 tunnel source Loopback0
 tunnel mode gre multipoint

!
interface Loopback100
 description BGP peering over DMVPN
 ip address 10.10.1.1 255.255.255.255
!
router eigrp 100
 network 10.0.0.0

R2/R3/R4
 interface Tunnel100
 ip address 10.10.100.x 255.255.255.0
 no ip redirects
 ip nhrp map multicast 192.168.1.1
 ip nhrp map 10.10.100.1 192.168.1.1
 ip nhrp network-id 100
 ip nhrp nhs 10.10.100.1
 mpls ip
 tunnel source Loopback0
 tunnel mode gre multipoint

!
interface Loopback100
 description BGP peering over DMVPN
 ip address 10.10.x.x 255.255.255.255

!
router eigrp 100
 network 10.0.0.0
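
For reference, the OSPF underlay that makes those tunnel sources reachable is nothing fancy; here's a sketch for R1-Hub. The serial interface numbering here is my assumption, so adjust it to match your own topology:

R1-Hub (underlay sketch; serial numbering assumed)
interface Loopback0
 ip address 192.168.1.1 255.255.255.255
!
interface Serial1/0
 description To R2-Spoke
 ip address 192.168.12.1 255.255.255.252
!
router ospf 1
 network 192.168.0.0 0.0.255.255 area 0

The spokes mirror this: Loopback0 in the 192.168.x.x/32 space, the /30 serial link back to the hub, and both advertised into OSPF.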
Pretty simple so far, right? Alright, let's get some labels in here.

All Routers
mpls ip
mpls ldp router-id Loopback100
!
interface Tunnel100
 mpls ip
!

I know what you're thinking: "Jon, here's $5... because you just blew my mind." Well, thank you, and I do accept donations. So let's check the output on R1:


*Aug 21 01:21:12.009: %LDP-5-NBRCHG: LDP Neighbor 10.10.2.2:0 (1) is UP
*Aug 21 01:21:13.005: %LDP-5-NBRCHG: LDP Neighbor 10.10.3.3:0 (2) is UP
*Aug 21 01:21:14.106: %LDP-5-NBRCHG: LDP Neighbor 10.10.4.4:0 (3) is UP
Sweet, sweet success, but do we have labels? The best place to check is on one of the spokes; I'll look at R4 (he seems lonely).


R4-MPLS#show mpls forwarding-table | ex No Label
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop   
Label      Label      or Tunnel Id     Switched      interface             
16         Pop Label  10.10.1.1/32     0             Tu100      10.10.100.1
17         16         10.10.2.2/32     0             Tu100      10.10.100.1
18         17         10.10.3.3/32     0             Tu100      10.10.100.1
Awesome! Don't ignore the next hop; remember, that's the secret sauce here. Since we left "no ip next-hop-self eigrp 100" out of our hub config, we're forcing all traffic between spokes to route through the hub. As I demonstrate in the video, if we allow the dynamic tunnels, this all breaks. So it would seem we have a functioning LSP between spokes; let's get a VRF running and go ping crazy! You don't have to configure BGP on the hub, but I am, and I'll configure the spokes as route-reflector clients to minimize spoke configuration.
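
For the curious, the knob that would break this is just one line on the hub's tunnel. I'm showing it purely for illustration; leave it off if you want your LSPs intact:

R1-Hub (do NOT apply for MPLSoDMVPN)
interface Tunnel100
 no ip next-hop-self eigrp 100

With that command in place, EIGRP preserves each spoke's tunnel IP as the next hop, spokes build direct dynamic tunnels to each other, and your spoke-to-spoke traffic bypasses the hub-anchored label switch path.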


R1-Hub
router bgp 65000
 bgp log-neighbor-changes
 neighbor 10.10.2.2 remote-as 65000
 neighbor 10.10.2.2 update-source Loopback100
 neighbor 10.10.2.2 send-community both
 neighbor 10.10.3.3 remote-as 65000
 neighbor 10.10.3.3 update-source Loopback100
 neighbor 10.10.3.3 send-community both
 neighbor 10.10.4.4 remote-as 65000
 neighbor 10.10.4.4 update-source Loopback100
 neighbor 10.10.4.4 send-community both
 !
 address-family vpnv4
  neighbor 10.10.2.2 activate
  neighbor 10.10.2.2 send-community extended
  neighbor 10.10.2.2 route-reflector-client
  neighbor 10.10.3.3 activate
  neighbor 10.10.3.3 send-community extended
  neighbor 10.10.3.3 route-reflector-client
  neighbor 10.10.4.4 activate
  neighbor 10.10.4.4 send-community extended
  neighbor 10.10.4.4 route-reflector-client
 exit-address-family

Spokes
router bgp 65000
 bgp log-neighbor-changes
 neighbor 10.10.1.1 remote-as 65000
 neighbor 10.10.1.1 update-source Loopback100
 neighbor 10.10.1.1 send-community both
 !
 address-family vpnv4
  neighbor 10.10.1.1 activate
  neighbor 10.10.1.1 send-community extended
 exit-address-family
 !
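
Before moving on, it's worth a quick sanity check that the vpnv4 sessions actually came up. A couple of standard show commands (outputs omitted here):

R1-Hub
show bgp vpnv4 unicast all summary
!
Spokes
show bgp vpnv4 unicast all neighbors 10.10.1.1 | i BGP state

On the hub you should see all three spoke loopbacks with a prefix count rather than a state, and the spoke command should report "Established".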

Now that we have BGP up and running, we'll configure a basic VRF, assign a loopback to said VRF, and redistribute connected routes under the VRF's ipv4 address-family.


All Spokes
ip vrf MPLS
 rd 65000:1
 route-target export 65000:65000
 route-target import 65000:65000

!
router bgp 65000
 address-family ipv4 vrf MPLS
  redistribute connected
 exit-address-family
R2
interface Loopback1001
 ip vrf forwarding MPLS
 ip address 172.16.2.1 255.255.255.0
!

R3
interface Loopback1001
 ip vrf forwarding MPLS
 ip address 172.16.3.1 255.255.255.0
!

R4
interface Loopback1001
 ip vrf forwarding MPLS
 ip address 172.16.4.1 255.255.255.0
!
Last but not least, let's test from R3.


R3-MPLS#show ip bgp vpnv4 vrf MPLS | b Route
Route Distinguisher: 65000:1 (default for vrf MPLS)
 *>i 172.16.2.0/24    10.10.2.2                0    100      0 ?
 *>  172.16.3.0/24    0.0.0.0                  0         32768 ?
 *>i 172.16.4.0/24    10.10.4.4                0    100      0 ?

!
R3-MPLS#show ip route vrf MPLS bgp | b Gateway
Gateway of last resort is not set

      172.16.0.0/16 is variably subnetted, 4 subnets, 2 masks
B        172.16.2.0/24 [200/0] via 10.10.2.2, 00:31:16
B        172.16.4.0/24 [200/0] via 10.10.4.4, 00:31:11

!
R3-MPLS#ping vrf MPLS 172.16.2.1 source lo1001
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.2.1, timeout is 2 seconds:
Packet sent with a source address of 172.16.3.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 19/19/20 ms

!
R3-MPLS#traceroute vrf MPLS 172.16.2.1
Type escape sequence to abort.
Tracing the route to 172.16.2.1
VRF info: (vrf in name/id, vrf out name/id)
  1 10.10.100.1 [MPLS: Labels 16/24 Exp 0] 20 msec 20 msec 20 msec
  2 172.16.2.1 20 msec 19 msec 20 msec


Well, that's it, everybody! MPLSoDMVPN! See the attached video if you want to hear me talk really fast about doing everything you just read.





Monday, August 4, 2014

DMVPN - Part 2, BGP with dynamic neighbors

Oh man... BGP dynamic neighbors. This is a freaking cool way of setting up BGP on a device like a hub router where you're expecting numerous BGP neighbors. Before dynamic neighbors, I remember configuring my hub router with peer groups and an insane amount of syntax, since we had ~30-40 spokes. No more, my friends. In this post we'll look not only at how to configure iBGP for DMVPN routing, but also at using dynamic neighbors to dramatically reduce the amount of configuration on the hub.

Before I jump into the config, you might wonder "why use BGP for DMVPN routing?" Simple answer, my friend: it's awesome. Since most moderately sized organizations have BGP running anyway (think about your MPLS; unless you're super cool and have a full-on VPLS, you're peering with your MPLS provider, and more than likely using BGP), using BGP for DMVPN allows a relatively seamless integration of the DMVPN cloud into your organization. I used to preach about "consistent BGP information," because that model sincerely does allow you to build more stable and scalable networks. Also, per Cisco, distance-vector routing protocols just play nicer with DMVPN's hub-and-spoke model. So, enough with the sales pitch, let's get into it. Here's our topology:


We'll configure the spokes first, since there's nothing too exciting happening there. *This post assumes you already have DMVPN up and running, see DMVPN Part 1 for that*


Spoke1
...
conf t
!
router bgp 65000
 neighbor 172.16.10.1 remote-as 65000
 neighbor 172.16.10.1 send-community
 network 172.17.10.10 mask 255.255.255.255

That's it... rinse and repeat on Spokes 2 and 3 (just change your network statement). Now here's the magic: configuring the hub. Dynamic neighbors aside, there's one key feature we're really concerned with on the hub... route-reflector-client. Specifically, telling the hub that all DMVPN peers are RR clients. Why? Well, young padawans, what's the rule about iBGP? BGP expects that internal peerings are configured in a full mesh, and to prevent routing loops, it will not advertise iBGP-learned prefixes to other iBGP peers... think of this as BGP's split horizon. That's not going to work for us at all, so we're effectively going to turn it off by telling the hub our DMVPN peers are RR clients. Also note the bgp listen syntax... we'll talk about that bit next.
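
For example, Spoke2 ends up looking like this (its /32, 172.17.20.20, is inferred from the routing-table output further down):

Spoke2
router bgp 65000
 neighbor 172.16.10.1 remote-as 65000
 neighbor 172.16.10.1 send-community
 network 172.17.20.20 mask 255.255.255.255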

Hub
...
conf t
!
router bgp 65000
 neighbor DMVPN peer-group
 neighbor DMVPN remote-as 65000
 neighbor DMVPN route-reflector-client
 bgp listen range 172.16.10.0/24 peer-group DMVPN
 network 172.17.1.1 mask 255.255.255.255

That's it! Calling "bgp listen range x.x.x.x/x peer-group abcd" is the entire configuration for dynamic neighbors. The default behavior allows for 100 dynamic peers, but this can be increased to 5000 with "bgp listen limit 5000". Check out the BGP summary table on the hub; so informative:

HUB#show ip bgp summary | b ^Neighbor
Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
*172.16.10.10   4        65000      30      30       11    0    0 00:22:24        1
*172.16.10.20   4        65000      29      34       11    0    0 00:22:22        1
*172.16.10.30   4        65000      29      34       11    0    0 00:22:24        1
* Dynamically created based on a listen range command
Dynamically created neighbors: 3, Subnet ranges: 1

BGP peergroup DMVPN listen range group members:
  172.16.10.0/24

Total dynamically created neighbors: 3/(5000 max), Subnet ranges: 1
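
For reference, the limit bump reflected in that "5000 max" line is a single command under the BGP process (the default cap is 100):

Hub
router bgp 65000
 bgp listen limit 5000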



So you can see "Total dynamically created neighbors: 3/(5000 max)"; I did bump the maximum allowed dynamic neighbors up to 5000. Also note the "*" next to our neighbors, indicating they were learned dynamically. Alright, the last things we should look at are a couple of pings showing spoke-to-spoke communication, our routing table (since iBGP does not update next-hop information), and our DMVPN neighbor table after said pings. We'll test between Spoke 2 and Spoke 3.


SPOKE2#ping 172.17.30.30
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.17.30.30, timeout is 2 seconds:
!!!!!

!
SPOKE2#show ip route bgp | b ^Gateway
Gateway of last resort is not set

      172.17.0.0/32 is subnetted, 4 subnets
B        172.17.1.1 [200/0] via 172.16.10.1, 00:28:58
B        172.17.10.10 [200/0] via 172.16.10.10, 00:28:58
B        172.17.30.30 [200/0] via 172.16.10.30, 00:28:58

!

SPOKE2#show dmvpn | b ^ # Ent
 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
 ----- --------------- --------------- ----- -------- -----
     1 1.1.1.1         172.16.10.1     UP 00:38:05     S
     1 1.1.1.10        172.16.10.10    UP 00:28:34     D
     1 1.1.1.30        172.16.10.30    UP 00:00:05     D

Well, that's all there is to it! See the linked video for a walk-through of this post, and a quick blurb on design considerations.