DMVPN Phase 2
In the first post of this series, DMVPN Phase 1, we explored the DMVPN concept and the configuration parameters pertinent to Phase 1. Although the Phase 2 parameters are similar, the actual traffic flows and the routing configuration have changed.
The major differences from Phase 1 are:
- Spoke-to-spoke tunnels are now possible – traffic destined for other spokes no longer needs to traverse the hub
- Summarisation/default routing is not allowed at the hub
Before we dive into the Phase 2 configuration, let’s quickly refresh our view of the underlay network as per the following diagram:
So with the underlay in mind, the following diagram from Phase 1 depicts the same overlay network we intend to create.
The configuration of the HUB is identical to that of Phase 1, therefore the following code snippet is a copy and paste from the Phase 1 blog post.
```
interface Tunnel0
 ip address 192.168.250.1 255.255.255.248
 ip nhrp authentication cisco
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 ip nhrp holdtime 300
 tunnel source Serial1/0
 tunnel mode gre multipoint
 tunnel key 500
 no shut
```
The spoke configuration is where the differences between Phase 1 and Phase 2 start to become apparent. In Phase 1 we statically defined the tunnel destination, and the tunnel was point-to-point rather than point-to-multipoint. Keeping those two parameters in mind, the new spoke configuration is as follows:
1interface Tunnel0 2ip address 192.168.250.2 255.255.255.248 3no ip redirects 4ip nhrp authentication cisco 5ip nhrp map 184.108.40.206 192.168.250.1 6ip nhrp map multicast 220.127.116.11 7ip nhrp map 192.168.250.1 18.104.22.168 8ip nhrp network-id 1 9ip nhrp holdtime 300 10ip nhrp nhs 192.168.250.1 11tunnel source Serial1/1 12tunnel mode gre multipoint 13tunnel key 500
If we take the same boilerplate configuration and just change the IP address applied to the tunnel interface, it can be used on all our spokes, just as before.
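As a sketch, R4’s tunnel configuration would differ only in the tunnel IP address (192.168.250.4, as seen later in the NHRP debug output). The tunnel source interface shown here is an assumption that depends on your underlay, and <HUB-NBMA-IP> is a placeholder for the hub’s real transport address:

```
interface Tunnel0
 ip address 192.168.250.4 255.255.255.248
 no ip redirects
 ip nhrp authentication cisco
 ip nhrp map 192.168.250.1 <HUB-NBMA-IP>
 ip nhrp map multicast <HUB-NBMA-IP>
 ip nhrp network-id 1
 ip nhrp holdtime 300
 ip nhrp nhs 192.168.250.1
 tunnel source Serial1/1
 tunnel mode gre multipoint
 tunnel key 500
```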
From a configuration perspective there hasn’t been a tremendous difference from Phase 1 up to this point: we still need to configure tunnel interfaces and set a Next Hop Server. However, instead of statically configuring the tunnel destination (the hub router), that setting is omitted. The reason is that we want the tunnels to come up dynamically, thereby allowing spoke-to-spoke tunnels.
Let’s see what this configuration does from a packet-flow perspective. Just as before, run a traceroute from R4 to R2:
```
R4#tr 192.168.250.2

Type escape sequence to abort.
Tracing the route to 192.168.250.2

  1 192.168.250.1 40 msec 44 msec
    192.168.250.2 64 msec
```
As you can see, the very first packet went to the Hub router. This is because the spoke sends an NHRP resolution request to find the other spoke; after the hub replies, all subsequent packets go directly spoke to spoke, as you can see in the traceroute below:
```
R4#tr 192.168.250.2

Type escape sequence to abort.
Tracing the route to 192.168.250.2

  1 192.168.250.2 20 msec 40 msec 24 msec
```
If you perform a debug ip nhrp on the hub, you will see something similar to the output below:
```
*Mar  1 01:11:05.075: NHRP: Receive Registration Request via Tunnel0 vrf 0, packet size: 81
*Mar  1 01:11:05.079: NHRP: netid_in = 1, to_us = 1
*Mar  1 01:11:05.079: NHRP: NAT-check: matched destination address 22.214.171.124
*Mar  1 01:11:05.079: NHRP: Tu0: Found and skipping dynamic multicast mapping NBMA: 126.96.36.199
*Mar  1 01:11:05.083: NHRP: Attempting to send packet via DEST 192.168.250.2
*Mar  1 01:11:05.083: NHRP: Encapsulation succeeded.  Tunnel IP addr 188.8.131.52
*Mar  1 01:11:05.083: NHRP: Send Registration Reply via Tunnel0 vrf 0, packet size: 101
*Mar  1 01:11:05.087:  src: 192.168.250.1, dst: 192.168.250.2
*Mar  1 01:11:05.087: NHRP: 101 bytes out Tunnel0
R1#
*Mar  1 01:11:07.079: NHRP: Receive Resolution Request via Tunnel0 vrf 0, packet size: 81
*Mar  1 01:11:07.079: NHRP: netid_in = 1, to_us = 0
*Mar  1 01:11:07.079: NHRP: NAT-check: matched destination address 184.108.40.206
*Mar  1 01:11:07.079: NHRP: nhrp_rtlookup yielded Tunnel0
*Mar  1 01:11:07.079: NHRP: Tu0: Found and skipping dynamic multicast mapping NBMA: 220.127.116.11
*Mar  1 01:11:07.079: NHRP: netid_out 1, netid_in 1
*Mar  1 01:11:07.079: NHRP: nhrp_cache_lookup_comp returned 0x65A20494
*Mar  1 01:11:07.079: NHRP: Attempting to send packet via DEST 192.168.250.4
*Mar  1 01:11:07.079: NHRP: Encapsulation succeeded.  Tunnel IP addr 18.104.22.168
*Mar  1 01:11:07.079: NHRP: Send Resolution Reply via Tunnel0 vrf 0, packet size: 109
*Mar  1 01:11:07.079:  src: 192.168.250.1, dst: 192.168.250.4
*Mar  1 01:11:07.079: NHRP: 109 bytes out Tunnel0
*Mar  1 01:11:07.135: NHRP: Receive Resolution Request via Tunnel0 vrf 0, packet size: 81
*Mar  1 01:11:07.139: NHRP: netid_in = 1, to_us = 0
*Mar  1 01:11:07.139: NHRP: NAT-check: matched destination address 22.214.171.124
*Mar  1 01:11:07.139: NHRP: nhrp_rtlookup yielded Tunnel0
*Mar  1 01:11:07.143: NHRP: Tu0: Found and skipping dynamic multicast mapping NBMA: 126.96.36.199
*Mar  1 01:11:07.143: NHRP: netid_out 1, netid_in 1
*Mar  1 01:11:07.143: NHRP: nhrp_cache_lookup_comp returned 0x65883B44
*Mar  1 01:11:07.147: NHRP: Attempting to send packet via DEST 192.168.250.2
*Mar  1 01:11:07.147: NHRP: Encapsulation succeeded.  Tunnel IP addr 188.8.131.52
*Mar  1 01:11:07.147: NHRP: Send Resolution Reply via Tunnel0 vrf 0, packet size: 109
*Mar  1 01:11:07.147:  src: 192.168.250.1, dst: 192.168.250.2
*Mar  1 01:11:07.151: NHRP: 109 bytes out Tunnel0
```
Looking at the figure above, the red arrow depicts the first packet from R4 to R2, and the green arrow depicts all subsequent flows between the pair of routers. To understand this process in even further detail, let’s evaluate the Forwarding Information Base (FIB) – the CEF entry for 192.168.250.2 on R4:
```
R4#sh ip cef 192.168.250.2
192.168.250.2/32, version 22, epoch 0, connected
0 packets, 0 bytes
  via 192.168.250.2, Tunnel0, 0 dependencies
    next hop 192.168.250.2, Tunnel0
    valid adjacency
```
As you can see from the CEF table, R4 has dynamically installed a host route for R2’s tunnel address directly into its FIB as a more specific match. Compare this with the unresolved R3 tunnel address, 192.168.250.3, for example:
```
R4#sh ip cef 192.168.250.3
192.168.250.0/29, version 17, epoch 0, attached, connected
0 packets, 0 bytes
  via Tunnel0, 0 dependencies
    valid punt adjacency
```
You can see that for R2 a /32 host route is entered into the FIB, whereas for the unresolved R3 router only the /29 network route is in place. However, if you were to ping R3, the router would perform an NHRP resolution and then install the more exact match into its CEF table (FIB).
The process described above applies to any route at any of the spokes; the only difference is which routing protocol populates the RIB for you.
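To watch this resolution process happen, you can inspect the NHRP cache on a spoke. Both of the following commands exist on IOS, though their exact output varies by software version, so no sample output is shown here:

```
R4#show ip nhrp
R4#show dmvpn
```

A resolved spoke appears as a dynamic NHRP cache entry mapping its tunnel address to its NBMA address; entries age out after the configured holdtime unless traffic refreshes them.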
Routing – OSPF
Again for DMVPN Phase 2, I have elected to use OSPF as the routing protocol for demonstration, although this time we need to make some subtle changes to how OSPF is configured, which has consequences for how it operates.
First, set the OSPF network type to broadcast instead of point-to-multipoint as per the previous example. This is because the point-to-multipoint network type rewrites the next hop of spoke routes to the hub’s tunnel address, so traffic would still traverse the hub instead of going spoke to spoke. The broadcast network type preserves the originating spoke as the next hop, so to get Phase 2 to operate correctly you must configure the network type as broadcast.
From a topology point of view, we want our HUB router to become the Designated Router (DR). The DR will establish an adjacency with all DROther (non-DR/BDR) routers. The DR listens for updates on the AllDRouters multicast address of 224.0.0.6 and floods LSAs to the DROthers, or spoke routers, on the AllSPFRouters address of 224.0.0.5.
So let’s configure OSPF on our HUB:
```
interface Tunnel0
 ip address 192.168.250.1 255.255.255.248
 no ip redirects
 ip nhrp authentication cisco
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 ip nhrp holdtime 300
 ip ospf network broadcast
 ip ospf 1 area 0
 tunnel source Serial1/0
 tunnel mode gre multipoint
 tunnel key 500
end
```
Our Spoke routers will be set up as follows:
```
interface Tunnel0
 ip address 192.168.250.2 255.255.255.248
 no ip redirects
 ip nhrp authentication cisco
 ip nhrp map 18.104.22.168 192.168.250.1
 ip nhrp map multicast 22.214.171.124
 ip nhrp map 192.168.250.1 126.96.36.199
 ip nhrp network-id 1
 ip nhrp holdtime 300
 ip nhrp nhs 192.168.250.1
 ip ospf network broadcast
 ip ospf 1 area 0
 tunnel source Serial1/1
 tunnel mode gre multipoint
 tunnel key 500
end
```
Configuring the OSPF parameters as highlighted above actually creates a unique problem: each of the spokes will try to become the DR, and depending on interface priority or router ID, one of the routers in the DMVPN topology will win the election. This results in partial, not full, connectivity of routers in the DMVPN cloud. This is not the desired outcome; we want the DR to be our Hub, because the Hub router is the only router able to flood LSAs to all of the DROther (spoke) routers. It also ensures predictability and stability for our DMVPN network: adding another spoke should just work without creating problems for the hub or any of the other spokes. Therefore there is one extra piece of configuration required, which is to set the OSPF priority on each of the spokes.
You need to set the OSPF priority of the spoke routers to 0, which ensures none of the spoke routers partake in the election process. This is extremely important to ensure that the Hub router becomes the DR for the topology.
The command to enter under the tunnel interface for each spoke is:
```
interface Tunnel0
 ip ospf priority 0
```
Note: By default Cisco sets the OSPF priority to 1, so there is no need to change the priority to a higher value on the Hub.
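To confirm the election behaved as intended, you can check OSPF neighbour states from the hub. The command exists on IOS (output format varies by version, so none is shown here); with priority 0 configured on the spokes, each neighbour should appear in the FULL/DROTHER state, confirming the hub is the DR:

```
R1#show ip ospf neighbor
R1#show ip ospf interface Tunnel0
```

The second command also reports the interface’s network type and the DR/BDR addresses, which is a quick way to catch a spoke that missed the priority change.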
Once routing is configured, the traffic flow will be the same as depicted above, and the CEF table can be interrogated to verify the forwarding path.
To wrap up, this post highlighted the main advantage of DMVPN Phase 2: it allows spoke-to-spoke tunnels to be created dynamically. However, due to its routing design, it has a few caveats that should be considered.
In regards to OSPF routing design, you could have used the non-broadcast network type, which avoids the DR election issues present with the broadcast network type, but this presents problems with scalability as you need to manually configure all of the neighbours. In addition, if you were to use the point-to-multipoint network type, all traffic would need to traverse the Hub router as seen in DMVPN Phase 1, so it is not advisable. I should also clarify that DMVPN Phase 2 is considered obsolete and only DMVPN Phase 3 should be deployed into production.