DMVPN Phase 2

In the first post of this series, DMVPN Phase 1, we explored the DMVPN concept and the configuration parameters pertinent to Phase 1. Although the Phase 2 parameters are similar, the actual traffic flows and the routing configuration have changed.

So the major differences from Phase 1 are:

  • Spoke-to-spoke tunnels are now possible – traffic destined for other spokes no longer needs to traverse the hub
  • Summarisation/default routing is not allowed at the hub – each spoke must learn specific routes with the remote spoke’s tunnel address as the next hop

Before we dive into the Phase 2 configuration, let’s quickly refresh our view of the underlay network as per the following diagram:

DMVPN - Underlay v1.0

So with the underlay in mind, the following diagram from Phase 1 depicts the same overlay network we intend to create.

DMVPN - Overlay v1.0

HUB Configuration

The configuration of the HUB is identical to that of Phase 1, therefore the following code snippet is a copy and paste from the Phase 1 blog post.

interface Tunnel0
ip address 192.168.250.1 255.255.255.248
ip nhrp authentication cisco
ip nhrp map multicast dynamic
ip nhrp network-id 1
ip nhrp holdtime 300
tunnel source Serial1/0
tunnel mode gre multipoint
tunnel key 500
no shutdown
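
Once the spokes (configured next) start registering, two quick checks on the hub are worth knowing: sh ip nhrp lists the registrations the hub has received, and on IOS releases that include it, sh dmvpn gives a per-peer summary:

R1#sh ip nhrp
R1#sh dmvpn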

Spoke Configuration

The spoke configuration is where the differences between Phase 1 and Phase 2 start to become apparent. In Phase 1 we statically defined the tunnel destination, and the tunnel was point-to-point rather than point-to-multipoint. Keeping those two parameters in mind, the new spoke configuration is as follows:

interface Tunnel0
ip address 192.168.250.2 255.255.255.248
no ip redirects
ip nhrp authentication cisco
ip nhrp map 192.168.250.1 123.144.150.1
ip nhrp map multicast 123.144.150.1
ip nhrp network-id 1
ip nhrp holdtime 300
ip nhrp nhs 192.168.250.1
tunnel source Serial1/1
tunnel mode gre multipoint
tunnel key 500

As before, you can take the same boilerplate configuration and use it on all the spokes, changing only the IP address applied to the tunnel interface (and the tunnel source, if the WAN interface differs).
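
For example, a minimal R3 configuration would be (the tunnel source is assumed to be Serial1/1 here – use whichever interface faces R3’s underlay):

interface Tunnel0
ip address 192.168.250.3 255.255.255.248
no ip redirects
ip nhrp authentication cisco
ip nhrp map 192.168.250.1 123.144.150.1
ip nhrp map multicast 123.144.150.1
ip nhrp network-id 1
ip nhrp holdtime 300
ip nhrp nhs 192.168.250.1
tunnel source Serial1/1
tunnel mode gre multipoint
tunnel key 500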

From a configuration perspective there hasn’t been a tremendous difference from Phase 1 so far: we still configure a tunnel interface and set a Next Hop Server, but the statically configured tunnel destination is omitted and the tunnel mode is multipoint GRE. The reason is that we want the tunnels to come up dynamically, which is what allows spoke-to-spoke tunnels.
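
To make the contrast concrete, the only structural change on each spoke from Phase 1 is swapping the static destination for multipoint GRE (hub NBMA address taken from the earlier diagrams):

! Phase 1 spoke - point-to-point GRE to the hub
tunnel destination 123.144.150.1
!
! Phase 2 spoke - multipoint GRE, destinations resolved via NHRP
tunnel mode gre multipoint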

Let’s see what this configuration does from a packet flow perspective. Just as before, run a traceroute from R4 to R2.

R4#tr 192.168.250.2

Type escape sequence to abort.
Tracing the route to 192.168.250.2

1 192.168.250.1 40 msec 44 msec
192.168.250.2 64 msec

As you can see, the very first packet went via the hub router. This is because the spoke sends an NHRP resolution request to find the other spoke; once the hub replies, all subsequent packets go directly from spoke to spoke, as you can see in the traceroute below:

R4#tr 192.168.250.2

Type escape sequence to abort.
Tracing the route to 192.168.250.2

1 192.168.250.2 20 msec 40 msec 24 msec
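
You can also confirm the resolution on the spoke itself: sh ip nhrp should now hold a dynamic entry mapping 192.168.250.2 to R2’s NBMA address, and on IOS releases that support it, sh dmvpn summarises the peer state:

R4#sh ip nhrp 192.168.250.2
R4#sh dmvpn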

If you perform a debug ip nhrp on the hub, you will see something similar to the output below:

*Mar 1 01:11:05.075: NHRP: Receive Registration Request via Tunnel0 vrf 0, packet size: 81
*Mar 1 01:11:05.079: NHRP: netid_in = 1, to_us = 1
*Mar 1 01:11:05.079: NHRP: NAT-check: matched destination address 17.42.59.1
*Mar 1 01:11:05.079: NHRP: Tu0: Found and skipping dynamic multicast mapping NBMA: 17.42.59.1
*Mar 1 01:11:05.083: NHRP: Attempting to send packet via DEST 192.168.250.2
*Mar 1 01:11:05.083: NHRP: Encapsulation succeeded. Tunnel IP addr 17.42.59.1
*Mar 1 01:11:05.083: NHRP: Send Registration Reply via Tunnel0 vrf 0, packet size: 101
*Mar 1 01:11:05.087: src: 192.168.250.1, dst: 192.168.250.2
*Mar 1 01:11:05.087: NHRP: 101 bytes out Tunnel0
R1#
*Mar 1 01:11:07.079: NHRP: Receive Resolution Request via Tunnel0 vrf 0, packet size: 81
*Mar 1 01:11:07.079: NHRP: netid_in = 1, to_us = 0
*Mar 1 01:11:07.079: NHRP: NAT-check: matched destination address 144.161.34.1
*Mar 1 01:11:07.079: NHRP: nhrp_rtlookup yielded Tunnel0
*Mar 1 01:11:07.079: NHRP: Tu0: Found and skipping dynamic multicast mapping NBMA: 144.161.34.1
*Mar 1 01:11:07.079: NHRP: netid_out 1, netid_in 1
*Mar 1 01:11:07.079: NHRP: nhrp_cache_lookup_comp returned 0x65A20494
*Mar 1 01:11:07.079: NHRP: Attempting to send packet via DEST 192.168.250.4
*Mar 1 01:11:07.079: NHRP: Encapsulation succeeded. Tunnel IP addr 144.161.34.1
*Mar 1 01:11:07.079: NHRP: Send Resolution Reply via Tunnel0 vrf 0, packet size: 109
*Mar 1 01:11:07.079: src: 192.168.250.1, dst: 192.168.250.4
*Mar 1 01:11:07.079: NHRP: 109 bytes out Tunnel0
*Mar 1 01:11:07.135: NHRP: Receive Resolution Request via Tunnel0 vrf 0, packet size: 81
*Mar 1 01:11:07.139: NHRP: netid_in = 1, to_us = 0
*Mar 1 01:11:07.139: NHRP: NAT-check: matched destination address 17.42.59.1
*Mar 1 01:11:07.139: NHRP: nhrp_rtlookup yielded Tunnel0
*Mar 1 01:11:07.143: NHRP: Tu0: Found and skipping dynamic multicast mapping NBMA: 17.42.59.1
*Mar 1 01:11:07.143: NHRP: netid_out 1, netid_in 1
*Mar 1 01:11:07.143: NHRP: nhrp_cache_lookup_comp returned 0x65883B44
*Mar 1 01:11:07.147: NHRP: Attempting to send packet via DEST 192.168.250.2
*Mar 1 01:11:07.147: NHRP: Encapsulation succeeded. Tunnel IP addr 17.42.59.1
*Mar 1 01:11:07.147: NHRP: Send Resolution Reply via Tunnel0 vrf 0, packet size: 109
*Mar 1 01:11:07.147: src: 192.168.250.1, dst: 192.168.250.2
*Mar 1 01:11:07.151: NHRP: 109 bytes out Tunnel0

DMVPN Phase 2 - Packet Path

Looking at the figure above, the red arrow depicts the first packet from R4 to R2, and the green arrow depicts all subsequent flows between the pair of routers. To understand this process in further detail, let’s evaluate the Forwarding Information Base (FIB) – the CEF table – for the IP 192.168.250.2 on R4:

R4#sh ip cef 192.168.250.2
192.168.250.2/32, version 22, epoch 0, connected
0 packets, 0 bytes
via 192.168.250.2, Tunnel0, 0 dependencies
next hop 192.168.250.2, Tunnel0
valid adjacency

As you can see from the CEF table, R4 has dynamically added a host route for R2’s tunnel address directly into its FIB as a more specific match. Compare this with the unresolved entry for R3’s tunnel interface, 192.168.250.3:

R4#sh ip cef 192.168.250.3
192.168.250.0/29, version 17, epoch 0, attached, connected
0 packets, 0 bytes
via Tunnel0, 0 dependencies
valid punt adjacency

For R2, a /32 host route has been entered into the FIB, whereas for the unresolved R3 the /29 network route is left in place. However, if you were to ping R3, the router would perform an NHRP resolution and then enter the more exact match into its CEF table (FIB).
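
To see this for yourself, trigger the resolution and re-check the FIB; after the ping, the output should mirror the /32 entry shown for R2 above:

R4#ping 192.168.250.3
R4#sh ip cef 192.168.250.3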

The process described above applies to any route at any of the spokes; the only difference is which routing protocol populates the RIB for you.

Routing – OSPF

Again, for DMVPN Phase 2 I have elected to use OSPF as the routing protocol for demonstration, although this time we will need to make some subtle changes to how OSPF is configured, which in turn changes how OSPF operates.

First, set the OSPF network type to broadcast, instead of point-to-multipoint as per the previous example. This is because with point-to-multipoint, each spoke installs routes with the hub as the next hop, so traffic would still traverse the hub instead of going spoke to spoke. The broadcast network type preserves the originating spoke’s tunnel address as the next hop, so to get Phase 2 to operate correctly you must configure the network type as broadcast.

From a topology point of view, we want our hub router to become the Designated Router (DR). The DR will establish an adjacency with all DROther (non-DR/BDR) routers. The DR listens for updates on the AllDRouters multicast address 224.0.0.6 and floods LSAs to AllSPFRouters, 224.0.0.5, reaching the DROther (spoke) routers.

So let’s configure OSPF on our HUB:

interface Tunnel0
ip address 192.168.250.1 255.255.255.248
no ip redirects
ip nhrp authentication cisco
ip nhrp map multicast dynamic
ip nhrp network-id 1
ip nhrp holdtime 300
ip ospf network broadcast
ip ospf 1 area 0
tunnel source Serial1/0
tunnel mode gre multipoint
tunnel key 500
end

Our Spoke routers will be set up as follows:

interface Tunnel0
ip address 192.168.250.2 255.255.255.248
no ip redirects
ip nhrp authentication cisco
ip nhrp map 192.168.250.1 123.144.150.1
ip nhrp map multicast 123.144.150.1
ip nhrp network-id 1
ip nhrp holdtime 300
ip nhrp nhs 192.168.250.1
ip ospf network broadcast
ip ospf 1 area 0
tunnel source Serial1/1
tunnel mode gre multipoint
tunnel key 500
end

Configuring the OSPF parameters as highlighted above actually creates a unique problem: each of the spokes will try to become the DR, and depending on interface priority or router ID, one of the routers in the DMVPN topology will win the election. The result is partial rather than full connectivity between the routers in the DMVPN cloud. This is not the desired outcome. We want the DR to be our hub, because the hub is the only router able to flood LSAs to all of the DROther (spoke) routers – the spokes only have a multicast mapping to the hub. It also gives us predictability and stability for our DMVPN network: adding another spoke should just work, without creating problems for the hub or any of the other spokes. Therefore one extra piece of configuration is required, which is to set the OSPF priority on each of the spokes.

You need to set the OSPF priority of the spoke routers to 0, which ensures none of the spokes takes part in the DR/BDR election. This is extremely important to ensure that the hub router becomes the DR for the topology.

The command to enter under the tunnel interface for each spoke is:

interface Tunnel0
ip ospf priority 0

Note: By default, Cisco IOS sets the OSPF priority to 1, so there is no need to configure a higher value on the hub.

Once routing is configured, the traffic flow will be the same as depicted above, and the CEF table can be interrogated to confirm the forwarding path.
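
To verify the DR design, check a spoke’s neighbour table: the hub (192.168.250.1) should appear as the DR with a FULL adjacency (standard IOS show commands; exact field layout varies by release):

R2#sh ip ospf neighbor
R2#sh ip ospf interface Tunnel0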

Wrap Up

To wrap up, this post highlighted the main advantage of DMVPN Phase 2: it allows spoke-to-spoke tunnels to be created dynamically. However, due to the routing design, it has a few caveats that should be considered.

In regards to OSPF routing design, you could have used the non-broadcast network type instead; it still performs a DR election (so the hub must still be the DR), but it uses unicast hellos, which presents problems with scalability as you need to manually configure all of the neighbours. In addition, if you were to use the point-to-multipoint network type, all traffic would need to traverse the hub router as seen in DMVPN Phase 1, so it’s not advisable. I should also clarify that DMVPN Phase 2 is considered obsolete and only DMVPN Phase 3 should be deployed into production.
