DMVPN Phase 1

This post, Part 1 of a four-part series, looks at the configuration of DMVPN Phase 1 and the routing implications when using OSPF. Although Phase 1 is considered obsolete today, it is still worth reviewing.

Before delving into the specifics of DMVPN Phase 1 configuration, let's start with the underlay network – an NBMA (Non-Broadcast Multi-Access) network. The underlay can be either the public Internet or an MPLS network.

For the examples going forward, the following topology will be used:

DMVPN - Underlay v1.0

As you can see from the figure above, there are four sites, one of which is considered our Headquarters (Hub Site). Connectivity within the cloud is not displayed, as it is assumed to be the Internet in this case, with end-to-end connectivity already established.

With the scene set, let’s begin with the specifics of Phase 1:

  • Spoke-to-spoke tunnels are not possible, so all traffic between spoke sites must traverse the hub. This makes the hub site the bottleneck for the whole network. (Phase 2/3 allow spoke-to-spoke tunnels.)
  • Even though each spoke site is statically configured to find the hub site, NHRP (Next Hop Resolution Protocol) is still required for the spoke to register with the hub.
  • Routing:
      • Summarisation/default routing is allowed at the hub only
      • The next hop advertised to the spokes is always changed by the hub
  • The hub router's public IP address must be static
  • Spoke router public IP addresses can be dynamic

Even though Phase 1 is referred to as Dynamic Multipoint VPN, in essence it isn't truly dynamic, because each spoke site is statically configured to find its hub.

Before starting, it's a good idea to test connectivity from the hub to the public IP addresses of the spokes (assuming they aren't dynamic, of course). I have created a small TCL script for this.

Copy and paste the following into the hub router and it will ping each address in turn, confirming whether the hub has connectivity to all of the spoke public IP addresses.

tclsh
foreach VAR {
17.42.59.1
134.20.0.1
144.161.34.1
} { ping $VAR source ser1/0}

Remember: type the following to exit the TCL shell.

tclquit

The following diagram shows the topology of the overlay network (with no concern for the underlay), followed by the configuration of the hub and spoke routers.

DMVPN - Overlay v1.0

HUB Configuration:

The following configuration is intended to be placed onto the HUB site only:

interface Tunnel0
     ip address 192.168.250.1 255.255.255.248
     ip nhrp authentication cisco
     ip nhrp map multicast dynamic
     ip nhrp network-id 1
     ip nhrp holdtime 300
     tunnel source ser1/0
     tunnel mode gre multipoint
     tunnel key 500
     no shut

The configuration parameters used are as follows:

  • ip nhrp authentication: This string (password) ensures that only routers configured with the matching NHRP authentication string can register with your DMVPN cloud. Note that the string is carried in cleartext within the NHRP packets, so it provides only basic protection.
  • ip nhrp map multicast dynamic: This command tells the hub to automatically add the NBMA (public) address of each spoke that registers to its multicast replication list, so multicast traffic such as routing protocol Hellos can be replicated to the dynamically learned spokes.
  • ip nhrp network-id: The network ID allows the router to distinguish which tunnel interface an NHRP packet belongs to; it's useful when the router is hosting multiple DMVPN clouds (a sketch follows this list).
  • ip nhrp holdtime: Not required, but this command configures how long the NHS (Next Hop Server, our hub) keeps the registration it receives from an NHC (Next Hop Client, i.e. a spoke) before it expires. By default, NHRP registrations are sent every one third of the hold time; if a reply is not received, the registration requests are retransmitted with an exponential back-off (1, 2, 4, 8 seconds and so on, up to 64 seconds).
  • tunnel source: This should be set to the public-facing interface that connects to your ISP.
  • tunnel mode gre multipoint: Turns the GRE tunnel into a multipoint (mGRE) interface so a single tunnel can terminate multiple spokes; in Phase 1 this is configured on the hub only.
  • tunnel key: Ensures that GRE traffic is associated with the correct tunnel interface; it must match on all routers in the same DMVPN cloud and can be any arbitrary number.
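
To illustrate the network-id and tunnel key points, here is a minimal, purely hypothetical sketch of a hub hosting two separate DMVPN clouds on the same physical interface; the second tunnel's addressing, network-id and key are illustrative values, not part of the lab topology:

interface Tunnel0
     ip address 192.168.250.1 255.255.255.248
     ip nhrp network-id 1
     tunnel source ser1/0
     tunnel mode gre multipoint
     tunnel key 500
!
interface Tunnel1
     ip address 192.168.251.1 255.255.255.248
     ip nhrp network-id 2
     tunnel source ser1/0
     tunnel mode gre multipoint
     tunnel key 501

Because both tunnels share the same source interface, the tunnel key is what lets the router associate incoming GRE packets with the right tunnel, while the network-id keeps the two NHRP domains apart.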

Spoke Configuration:

The spoke configuration for Phase 1 is as follows and can be used as a boilerplate template, with only minimal substitution needed for each spoke – just the tunnel IP address and the tunnel source interface need to change.

interface Tunnel0
     ip address 192.168.250.2 255.255.255.248
     ip nhrp authentication cisco
     ip nhrp map 192.168.250.1 123.144.150.1
     ip nhrp map multicast 123.144.150.1
     ip nhrp network-id 1
     ip nhrp holdtime 300
     ip nhrp nhs 192.168.250.1
     tunnel source ser1/1
     tunnel dest 123.144.150.1
     tunnel key 500

The main difference between this configuration and the hub's is that you manually map the overlay (tunnel) IP of the hub to its underlay (NBMA) IP, and statically point the spoke at its NHS (Next Hop Server) so it knows where to send its NHRP registrations. Note also the tunnel destination command: in Phase 1 the spoke tunnel is a plain point-to-point GRE tunnel towards the hub, whereas in Phase 2 & 3 the spokes use tunnel mode gre multipoint instead.
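
Besides show ip nhrp (used below), a couple of other commands are useful for checking the registration from each side. Output is omitted here as it varies by IOS version, and show dmvpn is only present on releases that support it:

show ip nhrp nhs detail
show ip nhrp traffic
show dmvpn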

Once the spoke configuration has been completed, running show ip nhrp on the hub (R1) and the spoke (R2) should show something similar to:

R1#sh ip nhrp
192.168.250.2/32 via 192.168.250.2, Tunnel0 created 00:00:19, expire 00:04:40
  Type: dynamic, Flags: authoritative unique registered used
  NBMA address: 17.42.59.1

R2#sh ip nhrp
192.168.250.1/32 via 192.168.250.1, Tunnel0 created 00:05:21, never expire
  Type: static, Flags: authoritative
  NBMA address: 123.144.150.1

The main thing to note is the words dynamic and static. The hub's mGRE interface learns the spoke mappings dynamically via NHRP registration, hence dynamic, whereas on the spokes the mapping to the hub shows as static because it was configured manually.

Running debug ip nhrp on the hub's console, you will see the process by which the spoke (NHC) registers itself with the hub (NHS).

*Mar  1 01:17:31.667: NHRP: Receive Registration Request via Tunnel0 vrf 0, packet size: 81
*Mar  1 01:17:31.667: NHRP: netid_in = 1, to_us = 1
*Mar  1 01:17:31.671: NHRP: NAT-check: matched destination address 17.42.59.1
*Mar  1 01:17:31.671: NHRP: Tu0: Creating dynamic multicast mapping  NBMA: 17.42.59.1
*Mar  1 01:17:31.671: NHRP: Attempting to send packet via DEST 192.168.250.2
*Mar  1 01:17:31.671: NHRP: Encapsulation succeeded.  Tunnel IP addr 17.42.59.1
*Mar  1 01:17:31.671: NHRP: Send Registration Reply via Tunnel0 vrf 0, packet size: 101
*Mar  1 01:17:31.675:       src: 192.168.250.1, dst: 192.168.250.2
*Mar  1 01:17:31.675: NHRP: 101 bytes out Tunnel0
R1#
*Mar  1 01:17:31.679: NHRP: Receive Registration Request via Tunnel0 vrf 0, packet size: 81
*Mar  1 01:17:31.679: NHRP: netid_in = 1, to_us = 1
*Mar  1 01:17:31.679: NHRP: NAT-check: matched destination address 17.42.59.1
*Mar  1 01:17:31.679: NHRP: Tu0: Found and skipping dynamic multicast mapping  NBMA: 17.42.59.1
*Mar  1 01:17:31.679: NHRP: Attempting to send packet via DEST 192.168.250.2
*Mar  1 01:17:31.679: NHRP: Encapsulation succeeded.  Tunnel IP addr 17.42.59.1
*Mar  1 01:17:31.679: NHRP: Send Registration Reply via Tunnel0 vrf 0, packet size: 101
*Mar  1 01:17:31.679:       src: 192.168.250.1, dst: 192.168.250.2
*Mar  1 01:17:31.679: NHRP: 101 bytes out Tunnel0

Moving forward, apply the spoke template to the remaining spokes to complete the DMVPN cloud; as an example, R3's tunnel configuration is sketched below.
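
This is a minimal sketch that assumes R3's Internet-facing interface is also Serial1/1 – adjust the tunnel source to whichever interface actually faces the ISP; only the tunnel IP address changes from the template:

interface Tunnel0
     ip address 192.168.250.3 255.255.255.248
     ip nhrp authentication cisco
     ip nhrp map 192.168.250.1 123.144.150.1
     ip nhrp map multicast 123.144.150.1
     ip nhrp network-id 1
     ip nhrp holdtime 300
     ip nhrp nhs 192.168.250.1
     tunnel source ser1/1
     tunnel dest 123.144.150.1
     tunnel key 500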

Re-issuing the show ip nhrp command on the hub, you will now see all of the NHRP NHCs on the network:

R1#show ip nhrp
192.168.250.2/32 via 192.168.250.2, Tunnel0 created 00:09:31, expire 00:03:48
  Type: dynamic, Flags: authoritative unique registered
  NBMA address: 17.42.59.1
192.168.250.3/32 via 192.168.250.3, Tunnel0 created 00:00:16, expire 00:04:43
  Type: dynamic, Flags: authoritative unique registered used
  NBMA address: 134.20.0.1
192.168.250.4/32 via 192.168.250.4, Tunnel0 created 00:00:07, expire 00:04:52
  Type: dynamic, Flags: authoritative unique registered used
  NBMA address: 144.161.34.1

Using another TCL script, test connectivity to them across the tunnel via ICMP echo (ping).

tclsh
foreach VAR {
192.168.250.2
192.168.250.3
192.168.250.4
} { ping $VAR source tunnel0}

Before introducing any routing protocols into the topology, let’s examine the path R4 takes to get to the tunnel interface on R2, using traceroute:

R4#tr 192.168.250.2

Type escape sequence to abort.
Tracing the route to 192.168.250.2

  1 192.168.250.1 28 msec 48 msec 44 msec
  2 192.168.250.2 64 msec 72 msec 24 msec

Looking at the output above, you can see the first packet went to the hub router, which decapsulated it and re-encapsulated it before sending it back out of its public interface towards R2. This isn't the most efficient path for the packet to take and is one of the major drawbacks of DMVPN Phase 1. The figure below illustrates this.

DMVPN Phase 1 - Packet Path

As you can see, the packet starts at R4 but must traverse up to R1 (the hub) and then back down to R2, and the return traffic must also pass through R1.

Routing – OSPF

In real-world scenarios it's typically recommended not to use OSPF over DMVPN, primarily because of OSPF's hierarchical, link-state nature. However, for the purposes of this post OSPF will be used to demonstrate how it behaves over a DMVPN Phase 1 topology, particularly how adjacencies are formed.

So let's begin by enabling OSPF on R1, our hub router.

interface Loopback0
 ip address 10.0.0.1 255.255.255.0
 ip ospf 1 area 0
!
interface Loopback1
 ip address 10.0.1.1 255.255.255.0
 ip ospf 1 area 0
!
interface Loopback2
 ip address 10.0.2.1 255.255.255.0
 ip ospf 1 area 0
!
interface Loopback3
 ip address 10.0.3.1 255.255.255.0
 ip ospf 1 area 0
!
interface Tunnel0
 ip address 192.168.250.1 255.255.255.248
 no ip redirects
 ip nhrp authentication cisco
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 ip nhrp holdtime 300
 ip ospf 1 area 0
 tunnel source Serial1/0
 tunnel mode gre multipoint
 tunnel key 500

Let's now do the same on our spokes. The config below is shown only for R2, but the equivalent configuration (with each spoke's own loopback and tunnel addressing) applies to all of the spoke routers.

interface Loopback0
 ip address 192.168.0.1 255.255.255.0
 ip ospf 1 area 0
!
interface Loopback1
 ip address 192.168.1.1 255.255.255.0
 ip ospf 1 area 0
!
interface Tunnel0
 ip address 192.168.250.2 255.255.255.248
 ip nhrp authentication cisco
 ip nhrp map 192.168.250.1 123.144.150.1
 ip nhrp map multicast 123.144.150.1
 ip nhrp network-id 1
 ip nhrp holdtime 300
 ip nhrp nhs 192.168.250.1
 ip ospf 1 area 0
 tunnel source Serial1/1
 tunnel destination 123.144.150.1
 tunnel key 500

Keep in mind that, by default, OSPF treats a GRE tunnel interface (including the hub's mGRE interface) as the point-to-point network type, which only supports a single neighbour per interface. The hub therefore cannot become adjacent with all of its spokes at once, and the adjacencies will never all settle into the FULL state. The output below, taken from R2's show ip ospf interface Tunnel0, shows the default network type and its Hello/Dead timers.

Tunnel0 is up, line protocol is up
  Internet Address 192.168.250.2/29, Area 0
  Process ID 1, Router ID 192.168.0.1, Network Type POINT_TO_POINT, Cost: 11111
  Enabled by interface config, including secondary ip addresses
  Transmit Delay is 1 sec, State POINT_TO_POINT
  Timer intervals configured, Hello 10, Dead 40, Wait 40, Retransmit 5
    oob-resync timeout 40
    Hello due in 00:00:09
  Supports Link-local Signaling (LLS)
  Index 3/3, flood queue length 0
  Next 0x0(0)/0x0(0)
  Last flood scan length is 1, maximum is 1
  Last flood scan time is 0 msec, maximum is 0 msec
  Neighbor Count is 0, Adjacent neighbor count is 0
  Suppress hello for 0 neighbor(s)

For the hub to be able to form adjacencies with all of the other routers, you need to change the OSPF network type on the tunnel interfaces from point-to-point to point-to-multipoint – on the hub and on all of the spokes – which can be achieved by executing the following on each router's tunnel interface:

int tunnel 0
ip ospf network point-to-multipoint

In this mode all routers should move to the OSPF FULL state, with each router multicasting its Hellos to the AllSPFRouters address 224.0.0.5 (note that the point-to-multipoint network type also changes the timers to Hello 30, Dead 120).

Executing the following command on each router shows the adjacencies that have been established. The hub becomes adjacent with every spoke, but the spokes never become adjacent with one another.

R1#show ip ospf neigh

Neighbor ID     Pri   State           Dead Time   Address         Interface
192.168.250.2     0   FULL/  -        00:01:56    192.168.250.2   Tunnel0
192.168.250.4     0   FULL/  -        00:01:56    192.168.250.4   Tunnel0
192.168.250.3     0   FULL/  -        00:01:56    192.168.250.3   Tunnel0

R2#show ip ospf neigh

Neighbor ID     Pri   State           Dead Time   Address         Interface
192.168.250.1     0   FULL/  -        00:01:56    192.168.250.1   Tunnel0

R3#show ip ospf neigh

Neighbor ID     Pri   State           Dead Time   Address         Interface
192.168.250.1     0   FULL/  -        00:01:56    192.168.250.1   Tunnel0

R4#show ip ospf neigh

Neighbor ID     Pri   State           Dead Time   Address         Interface
192.168.250.1     0   FULL/  -        00:01:56    192.168.250.1   Tunnel0

Looking at the routing table on R1 for the subnet 192.168.10.0/24 (a network behind R4), you can see the next hop is R4's tunnel IP.

R1#show ip route 192.168.10.0
Routing entry for 192.168.10.0/32, 1 known subnets

O       192.168.10.1 [110/11112] via 192.168.250.4, 00:01:10, Tunnel0

However, from R2's and R3's perspective the next hop is R1's tunnel IP – the hub has changed the next hop, as noted in the Phase 1 characteristics earlier.

R2#show ip route 192.168.10.0
Routing entry for 192.168.10.0/32, 1 known subnets

O       192.168.10.1 [110/22223] via 192.168.250.1, 00:01:10, Tunnel0

R3#show ip route 192.168.10.0
Routing entry for 192.168.10.0/32, 1 known subnets

O       192.168.10.1 [110/22223] via 192.168.250.1, 00:01:10, Tunnel0

The routing tables above also confirm that, from the Routing Information Base (RIB) perspective, the hub is the spokes' next hop, and in Phase 1 this holds for forwarding as well. In Phase 2 and 3, however, the Forwarding Information Base (FIB, the CEF table) ultimately determines which path is taken.
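
To see that distinction for yourself, compare the RIB entry with the CEF (FIB) entry on one of the spokes; in Phase 1 both point at the hub's tunnel address (output omitted, as the format differs between IOS releases):

show ip route 192.168.10.1
show ip cef 192.168.10.1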

Wrap Up

One of the disadvantages of running OSPF over DMVPN is that every router in the area must hold an identical copy of the Link State Database (LSDB). It's therefore much better to configure the OSPF area covering the DMVPN cloud as a stub area or Not-So-Stubby Area (NSSA), and use the hub router as an ABR/ASBR to originate a default route into the DMVPN cloud.
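
A rough sketch of that idea, assuming the DMVPN tunnels (and the spoke loopbacks) were moved into a hypothetical stub area 1 while the hub keeps its own loopbacks in area 0, making it the ABR that injects the default route:

! Hub - Tunnel0 moves from area 0 into the stub area 1
router ospf 1
 area 1 stub
!
interface Tunnel0
 ip ospf 1 area 1

! Spokes - tunnel and local networks all live in the stub area 1
router ospf 1
 area 1 stub
!
interface Tunnel0
 ip ospf 1 area 1
!
interface Loopback0
 ip ospf 1 area 1

Adding no-summary to the hub's area 1 stub statement would make the area totally stubby, leaving the spokes with little more than the default route and their own intra-area prefixes.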

Careful consideration also needs to be given to the overall DMVPN routing design: OSPF works, but if the spoke sites are not pure stubs and end up acting as transit paths, the design can become overly complex with no real benefit. Although I will be demonstrating OSPF across the DMVPN cloud for Phase 1 & 2 in this series, a distance-vector protocol such as EIGRP is generally a better fit over a DMVPN cloud; a rough sketch is shown below.
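
For comparison, here is a minimal sketch of what EIGRP over this same Phase 1 topology might look like; the AS number (100) and the choice to advertise only a default summary from the hub are illustrative assumptions rather than part of the lab above:

! Hub
router eigrp 100
 network 192.168.250.0 0.0.0.7
 network 10.0.0.0 0.0.3.255
!
interface Tunnel0
 ip summary-address eigrp 100 0.0.0.0 0.0.0.0

! Spoke (R2 shown)
router eigrp 100
 network 192.168.250.0 0.0.0.7
 network 192.168.0.0 0.0.1.255

The summary-address on the hub's tunnel interface advertises only a default route to the spokes, which suits Phase 1 since all traffic has to transit the hub anyway; if the spokes needed each other's specific prefixes instead, no ip split-horizon eigrp 100 would also be required on the hub's Tunnel0.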

Lastly, you're probably wondering why I haven't actually mentioned anything about the VPN (encryption) part in this post. I will cover this towards the end of the series, as the configuration is the same for all three phases and is relatively straightforward.
