DMVPN Phase 1

Part 1 in a 2-part series, this post looks at the configuration of DMVPN Phase 1 and the routing implications when using OSPF. Although Phase 1 is considered obsolete today, it is still worth reviewing.

Prior to delving into the specifics of DMVPN Phase 1 configuration, let’s start with the underlay network – the NBMA (Non-Broadcast Multi-Access) network. The underlay can be either the public Internet or an MPLS network.

For the examples going forward, the following topology will be used:

As you can see from the figure above, there are four sites, one of which is our Headquarters (Hub site). Connectivity within the cloud is not displayed, as it is assumed to be the Internet in this case and end-to-end connectivity is already established.

With the scene set, let’s begin with the specifics of Phase 1:

  • Spoke-to-spoke tunnels are not possible, therefore all traffic between spoke sites must traverse the hub site. This means the hub site becomes the bottleneck for all sites. (Phase 2/3 allow for spoke-to-spoke tunnels.)
  • Even though each spoke site is statically configured to find the hub site, NHRP (Next Hop Resolution Protocol) is still required for the spoke to register to the hub.
  • Routing:
    • Summarisation/default routing is allowed at the hub only
    • The next hop advertised to the spokes is always rewritten by the hub
    • Hub router public IP must be static
    • Spoke router IP addresses can be dynamic

Even though Phase 1 is referred to as Dynamic Multipoint VPN, in essence it isn’t: each spoke site is statically configured to find its hub.

Prior to starting, it’s a good idea to test connectivity from the HUB to all of the public IP addresses of the spokes (assuming they aren’t dynamic, of course). I have created a TCL script for this.

If you copy and paste the following into your HUB router and execute it, it will ping each address and confirm that your HUB router has connectivity to all of the Spoke public IP addresses.

foreach VAR {
} { ping $VAR source ser1/0}

Remember: to exit the TCL shell, execute tclquit.
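The address list inside the braces was lost from the snippet above. As a hypothetical reconstruction (the spoke public addresses 203.0.113.2–4 are documentation placeholders, not the original values), the script takes this shape when pasted into the IOS tclsh shell:

tclsh
foreach VAR {
203.0.113.2
203.0.113.3
203.0.113.4
} { ping $VAR source ser1/0 }

Each address in the braces is pinged in turn, sourced from the Serial1/0 public interface, so a failure immediately identifies which spoke is unreachable over the underlay.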

The following diagram displays the topology of the overlay network, with no concern for the underlay network, followed by the configuration of the hub and spoke routers.

HUB Configuration:

The following configuration is intended to be placed onto the HUB site only:

interface Tunnel0
     ip address
     ip nhrp authentication cisco
     ip nhrp map multicast dynamic
     ip nhrp network-id 1
     ip nhrp holdtime 300
     tunnel source ser1/0
     tunnel mode gre multipoint
     tunnel key 500
     no shut

The configuration parameters used are as follows:

  • ip nhrp authentication: This string (password) ensures that only routers configured with the matching authentication string can join your DMVPN cloud; every Spoke must present it in its NHRP packets.
  • ip nhrp map multicast dynamic: This command allows the router to add dynamically registering spokes to its multicast replication list, and to build an NHRP database containing the IP addresses of the dynamic spokes.
  • ip nhrp network-id: The network ID allows the router to distinguish which tunnel interface an NHRP packet belongs to; it’s useful if the router is hosting multiple DMVPN clouds.
  • ip nhrp holdtime: Not required, but this command configures how long the NHS (Next Hop Server, our hub) should consider an NHRP registration from the NHC (Next Hop Client, in this case the spoke) valid. By default, NHRP registrations are sent every one third of the hold time; if a reply is not received, the registration requests are retransmitted after 1, 2, 4, 8, 16, 32 & 64 seconds.
  • tunnel source: this should be set to the public interface which is connected to your ISP.
  • tunnel mode gre multipoint: Allows a single GRE tunnel interface to reach multiple endpoints across the NBMA network; in Phase 1 this is configured on the HUB only (the spokes use point-to-point GRE).
  • tunnel key: ensures only traffic destined for a particular tunnel interface is received, must be the same on all routers. Can be any arbitrary number.
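Since the IP addressing was stripped from the hub configuration above, here is a hypothetical filled-in version for reference. The overlay subnet 192.168.250.0/24 matches the NHRP output shown later in this post; the /24 mask is an assumption, and everything else follows the original snippet.

interface Tunnel0
     ip address 192.168.250.1 255.255.255.0
     ip nhrp authentication cisco
     ip nhrp map multicast dynamic
     ip nhrp network-id 1
     ip nhrp holdtime 300
     tunnel source ser1/0
     tunnel mode gre multipoint
     tunnel key 500
     no shut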

Spoke Configuration:

The spoke configuration for Phase 1 is as follows and can be used as a boilerplate template with only minimal substitution needed for each spoke: the tunnel IP address, the NHRP mappings, the NHS address and the tunnel destination need to be changed per spoke.

interface Tunnel0
     ip address
     ip nhrp authentication cisco
     ip nhrp map
     ip nhrp map multicast
     ip nhrp network-id 1
     ip nhrp holdtime 300
     ip nhrp nhs
     tunnel source ser1/1
     tunnel dest
     tunnel key 500

The main difference between this configuration and the hub’s is that you are manually mapping the overlay IP of the hub to the underlay (NBMA) IP of the hub, which is what NHRP uses for registration, and statically defining the NHS (Next Hop Server). In Phase 1 the spoke tunnel is also point-to-point GRE with a static tunnel destination, which in Phase 2 & 3 is not required, as the spokes there use multipoint GRE as well.
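As the addresses were also stripped from the spoke template above, a hypothetical filled-in configuration for R2 might look as follows. The overlay addresses 192.168.250.1 (hub) and 192.168.250.2 (R2) match the NHRP output shown elsewhere in this post; the hub public address 203.0.113.1 is purely a placeholder.

interface Tunnel0
     ip address 192.168.250.2 255.255.255.0
     ip nhrp authentication cisco
     ip nhrp map 192.168.250.1 203.0.113.1
     ip nhrp map multicast 203.0.113.1
     ip nhrp network-id 1
     ip nhrp holdtime 300
     ip nhrp nhs 192.168.250.1
     tunnel source ser1/1
     tunnel dest 203.0.113.1
     tunnel key 500

Note the overlay-to-underlay mapping, the multicast mapping and the NHS statement all point at the hub: the spoke sends everything, including NHRP registrations and routing protocol multicast, towards the hub’s addresses.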

Once the Spoke configuration has been completed, running the command show ip nhrp should show something similar to:

R1#sh ip nhrp
192.168.250.2/32 via, Tunnel0 created 00:00:19, expire 00:04:40
  Type: dynamic, Flags: authoritative unique registered used
  NBMA address:

R2#sh ip nhrp
192.168.250.1/32 via, Tunnel0 created 00:05:21, never expire
  Type: static, Flags: authoritative
  NBMA address:

The main things to note are the words dynamic and static: as described earlier, the HUB has a multipoint GRE interface, so its entries are learned dynamically, whereas the spokes show static because the mapping is statically configured on them.

Running debug ip nhrp on the Hub’s console, you will see the process by which each Spoke (NHC) registers itself with the Hub (NHS).

*Mar  1 01:17:31.667: NHRP: Receive Registration Request via Tunnel0 vrf 0, packet size: 81
*Mar  1 01:17:31.667: NHRP: netid_in = 1, to_us = 1
*Mar  1 01:17:31.671: NHRP: NAT-check: matched destination address
*Mar  1 01:17:31.671: NHRP: Tu0: Creating dynamic multicast mapping  NBMA:
*Mar  1 01:17:31.671: NHRP: Attempting to send packet via DEST
*Mar  1 01:17:31.671: NHRP: Encapsulation succeeded.  Tunnel IP addr
*Mar  1 01:17:31.671: NHRP: Send Registration Reply via Tunnel0 vrf 0, packet size: 101
*Mar  1 01:17:31.675:       src:, dst:
*Mar  1 01:17:31.675: NHRP: 101 bytes out Tunnel0

*Mar  1 01:17:31.679: NHRP: Receive Registration Request via Tunnel0 vrf 0, packet size: 81
*Mar  1 01:17:31.679: NHRP: netid_in = 1, to_us = 1
*Mar  1 01:17:31.679: NHRP: NAT-check: matched destination address
*Mar  1 01:17:31.679: NHRP: Tu0: Found and skipping dynamic multicast mapping  NBMA:
*Mar  1 01:17:31.679: NHRP: Attempting to send packet via DEST
*Mar  1 01:17:31.679: NHRP: Encapsulation succeeded.  Tunnel IP addr
*Mar  1 01:17:31.679: NHRP: Send Registration Reply via Tunnel0 vrf 0, packet size: 101
*Mar  1 01:17:31.679:       src:, dst:
*Mar  1 01:17:31.679: NHRP: 101 bytes out Tunnel0

Moving forward, you can apply the spoke template configuration to the other spokes to complete the DMVPN cloud.

Re-issuing the show ip nhrp command, you will now see all of the NHRP NHCs on the network from the perspective of the Hub:

R1#show ip nhrp
192.168.250.2/32 via, Tunnel0 created 00:09:31, expire 00:03:48
  Type: dynamic, Flags: authoritative unique registered
  NBMA address:
192.168.250.3/32 via, Tunnel0 created 00:00:16, expire 00:04:43
  Type: dynamic, Flags: authoritative unique registered used
  NBMA address:
192.168.250.4/32 via, Tunnel0 created 00:00:07, expire 00:04:52
  Type: dynamic, Flags: authoritative unique registered used
  NBMA address:

Using another TCL script, test connectivity to them via ICMP echo (ping).

foreach VAR {
} { ping $VAR source tunnel0}

Before introducing any routing protocols into the topology, let’s examine the path R4 takes to reach the tunnel interface on R2, using traceroute:

Type escape sequence to abort.
Tracing the route to

  1 28 msec 48 msec 44 msec
  2 64 msec 72 msec 24 msec

Looking at the output above, you can see the first packet went to the HUB router, which decapsulated the packet and re-encapsulated it before sending it back out of its public interface towards R2. This isn’t the most efficient path for the packet to take, and it is one of the major drawbacks of DMVPN Phase 1. The figure below reaffirms this with a diagram.

As you can see, Packet 1 starts at R4 but must traverse up to R1 and then back down to R2, and the return traffic must also traverse via R1 (the Hub).

Routing – OSPF

In real-world scenarios it’s typically recommended not to use OSPF over DMVPN, primarily because of the hierarchical structure of OSPF. However, for the purposes of this post, OSPF will be used to demonstrate how it behaves over a DMVPN Phase 1 topology; this is extremely important when considering how adjacencies are formed.

So let’s begin by first enabling OSPF on R1, our Hub router.

interface Loopback0
 ip address
 ip ospf 1 area 0

interface Loopback1
 ip address
 ip ospf 1 area 0

interface Loopback2
 ip address
 ip ospf 1 area 0

interface Loopback3
 ip address
 ip ospf 1 area 0

interface Tunnel0
 ip address
 no ip redirects
 ip nhrp authentication cisco
 ip nhrp map multicast dynamic
 ip nhrp network-id 1
 ip nhrp holdtime 300
 ip ospf 1 area 0
 tunnel source Serial1/0
 tunnel mode gre multipoint
 tunnel key 500

Let’s now do the same on our Spokes. The config below is only shown for R2 but would be the same across all of the spoke routers.

interface Loopback0
 ip address
 ip ospf 1 area 0

interface Loopback1
 ip address
 ip ospf 1 area 0

interface Tunnel0
 ip address
 ip nhrp authentication cisco
 ip nhrp map
 ip nhrp map multicast
 ip nhrp network-id 1
 ip nhrp holdtime 300
 ip nhrp nhs
 ip ospf 1 area 0
 tunnel source Serial1/1
 tunnel destination
 tunnel key 500

Keep in mind that, by default, OSPF treats a point-to-point GRE tunnel (the spokes) as network type POINT_TO_POINT, while a multipoint GRE tunnel (the hub) defaults to NON_BROADCAST; these network types use different hello and dead intervals. Because these values must match for an adjacency to form, a spoke will never reach the FULL state with the hub, as the parameters carried in the Hello packets differ. Running show ip ospf interface tunnel0 on a spoke confirms this:

Tunnel0 is up, line protocol is up
  Internet Address, Area 0
  Process ID 1, Router ID, Network Type POINT_TO_POINT, Cost: 11111
  Enabled by interface config, including secondary ip addresses
  Transmit Delay is 1 sec, State POINT_TO_POINT
  Timer intervals configured, Hello 10, Dead 40, Wait 40, Retransmit 5
    oob-resync timeout 40
    Hello due in 00:00:09
  Supports Link-local Signaling (LLS)
  Index 3/3, flood queue length 0
  Next 0x0(0)/0x0(0)
  Last flood scan length is 1, maximum is 1
  Last flood scan time is 0 msec, maximum is 0 msec
  Neighbor Count is 0, Adjacent neighbor count is 0
  Suppress hello for 0 neighbor(s)

For the Hub to form adjacencies with the other routers, you need to change the OSPF network type on the Tunnel interfaces to point-to-multipoint. This must be done on the hub and on all of the spokes, which can be achieved by executing the following on every router’s tunnel interface:

int tunnel 0
ip ospf network point-to-multipoint

In this mode all routers should move to the OSPF FULL state; each OSPF router multicasts its hellos to the AllSPFRouters address (224.0.0.5).

Executing the following command on each router shows its established adjacencies. Note the Hub becomes adjacent with all of the spokes, but the spokes only ever become adjacent to the Hub, never to one another.

R1#show ip ospf neigh

Neighbor ID     Pri   State           Dead Time   Address         Interface
192.168.250.2     0   FULL/  -        00:01:56   Tunnel0
192.168.250.4     0   FULL/  -        00:01:56   Tunnel0
192.168.250.3     0   FULL/  -        00:01:56   Tunnel0

R2#show ip ospf neigh

Neighbor ID     Pri   State           Dead Time   Address         Interface
192.168.250.1     0   FULL/  -        00:01:56   Tunnel0

R3#show ip ospf neigh

Neighbor ID     Pri   State           Dead Time   Address         Interface
192.168.250.1     0   FULL/  -        00:01:56   Tunnel0

R4#show ip ospf neigh

Neighbor ID     Pri   State           Dead Time   Address         Interface
192.168.250.1     0   FULL/  -        00:01:56   Tunnel0

Looking at the routing table on R1 for the subnet, you can see the next hop is the R4 tunnel IP.

R1#show ip route
Routing entry for, 1 known subnets

O [110/11112] via, 00:01:10, Tunnel0

However, from R2’s and R3’s perspective the next hop is the tunnel interface of R1.

R2#show ip route
Routing entry for, 1 known subnets

O [110/22223] via, 00:01:10, Tunnel0

R3#show ip route
Routing entry for, 1 known subnets

O [110/22223] via, 00:01:10, Tunnel0

The routing tables above also confirm that, from the perspective of the Routing Information Base (RIB), the Hub is our next logical hop. In Phase 1 this is true end to end; however, in Phase 2 and 3 the Forwarding Information Base (FIB, the CEF table) will determine which path is actually taken.


One of the disadvantages of running OSPF over DMVPN is that every router in the area must maintain an identical copy of the Link State Database (LSDB). Therefore, it’s much better to configure the OSPF area covering the DMVPN cloud as a Stub area or Not-So-Stubby Area (NSSA) and use the Hub router as an ABR/ASBR to originate a default route into the DMVPN cloud.
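As a sketch of that idea, assuming the DMVPN tunnels are moved into their own area (area 1 here is an assumption; the examples above place everything in area 0) with the Hub acting as the ABR between area 0 and area 1:

! Hub (ABR between area 0 and the DMVPN area)
router ospf 1
 area 1 stub no-summary

! Spokes
router ospf 1
 area 1 stub

With the totally stubby variant (no-summary), the Hub automatically injects a default route into area 1, so the spokes only carry a default route plus their local prefixes. An NSSA would be used instead if a spoke site also needed to redistribute external routes into OSPF.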

But careful consideration needs to be given to the DMVPN routing design: OSPF works, but if the Spoke sites are not stub networks and instead provide transit, the routing design may become overly complex with no real benefit. Although I will be demonstrating OSPF across the DMVPN cloud for Phases 1 & 2 in this series, it’s generally better to use a distance vector protocol such as EIGRP as the routing protocol over your DMVPN cloud.

Lastly, you’re probably wondering why I haven’t actually mentioned anything about VPN in this post, and that’s fine. I will cover it towards the end of the series, as it’s the same configuration for all three phases and is relatively straightforward to configure.