MVPN is essentially a technique that enables the SP to carry multicast traffic from the customer. In this example, I will use a Multicast Distribution Tree (MDT) in the MPLS core network so that the customer's multicast traffic can pass through the MPLS core.
The basic idea of this technology is that the PE will use its own multicast group to send the traffic to the other PE routers, and that group represents exactly one customer. Traffic from the customer, with a multicast sender as the source and the customer's own multicast group as the destination, let's say 239.29.28.1, is encapsulated with an additional header, where the source is the loopback address of the PE and the destination is the multicast group that represents the customer, in this example 239.0.0.1.
This 239.0.0.1 is used only inside the MPLS core network and is not related to the customer; the customer may even use the same multicast address :). With this approach, the SP can forward the customer's multicast traffic across the MPLS core network, how cool is that ;)
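At the configuration level, the heart of this mechanism is a single command under the VRF that defines the default MDT group. As a preview of step 3 below, a minimal sketch:
vrf definition VPN_A
 address-family ipv4
  mdt default 239.0.0.1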
In general, there are 3 steps to deploy MDT:
- Configuring Multicast support on the MPLS Core Devices
- Configuring multicast support on the specific VRF
- Configuring BGP to use MDT
- The customer, under VRF VPN_A, wants to run multicast traffic using group 239.29.28.1, where, currently, the receiver is behind XR2.
- Using MDT, configure the MPLS cloud to support this customer requirement.
- Note that XR1/XR2 run regular IOS despite their names.
- The PEs (R2, R5, and XR1) are already configured for MPLS L3 VPN toward the customer, using OSPF as the IGP (a sketch of the assumed baseline follows this list).
- The CEs are already configured with basic PIM SM, where R1 is the RP advertised via BSR; the other devices, XR2 and SW1, will see the RP mapping once the configuration in the MPLS core is completed.
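For reference, a minimal sketch of the L3VPN baseline assumed on R2; the RD 100:1 is confirmed by the BGP output later in this post, while the route-target values and the OSPF process number are assumptions for illustration:
R2
!
vrf definition VPN_A
 rd 100:1
 address-family ipv4
  ! route-target values below are assumed for illustration
  route-target export 100:1
  route-target import 100:1
!
! assumed OSPF process number for the PE-CE IGP
router ospf 10 vrf VPN_A
!
router bgp 100
 address-family vpnv4
  neighbor 5.5.5.5 activate
  neighbor 5.5.5.5 send-community extended
!
end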
1. Configuring Multicast support on the MPLS Core Devices
This step is straightforward, assuming you are familiar with basic multicast configuration; R4 will act as the RP, advertised via BSR.
R2,R3,R4,R5,R6,XR1
!
ip multicast-routing
!
interface Loopback0
 ip pim sparse-mode
!
interface <each core-facing interface>
 ip pim sparse-mode
!
end
We should be able to see that each of them is PIM-adjacent with its neighbors:
R4#show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
20.2.4.2          FastEthernet0/0.24       00:46:46/00:01:42 v2    1 / S P G
20.3.4.3          FastEthernet0/0.34       00:46:43/00:01:44 v2    1 / S P G
20.4.6.6          FastEthernet0/0.46       00:45:38/00:01:24 v2    1 / DR S P G
20.4.5.5          FastEthernet0/0.45       00:46:11/00:01:16 v2    1 / DR S P G
Then we configure R4 to be both the candidate RP and the candidate BSR:
R4
!
ip pim bsr-candidate Loopback0 0
ip pim rp-candidate Loopback0
!
end
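Optionally, the BSR election itself can be checked on R4 with the standard IOS show commands (outputs not shown here):
R4#show ip pim bsr-router
R4#show ip pim rp mapping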
We can see from the MPLS core routers that they learn R4 as the RP:
XR1#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 4.4.4.4 (?), v2
    Info source: 4.4.4.4 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 00:46:01, expires: 00:01:44
2. Configuring multicast support on the specific VRF
Here we enable multicast routing for VRF VPN_A and run PIM SM on the customer-facing interface of each PE:
R2
!
ip multicast-routing vrf VPN_A
!
interface FastEthernet0/1
 vrf forwarding VPN_A
 ip pim sparse-mode
!
end

R5
!
ip multicast-routing vrf VPN_A
!
interface FastEthernet0/1
 vrf forwarding VPN_A
 ip pim sparse-mode
!
end

XR1
!
ip multicast-routing vrf VPN_A
!
interface POS2/0
 vrf forwarding VPN_A
 ip pim sparse-mode
!
end
Verification
R2#show ip pim vrf VPN_A neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.2.1          FastEthernet0/1          00:46:44/00:01:19 v2    1 / S P G

R5#show ip pim vrf VPN_A neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.5.9.9          FastEthernet0/1          00:46:50/00:01:37 v2    1 / DR S G

XR1#show ip pim vrf VPN_A neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.19.20.20       POS2/0                   00:47:03/00:01:24 v2    1 / S P G
At this point, neither XR2 nor SW1 has any information about the RP:
XR2#sh ip pim rp map
PIM Group-to-RP Mappings
SW1#sh ip pim rp map
PIM Group-to-RP Mappings
3. Configuring BGP to use MDT
Now this is the brain of the MVPN feature: the BGP IPv4 MDT address family lets each PE advertise its loopback and the MDT group it uses, so the PEs can discover each other and join the default MDT. Note that R2 also acts as a route reflector for this address family:
R2
!
vrf definition VPN_A
 !
 address-family ipv4
  mdt default 239.0.0.1
!
router bgp 100
 !
 address-family ipv4 mdt
  neighbor 5.5.5.5 activate
  neighbor 5.5.5.5 send-community extended
  neighbor 5.5.5.5 route-reflector-client
  neighbor 19.19.19.19 activate
  neighbor 19.19.19.19 send-community extended
  neighbor 19.19.19.19 route-reflector-client
 exit-address-family
!
end

R5
!
vrf definition VPN_A
 address-family ipv4
  mdt default 239.0.0.1
!
router bgp 100
 !
 address-family ipv4 mdt
  neighbor 2.2.2.2 activate
  neighbor 2.2.2.2 send-community extended
 exit-address-family
!
end

XR1
!
vrf definition VPN_A
 address-family ipv4
  mdt default 239.0.0.1
!
router bgp 100
 !
 address-family ipv4 mdt
  neighbor 2.2.2.2 activate
  neighbor 2.2.2.2 send-community extended
 exit-address-family
!
end
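As an aside, beyond the default MDT configured above, IOS also supports data MDTs, which move high-rate (S,G) streams off the shared default group onto dedicated groups; a minimal sketch on a PE, where the group range and threshold are assumptions for illustration:
vrf definition VPN_A
 address-family ipv4
  mdt default 239.0.0.1
  ! assumed data MDT range and threshold (in kbps)
  mdt data 239.0.1.0 0.0.0.255 threshold 1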
Now let's verify from the MPLS side. From the VRF point of view, each PE has a PIM adjacency with its CE router and with the other PE routers that participate in the MDT, reachable over the MDT tunnel interface:
R2#show ip pim vrf VPN_A neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.2.1          FastEthernet0/1          00:46:44/00:01:19 v2    1 / S P G
19.19.19.19       Tunnel2                  00:40:41/00:01:22 v2    1 / DR S P G
5.5.5.5           Tunnel2                  00:41:02/00:01:23 v2    1 / S P G

R5#show ip pim vrf VPN_A neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.5.9.9          FastEthernet0/1          00:53:17/00:01:34 v2    1 / DR S G
19.19.19.19       Tunnel1                  00:47:47/00:01:37 v2    1 / DR S P G
2.2.2.2           Tunnel1                  00:48:08/00:01:40 v2    1 / S P G

XR1#show ip pim vrf VPN_A neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.19.20.20       POS2/0                   00:47:03/00:01:24 v2    1 / S P G
2.2.2.2           Tunnel1                  00:41:32/00:01:32 v2    1 / S P G
5.5.5.5           Tunnel1                  00:42:01/00:01:32 v2    1 / S P G
The three PE routers should now see 239.0.0.1, the multicast group representing the VPN_A customer, in their global mroute tables: the (*,G) entry plus one (S,G) entry per PE. Note the Z flag, which marks the entries associated with the multicast tunnel.
XR1#show ip mroute
<…SNIP…>
(*, 239.0.0.1), 00:49:06/stopped, RP 4.4.4.4, flags: SJCFZ
  Incoming interface: FastEthernet0/0.619, RPF nbr 20.6.19.6
  Outgoing interface list:
    MVRF VPN_A, Forward/Sparse, 00:49:06/00:01:52

(5.5.5.5, 239.0.0.1), 00:49:05/00:01:29, flags: JTZ
  Incoming interface: FastEthernet0/0.519, RPF nbr 20.5.19.5
  Outgoing interface list:
    MVRF VPN_A, Forward/Sparse, 00:49:05/00:01:54

(2.2.2.2, 239.0.0.1), 00:49:06/00:01:29, flags: JTZ
  Incoming interface: FastEthernet0/0.619, RPF nbr 20.6.19.6
  Outgoing interface list:
    MVRF VPN_A, Forward/Sparse, 00:49:06/00:01:53

(19.19.19.19, 239.0.0.1), 00:49:06/00:03:26, flags: FT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0.619, Forward/Sparse, 00:49:06/00:02:33
    FastEthernet0/0.519, Forward/Sparse, 00:49:06/00:02:37

(*, 224.0.1.40), 01:03:53/00:02:06, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Loopback0, Forward/Sparse, 01:03:52/00:02:06
R2#show ip mroute | inc 239.0.0.1
(*, 239.0.0.1), 00:54:10/stopped, RP 4.4.4.4, flags: SJCFZ
(19.19.19.19, 239.0.0.1), 00:50:38/00:01:22, flags: JTZ
(5.5.5.5, 239.0.0.1), 00:50:59/00:02:49, flags: JTZ
(2.2.2.2, 239.0.0.1), 00:54:10/00:03:26, flags: FT
R5#show ip mroute | inc 239.0.0.1
(*, 239.0.0.1), 00:51:39/stopped, RP 4.4.4.4, flags: SJCFZ
(19.19.19.19, 239.0.0.1), 00:51:18/00:02:39, flags: JTZ
(2.2.2.2, 239.0.0.1), 00:51:39/00:02:26, flags: JTZ
(5.5.5.5, 239.0.0.1), 00:51:39/00:03:11, flags: FT
We can also check the MDT prefixes that BGP carries for this VRF:
R2#show bgp ipv4 mdt vrf VPN_A
BGP table version is 4, local router ID is 2.2.2.2
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale, m multipath, b backup-path, x best-external
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 100:1 (default for vrf VPN_A)
 *>  2.2.2.2/32       0.0.0.0                            0 ?
 *>i 5.5.5.5/32       5.5.5.5                  0    100  0 ?
 *>i 19.19.19.19/32   19.19.19.19              0    100  0 ?
And the MDT peers that PIM has learned from BGP:
R2#show ip pim mdt bgp
MDT (Route Distinguisher + IPv4)   Router ID    Next Hop
 MDT group 239.0.0.1
  100:1:5.5.5.5                    5.5.5.5      5.5.5.5
  100:1:19.19.19.19                19.19.19.19  19.19.19.19

R5#show ip pim mdt bgp
MDT (Route Distinguisher + IPv4)   Router ID    Next Hop
 MDT group 239.0.0.1
  100:1:2.2.2.2                    2.2.2.2      2.2.2.2
  100:1:19.19.19.19                2.2.2.2      19.19.19.19

XR1#show ip pim mdt
  * implies mdt is the default MDT
  MDT Group/Num   Interface   Source      VRF
* 239.0.0.1       Tunnel1     Loopback0   VPN_A

XR1#show ip pim mdt bgp
MDT (Route Distinguisher + IPv4)   Router ID    Next Hop
 MDT group 239.0.0.1
  100:1:2.2.2.2                    2.2.2.2      2.2.2.2
  100:1:5.5.5.5                    2.2.2.2      5.5.5.5
Now we can see that all the CE devices get the RP information from R1:
XR2#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 1.1.1.1 (?), v2
    Info source: 1.1.1.1 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 00:53:12, expires: 00:01:43

SW1#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 1.1.1.1 (?), v2
    Info source: 1.1.1.1 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 00:53:25, expires: 00:01:28
And now, if we send traffic from the CE side, it succeeds. First, let XR2 join the group:
XR2
!
interface FastEthernet0/0
 ip igmp join-group 239.29.28.1
!
end

Then ping the group from R7:
R7#ping 239.29.28.1 rep 5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 239.29.28.1, timeout is 2 seconds:
Reply to request 0 from 10.8.20.20, 468 ms
Reply to request 1 from 10.8.20.20, 316 ms
Reply to request 2 from 10.8.20.20, 272 ms
Reply to request 3 from 10.8.20.20, 264 ms
Reply to request 4 from 10.8.20.20, 340 ms
If we check the mroute table on XR2, we can see that the source of the traffic is 10.1.7.7 (R7):
XR2#show ip mroute 239.29.28.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.29.28.1), 00:52:56/stopped, RP 1.1.1.1, flags: SJCL
  Incoming interface: POS2/0, RPF nbr 10.19.20.19
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:52:56/00:02:59

(10.1.7.7, 239.29.28.1), 00:00:49/00:02:10, flags: LJT
  Incoming interface: POS2/0, RPF nbr 10.19.20.19
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:00:49/00:02:59
At the packet level, the traffic from R7 to 239.29.28.1 toward XR2 is carried across the MPLS core encapsulated in GRE, with the sending PE's loopback as the source and 239.0.0.1 as the destination.
Note that with MDT, the customer multicast packets do not use label switching as the transport mechanism in the MPLS core; they travel as native (GRE-encapsulated) multicast. Other options are now available to optimize multicast forwarding using MPLS, such as mLDP-based MDTs.
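For example, with an mLDP-based default MDT the core builds the tree with labels instead of PIM; a minimal sketch, assuming mLDP support in the core and R4's loopback as the MP2MP root:
vrf definition VPN_A
 address-family ipv4
  ! assumed root address (R4's loopback) for the MP2MP LSP
  mdt default mpls mldp 4.4.4.4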
It is indeed cool stuff ;)
I hope this write-up has been informative, and I'd like to thank you for reading ;)