Wednesday 1 January 2020

PIM Sparse Mode (Auto RP)




Here we are going to configure R2 and R3 as the candidate RPs and R4 as the mapping agent.
Note: all routers are preconfigured with IPv4 addresses, OSPF is the routing protocol, all routers are in area 0, and PIM sparse mode is running on all interfaces.
IP addressing: FastEthernet 155.1.XY.X/24
               Loopback     X.X.X.X/32
where X and Y are the router host numbers (X < Y).
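
For example, under this convention the link between R1 and R3 would be 155.1.13.0/24, with R1 = 155.1.13.1 and R3 = 155.1.13.3, and R3's loopback would be 3.3.3.3/32. A sketch of R3's addressing (the interface names here are illustrative):

R3(config)#interface FastEthernet0/0
R3(config-if)#ip address 155.1.13.3 255.255.255.0
R3(config)#interface Loopback0
R3(config-if)#ip address 3.3.3.3 255.255.255.255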

R2(config)#ip pim send-rp-announce loopback 0 scope 255

Once R2 is configured as a candidate RP, it will have an (S,G) mroute of (2.2.2.2, 224.0.1.39):
R2#show ip mroute 2.2.2.2 224.0.1.39 | be Inter
 Interface state: Interface, Next-Hop or VCD, State/Mode
(2.2.2.2, 224.0.1.39), 00:09:33/00:02:28, flags: PT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list: Null

Now let's configure R3 the same way (note: smr below is an alias for show ip mroute):
R3(config)#ip pim send-rp-announce loopback 0 scope 255

R3#smr 3.3.3.3 224.0.1.39 | be Int
 Interface state: Interface, Next-Hop or VCD, State/Mode

(3.3.3.3, 224.0.1.39), 00:03:27/00:02:33, flags: PT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list: Null

Let's move on to the mapping agent; R4 will play that role here. I am keeping the mapping agent one hop away from the candidate RPs, because directly connected routers would learn about the candidate RPs anyway. (The scope keyword sets the TTL of the Auto-RP packets.)
R4(config)#ip pim send-rp-discovery loopback 0 scope 255

Now we hit the classic chicken-and-egg problem. Since R2 and R3 are the candidate RPs, their announcements must reach the mapping agent, and every router must join 224.0.1.40 to learn the group-to-RP mappings, but in sparse mode joining a group requires already knowing the RP. To solve this we have three options:
  1. PIM sparse-dense mode
  2. Auto-RP listener
  3. PIM dense mode for 224.0.1.39/40 only, sparse mode for everything else
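
For reference, option 1 would be configured per interface instead of plain sparse mode (a sketch; the interface name is illustrative):

Rx(config)#interface FastEthernet0/0
Rx(config-if)#ip pim sparse-dense-mode

Option 3 is effectively what the Auto-RP listener feature does for you: it floods only 224.0.1.39 and 224.0.1.40 in dense mode.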

The Auto-RP listener feature should be enabled on all the routers:
Rx(config)#ip pim autorp listener

Let's check the mapping agent.

R4#sh ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)
Group(s) 224.0.0.0/4
  RP 3.3.3.3 (?), v2v1
    Info source: 3.3.3.3 (?), elected via Auto-RP
         Uptime: 00:03:04, expires: 00:02:53
  RP 2.2.2.2 (?), v2v1
    Info source: 2.2.2.2 (?), via Auto-RP
         Uptime: 00:03:00, expires: 00:01:59
R4#

The output above shows that R2 and R3 are both candidate RPs for the entire multicast range (224.0.0.0/4), and R3 has won the election because it has the higher IP address.
The '?' means the name lookup failed, so let's configure local host entries:

R4(config)#ip host R3 3.3.3.3
R4(config)#ip host R2 2.2.2.2

R4#sh ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)
Group(s) 224.0.0.0/4
  RP 3.3.3.3 (R3), v2v1
    Info source: 3.3.3.3 (R3), elected via Auto-RP
         Uptime: 00:09:38, expires: 00:02:20
  RP 2.2.2.2 (R2), v2v1
    Info source: 2.2.2.2 (R2), via Auto-RP
         Uptime: 00:09:33, expires: 00:02:28
R4#

Let's create a source for 224.1.1.1:

R7#ping
Protocol [ip]:
Target IP address: 224.1.1.1
Repeat count [1]: 10000000
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: yes
Interface [All]: loopback0
Time to live [255]:
Source address: 7.7.7.7
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 10000000, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
Packet sent with a source address of 7.7.7.7
..




R7 will generate the multicast feed, and when the first-hop router receives it, it checks its RP information. So let's check the RP info on R1:
R1#sh ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/4
  RP 3.3.3.3 (?), v2v1
    Info source: 4.4.4.4 (?), elected via Auto-RP
         Uptime: 00:26:28, expires: 00:02:15

R1's output shows that R4 (4.4.4.4) advertised the RP (3.3.3.3) via Auto-RP.
The (S,G) entry will be seen only on R1 and R3.

 
R3#smr 7.7.7.7 224.1.1.1 | be Inter
 Interface state: Interface, Next-Hop or VCD, State/Mode
(7.7.7.7, 224.1.1.1), 00:23:07/00:01:55, flags: P
  Incoming interface: Serial1/1, RPF nbr 155.1.13.1
  Outgoing interface list: Null

Since we don't have any receivers for group 224.1.1.1, the OIL is Null. Let's have a client join the 224.1.1.1 group:
R6(config)#int lo0
R6(config-if)#ip igmp join-group 224.1.1.1
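
The join can then be confirmed on R6 (output omitted here; 224.1.1.1 should be listed against Loopback0):

R6#show ip igmp groups 224.1.1.1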

Now we are getting replies from R6.


R6#mtrace 7.7.7.7
Type escape sequence to abort.
Mtrace from 7.7.7.7 to 155.1.146.6 via RPF
From source (?) to destination (?)
Querying full reverse path...
 0  155.1.146.6
-1  155.1.146.6 PIM  [7.7.7.7/32]
-2  155.1.146.4 PIM  [7.7.7.7/32]
-3  155.1.14.1 PIM  [7.7.7.7/32]
-4  155.1.17.7 PIM  [7.7.7.7/32]
-5  7.7.7.7

The trace shows the multicast feed flowing R7 -> R1 -> R4 -> R6, even though R3 is the RP. Let's check why R3 is not in the path:

R3#smr 224.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:33:09/00:03:08, RP 3.3.3.3, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial1/1, Forward/Sparse, 00:07:15/00:03:08

(7.7.7.7, 224.1.1.1), 00:33:09/00:01:53, flags: PT==========> PT is the reason
  Incoming interface: Serial1/1, RPF nbr 155.1.13.1
  Outgoing interface list: Null

R1 realizes it is a waste of bandwidth to forward the feed up to R3 only for R3 to send it straight back. So a special PIM Prune message with the RP bit set is sent up the shared tree, and traffic switches to the shortest-path tree; that is why R3 shows the P (pruned) flag.
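
As an aside, if you would rather keep traffic on the shared tree through the RP and prevent the SPT switchover, you can raise the SPT threshold on the last-hop router (a sketch; apply it where receivers attach):

R4(config)#ip pim spt-threshold infinity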

We can also restrict which groups each RP serves. To do this, create an access list matching the multicast groups and attach it with the group-list keyword:
R2(config)#access-list 1 permit 224.0.0.0 0.255.255.255
R2(config)#ip pim send-rp-announce loopback 0 scope 255 group-list 1

R3(config)#access-list 1 permit 239.0.0.0 0.255.255.255
R3(config)#ip pim send-rp-announce lo0 scope 255 group-list 1


R4#sh ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)
Group(s) 224.0.0.0/8
  RP 2.2.2.2 (R2), v2v1
    Info source: 2.2.2.2 (R2), elected via Auto-RP
         Uptime: 00:03:01, expires: 00:01:56
Group(s) 239.0.0.0/8
  RP 3.3.3.3 (R3), v2v1
    Info source: 3.3.3.3 (R3), elected via Auto-RP
         Uptime: 00:01:50, expires: 00:02:08

By default, the mapping agent has no control over which routers may become an RP or for which multicast groups. To provide this control, Cisco offers the RP-announce filter option.
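
The access lists shown below could have been created like this (a sketch using named standard ACLs):

R4(config)#ip access-list standard R2
R4(config-std-nacl)#permit 2.2.2.2
R4(config)#ip access-list standard R2_Group
R4(config-std-nacl)#permit 224.0.0.0 0.255.255.255
R4(config)#ip access-list standard R3
R4(config-std-nacl)#permit 3.3.3.3
R4(config)#ip access-list standard R3_Group
R4(config-std-nacl)#permit 239.0.0.0 0.255.255.255
R4(config)#ip access-list standard Any
R4(config-std-nacl)#deny any
R4(config)#ip access-list standard Any_Group
R4(config-std-nacl)#deny any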
R4#sh ip access-lists
Standard IP access list Any
    10 deny   any
Standard IP access list Any_Group
    10 deny   any
Standard IP access list R2
    10 permit 2.2.2.2
Standard IP access list R2_Group
    10 permit 224.0.0.0, wildcard bits 0.255.255.255
Standard IP access list R3
    10 permit 3.3.3.3
Standard IP access list R3_Group
    10 permit 239.0.0.0, wildcard bits 0.255.255.255

R4(config)#ip pim rp-announce-filter rp-list R3 group-list R3_Group
R4(config)#ip pim rp-announce-filter rp-list R2 group-list R2_Group
R4(config)#ip pim rp-announce-filter rp-list Any group-list Any_Group

To verify this, let's enable debugging.
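
The Auto-RP debug output below is produced with:

R4#debug ip pim auto-rp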
R4#
*May 31 01:09:13.083: Auto-RP(0): Received RP-announce, from 5.5.5.5, RP_cnt 1, ht 181
*May 31 01:09:13.083: Auto-RP(0): Filtered 224.0.0.0/4 for RP 5.5.5.5
*May 31 01:09:13.083: Auto-RP(0): Received RP-announce, from 5.5.5.5, RP_cnt 1, ht 181
*May 31 01:09:13.083: Auto-RP(0): Filtered 224.0.0.0/4 for RP 5.5.5.5
R4#
See, R5's announcements are being filtered. :)
R4#sh ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)

Group(s) 224.0.0.0/8
  RP 2.2.2.2 (R2), v2v1
    Info source: 2.2.2.2 (R2), elected via Auto-RP
         Uptime: 00:00:12, expires: 00:02:44
Group(s) 239.0.0.0/8
  RP 3.3.3.3 (R3), v2v1
    Info source: 3.3.3.3 (R3), elected via Auto-RP
         Uptime: 00:00:44, expires: 00:02:12
R4#


