I know of one mechanism called a route reflector;
it's a way to avoid an iBGP full mesh.
I have no experience constructing a network that uses iBGP,
so maybe this verification will help me understand how difficult an iBGP full-mesh network is.
The configurations used in this post are placed here.
We know iBGP messages are exchanged with a TTL greater than 1.
This indicates iBGP peers can be established via multiple intermediate nodes.
Next, routes received from an iBGP peer are generally not advertised to other iBGP peers. This behavior is called BGP split-horizon,
and it is used to prevent loops of route advertisements.
We should know one rule: "iBGP speakers are generally connected in a full mesh of iBGP peerings".
If you want to understand this in detail,
this document may help you make sense of it.
By the way,
using split-horizon to prevent loops of route advertisements is a popular technique,
and it's not limited to BGP.
For example, RIP applies a similar rule to achieve the same goal.
Let's consider what happens if the number of iBGP speakers increases rapidly.
If we regard the number of iBGP speakers as S,
we can represent the number of iBGP sessions as
S * (S - 1) / 2.
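To get a feel for that quadratic growth, here is a quick Python sketch (mine, not part of the original verification; the sample speaker counts are arbitrary):

```python
# Sketch: how the number of iBGP full-mesh sessions grows with the
# number of speakers S.

def full_mesh_sessions(s: int) -> int:
    """Sessions needed for a full mesh of s iBGP speakers: S * (S - 1) / 2."""
    return s * (s - 1) // 2

for s in (6, 10, 50, 100):
    print(f"{s} speakers -> {full_mesh_sessions(s)} sessions")
# 6 speakers already need 15 sessions; 100 speakers need 4950.
```

With the 6 speakers used later in this post that is already 15 sessions, and every new speaker must be added to every existing speaker's configuration.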
The difficulty of managing the iBGP configurations will appear in this situation,
and some performance issues may occur as well,
for example the calculation speed for each prefix in the RIBs.
So now we have a motivation to introduce the route-reflector mechanism.
For now, we'll start from the iBGP full-mesh configuration to understand its inconvenience.
I prepared a tinet specification for constructing a local verification environment.
Don't worry if you don't know how to use tinet;
I'll introduce the configurations on the FRR side.
In general, to establish sessions among routers,
network engineers use loopback interfaces in iBGP/OSPF/etc. networks,
because loopback interfaces never go down unless the network node itself goes down.
That's a good characteristic for keeping iBGP sessions stable.
In FRR's bgpd, we can configure it with
neighbor PEER update-source <IFNAME|ADDRESS>.
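As a minimal sketch of that option (the interface name `lo`, the addresses, and the AS number here are my assumptions, not taken from this post's topology):

```
interface lo
 ip address 192.0.2.1/32
!
router bgp 100
 ! peer with the neighbor's loopback and source the session from our own
 neighbor 192.0.2.2 remote-as 100
 neighbor 192.0.2.2 update-source lo
!
```

Note that reachability to the peer's loopback address has to be provided separately, typically by an IGP such as OSPF.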
This post won't discuss loopback interfaces further,
but I recommend you use loopback interfaces if you play with an iBGP network.
```
frr version 8.0
frr defaults traditional
hostname R0
log syslog informational
no ipv6 forwarding
service integrated-vtysh-config
!
interface net0
 ip address 10.0.1.100/24
!
interface net1
 ip address 10.0.2.1/24
!
router bgp 100
 bgp router-id 10.0.1.100
 neighbor 10.0.1.101 remote-as 100
 neighbor 10.0.1.102 remote-as 100
 neighbor 10.0.1.103 remote-as 100
 neighbor 10.0.1.104 remote-as 100
 neighbor 10.0.1.105 remote-as 100
 !
 address-family ipv4 unicast
  network 10.0.2.0/24
 exit-address-family
!
line vty
!
```
```
tinet upconf -c spec | sudo sh -x
tinet test -c spec | sudo sh -x
```

To construct the network, we execute commands such as the above.
The contents of each RIB were dumped into the repository.
```
$ docker exec -it R0 tcpdump -nni net0 -w /tinet/r0-net0.pcap # run in another shell
$ tinet test -c spec.yaml | sudo sh -x
```
Even though there are only 6 iBGP speakers in our network,
we felt the inconvenience of the full mesh.
So now we'll move on to the next step.
We'll configure the iBGP speakers as route-reflector clients,
and aggregate the neighbor configurations onto a route reflector.
First, we'll look at RR0's configuration.
```
interface net0
 ip address 10.0.1.99/24
!
router bgp 100
 bgp router-id 10.0.1.99
 neighbor 10.0.1.100 remote-as 100
 neighbor 10.0.1.101 remote-as 100
 neighbor 10.0.1.102 remote-as 100
 neighbor 10.0.1.103 remote-as 100
 neighbor 10.0.1.104 remote-as 100
 neighbor 10.0.1.105 remote-as 100
 !
 address-family ipv4 unicast
  neighbor 10.0.1.100 route-reflector-client
  neighbor 10.0.1.101 route-reflector-client
  neighbor 10.0.1.102 route-reflector-client
  neighbor 10.0.1.103 route-reflector-client
  neighbor 10.0.1.104 route-reflector-client
  neighbor 10.0.1.105 route-reflector-client
 exit-address-family
!
```
It seems like a star topology.
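A rough way to quantify the star topology's benefit (my sketch, assuming a single route reflector whose clients are all the other speakers): each client keeps one session to the reflector, so the session count grows linearly instead of quadratically.

```python
# Sketch: session count in a full mesh vs. a single-route-reflector star.
# Assumes every non-reflector speaker is a client of the one reflector.

def full_mesh_sessions(s: int) -> int:
    # Full mesh among s speakers: S * (S - 1) / 2.
    return s * (s - 1) // 2

def star_sessions(clients: int) -> int:
    # One session per client, all terminating on the route reflector.
    return clients

for clients in (6, 50):
    print(f"{clients} clients: mesh={full_mesh_sessions(clients)}, "
          f"star={star_sessions(clients)}")
# 6 clients: 15 sessions in a mesh vs. 6 in a star.
```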
R0's configuration becomes slimmer than before.
```
interface net0
 ip address 10.0.1.100/24
!
interface net1
 ip address 10.0.2.1/24
!
router bgp 100
 bgp router-id 10.0.1.100
 neighbor 10.0.1.99 remote-as 100
 !
 address-family ipv4 unicast
  network 10.0.2.0/24
 exit-address-family
!
```
Yay, we achieved the construction of a simple iBGP route reflector!
By the way,
do you remember the weakness of the iBGP full mesh?
Let's consider what happens if the number of iBGP speakers increases even further.
And what will happen if the route reflector fails?
We should recognize that the route reflector is a SPoF.
Let's try to construct route-reflector clusters to prevent these issues!
```
interface net0
 ip address 10.0.1.91/24
!
router bgp 100
 bgp router-id 10.0.1.91
 bgp cluster-id 10.0.1.90
 neighbor 10.0.1.92 remote-as 100
 neighbor 10.0.1.100 remote-as 100
 neighbor 10.0.1.101 remote-as 100
 neighbor 10.0.1.102 remote-as 100
 neighbor 10.0.1.103 remote-as 100
 neighbor 10.0.1.104 remote-as 100
 neighbor 10.0.1.105 remote-as 100
 !
 address-family ipv4 unicast
  neighbor 10.0.1.92 route-reflector-client
  neighbor 10.0.1.100 route-reflector-client
  neighbor 10.0.1.101 route-reflector-client
  neighbor 10.0.1.102 route-reflector-client
  neighbor 10.0.1.103 route-reflector-client
  neighbor 10.0.1.104 route-reflector-client
  neighbor 10.0.1.105 route-reflector-client
 exit-address-family
!
line vty
!
```
We'll check that the conditions below are satisfied.
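For example, the session state and learned routes can be inspected through vtysh. The commands are standard FRR show commands, but the container name R0 comes from this post's tinet spec, so treat this as a sketch to adapt to your environment:

```
docker exec -it R0 vtysh -c "show ip bgp summary"
docker exec -it R0 vtysh -c "show ip bgp"
docker exec -it R0 vtysh -c "show ip route bgp"
```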
Wow it works correctly!
We skipped the discussion of the
CLUSTER_LIST path attribute,
but this verification has succeeded for now.