[dev] MPLS use cases and Linux Kernel Interactions

Donald Sharp sharpd at cumulusnetworks.com
Thu Feb 1 08:20:51 EST 2018


Here is what we discussed (yes, this is formatted awfully).

Use cases we need to make sure the kernel handles


Kernel version is important: 4.9+ is needed for use case #1.

The issue discussed in yesterday's meeting about kernel versions seems
to trace back to tcpdump no longer working when we move an interface
into a vrf.  Investigating further.


Do we need to turn on MPLS for the loopback and VRFs?

net.mpls.conf.enp3s0.input=1

net.mpls.conf.lo.input=1

net.mpls.conf.DONNA.input=1

net.mpls.platform_labels=10000
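
These can be flipped at runtime with sysctl; a minimal sketch using the
same interface and VRF names as above (the key=value lines can also be
dropped into /etc/sysctl.d/ to persist across reboots):

# enable MPLS input on the physical interface, the loopback, and the
# VRF device, and size the kernel's MPLS label table
sysctl -w net.mpls.conf.enp3s0.input=1
sysctl -w net.mpls.conf.lo.input=1
sysctl -w net.mpls.conf.DONNA.input=1
sysctl -w net.mpls.platform_labels=10000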


1) Packet Flowing MPLS Cloud -> VRF

Assign a label to a VRF.  The current suggestion was to do the functional equivalent of:

ip -M route add <label> dev <VRF-DEVICE|lo>

Testing was done via this topology:
https://hastebin.com/haxipuzoqi.txt, installing the label `ip -M
route add 100 dev vrf-red` on rt1 and `ip route add 172.16.1.0/24
encap mpls 100 via 10.0.2.1` on rt3.  We were able to ping rt1-eth0
from rt3 but at first not ce1-eth0 from rt3 (which looked like a
problem); *tcpdump* was the culprit here, and this does work.
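
Putting both sides of that data path together (commands as used in the
test above), a minimal sketch:

# rt3: push label 100 onto traffic for the VRF prefix and send it
# toward rt1
ip route add 172.16.1.0/24 encap mpls 100 via 10.0.2.1

# rt1: pop label 100 and hand the packet to vrf-red, i.e. do the IP
# lookup in vrf-red's table
ip -M route add 100 dev vrf-red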

NEW TEST:
https://github.com/FRRouting/topotests/pull/60

Using bgp_l3vpn_to_bgp_vrf/test_bgp_l3vpn_to_bgp_vrf.py

(Uncomment line 38 of test_bgp_l3vpn_to_bgp_vrf.py to get the CLI.)

ce4 ip route add default via 192.168.2.1

ce1 ip route add default via 192.168.1.1

r1 ip route add 99.0.0.1 vrf cust1 dev r1-eth4 via 192.168.1.2

r4 ip route add 99.0.0.4 vrf cust2 dev r4-eth4 via 192.168.2.2

r1 ip -M route add 101 dev cust1

r4 ip -M route add 104 dev cust2


mininet>  r2 ip -M route show

16 proto 193

      nexthopvia inet 10.0.3.3  dev r2-eth2

      nexthopvia inet 10.0.2.3  dev r2-eth1

17 via inet 10.0.2.4 dev r2-eth1 proto 193

18 via inet 10.0.1.1 dev r2-eth0 proto 193


r4 ip route add 99.0.0.1/32 vrf cust2 nexthop encap mpls 18/101 via
10.0.2.2 dev r4-eth0

r1 ip route add 99.0.0.4/32 vrf cust1 nexthop encap mpls 17/104 via
10.0.1.2 dev r1-eth0


ce1 ping 99.0.0.4 -I 99.0.0.1 -c 1

ce4 ping 99.0.0.1 -I 99.0.0.4 -c 1


Currently we think Zebra will need a new ZAPI message to notify the
kernel of the VRF and label so it can be installed properly.

ZEBRA_VRF_LABEL

A ZAPI message that takes a vrf_id and a label; zebra receives this
data, stores it on the zvrf, and installs the data in the kernel via
netlink.
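
On the kernel side the end result of that message would presumably be
the same binding used in the test above, i.e. zebra programming the
netlink equivalent of (label and VRF name taken from the test, not
part of the proposed message itself):

# bind VPN label 101 to VRF cust1: pop the label and do the IP
# lookup in cust1's table
ip -M route add 101 dev cust1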

We can also have the use case where the VPN label results in directly
forwarding to a next hop (CE) without needing a route lookup in the
VRF. This should already work in the kernel as it is technically no
different from pop-and-forward of the LSP label.
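
A minimal sketch of that case, using a hypothetical per-CE label (102)
and the CE next hop from the test topology above:

# per-CE VPN label: pop and forward straight to the CE next hop,
# no route lookup in the VRF
ip -M route add 102 via inet 192.168.1.2 dev r1-eth4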

-> This new work is in https://github.com/FRRouting/frr/pull/1701;
please take a look at it.

2) Packet Flowing VRF -> MPLS Cloud

Install routes in the VRF with one or more labels using a device from
the global VRF as the nexthop

Zebra will need changes to install routes with a label stack
generated from multiple nexthops (Renato has some code here that
needs to be finished up).

LB: tested this data path in the kernel with two labels being pushed,
and it seemed to work as verified by tcpdump.
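
A minimal sketch of what such a route could look like in the kernel,
assuming the labels and first path from the test above plus a
hypothetical second path (10.0.4.2 on r1-eth3):

# ECMP route in VRF cust1 where each nexthop pushes its own
# two-label stack out a global-VRF interface
ip route add 99.0.0.4/32 vrf cust1 \
    nexthop encap mpls 17/104 via 10.0.1.2 dev r1-eth0 \
    nexthop encap mpls 18/104 via 10.0.4.2 dev r1-eth3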

-> Renato to work on this issue in the next week or so.  Donald is
willing to help out here.

3) VRF Route Leaking (both VRF <-> VRF and VRF <-> DEFAULT)

Install routes in VRF A with nexthops that are in VRF B

Zebra has this basic functionality now.

The Linux kernel works here, from my testing.
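
A minimal sketch of the kernel side, reusing the test topology above
(on r4, a route installed in cust1's table whose nexthop device
belongs to cust2):

# leak ce4's loopback from VRF cust2 into VRF cust1 by pointing the
# route at cust2's interface
ip route add 99.0.0.4/32 vrf cust1 via 192.168.2.2 dev r4-eth4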

4) GRE Tunnels (MPLS Tunnelling?)

Are these anything other than a nexthop from zebra’s perspective?  Nope!
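
A minimal sketch of that view, with made-up tunnel endpoints; once the
tunnel is up it is just another interface a route can point at:

# a GRE tunnel between two hypothetical endpoints
ip tunnel add gre1 mode gre local 10.0.1.1 remote 10.0.3.3 ttl 64
ip link set gre1 up
# from zebra's perspective this is just another nexthop device
ip route add 172.16.9.0/24 dev gre1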


5) SR

Pop the label and replace it with a stack

<should work>
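
A minimal sketch of the swap case, with made-up label values and a
next hop from the r2 output above:

# swap incoming label 100 for the stack 200/300 and forward
ip -M route add 100 as 200/300 via inet 10.0.3.3 dev r2-eth2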

Pop and look at the next label in the stack, then act according to
what that label tells us to do.

Add `ip -M route add <label to be popped> dev lo` (tested with 4.13.0-31)

http://packetpushers.net/yet-another-blog-about-segment-routing-part-1/

Can we route with labels from self-generated packets?

Yes.
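
For reference, a minimal sketch of how locally generated traffic picks
up labels, with a made-up prefix and the next hop used earlier on r1:

# locally originated packets to this prefix get label 300 pushed on
# the way out
ip route add 203.0.113.0/24 encap mpls 300 via 10.0.1.2 dev r1-eth0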

Can we support one route that points into two different label lookups,
so we duplicate the packet out two interfaces?

thanks!

donald
On Mon, Jan 29, 2018 at 7:11 PM, Donald Sharp
<sharpd at cumulusnetworks.com> wrote:
> All -
>
> We plan to have a MPLS use case and Linux Kernel Interactions meeting
> this Wed at 10am EDT.  If you would like to attend please let me know
> and I'll get you an invite.
>
> thanks!
>
> donald


