[dev] Dataplane API
sharpd at cumulusnetworks.com
Mon Jul 23 12:59:00 EDT 2018
You are actually thinking about something I've been contemplating for
a while now as well. We need to provide a better API to handle
non-local interfaces for remote data planes. I do think that there
is a lot of work here beyond just abstracting zebra's ability to
know about an interface, since most of the daemons think of
interfaces as being local. This is just work that needs to be done.
We are in active talks about the dataplane interface. Would you like
to be included?
On Mon, Jul 23, 2018 at 9:32 AM, Mark Stapp <mjs at voltanet.io> wrote:
> This is an interesting case - and I have to say that I have been focussed on
> the zebra-to-dataplane path, and haven't given much thought to the inbound
> path yet (and there are several kinds of info that flow _into_ zebra). a
> couple of things:
> 0. if the interfaces aren't in the kernel ... does that mean the daemons
> don't use them?
> 1. the general software model that I've been trying to work with is that
> there will be some fairly neutral data structure that represents the
> information that's being exchanged between the vendor-specific and frr sides
> of the system. for incoming notifications, those structures will be queued
> towards the main zebra processing context/pthread - the vendor plugin will
> be running in a different pthread/context. the transport and marshalling
> between your own system and frr is your own business - a plugin will need to
> deal with that, and it sounds like you're already on that path. at some
> point, we'll add the 'interface notification' type, migrate the existing
> code to use it, and make it available to plugins.
> 2. you used the phrase "interface configurations", and I just want to
> clarify that a bit. incoming configuration is going to be using the
> management api, imo - not some special zebra-specific path. there's work
> going on to provide a comprehensive "northbound" management interface. the
> existence of a device is sort of a special case, and that's the case that
> will need some specific handling.
> On Fri, Jul 20, 2018 at 5:04 PM, Jia Chen <jchen1 at paloaltonetworks.com>
>> In order to port FRR to our firewall/router platform, two things need to
>> differ from the current FRR paradigm. One is that we have a data plane
>> with hardware forwarding. The other is that we have an interface manager
>> for interface configuration and up/down status updates.
>> For the first one, we are able to use the FPM mechanism and successfully
>> redirect the FIB from Zebra to our data plane.
>> Zebra (FIB via FPM) --- routed --- data plane
>> For interfaces, we are developing the API from "routed (IFM)" to Zebra. I
>> would like to start the discussion here, so that our development can be
>> aligned with FRR and contributed back to FRR.
>> A. A TCP connection between Routed and Zebra that handles bidirectional
>> communication: requests (from Zebra) and replies (from Routed)
>> B. At Zebra start/restart, Zebra requests interface information from
>> Routed (interfaces, IP addresses, interface up/down status)
>> C. Routed's IFM (interface manager) replies to the requests with all
>> interface information requested (formatted the same as if it came from the kernel)
>> D. If Routed (IFM) restarts, then once the TCP connection is re-established,
>> Routed will resend all configured interfaces to Zebra
>> This is how we are planning to handle interfaces that are not in the kernel.
>> Please share your thoughts: any comments, questions, caveats, or
>> suggestions? I remember Donald investigated some of this a while back;
>> anything we should watch for?