Re: [dev] Dataplane API
I actually agree with both Vincent and Lou :)

The current paradigm is a kernel-based infrastructure and will be so for the foreseeable future. So if you are doing development *now* I would highly recommend working towards this paradigm. So Vincent is correct.

Having said that, we are looking towards more formally defining the DataPlane API so that a fully implemented dataplane API becomes possible — so that if someone wanted to implement non-kernel-based interfaces, they could. So Lou is correct too!

There is a lot of work here though, mainly infrastructure updates. I've currently (slowly) started trying to define this API (see https://github.com/FRRouting/frr/pull/2292). The current API line for the dataplane is fuzzy at best, and lots of assumptions are made about behavior and how data structures are created. This line must be wedged apart as a first step. If you are unsure where to start here, please ask and we'll have suggestions. We also have additional goals for zebra, such as true pthreads and nexthop-group route-entry indirection, to name a few.

Please note we are not doing this work specifically to allow a full dataplane outside of a kernel; that should fall out if I do the work correctly for what I am interested in. I am doing this work because I think it will allow me to do some work with route aggregation, as well as to pass data to the kernel more efficiently for route installs. I'm sure other people have their own reasons — that's fine, as long as we keep those in mind and work together.

thanks!

donald

On Tue, May 29, 2018 at 3:04 PM, Jay Chen <jchen1@paloaltonetworks.com> wrote:
A quick question about FRR interfaces: Zebra gets interface information/status from the kernel.

On our platform, it is almost impossible to put interfaces into the kernel (for historical reasons, people object to doing so). Is anyone else facing the same situation, and are there any suggestions for a workaround? Or does anything similar to FPM exist for interfaces, to bypass the kernel (going from our interface manager to Zebra instead)?
Thanks, Jay
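Donald's point above about wedging the dataplane API line apart can be sketched abstractly. Below is a minimal, purely illustrative Python sketch — none of these class or function names exist in FRR, which is written in C — of what a pluggable dataplane boundary could look like: zebra calls abstract operations, and either a kernel-backed or a non-kernel provider implements them.

```python
# Illustrative sketch only: these names are NOT real FRR APIs. The point
# is the shape of the boundary: zebra calls abstract operations, and a
# registered provider (kernel-backed or not) implements them.

class DataplaneProvider:
    """Abstract set of operations a dataplane must implement."""
    def route_install(self, prefix, nexthop):
        raise NotImplementedError
    def route_delete(self, prefix):
        raise NotImplementedError

class LoggingDataplane(DataplaneProvider):
    """A non-kernel provider: records installs instead of calling netlink."""
    def __init__(self):
        self.fib = {}
    def route_install(self, prefix, nexthop):
        self.fib[prefix] = nexthop
        return True
    def route_delete(self, prefix):
        return self.fib.pop(prefix, None) is not None

class Zebra:
    """Zebra only ever talks to the provider, never to the kernel directly."""
    def __init__(self, provider):
        self.dp = provider
    def install(self, prefix, nexthop):
        return self.dp.route_install(prefix, nexthop)

# Zebra's behavior is unchanged whichever provider is plugged in.
zebra = Zebra(LoggingDataplane())
zebra.install("192.0.2.0/24", "10.0.0.1")
```

The value of such a seam is that the kernel-specific code moves behind the provider interface, which is exactly the decoupling the PR above begins.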
Thanks Donald,

VNC/RFAPI can also be used today (with some environment-specific integration) to support controller-based models such as those defined in NVO3, where forwarders (NVEs) are completely disjoint from anything else on the controller. As has been discussed in the past, VNC/RFAPI was designed about 10 years ago with a BGP-centric optimization approach and included 2 distinct parts: L3VPN VRF management and NVA-style remote forwarder (NVE) control. We've agreed that the long-term right answer for FRR is to separate these two, where the first stays in BGP and the second moves under zebra using FPM or its successor (e.g., the PR Donald mentioned). The first part was recently completed and is in 5.0. The second part remains on our todo (wish) list, and I expect it will be facilitated by Donald's work.

Lou
_______________________________________________ dev mailing list dev@lists.frrouting.org https://lists.frrouting.org/listinfo/dev
Hi Lou, Donald,

I also have another use case in mind: ROADMs.

Indeed, optical devices could take advantage of FRR to support GMPLS (OSPF-TE + RSVP-TE). In this scenario, interfaces — i.e. optical ports — are also decoupled from the control plane. Optical ports that have not yet been fired up (from an optical point of view) do not forward any packets, so no routing protocol runs over them, but they must be announced at the Traffic Engineering level, at least for their availability. When a lightpath is activated through RSVP-TE, the signalling is received by the control plane through the management interface, but it activates a different optical interface.

Regards

Olivier
Olivier,

I think this is certainly workable.

As I think I mentioned before, LabN actually has some GMPLS-TE code (including path computation and RSVP-TE) that we'd love to open source, but we haven't found the time/support to strip out the non-source-compatible code and integrate it with FRR.

Lou
Lou,

I am adding Julien Meuric (PCE WG co-chair), who works with me on this subject, to the loop.

We are ready to collaborate with you to help open-source your code. How can we help?

Regards

Olivier
Olivier, Julien,

Great! Let's set up a time to talk. Drop me a line off-list to coordinate — please suggest some times next week.

All, drop me a line if you are interested in participating (I'll start a Doodle if enough are interested).

Lou
Continuing on this topic: for our POC of FRRouting, we have two issues that differ a little bit.

A. The FIB is pushed to a separate data plane instead of to the kernel.
B. Interfaces come from an "interface manager" rather than from the kernel.

For A, what we have done is add an FPM server. It works with the Zebra FPM client, and we can now push the FIB to our data plane.

For B, I would like to start a discussion here. If we were to use the same FPM server/client and send the interface info from the server to the zebra client, would that be doable as a quick workaround? Let us know if something for B has already been done, or if there are other suggestions. For any change we have to make to Zebra, we would like to upstream it.

Thanks, Jay
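Jay's FPM server approach relies on FPM's simple stream framing: each message is a small fixed header (version, message type, and total length in network byte order) followed by a netlink- or protobuf-encoded payload. Here is a hedged Python sketch of that framing; the field layout follows my reading of FRR's fpm/fpm.h, so verify the constants against the FRR version you actually run.

```python
import struct

# FPM stream framing sketch: a 4-byte header (version, msg_type, msg_len)
# in front of each payload. Layout per FRR's fpm/fpm.h as I recall it —
# treat these constants as assumptions to verify, not gospel.
FPM_VERSION = 1
FPM_MSG_TYPE_NETLINK = 1
HDR = struct.Struct("!BBH")   # version, type, total length (network order)

def fpm_encode(payload: bytes, msg_type: int = FPM_MSG_TYPE_NETLINK) -> bytes:
    """Prepend the FPM header; msg_len covers header plus payload."""
    return HDR.pack(FPM_VERSION, msg_type, HDR.size + len(payload)) + payload

def fpm_decode(buf: bytes):
    """Return (msg_type, payload, rest_of_buffer), or None if incomplete."""
    if len(buf) < HDR.size:
        return None
    version, msg_type, msg_len = HDR.unpack_from(buf)
    if len(buf) < msg_len:
        return None                  # wait for more bytes on the stream
    return msg_type, buf[HDR.size:msg_len], buf[msg_len:]

# Frame a dummy 12-byte payload, as a zebra-side FPM client would.
frame = fpm_encode(b"\x00" * 12)
```

Because TCP delivers a byte stream, a server like Jay's must buffer until `msg_len` bytes are available before handing the payload to its netlink parser — that is what the `None` returns model.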
Currently the DataPlane interface for interfaces(ha!) is not very well designed. There is some very very tight couplings between zebra and the various kernel interfaces that needs to be broken up as a first step before we can start down the path of really adding the ability to receive interface information from a remote data plane. I've been meaning to add to my zebra slide deck some more information along these lines. I'll make it a bit higher priority and see if I can add something that points in a direction as a place to start a discussion on how to do this. donald On Fri, Jun 22, 2018 at 11:05 AM, Jia Chen <jchen1@paloaltonetworks.com> wrote:
Continue on this topic. For our POC of FRRrouting, we have two issues to differ a little bit. A. FIB push to separate data plane instead to kernel B. Interface come from “interface manager” other than from kernel
For A, what we have done is to add a FPM server. It works with Zebra FPM client. Now we can push the FIB to our data plane.
For B, I would like to start a discussion here. If we are going to use the same FPM server/client, and send the interface info from server to zebra client. Will that be doable as a quick work around?
Let us know if something for B is already done or any other suggestions?
If any change we have to do to Zebra, we would like it to upstream the change
Thanks, Jay
On 5/30/18, 10:45 AM, "Lou Berger" <lberger@labn.net> wrote:
Olivier, Julien,
Great! Let's setup a time to talk. Drop me a line off list to coordinate time - please suggest some times next week.
All,
drop me a line if interested in participating (I'll start a doodle if enough are interested).
Lou
On 5/30/2018 12:57 PM, Olivier Dugeon wrote:
>
> Lou,
>
> Add Julien Meuric (PCE WG co-chair), who works with me on this subject,
> to the loop.
>
> We are ready to collaborate with you to facilitate open-sourcing your
> code. How can we help?
>
> Regards
>
> Olivier
>
> On 30/05/2018 17:48, Lou Berger wrote:
>> Olivier,
>>
>> On 5/30/2018 10:24 AM, Olivier Dugeon wrote:
>>> Hi Lou, Donald,
>>>
>>> I also have another use case in mind: ROADMs.
>>>
>>> Indeed, optical devices may take the opportunity of FRR to support
>>> GMPLS (OSPF-TE + RSVP-TE). In this scenario, the interfaces, i.e.
>>> optical ports, are also decoupled from the control plane. Optical
>>> ports that are not yet fired up (from an optical point of view) do
>>> not forward any packets, so no routing protocol runs on them, but
>>> they must be announced at the Traffic Engineering level, at least for
>>> their availability. When a light path is activated through RSVP-TE,
>>> the signalling is received by the control plane through the
>>> management interface, but it activates a different optical interface.
>> I think this is certainly workable.
>>
>> As I think I mentioned before, LabN actually has some GMPLS-TE code
>> (including path computation and RSVP-TE) we'd love to open source, but
>> haven't found the time/support to strip out the non-source-compatible
>> code and integrate it with FRR.
>>
>> Lou
>>
>>> Regards
>>>
>>> Olivier
>>>
>>> On 30/05/2018 15:29, Lou Berger wrote:
>>>> Thanks Donald,
>>>>
>>>> VNC/RFAPI can also be used today (with some environment-specific
>>>> integration) to support controller-based models such as those
>>>> defined in NVO3, where forwarders (NVEs) are completely disjoint
>>>> from anything else on the controller. As has been discussed in the
>>>> past, VNC/RFAPI was designed about 10 years ago with a BGP-centric
>>>> optimization approach and included two distinct parts: L3VPN VRF
>>>> management and NVA-style remote forwarder (NVE) control. We've
>>>> agreed that the long-term right answer for FRR is to separate these
>>>> two, where the first stays in BGP and the second moves under zebra
>>>> using FPM or its successor (e.g., the PR mentioned below). The first
>>>> part was recently completed and is in 5.0. The second part remains
>>>> on our todo (wish) list, and I expect it will be facilitated by
>>>> Donald's work.
>>>>
>>>> Lou
Hi Donald,

What is the road map for decoupling zebra from kernel interfaces? How much
work or effort is involved? The interface path in FRR is: kernel --- zebra
--- protocols, but we need a separate path (interface configurations and
up/down status updates): User X --- zebra --- protocols. We are willing to
put resources into helping out; if you can highlight what needs to be done,
that would be great. We can discuss more details offline too.

Thanks,
Jay

On 6/22/18, 8:41 AM, "Donald Sharp" <sharpd@cumulusnetworks.com> wrote:

Currently the DataPlane interface for interfaces (ha!) is not very well
designed. There are some very, very tight couplings between zebra and the
various kernel interfaces that need to be broken up as a first step before
we can start down the path of really adding the ability to receive
interface information from a remote data plane.

I've been meaning to add some more information along these lines to my
zebra slide deck. I'll make it a bit higher priority and see if I can add
something that points in a direction as a place to start a discussion on
how to do this.

donald

On Fri, Jun 22, 2018 at 11:05 AM, Jia Chen <jchen1@paloaltonetworks.com> wrote:
> Continuing on this topic: for our POC of FRRouting, we have two issues
> that differ a little bit from the current paradigm.
> A. FIB pushed to a separate data plane instead of to the kernel
> B. Interfaces coming from an "interface manager" rather than from the kernel
>
> For A, what we have done is to add an FPM server. It works with the zebra
> FPM client, so we can now push the FIB to our data plane.
>
> For B, I would like to start a discussion here. If we were to use the same
> FPM server/client and send the interface info from the server to the zebra
> client, would that be doable as a quick workaround?
>
> Let us know if something for B is already done, or if there are any other
> suggestions.
>
> Whatever changes we have to make to zebra, we would like to upstream them.
>
> Thanks,
> Jay
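For reference, the FPM channel Jia describes frames each message with a small fixed header (version, message type, and total length in network byte order, as declared in FRR's fpm/fpm.h), the payload typically being a raw netlink message. A minimal sketch of that framing - the function names here are ours, not FRR's:

```python
import struct

FPM_VERSION = 1
FPM_MSG_TYPE_NETLINK = 1  # payload is a raw netlink message

def fpm_frame(payload: bytes, msg_type: int = FPM_MSG_TYPE_NETLINK) -> bytes:
    """Wrap one payload in the 4-byte FPM header: version (1 byte),
    message type (1 byte), total length incl. header (2 bytes, big-endian)."""
    return struct.pack("!BBH", FPM_VERSION, msg_type, 4 + len(payload)) + payload

def fpm_unframe(buf: bytes):
    """Split one complete FPM message off the front of a receive buffer.
    Returns (msg_type, payload, remainder), or None if buf is incomplete."""
    if len(buf) < 4:
        return None
    version, msg_type, msg_len = struct.unpack("!BBH", buf[:4])
    if version != FPM_VERSION or len(buf) < msg_len:
        return None
    return msg_type, buf[4:msg_len], buf[msg_len:]
```

An FPM server like the one Jia mentions would loop on `fpm_unframe` over its TCP receive buffer and hand each netlink payload to the data plane.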
In order to port FRR to our firewall/router platform, two things need to
differ from the current FRR paradigm. One is that we have a data plane and
hardware forwarding. The other is that we have an interface manager for
interface configurations and their up/down status updates.

For the first one, we are able to use the FPM mechanism and successfully
redirect the FIB from zebra to our data plane:

Zebra (FIB via FPM) --- routed --- data plane

For interfaces, we are developing the API from "routed (IFM)" to zebra. I
would like to start the discussion here, so that our development can be
aligned with FRR and contributed back to FRR.

A. A TCP connection between routed and zebra that handles bidirectional
communication: requests (from zebra) and replies (from routed)

B. At zebra start/restart, zebra requests interface information from routed
(interfaces, IP addresses, interface up/down status)

C. The routed IFM (interface manager) replies to the requests with all the
interface information requested (formatted the same as if it came from the
kernel)

D. If routed (IFM) restarts, then once the TCP connection is
re-established, routed resends all configured interfaces to zebra

This is how we are planning to handle the "interfaces are not in the
kernel" case.

Please share your thoughts - any comments, questions, caveats, or
suggestions? I remember Donald investigated some of this a while back;
anything we should watch for?

Thanks,
Jay
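Steps A-D above amount to a small request/reply-plus-full-dump protocol. As a discussion aid only, here is one hypothetical encoding over the proposed TCP connection - the message names and the length-prefixed JSON framing are invented for this sketch and exist in neither FRR nor routed:

```python
import json

# Hypothetical message types for the routed(IFM) <-> zebra channel.
IF_REQUEST_ALL = "if-request-all"  # B: zebra asks for a full interface dump
IF_REPLY = "if-reply"              # C: IFM answers with one interface record

def encode(msg_type, body):
    """Length-prefixed JSON framing for the TCP connection (step A)."""
    payload = json.dumps({"type": msg_type, "body": body}).encode()
    return len(payload).to_bytes(4, "big") + payload

def decode(buf):
    """Parse one message off the front of a buffer; returns (msg, remainder)."""
    n = int.from_bytes(buf[:4], "big")
    return json.loads(buf[4:4 + n]), buf[4 + n:]

def full_dump(interfaces):
    """Steps C/D: resend every configured interface, e.g. on zebra restart
    or after the IFM reconnects, shaped as the kernel path would report it."""
    return b"".join(encode(IF_REPLY, ifp) for ifp in interfaces)
```

Whatever the real encoding ends up being, the restart semantics in B and D (full dump on either side reconnecting) are the part worth agreeing on early.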
This is an interesting case - and I have to say that I have been focussed
on the zebra-to-dataplane path, and haven't given much thought to the
inbound path yet (and there are several kinds of info that flow _into_
zebra). A couple of things:

0. If the interfaces aren't in the kernel ... does that mean the daemons
don't use them?

1. The general software model I've been trying to work with is that there
will be some fairly neutral data structure that represents the information
being exchanged between the vendor-specific and FRR sides of the system.
For incoming notifications, those structures will be queued towards the
main zebra processing context/pthread - the vendor plugin will be running
in a different pthread/context. The transport and marshalling between your
own system and FRR is your own business - a plugin will need to deal with
that, and it sounds like you're already on that path. At some point, we'll
add the 'interface notification' type, migrate the existing code to use it,
and make it available to plugins.

2. You used the phrase "interface configurations", and I just want to
clarify that a bit. Incoming configuration is going to use the management
API, imo - not some special zebra-specific path. There's work going on to
provide a comprehensive "northbound" management interface. The existence of
a device is sort of a special case, and that's the case that will need some
specific handling.

Cheers,
Mark
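Mark's point 1 - a neutral record produced by a vendor plugin in its own pthread and queued toward zebra's main processing context - can be sketched as follows. The event fields, function names, and sentinel-based shutdown are illustrative assumptions for this sketch, not FRR code:

```python
import queue
import threading
from dataclasses import dataclass

# A neutral record for an incoming interface notification; the field set
# is illustrative only, not FRR's actual dataplane context structure.
@dataclass
class IfaceEvent:
    ifname: str
    ifindex: int
    up: bool

events = queue.Queue()  # handoff from the vendor plugin pthread to zebra's main context

def vendor_plugin(raw_updates):
    """Runs in its own pthread: owns the transport/marshalling to the vendor
    system and queues neutral events toward the main processing context."""
    for name, idx, up in raw_updates:
        events.put(IfaceEvent(name, idx, up))
    events.put(None)  # sentinel so this sketch can terminate

def zebra_main_loop(apply):
    """Main zebra processing context: drains and applies neutral events."""
    while (ev := events.get()) is not None:
        apply(ev)

seen = []
t = threading.Thread(target=vendor_plugin,
                     args=([("eth0", 2, True), ("eth1", 3, False)],))
t.start()
zebra_main_loop(seen.append)
t.join()
```

The key property is the one Mark names: the vendor side only ever touches the queue, so the marshalling details never leak into zebra's main context.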
Jia -

You are actually thinking about something I've been contemplating for a
while now as well. We need to provide a better API to handle non-local
interfaces for remote data planes. I do think there is a lot of work here
beyond just abstracting zebra's ability to know about an interface, since
most of the daemons think about interfaces as being local. This is just
work that needs to be done.

We are in active talks about the dataplane interface. Would you like to be
included?

donald
Hi Mark,

Sorry for the late reply. I definitely agree with your point 2, that
interface configuration should go through the management API. My reply to
your first question is inline below.

Thank you,
Jay

From: Mark Stapp <mjs@voltanet.io>
Date: Monday, July 23, 2018 at 6:32 AM
To: Jia Chen <jchen1@paloaltonetworks.com>
Cc: Lou Berger <lberger@labn.net>, Donald Sharp <sharpd@cumulusnetworks.com>, Renato Westphal <renato@opensourcerouting.org>, JP Senior <jp@apstra.com>, FRRouting-Dev <dev@lists.frrouting.org>
Subject: Re: [dev] Dataplane API

> 0. If the interfaces aren't in the kernel ... does that mean the daemons
> don't use them?

The protocols still use them. Only the source of interface information
differs: instead of "kernel ====> Zebra ====> OSPF" it looks like
"x-user-daemon ====> Zebra ====> OSPF".
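Jay's answer above describes zebra acting as a switchboard: protocol daemons consume the same interface state from zebra whether it originated in the kernel or in an external interface manager. A minimal sketch of that fan-out - every name here (`ifm_source`, `zebra_dispatch`, the record fields) is invented for illustration and is not FRR's ZAPI:

```python
# Two possible origins producing the same neutral record shape.
def kernel_source():
    yield {"ifname": "eth0", "up": True}

def ifm_source():
    # the x-user-daemon path: same record shape, different origin
    yield {"ifname": "ethernet1/1", "up": True}

def zebra_dispatch(source, clients):
    """zebra fans interface updates out to its protocol-daemon clients;
    the clients cannot tell (and need not care) where the update came from."""
    for update in source():
        for client in clients:
            client(update)

ospf_seen, bgp_seen = [], []
zebra_dispatch(ifm_source, [ospf_seen.append, bgp_seen.append])
```

Swapping `ifm_source` for `kernel_source` changes nothing downstream, which is the property Jay's platform needs from the decoupling work.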
participants (5)
-
Donald Sharp -
Jia Chen -
Lou Berger -
Mark Stapp -
Olivier Dugeon