[dev] Multiple FRR installs and the damage done.

Rafael Zalamena rzalamena at opensourcerouting.org
Wed Jul 18 12:17:21 EDT 2018


Hi Donald,

On Wed, Jul 18, 2018 at 6:16 AM, Donald Sharp
<sharpd at cumulusnetworks.com> wrote:
> All -
>
> Just wanted to document a little bit of a misconfiguration that I
> frequently inflict on myself and have just helped another person
> resolve.  When you run one of the daemons, or run vtysh, and get this
> type of message:
>
> root at dev:~/frr# vtysh
> vtysh: Symbol `frr_vtydir' has different size in shared object,
> consider re-linking
> Exiting: failed to connect to any daemons.
> root at dev:~/frr#

I've run into the same thing when I accidentally installed FRR in
different places and the order of the PATH variable pointed to the
wrong vtysh.
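The shell runs the first match it finds in PATH, so two installs mean the answer depends on ordering. A minimal sketch of how to spot this, using made-up dummy binaries in place of real installs (bash; `type -a` lists every match, not just the winner):

```shell
# Simulate two install prefixes that each ship their own "vtysh"
tmp=$(mktemp -d)
mkdir -p "$tmp/pkg/bin" "$tmp/src/bin"
printf '#!/bin/sh\necho packaged\n'    > "$tmp/pkg/bin/vtysh"
printf '#!/bin/sh\necho from-source\n' > "$tmp/src/bin/vtysh"
chmod +x "$tmp/pkg/bin/vtysh" "$tmp/src/bin/vtysh"

# The first PATH entry wins; `type -a` shows every copy on PATH,
# which is an easy way to notice a duplicate install
PATH="$tmp/src/bin:$tmp/pkg/bin:$PATH"
command -v vtysh   # first match only
type -a vtysh      # lists both copies
```

On a real system, `type -a vtysh` (or `type -a zebra`) showing more than one path is the telltale sign of the double-install problem.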

>
> This means that you have managed to install FRR into 2 different
> locations.  How, you ask?  I've seen it happen in two ways:
>
> 1) I've run `./configure ..`, `make` and `make install` twice, with
> the first and second configure lines being different.  I am now fairly
> paranoid about this issue and double-check my configure line.
>
> 2) I've installed FRR from packaging (a .deb or .rpm) and then run a
> `./configure ....` line that installs FRR into a different spot.
>
> How to clean this mess up?
>
> In the undesired set of paths you need to clean up the FRR install;
> this includes the lib directories.  I do this with a manual `find /path
> -name ... | xargs rm -rf`.  I'm sure someone more clever than myself
> knows how to do this with `make` but I have not experimented with
> this.
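A hedged sketch of that cleanup (the prefix and file names below are stand-ins created just for the demonstration; on a real system, always `-print` first and only pipe to `rm` once the list looks right):

```shell
# Stand-in for a stale install prefix (e.g. /usr/local in real life)
stale=$(mktemp -d)
mkdir -p "$stale/sbin" "$stale/lib/frr"
touch "$stale/sbin/zebra" "$stale/sbin/bgpd" "$stale/lib/frr/libfrr.so.0"

# Dry run: list the FRR artifacts that would be removed
find "$stale" \( -name 'zebra' -o -name 'bgpd' -o -name 'vtysh' \
    -o -name 'libfrr*' \) -print

# Once the listed paths are verified, the same find can feed the removal:
# find "$stale" \( ... \) -print0 | xargs -0 rm -rf
```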

For quick tests I normally don't install binaries from source into my
system anymore; I just run them inside the source tree.

For example:
zebra/zebra -d &
bfdd/bfdd -d &

The only downside is that it makes it harder to run `gdb`, since the
file we are running is actually a shell script that sets up the proper
environment (e.g. the library location in the source folder).
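One way around that, assuming the standard libtool wrapper scripts an autotools build tree like FRR's produces: libtool has an execute mode that applies the wrapper's environment and then runs the real binary under the debugger.

```shell
# From the top of the build tree: run the real zebra binary under gdb,
# with the wrapper's library paths set up for you
./libtool --mode=execute gdb zebra/zebra

# Alternatively, attach to an already-running daemon:
gdb -p "$(pidof zebra)"
```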

>
> donald
>

One thing I'd suggest is to start using Docker to install and run
FRR in a controlled environment (the worst-case scenario is stopping
the container and starting a new one with a clean root filesystem). I've
been doing this myself with some success, even on Mac OS X.

To avoid the source code editing/recompiling problem I'm using the
VOLUME feature, so all changes made on the "host" machine are
reflected in the container. The same applies to the topotests source
code.

If you are interested in this, feel free to try it:
https://github.com/opensourcerouting/topotests/tree/docker/docker

Some explanation for Docker newcomers:
* Dockerfile: the Docker recipe for "building" the guest OS. It is
basically an Ubuntu 18.04 with a few packages to build/run FRR and
topotests;
* docker.sh: a shell script with boilerplate shell code to build the
Docker image;
* entrypoint.sh: the "init" script that the container runs when
invoked: it configures openvswitch (to run topotests) and
builds/installs FRR;
* topotests_run.sh: the boilerplate shell code to start the container
with all the required VOLUME options to share the source code between
host <-> guest, plus some X tricks to be able to run `mininet` xterms
and redirect them to the host display.
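For readers who want the gist without opening the scripts: the bind-mount idea boils down to something like the following (the image name and container paths here are illustrative, not the script's actual values).

```shell
# Illustrative sketch: share the host source trees with the container
# so edits made on the host show up inside it immediately
docker run -it --rm --privileged \
    -v "$HOME/src/frr:/root/frr" \
    -v "$HOME/src/topotests:/root/topotests" \
    my-topotests-image
```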

Step by step guide to use it:

1. Clone the topotests docker branch:

git clone --branch docker https://github.com/opensourcerouting/topotests.git


2. Create the docker image:

cd topotests/docker
bash docker.sh


3. Tweak topotests_run.sh to your needs: basically, change the
environment variables to match your setup.

FRR_DIR=$HOME/src/frr
TOPOTESTS_DIR=$HOME/src/topotests

These are the defaults.


4. Run the guest container bash (by default it spawns a tmux instance
in the guest container, so you can run as many bash shells as you
want):

bash topotests_run.sh

You'll be greeted with a help message with common commands available
inside the guest container.


5. Do whatever you do when developing / running FRR.

Regards,
Rafael
