
We are developing a benchmarking framework in C++, with Make and CMake as build tools. The aim of the framework is to allow others who build algorithms to perform head-to-head comparisons against prior work. As such, we have integrated a number of state-of-the-art algorithms which users can clone and test within the framework. As many of the algorithms have standard dependencies (e.g. Eigen, OpenCV, etc.), our current solution is to clone all dependencies at specific revisions into a deps/ directory and ensure that all the different algorithms link against these dependencies.
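
Concretely, the top-level CMakeLists.txt currently does something equivalent to the following simplified sketch (the dep names are placeholders for the real dependencies, and the actual wiring is a bit more involved):

    # Top-level CMakeLists.txt (simplified sketch; dep names are placeholders)
    cmake_minimum_required(VERSION 3.14)
    project(benchmark_framework CXX)

    # All shared dependencies are cloned at pinned revisions into deps/
    # and built once for the whole framework.
    add_subdirectory(deps/dep1)
    add_subdirectory(deps/dep2)

    add_subdirectory(framework)
    # Every algorithm links against the single set of targets built from deps/.
    add_subdirectory(algorithms)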

We are finding that this approach makes it increasingly difficult to maintain something that can easily be built. As more algorithms are integrated into the framework, it becomes virtually impossible to find revisions that will work across all algorithms. Furthermore, it's unlikely that all users will be interested in building all algorithms currently integrated.

One solution would be to clone each algorithm's dependencies separately and build each algorithm separately. There are three issues I can find here:

  • Some of the common dependencies, such as OpenCV, are very large and take a long time to clone and compile.
  • In my experience, having multiple versions of the same library around is bound to end with CMake picking the wrong one for some reason, and this is often hard to constrain; one frequently ends up with dependency conflicts across different projects, the system installation, and so on (a partial mitigation is sketched after this list).
  • Wherever possible, it's preferable to make sure that the same versions of the dependencies are used across algorithms: for a fair head-to-head comparison, the probability that a difference in performance between algorithm1 and algorithm2 is due to different versions of their dependencies should be minimized.
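
For the second point, the only partial mitigation I know of is to constrain where find_package is allowed to look, roughly like this (the paths and version numbers are illustrative, assuming each dependency has been built/installed under deps/build/):

    # Resolve OpenCV and Eigen only from our own builds under deps/build/,
    # so a system-wide copy can never be picked up by accident.
    find_package(OpenCV 3.4 REQUIRED
                 PATHS "${CMAKE_SOURCE_DIR}/deps/build/opencv"
                 NO_DEFAULT_PATH)
    find_package(Eigen3 3.3 REQUIRED
                 PATHS "${CMAKE_SOURCE_DIR}/deps/build/eigen3"
                 NO_DEFAULT_PATH)

    # algo1 is a placeholder target name for one of the integrated algorithms.
    target_link_libraries(algo1 PRIVATE ${OpenCV_LIBS} Eigen3::Eigen)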

The directory tree is included below; perhaps it gives a better idea of the overall structure:

├── algorithms
│  ├── README.md
│  ├── CMakeLists.txt
│  ├── algo1/
│  └── algo2/
├── build
│  ├── algorithms/
│  ├── framework/
│  └── bin/
├── datasets
│  ├── README.md
│  ├── dataset1/
│  └── dataset2/
├── deps
│  ├── dep1/
│  ├── dep2/
│  └── build/
├── framework
│  ├── CMakeLists.txt
│  ├── include/
│  └── src/
├── LICENSE
├── CMakeLists.txt
└── README.md

What are some common ways of mitigating such issues with C++ and CMake? Are there any generally accepted guidelines? Python commonly uses virtual environments or package managers that create a separate environment per project; that points towards cloning the dependencies along with each project, and it also assumes the projects are unrelated. Are there any ways to mitigate or reduce the impact of the problems I noted?
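
For concreteness, the "clone per project" variant I have in mind would look roughly like this in each algorithm's CMakeLists.txt using FetchContent (the repository URL, tag, source file, and target names are illustrative, and this assumes the dependency's own CMake exports the Eigen3::Eigen target), at the cost of the duplicated builds and fairness concerns listed above:

    # algorithms/algo1/CMakeLists.txt (sketch, requires CMake >= 3.14)
    include(FetchContent)

    # Pin the dependency per algorithm instead of globally in deps/.
    FetchContent_Declare(eigen
      GIT_REPOSITORY https://gitlab.com/libeigen/eigen.git
      GIT_TAG        3.4.0)
    FetchContent_MakeAvailable(eigen)

    add_executable(algo1 src/main.cpp)
    target_link_libraries(algo1 PRIVATE Eigen3::Eigen)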

asked Jul 25, 2019 at 14:38
  • What, to be specific, is your question? Commented Jul 25, 2019 at 17:59
  • I think using a Package Manager is your best bet. Here is a good treatment of the possible options: reddit.com/r/cpp/comments/8t0ufu/… Commented Jul 25, 2019 at 18:33
