Packaging in CMake: Modern Techniques for Writing and Distributing C++ Libraries with CMake and Vcpkg
I spent 10 weeks this summer developing a simulation software library in C++, and at the end I thought it was more or less done. The physics worked, the API was clean, and we’d finished ironing out the bugs. Little did I know that the hardest part would be packaging - coming up with a system for compiling that library on any computer, with any operating system and any file structure. We’d been using CMake from the beginning, but despite its reputation as “the build system of the future”, the process is still painful and full of unintuitive requirements and undocumented behavior. In the end, after another 3 weeks of work, we ended up with a build system I’m relatively proud of. It can link statically or dynamically on all 3 major operating systems, and, perhaps most dauntingly of all, links with the Nvidia CUDA library. Now let’s see how we got there:
Starting Out
At the beginning, my goal was simple. I had a library which, with some effort, I could compile on my computer with a variety of hacks to link with graphics dependencies such as OpenGL, GLFW, GLM, and the NVIDIA GPU library CUDA. After considering the possibility of redistributing many of these libraries ourselves, I settled on the new package manager Vcpkg, developed by Microsoft and recently released with cross-platform support. Vcpkg works almost as seamlessly as something like Python’s pip package manager. You can download the source from their GitHub, build it, and then install new packages as easily as
./vcpkg install glfw3 glm glew
These are all stored in a self-contained environment in a location of your choosing, and can be linked into any CMake build by passing -DCMAKE_TOOLCHAIN_FILE=[vcpkg root]/scripts/buildsystems/vcpkg.cmake to the cmake command. Once these are installed, CMake makes linking the library as easy as adding
find_package(glfw3 CONFIG REQUIRED)
target_link_libraries(Loch PRIVATE glfw3)
Commands like this make up the majority of our CMakeLists.txt file.
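For reference, a typical configure-and-build invocation against a vcpkg installation looks something like the following (a sketch; the project name and [vcpkg root] path are placeholders for your own setup):
cd my_project && mkdir build && cd build
cmake .. -DCMAKE_TOOLCHAIN_FILE=[vcpkg root]/scripts/buildsystems/vcpkg.cmake
cmake --build .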
Working with CUDA
Nvidia has done an excellent job integrating CUDA into CMake, but especially when building libraries, there are many quirks to contend with. For one thing, to split device code across separate header (.h) and implementation (.cpp or .cu) files, the property
set_target_properties(my_library PROPERTIES CUDA_SEPARABLE_COMPILATION ON) # needed for library compilation
must be set in the CMakeLists.txt file. Also, when building a library, CUDA seems to require
set_target_properties(my_library PROPERTIES CUDA_RESOLVE_DEVICE_SYMBOLS ON)
to be set, producing a colorful set of linking errors otherwise. Sometimes, even this is not enough, and I was forced to set this as a global property at the head of the file, using
set(CUDA_SEPARABLE_COMPILATION ON)
set_property(GLOBAL PROPERTY CUDA_SEPARABLE_COMPILATION ON)
for no apparent reason. Otherwise, integration is easy. In modern CMake, CUDA can be included simply by specifying CUDA as a language in the initial project setup, i.e.
project(my_library LANGUAGES CXX CUDA)
at the beginning of the project, and then finding and linking the CUDA driver and runtime libraries:
find_package(CUDA REQUIRED) # find and include CUDA
if (CUDA_FOUND)
    message(STATUS "CUDA FOUND")
    target_include_directories(my_library PUBLIC ${CUDA_INCLUDE_DIRS})
    target_link_libraries(my_library PRIVATE ${CUDA_LIBRARIES} cuda cudart)
else()
    message(STATUS "CUDA NOT FOUND")
endif()
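Putting these pieces together, a minimal CMakeLists.txt for a CUDA library looks roughly like this (a sketch only; the my_library target and the source file names are placeholders, not our actual project):
cmake_minimum_required(VERSION 3.10)
project(my_library LANGUAGES CXX CUDA) # enables the CUDA toolchain
add_library(my_library src/file1.cu src/file2.cu) # placeholder sources
# the CUDA-specific properties discussed above
set_target_properties(my_library PROPERTIES
    CUDA_SEPARABLE_COMPILATION ON
    CUDA_RESOLVE_DEVICE_SYMBOLS ON)
# locate the CUDA headers and runtime libraries
find_package(CUDA REQUIRED)
target_include_directories(my_library PUBLIC ${CUDA_INCLUDE_DIRS})
target_link_libraries(my_library PRIVATE ${CUDA_LIBRARIES} cuda cudart)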
Handling Static and Shared Libraries
Especially when building for vcpkg, the ability to link as either a shared or a static library is important, and CMake and CUDA require a few unintuitive settings. We enable a CMake flag, LIB_SHARED_BUILD, in the vcpkg portfile, and then include this if statement in the CMakeLists.txt:
if (LIB_SHARED_BUILD)
    set(BUILD_SHARED_LIBS ON)
    set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS ON)
    set(CMAKE_POSITION_INDEPENDENT_CODE ON) # variable form of the POSITION_INDEPENDENT_CODE target property
endif()
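For a local build outside of vcpkg, the flag can simply be passed on the command line, along these lines (a sketch; the build directory name is arbitrary):
mkdir build && cd build
cmake .. -DLIB_SHARED_BUILD=ON   # omit the flag for a static build
cmake --build .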
Many of these may be set automatically by various build systems, but they all caused painful failures at various points in the build process.
Exporting Targets as a Library
After the library is set up comes the most difficult part - exporting CMake files which allow other projects to find the library and link it seamlessly, including its dependencies. The following is our final solution, arrived at after long hours of debugging. Our file structure looked like this:
CMakeLists.txt
src/
    file1.cu
    file2.cu
include/
    MyLib/
        file1.h
        file2.h
cmake/
    MyLibConfig.cmake.in
Our goal was to make it possible to find and link our library and its headers, whether built statically or dynamically, on any platform. Thankfully, CMake provides the install(EXPORT) command which automatically generates the <mylib>Config.cmake file needed to set up the library. However, for static libraries, this is not enough. Consuming projects also need to be pointed to the dependency libraries used by our library, since those were not fully linked into the static library. That is the purpose of our MyLibConfig.cmake.in file, which uses CMake’s find_dependency macro to link library dependencies.
CMakeLists.txt
target_include_directories(MyLib PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include/MyLib>
    $<INSTALL_INTERFACE:include>)
set_target_properties(MyLib PROPERTIES PUBLIC_HEADER "${MYLIB_HEADER_FILES}")
install(TARGETS MyLib
    EXPORT MyLibTargets
    PUBLIC_HEADER DESTINATION include/MyLib
    LIBRARY DESTINATION bin
    ARCHIVE DESTINATION lib
    RUNTIME DESTINATION bin
)
if (LIB_SHARED_BUILD)
    install(EXPORT MyLibTargets
        FILE MyLibConfig.cmake
        DESTINATION share/mylib)
else()
    install(EXPORT MyLibTargets
        FILE MyLibTargets.cmake
        DESTINATION share/mylib)
    configure_file(cmake/MyLibConfig.cmake.in
        ${CMAKE_INSTALL_PREFIX}/share/mylib/MyLibConfig.cmake @ONLY)
endif()
cmake/MyLibConfig.cmake.in
include(CMakeFindDependencyMacro)
find_dependency(glfw3)
find_dependency(GLEW)
find_dependency(glm)
find_dependency(CUDA)
include("${CMAKE_CURRENT_LIST_DIR}/LochTargets.cmake")
The target_include_directories call uses generator expressions so that consumers see the headers in the source tree while the library is being built, but the installed include directory once the package is installed. The install command copies the generated .lib/.a, .dylib/.dll and all headers into the appropriate directories under CMAKE_INSTALL_PREFIX, usually /usr/local on Mac and Linux and C:/Program Files on Windows. However, when built in the vcpkg environment, this is set to a path in the vcpkg directory, where all the library files are stored.
The EXPORT keyword names an export set in which details about the library’s targets are stored; that export set is then passed to a second install(EXPORT) command, which generates a file used to find and link the library in future CMake projects. For shared libraries, that is enough, but for static libraries, we instead export the targets to MyLibTargets.cmake and generate our own MyLibConfig.cmake file, which contains the find_dependency calls and then includes the generated targets file.
Again, the LIB_SHARED_BUILD variable is set in the vcpkg portfile.
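Once the package is installed, a consuming project only needs something like the following to pick up the library, its headers, and its dependencies (a minimal sketch; my_executable is a placeholder target):
find_package(MyLib CONFIG REQUIRED)
target_link_libraries(my_executable PRIVATE MyLib)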
Vcpkg Magic
Now that our CMakeLists.txt works, we can talk about installing it and our other dependencies using Vcpkg. Vcpkg has a very simple structure. The main vcpkg directory contains a ports folder with one entry per library. Each library provides a portfile.cmake file specifying how it should be built and installed, and a CONTROL file which describes the library and specifies its dependencies. For example, our CONTROL file merely reads:
Source: loch
Version: 1.0
Description: A CUDA-accelerated physics library
Build-Depends: glfw3, glew, glm, cuda
Nothing to it. When vcpkg install loch is called, it checks to make sure all the dependencies are installed and installs them if they are not. Simple. The portfile.cmake does a bit more. First, it downloads the library from our GitHub using the vcpkg_from_github command, made available in the portfile by the include(vcpkg_common_functions) directive.
vcpkg_from_github(OUT_SOURCE_PATH SOURCE_PATH
    REPO jacobaustin123/Loch
    REF 70a6c2217f1b5dd199123e419e51853fe7a290cc
    SHA512 8291639d16ebf9f5394f601b7951028dbb095dfece47c57deb1b2e43f5a893c0b128632d82157bfe1f0ce26f21837a598a638cc399e16d1906d825a3ed5e10ac
    HEAD_REF package
)
Here REPO specifies the repository, REF gives the commit of our desired stable version, SHA512 is the hash of the downloaded source archive, and HEAD_REF gives the branch to be used in case --head is passed to vcpkg install. The directory where the source is downloaded is stored in SOURCE_PATH, which can then be used to run CMake.
CUDA Issues
We ran into a very frustrating bug with vcpkg and CUDA, where despite CUDA being fully installed, vcpkg failed to find it. To rectify this, we eventually used a CMake command to find CUDA, and then passed its path to CMake as a flag.
find_program(NVCC
    NAMES nvcc nvcc.exe
    PATHS
        ENV CUDA_PATH
        ENV CUDA_BIN_PATH
    PATH_SUFFIXES bin bin64
    DOC "Toolkit location."
    NO_DEFAULT_PATH
)
if (NVCC)
    message(STATUS "Found CUDA compiler at " ${NVCC})
else()
    message(FATAL_ERROR "CUDA compiler not found")
endif()
This sets the NVCC variable to the path of the nvcc compiler, which is then passed to the vcpkg_configure_cmake call as the CMAKE_CUDA_COMPILER cache entry:
vcpkg_configure_cmake(
    SOURCE_PATH ${SOURCE_PATH}
    PREFER_NINJA
    OPTIONS
        -DLIB_SHARED_BUILD=ON
        -DCMAKE_CUDA_COMPILER:FILEPATH=${NVCC}
)
As you can see, here we build it as a shared library, and pass the path to the CMakeLists.txt directory as well as our flags to the function. We still encountered some incompatibilities in vcpkg when compiling statically, with several colorful errors suggesting that, while vcpkg was compiling the library statically, it was being dynamically linked with our executable. We were never able to resolve this on Windows, so we eventually disabled the static target on Windows (what is known as the x64-windows-static triplet).
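For what it’s worth, vcpkg also provides a helper for declaring this kind of restriction directly in the portfile; the following is a sketch of that approach, which we did not end up using ourselves:
# reject triplets that request a static library, rather than failing later at link time
vcpkg_check_linkage(ONLY_DYNAMIC_LIBRARY)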
After this, we just install the CMake files and copy the copyright file stored in the vcpkg directory of our repository.
vcpkg_install_cmake()
file(REMOVE_RECURSE
    ${CURRENT_PACKAGES_DIR}/debug/include
    ${CURRENT_PACKAGES_DIR}/debug/share
)
# Handle copyright
file(INSTALL ${SOURCE_PATH}/vcpkg/copyright.txt DESTINATION ${CURRENT_PACKAGES_DIR}/share/mylib RENAME copyright)
Final Installation
With this done, all that was left was to copy the CONTROL and portfile.cmake files to the vcpkg/ports/mylib directory, and then run ./vcpkg install mylib --triplet x64-windows (the explicit x64-windows triplet is a requirement for CUDA). This builds and installs the library, and makes its headers and all the dependency headers accessible from CMake using
find_package(mylib CONFIG REQUIRED)
target_link_libraries(my_executable PRIVATE mylib)
Nothing more. Damn that’s simple.
If you liked this, check out our library and my other projects at https://github.com/jacobaustin123, and my other writings on this blog. Feel free to ask any questions by email.