I have a C++ program where I need the current path to later create a folder. The location of my executable is, let's say /home/me/foo/bin. This is what I run:
#include <boost/filesystem.hpp>
#include <string>
using namespace std;

// Here I expect '/home/me/foo/bin/', but get ''
auto currentPath = boost::filesystem::current_path();
// Here I expect '/home/me/foo/', but get ''
auto parentPath = currentPath.parent_path();
string subFolder = "foo2";
// Here I expect '/home/me/foo/foo2/', but get 'foo2/'
string folderPath = parentPath.string() + "/" + subFolder + "/";
// Here I expect to create '/home/me/foo/foo2/', but get a core dump
boost::filesystem::path boostPath{ folderPath };
boost::filesystem::create_directories(boostPath);
I am running on Ubuntu 16.04, using Boost 1.66 installed with the package manager Conan.
I used to run this successfully with a previous version of Boost (1.45 I believe) without using Conan; Boost was just normally installed on my machine. I now get a core dump when running create_directories(boostPath);.
Two questions:
Why isn't current_path() providing me with the actual path, returning an empty path instead?
Even if current_path() returned nothing, why would I still get a core dump even when running with sudo? Wouldn't I simply create the folder(s) at the root?
Edit:
Running the compiled program with some cout outputs of the above variables between the lines (rather than using debug mode) normally gives me the following output:
currentPath: ""
parentPath: ""
folderPath: /foo2/
Segmentation fault (core dumped)
But sometimes (about 20% of the time) it gives me the following output:
currentPath: "/"
parentPath: "/home/me/fooA�[oFw�[oFw#"
folderPath: /home/me/fooA�[oFw�[oFw#/foo2/
terminate called after throwing an instance of 'boost::filesystem::filesystem_error'
what(): boost::filesystem::create_directories: Invalid argument
Aborted (core dumped)
Edit 2:
Running conan profile show default I get:
[settings]
os=Linux
os_build=Linux
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=5
compiler.libcxx=libstdc++
build_type=Release
[options]
[build_requires]
[env]
There is a discrepancy between the libcxx used by your dependencies and the one you are using to build your application.
With g++ on Linux there are two standard library modes you can use: libstdc++, built without the C++11 ABI, and libstdc++11, built with it. When you build an executable (application or shared library), all the individual libraries linked together must be linked against the same libcxx.
libstdc++11 became the default for g++ >= 5, but this also depends on the Linux distro. Even if you install g++ >= 5 on an older distro like Ubuntu 14.04, the default libcxx will still be libstdc++; apparently it is not easy to upgrade it without breakage. It also happens that very popular CI services used in open source, like Travis CI, ran older Linux distros, so libstdc++ linkage was the most common.
libstdc++ was the default for g++ < 5.
For historical and backwards-compatibility reasons, the Conan default profile always uses libstdc++, even for modern compilers on modern distros. The default profile is created the first time conan is executed; you can find it as the file ~/.conan/profiles/default, or display it with conan profile show default. This will likely change in Conan 2.0 (or even sooner), and the correct libcxx will be detected for each compiler where possible.
So, if you are not changing the default profile (using your own profiles is recommended for production), then when you execute conan install, the dependencies that get installed were built against libstdc++. Note that conan install is independent of your build in most cases: it just downloads, unzips, and configures the dependencies you requested, in the configuration given by the default profile.
Then, when you build, if you are not changing _GLIBCXX_USE_CXX11_ABI, you are using your system compiler's default, in this case libstdc++11. In most cases a linker error exposes this discrepancy, but you were unlucky: your application managed to link, and then crashed at runtime.
There are a couple of approaches to solve this (see the sketch after this list):
Build your application with libstdc++ too: make sure to define _GLIBCXX_USE_CXX11_ABI=0.
Install your dependencies built for libstdc++11: edit your default profile to use libstdc++11, then run a new conan install and rebuild your app.
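A minimal sketch of both approaches, assuming a plain g++ invocation and Conan 1.x commands; adapt the flags and paths to your actual build system:
# Approach 1: build the application itself with the old ABI, matching the
# libstdc++-built binaries that conan installed
g++ -D_GLIBCXX_USE_CXX11_ABI=0 main.cpp -o app -lboost_filesystem -lboost_system
# Approach 2: switch the profile to the C++11 ABI and rebuild the dependencies
conan profile update settings.compiler.libcxx=libstdc++11 default
conan install . --build=missing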
Related
I have a conan recipe of a package, named Package, that requires boost as a shared library:
def requirements(self):
    self.requires("boost/1.79.0#")
    self.options["boost"].shared = True
    self.options["boost"].bzip2 = False
    self.options["boost"].without_stacktrace = True
The used generators are CMakeDeps and CMakeToolchain.
def generate(self):
    tc = CMakeToolchain(self)
    tc.variables["BUILD_SHARED_LIBS"] = "ON" if self.options.shared else "OFF"
    tc.variables["CMAKE_FIND_ROOT_PATH_MODE_PACKAGE"] = "NEVER"
    tc.variables["CMAKE_POSITION_INDEPENDENT_CODE"] = "ON"
    tc.generate()
The unit tests of this conan package use a CMakeLists.txt which defines a PackageTests target that links against Package and boost::boost.
Building Package and PackageTests works fine on both Ubuntu and Windows, but only on Ubuntu do the tests run without issues. On Windows I get exceptions for all the tests because the boost DLLs are not found. Using ldd PackageTests and readelf -d PackageTests on Ubuntu shows that the boost .so files are used from the conan cache.
Using Conan's VirtualRunEnv generator and then activating the generated environment (roughly the workflow sketched below) helps to also run PackageTests.exe on Windows, but I would like to know if there is another way, for example using pure CMake, to install/copy the required boost DLLs to the folder of PackageTests.exe. Or is there a way to extend the conan recipe to install the DLLs on Windows?
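For reference, a sketch of that VirtualRunEnv workflow; the generated script names depend on the Conan version:
conan install . -g VirtualRunEnv
:: Windows: puts the dependencies' DLL directories on PATH for this shell
conanrun.bat
PackageTests.exe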
Why are the boost shared libraries found correctly in the conan cache on Ubuntu but not on Windows? Is there some extra manual work needed, or shouldn't this be handled by conan as well?
Edit:
Trying to use the following to copy the DLLs results in a cmake command usage error because the TARGET_RUNTIME_DLLS generator expression is empty:
add_custom_command(TARGET PackageTest POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy_if_different $<TARGET_RUNTIME_DLLS:PackageTest> $<TARGET_FILE_DIR:PackageTest>
    COMMAND_EXPAND_LISTS
)
Also, the IMPORTED_LOCATION property of the following targets is *-NOTFOUND:
get_target_property(VAR-boost boost::boost IMPORTED_LOCATION)
message(${VAR-boost})
> VAR-boost-NOTFOUND
get_target_property(VAR-Package Package IMPORTED_LOCATION)
message(${VAR-Package})
> VAR-Package-NOTFOUND
get_target_property(VAR-PackageTest PackageTest IMPORTED_LOCATION)
message(${VAR-PackageTest})
> VAR-PackageTest-NOTFOUND
From the boost conan recipe's package_info() I can see that the CMakeDeps generator will only create the BoostConfig.cmake and BoostTargets.cmake scripts, which is also the case for Package. There is no FindBoost.cmake generated; by default CMakeDeps only creates config scripts, unless a recipe defines cmake_find_mode to be both. I am not sure whether adding both to the recipe would help, and even if it would, it is no immediate solution, as the recipe is not directly in my control (it is hosted on the conan-center-index repo). I am still not able to see why everything works fine on Ubuntu while on Windows the DLLs are not found/copied at all by conan.
I am trying to use the latest version of appimage-builder, because AppImages of my application built with the old version of appimage-builder no longer run on Ubuntu 22.04. So I was asked to try whether it works with the new appimage-builder.
Currently (June 2022), only versions below 1.0, which are based on Ubuntu 18.04, are available on Docker (which we previously used to build our AppImage).
The newer versions are available via github (https://github.com/AppImageCrafters/appimage-builder/releases).
However, I seem to be unable to execute:
appimage-builder --generate
or
appimage-builder --recipe AppImageBuilder.yml
Is there any documentation available on how to correctly use the .AppImage version of appimage-builder? All I could find at https://appimage-builder.readthedocs.io/en/latest/ seems to refer to the Docker version or a manually built version of appimage-builder.
Depending on the error message you get, there could be a couple of issues at play here.
If you get an error related to FUSE, then you need to install the libfuse2 package with apt install libfuse2. AppImages rely on libfuse2, but Ubuntu has stopped shipping it by default since 22.04, in favor of libfuse3.
If you get an error related to "file not found", then it could be that you do not have AppImageLauncher installed. Sadly, with type 2 AppImages the design decision was taken to modify the ELF header of the executable with 3 magic bytes at offset 8. This means the Linux loader will refuse to run the file as-is. AppImageLauncher copies the file to a temporary directory and zeroes out the magic bytes in order to be able to execute it.
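You can inspect, and for testing even clear, those magic bytes yourself; AppImageLauncher essentially does the latter on a temporary copy. The file name below is a placeholder:
xxd -l 12 My.AppImage          # type 2 images carry 'A' 'I' 0x02 at offset 8
cp My.AppImage patched.AppImage
dd if=/dev/zero of=patched.AppImage bs=1 count=3 seek=8 conv=notrunc
./patched.AppImage --help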
A good starting point for debugging issues like this is the strace command, which lets you see which system call likely caused the error. Keep in mind that if you try to execute a file and get "File not found", it can mean that the interpreter/linker specified by the file cannot be found on the system, or that the ELF header is not valid. You can also run the executable through the linker directly, which might give you more clues. For example: /lib64/ld-linux-x86-64.so.2 <NAME-OF-YOUR-EXECUTABLE>.
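For example (the AppImage name here is a placeholder):
strace -f ./appimage-builder-x86_64.AppImage --version 2>&1 | tail -n 25
# the last few syscalls before the failure usually point at the culprit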
I'm trying to use pp (the Perl packager) to create an application that can run independently of the installed Perl library and interpreter.
It successfully creates a compiled executable, although I had to use the -x -c options to get it to find the dependencies. It runs on my machine, but when I try it on another machine I get this error, so clearly there is still some dependency:
501 Protocol scheme 'https' is not supported (LWP::Protocol::https not installed)
I am running it on macOS 10.14.1, if that makes any difference. Thanks!
LWP::Protocol::https is loaded dynamically when needed, so pp has no way of knowing it's needed by default.
Solution 1
Pass -x to pp, and make sure the module is actually loaded in the run pp uses to determine the modules to include. This would probably be achieved by using LWP to make an HTTPS request during that run. --xargs=... might come in useful for this.
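A sketch of that, assuming a hypothetical script fetch.pl that takes a URL argument and requests it during the trace run:
pp -x --xargs='https://example.com' -o fetch fetch.pl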
Solution 2
Pass -M LWP::Protocol::https to pp. You could also pass -M 'LWP::Protocol::**' to get all the protocol handlers you have installed.
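For example (the script and output names are illustrative):
pp -x -c -M LWP::Protocol::https -o fetch fetch.pl
pp -x -c -M 'LWP::Protocol::**' -o fetch fetch.pl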
Solution 3
Add use LWP::Protocol::https (); to your script or an included module. Including a comment indicating why you are doing this would be appropriate.
You were building Net::SSLeay on macOS 10.14, linking it against libssl.44.dylib, which is not present on the macOS 10.12 machine where you try to run it.
I've found it annoying having to switch between build and test systems to find out which libraries are missing or incompatible and need to be packed.
I am now using the following strategy:
I use perlbrew instead of system perl.
For alien dependencies I use homebrew instead of the system libraries.
I build the packed executable using pp and run the resulting program with export DYLD_PRINT_LIBRARIES=YES set (on the development machine).
I examine the list of loaded libraries and add all those referenced in the homebrew directory tree (/usr/local/opt/ and /usr/local/Cellar/ in my case) using pp -l /full/path/name -l ... (see the sketch below).
I rebuild the executable.
I still check on a target machine before deploying, but chances are very high now that it just works.
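As a sketch, the round trip looks like this; the program and library names are examples from my setup:
export DYLD_PRINT_LIBRARIES=YES
./myapp 2>&1 | grep -E '/usr/local/(opt|Cellar)/'
# pack every homebrew library the run reported, then rebuild
pp -x -c \
   -l /usr/local/opt/openssl/lib/libssl.dylib \
   -l /usr/local/opt/openssl/lib/libcrypto.dylib \
   -o myapp myapp.pl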
I'm trying to compile the libxkbcommon library for kodi for my Raspberry Pi 2.
The host machine is a dedicated Server running Ubuntu 16.04 x64.
Now there are two errors when I try to compile libxkbcommon, depending on which yacc I use:
byacc:
YACC src/xkbcomp/parser.c
yacc: e - line 219 of
"/opt/kodi/xbmc/tools/depends/target/libxkbcommon/raspberry-pi2-release/src/xkbcomp/parser.y", syntax error
%destructor { FreeStmt((ParseCommon *) $$); }
^
Makefile:1637: recipe for target 'src/xkbcomp/parser.c' failed
btyacc:
parser.y:85: syntax error
Here is the source code of libxkbcommon:
https://github.com/xkbcommon/libxkbcommon
The xkbcomp/parser.y file requires a number of (very useful) bison extensions, so it can't be processed by all yacc variants.
btyacc does not support bison-compatible pure-parser declarations. (It has a different, not entirely compatible mechanism which implements the same feature.) So it fails on the first instance of one of those declarations.
It should be possible to use byacc, but not the version currently available in the Ubuntu package repository. Although the repository's change history seems to suggest that the intention was to include the build option which allows %destructor, the binary actually shipped was built without that option. (It is also several years old, and I think it would be useful to use a more recent version.) I reported this as launchpad bug 1776270, along with a suggestion for a possible fix.
I'm sure you'll be able to build the software using GNU bison, which is available as the Ubuntu package bison. Since that is the yacc version most commonly installed on developer machines, a failure to build with bison would probably have been noticed long ago.
If you would prefer to use byacc, for whatever reason, you'll have to download and build it yourself. You can get the most recent version from Thomas Dickey's byacc page, and then build it with the usual procedure: untar, configure, make, make install. When I tested this, I used the following configure line:
./configure --enable-btyacc --program-prefix=b --prefix=/usr
Only the first option is mandatory:
* --enable-btyacc: necessary for %destructor support.
* --program-prefix=b: install it as `byacc` rather than `yacc`.
* --prefix=/usr: install into /usr/bin and /usr/man. The default is /usr/local/bin and /usr/local/man, which failed on my Ubuntu install because of a missing -D option in the install command in the Makefile.
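The whole procedure then looks like this; the tarball name is a placeholder, take the current one from that page:
tar xzf byacc-20180609.tgz
cd byacc-20180609
./configure --enable-btyacc --program-prefix=b --prefix=/usr
make
sudo make install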
I've been trying to install LuaJIT on Windows 10 for some time following the official guide, and I actually get to install it. For example, if I execute luajit I get into the prompt. Also, luajit -v returns the version of luajit (2.0.4). And I can also execute code with luajit -e <lua code>. However, whenever I try to save bytecode with luajit -b, I get the following message:
luajit: unknown luaJIT command or jit.* modules not installed
I have tried all sorts of installations: using Cygwin, luajit-rocks, MinGW, ... However, no matter what I try, I always get the same result, and I have no clue what to do.
Could you point me to some potential problems I might be overlooking?
I have on my system Lua 5.1 and Luarocks.
Some extra LuaJIT features are implemented as separate Lua modules (e.g. jit.bcsave for bytecode saving), and LuaJIT depends on package.path to find those modules. The suggested install location for those modules is in the default package.path, but if you override it via the LUA_PATH environment variable, you have to make sure to include that location there. One easy way to do that is to put two consecutive semicolons into LUA_PATH: Double semicolons are replaced by the compile-time default value of package.path.
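For example, on Windows with cmd.exe (the module directory is a placeholder), the trailing double semicolon expands to the compile-time default, so the jit.* modules stay reachable:
set LUA_PATH=C:\my\lua\?.lua;;
luajit -b input.lua output.raw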
You need to place the modules in a jit folder next to luajit.exe. That folder contains some system modules (bcsave among them). Relying on package.path alone may not work because, as far as I understand, the default path is hard-coded at compile time. The folder is distributed with the source code.
Download LuaJIT from the official site: https://luajit.org/download.html
You can see the jit folder inside the archive:
LuaJIT-2.0.5.zip\LuaJIT-2.0.5\src\jit\
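For example, on Windows (paths are illustrative):
:: copy the jit/ tree from the extracted source next to luajit.exe
xcopy /E /I LuaJIT-2.0.5\src\jit C:\LuaJIT\jit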