I'm just trying to compile the Eigen library at version 3.3.3 for armv7-a with NEON enabled. CMake continuously reports an error because it cannot find a valid toolchain for CUDA. I don't want CUDA enabled!
So the question is: how do I compile without enabling CUDA? I don't want it!
On 3.2.3 (latest stable as of posting), simply leaving CUDA_SDK_ROOT_DIR as CUDA_SDK_ROOT_DIR-NOTFOUND results in a valid configuration with CUDA disabled (you might get warnings but the project will still be generated).
Since 3.3.3 is unstable, perhaps there is a bug in the CMake procedure that prevents it from recovering automatically when CUDA is not found.
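If the configure step still insists on finding CUDA, it may help to disable the CUDA-related test targets explicitly. As a rough sketch only (the toolchain file path is a placeholder, and the EIGEN_TEST_CUDA option name may differ between Eigen versions):
cmake /path/to/eigen-3.3.3 -DCMAKE_TOOLCHAIN_FILE=/path/to/armv7a-toolchain.cmake -DCMAKE_CXX_FLAGS="-march=armv7-a -mfpu=neon" -DEIGEN_TEST_CUDA=OFF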
That said, Eigen is a header-only library, and therefore you don't actually need to build it (just include it).
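For example, if all you need are the headers, something like this should work (the compiler name and paths are assumptions about your cross toolchain):
arm-linux-gnueabihf-g++ -march=armv7-a -mfpu=neon -O2 -I /path/to/eigen-3.3.3 my_program.cpp -o my_program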
I downloaded the latest source code of Opensplice DDS from https://github.com/ADLINK-IST/opensplice and tried to build it by following its instructions (source setenv, source ./configure, then make ..) in my Cygwin 64 bit.
The build (make command) appeared to be completed, but a number of modules such as dcpsisocpp2, durability, spliced didn't get built (I can't find dcpsisocpp2.dll, etc).
I wonder if anyone who is familiar with OpenSplice's makefile system can point me toward solving the problem.
First, identify whether you are going to use the community or the enterprise version.
It seems the community version doesn't include the spliced and durability services. Also, dcpsisocpp2 uses C++03, a very old C++ standard, so if you write your application in C++11 or C++14 you might get warnings or errors and spend a lot of time fixing compile problems.
Try dcpssacpp instead, which follows the C++11 standard.
I have to compile and run a modern program on a cluster with an outdated OS. The program employs some C++11 features and STL templates. The cluster's compiler toolchain (g++ v4.4.7) supports almost all of the C++11 features, but some important STL templates/classes are missing.
To make it work I'll have to either:
modify the program's source code, or
compile a newer version of the STL (libstdc++) on the cluster and link against it instead of the system-wide STL libs.
The first route seems sub-optimal, because the program is currently in active development by our lab, which means we would need to patch it by hand daily, or keep a separate branch and regularly merge from trunk, or drop support for C++11 features altogether.
So, is it possible to build a libstdc++ version that is newer than the one installed system-wide, and link against it? And if it is possible, how could it be done?
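For reference, one way the second route could look (a sketch with placeholder paths, not a tested recipe) is to build a newer GCC, which ships its own libstdc++, into a private prefix, then either embed an rpath to it or link libstdc++ statically:
# in the newer GCC source tree
./configure --prefix=$HOME/gcc-newer --disable-multilib
make && make install
# compile with the new g++ and embed an rpath so its libstdc++ is found at run time
$HOME/gcc-newer/bin/g++ -std=c++11 prog.cpp -Wl,-rpath,$HOME/gcc-newer/lib64 -o prog
# or link libstdc++ statically so the old system library is not needed at run time
$HOME/gcc-newer/bin/g++ -std=c++11 -static-libstdc++ prog.cpp -o prog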
From some Googling around it seems that clang's support for windows has been improving recently and boost's support for clang may also have improved. But I'm fairly new to all this heavy-lifting compiler configuration stuff and new to boost, so I'm not sure what the current status really is.
I'm trying to run the command:
b2 --build-dir=build toolset=clang --build-type=complete stage
as suggested in section 5.2.4 in www.boost.org/.../getting_started/windows.
This does work to some extent, but watching the logs being printed to screen I see a few worrying things:
statements starting with clang-linux.compile.c++..., even though I am on Windows.
12 warnings generated (or similar); perhaps these are always -Wunused-local-typedef, but I'm not sure.
2 warnings and 8 errors generated (or similar); surely if there are errors the build has failed? How am I supposed to know which component of Boost has not built properly, and what can I do to fix this?
I'm not clear whether I need MSVC the compiler, Visual Studio the IDE, and/or MinGW, or whether I need to manually set flags to pass to the compiler. Perhaps clang+boost is not ready for Windows yet?
Ultimately I want to use boost.python, and at a later date maybe boost.coroutine.
Presumably if I want to use clang for my own projects I need to compile boost with clang too?
bootstrap --with-toolset=clang-win
b2 toolset=clang-win
Make sure that clang.exe is on your PATH.
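Once that works, you can (assuming a reasonably recent b2) limit the build to the libraries you actually need and pick the address model, for example:
b2 toolset=clang-win address-model=64 --with-python --with-coroutine stage
Note this is just a sketch; Boost.Python in particular may additionally need Python configured in user-config.jam.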
I'm trying to build a project (namely, Angband's source - http://rephial.org/downloads/3.3/angband-v3.3.2.tar.gz) with Emscripten's emcc in order to port it to Javascript and ultimately build an online version.
I've managed to get the process started with
emconfigure ./configure
make
which successfully starts generating LLVM bitcode .o files, but then it fails on main-gcu.c with 'main-gcu.c:43:11: fatal error: 'ncurses.h' file not found'
I believe main-gcu.c is the only file that references ncurses, but I just can't figure out how to include the library while compiling. Is there a way to specify including ncurses with 'make', or should I compile the main-gcu.c file individually with 'emcc main-gcu.c -c -lncurses'? I tried doing that, but it led to another error: emcc was unable to find other header files two levels down (it couldn't find headers that were included by a header included by main-gcu.c). Any way to fix that?
I'm also not certain if I have/need to install the ncurses library on Mac OSX. All I can really find are references to libncurses5-dev for Linux.
Thanks!
I think you misunderstand the compilation via Emscripten. I will try to point out a few problems you are facing.
The general rule is that Emscripten's tools can ONLY turn LLVM formats (e.g. bitcode) into JavaScript. emconfigure, emmake, ... modify the build environment so that your source code is compiled to one of the LLVM formats (there are exceptions to the rule, but never mind). So anything you want to link against your final result has to be in an LLVM format as well (which, by default, ncurses is not).
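As a rough sketch of what that means in practice (paths are placeholders, and whether ncurses itself is even usable under Emscripten is a separate question), you would first build the library through the Emscripten wrappers so its objects come out in an LLVM format, then feed the resulting archive to the final emcc link:
# in the ncurses source tree
emconfigure ./configure
emmake make
# back in the Angband tree, compile against those headers and link the bitcode archive
emcc main-gcu.c -c -I /path/to/ncurses/include -o main-gcu.o
emcc *.o /path/to/ncurses/lib/libncurses.a -o angband.html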
Since the output is JavaScript, there is no way to execute program code in different threads. While a lot of C/C++ code uses one thread for the UI and others for processing, such a model does NOT work with Emscripten. So in order to get the software compiling/running you will have to rewrite the parts that use threading. See emscripten_set_main_loop for pointers.
Even if you have the libraries compiled, you then have to statically link them into the Emscripten output. At that point it is less of a technical problem and more of a licensing issue, since if a library is licensed under e.g. the LGPL, static linking means the (L)GPL terms take effect.
I hope all clarity finally vanished ;)
I'm trying to migrate a project which uses Boost (particularly boost::thread and boost::asio) to VxWorks.
I can't get boost to compile using the vxworks gnu compiler. I figured that this wasn't going to be an issue as I'd seen patches on the boost trac that purport to make this possible, and since the vxworks compiler is part of the gnu tool chain I should be able to follow the directions in the boost docs for cross compilation.
I'm building on Windows for a PPC VxWorks target.
I changed the user-config.jam file as specified in the boost docs, and used the target-os=linux option to bjam, but bjam appears to hang before it can compile. Closer inspection of the commands issued by bjam (by invoking it using the -n option) reveal that it's trying to compile with boost::thread's win32 files. This can't be right, as vxworks uses pthreads.
My bjam command:
.\bjam --with-thread toolset=gcc-ppc target-os=linux
gcc-ppc is set in user-config.jam to point to the g++ppc VxWorks cross compiler.
What am I doing wrong? I believe I have followed the docs to the letter.
If it's #including win32 headers instead of the pthread ones, there could be a discrepancy between the set of macros your compiler is defining and the macros the boost headers are checking for. I had a problem like that with the smart pointer headers, which in an older version of boost would check for __ppc but my compiler defined __ppc__ (or vice versa, can't remember).
touch empty.cpp
ccppc -dD -E empty.cpp
That will show you what macros are predefined by your compiler.
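For example, to look at just the PowerPC-related macros (assuming you have grep available):
ccppc -dD -E empty.cpp | grep -i ppc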
I never tried to compile boost for VxWorks, since I only needed a few of the headers.
Try also adding
threadapi=pthread
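i.e. with the command from the question, something along the lines of:
.\bjam --with-thread toolset=gcc-ppc target-os=linux threadapi=pthread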
The documentation you mention is for Boost.Build -- which is a standalone build tool -- and the above flag is something specific to the Boost.Thread library. What do you mean by "hang"? Because the Boost libraries are huge, it sometimes takes a long time to scan dependencies prior to building.
If it actually hangs, can you catch bjam in a debugger and produce a backtrace? Also, a log of any output will help.