I have some C++ code that uses NEON intrinsics. From what I have read, all you need to do is include arm_neon.h in your project. Then I read that this arm_neon.h header is not actually available to you automatically, and that you have to get it from the web. So I found this version and added it to my project:
http://clang.llvm.org/doxygen/arm__neon_8h-source.html
In my project's prefix.pch I added:
#import "arm_neon.h"
And when I try to build on my iPhone 6 device (I am not using the simulator), I get a billion errors inside the arm_neon.h file.
Can anyone please explain to me what I am missing here?
You've been misinformed about being able to pick up an arm_neon.h from the Internet. Generally the header is not just compiler-specific, but compiler-version (even compiler-revision) specific. For GCC it relies on a number of compiler built-in function calls, and from your screenshot of Clang the same holds there. As you'd expect, if the names of these internal-only functions change, the header will fail to compile.
What surprises me is that you're unable to use an include of whichever arm_neon.h ships with your build environment. The only thing I can think of that would cause this is the build command trying to build for x86_64 (for the simulator) but you say this isn't what is happening. It might be worth checking your build settings one more time.
If you're still not getting anywhere, remember that arm_neon.h is sometimes treated as a system header, so in C++ you might need to #include <arm_neon.h> rather than #include "arm_neon.h" to get the compiler to search the system paths.
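For example, once the include resolves, a minimal smoke test along these lines (function and variable names are mine) should compile cleanly for an arm64 device target:

#include <arm_neon.h>

// minimal smoke test: lane-wise add of two four-lane float vectors
float neon_smoke_test(void) {
    float32x4_t a = vdupq_n_f32(1.0f); // broadcast 1.0 to all four lanes
    float32x4_t b = vdupq_n_f32(2.0f); // broadcast 2.0 to all four lanes
    float32x4_t c = vaddq_f32(a, b);   // {3, 3, 3, 3}
    return vgetq_lane_f32(c, 0);       // 3.0f
}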
I have been trying to go through this tutorial and I always get stuck in the second build of GCC when making the cross-toolchain. It errors out saying that I am attempting to call a poisoned calloc. I have gone through several patches, and what they all seem to do is just #include the offending system header (in this case pthread.h) earlier in the source code. Since there are no patches for my particular problem, I have gone ahead and emulated their solutions in my case. While this works (compilation now fails because I don't have some ISL files), it feels like a hack, and I suspect that the root problem is further back in the build.
Thus, I wanted to ask:
Why are symbols poisoned? Why would the GCC maintainers want some symbols not to be used?
What are the general causes for this problem? Is it really just a bug or is this a problem that arises in more general situations?
I am more interested in the generalities of this issue, but if it helps, I am using the latest release of Alpine Linux (with gcc 12.2.1) trying to compile gcc 11.2.0 for the same target architecture as the host (x86-64).
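To make the mechanism concrete, here is a minimal sketch of my own (not from the GCC sources) of how #pragma GCC poison behaves, which is also why including the offending header earlier makes the error go away:

#include <stdlib.h>

/* this use is seen before the pragma, so it is accepted */
static void *before_poison(size_t n) { return calloc(n, 1); }

#pragma GCC poison calloc

/* from here on, ANY mention of the identifier is a hard error, even
   inside an #included header - which is why the usual patch moves the
   offending system #include above the point where the poisoning file
   (GCC's system.h in my case) is pulled in */
static void *after_poison(size_t n) {
    /* return calloc(n, 1);   <- error: attempt to use poisoned "calloc" */
    return malloc(n);
}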
I was wondering about the BII_IMPLICIT_RULES_ENABLED flag, which I had switched off in one of my CMakeLists.txt files in order to get an OpenGL-related block to compile on a Mac, following a suggestion from biicode. The setting is still there and everything works perfectly, but I would like to find out more about it. Could someone explain what exactly it does?
Thanks!
BII_IMPLICIT_RULES_ENABLED activates the automatic addition of system libs to any target that includes certain headers. For example, if your code contains:
#include "math.h"
and you are on a *nix system, then the library "m" (libm) will be added to your target via TARGET_LINK_LIBRARIES.
You can see which headers are processed this way in your cmake/biicode.cmake file, in HANDLE_SYSTEM_DEPS.
My recommendation: set it to False whenever possible and handle the required system libs yourself, which is exactly what you have done. It is something that will be deprecated soon, or at least set to False by default for new projects. This option sometimes causes trouble if something fails or there is a bug in biicode.cmake; in the past, for example, it tried to add libm to targets on Windows too. It will be gradually deprecated and probably replaced by some hosted CMake macros (as in http://www.biicode.com/biicode/cmake) that users can opt into, rather than being applied automatically as it is now.
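For reference, handling it yourself is typically just a couple of lines in your own CMakeLists.txt (the target name here is only an example):

IF(UNIX)
    TARGET_LINK_LIBRARIES(my_target m)   # link libm explicitly on *nix
ENDIF()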
I'm trying to build a project (namely, Angband's source - http://rephial.org/downloads/3.3/angband-v3.3.2.tar.gz) with Emscripten's emcc in order to port it to Javascript and ultimately build an online version.
I've managed to get the process started with
emconfigure ./configure
make
which begins to successfully start generating LLVM bitcode .o files, but then it hangs up on main-gcu.c with 'main-gcu.c:43:11: fatal error: 'ncurses.h' file not found'
I believe main-gcu.c is the only file that references ncurses, but I just can't figure out how to include the library while compiling. Is there a way to specify including ncurses with 'make', or should I compile the main-gcu.c file individually with 'emcc main-gcu.c -c -lncurses'? I tried doing that, but it led to another error with emcc being unable to find header files two levels down (it couldn't find headers that were included by a header included by main-gcu.c - any way to fix that?).
I'm also not certain whether I have, or need to install, the ncurses library on Mac OS X. All I can really find are references to libncurses5-dev for Linux.
Thanks!
I think you misunderstand the compilation via Emscripten. I will try to point out a few problems you are facing.
The general rule is that Emscripten's tools can ONLY turn LLVM formats (e.g. bitcode) into JavaScript. emconfigure, emmake, ... modify the build environment so that your source code is compiled to one of the LLVM formats (there are exceptions to the rule, but never mind). So anything you want to link into your final result has to be in an LLVM format as well (which, by default, ncurses is not).
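Concretely, that means a dependency like ncurses would itself have to be built through the Emscripten wrappers before it could be linked in, along these lines (paths are illustrative, and whether ncurses' configure actually succeeds under emcc is another question):

cd ncurses-src
emconfigure ./configure
emmake make
cd ../angband
emcc src/*.o ../ncurses-src/lib/libncurses.a -o angband.html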
Since the output is JavaScript, there is no way to execute program code in different threads. A lot of C/C++ code uses one thread for the UI and others for processing, but such a model does NOT work with Emscripten. So in order to get the software compiling/running you will have to rewrite the parts that use threading. See emscripten_set_main_loop for pointers.
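For example, a classic blocking game loop has to be turned inside out into a per-frame callback; a minimal sketch (not Angband's actual code):

#include <emscripten.h>

/* one iteration of what used to be the while(1) game loop */
static void one_frame(void) {
    /* read input, update game state, redraw */
}

int main(void) {
    /* fps = 0 lets the browser schedule frames via requestAnimationFrame;
       simulate_infinite_loop = 1 means this call never returns normally */
    emscripten_set_main_loop(one_frame, 0, 1);
    return 0;
}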
Even once you have the libraries compiled, you then have to statically link them into your Emscripten build. At this point it is less a technical problem and more a licensing issue: if your library is licensed under, e.g., the LGPL, then because of the static linking the GPL terms become effective.
I hope all clarity finally vanished ;)
This question must apply to so few people...
I am busy migrating my ARM C project from WinARM GCC 4.1.2 to Yagarto GCC 4.3.3.
I did not expect any differences and both compile my project happily using the same makefile and .ld files.
However, while the WinARM version runs, the Yagarto version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for WinARM is not applicable to Yagarto.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
Ends.
I agree that the gccs and the other binaries (ld) should be the same, or close enough for you not to notice the differences. But the startup code, whether it is yours or theirs, and the C library can make a big difference. Enough to make the difference between success and failure when trying to use the same source and linker script. Now if this is 100% your code, with no libraries or any other files being used from WinARM or Yagarto, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts, but from 4.1.x to 4.3.x I don't remember having problems.
It could also be a subtle difference in compiler behavior: code generation does change from gcc release to gcc release, and if your code contains pieces which are implementation-dependent for their semantics, it might well bite you in this way. Memory layouts of data might change, for example, and code that accidentally relied on it would break.
Seen that happen a lot of times.
Try it with different optimization options in the compile and see if that makes a difference.
Both WinARM and YAGARTO are based on gcc and should treat ld files the same way. Both also use the GNU make utility, so makefiles will be processed the same way. You can compare the two toolchains here and here.
If you are running your project with an on-chip debugger, then there may be a difference between the two toolchains' OpenOCD debugger implementations. The commands sent to the debugger to configure it could be different as well.
If you are producing a hex file, then this could differ too, as the two toolchains are not using the same version of the newlib library.
In order to be on the safe side, make sure that in both cases the correct binutils are first in the path.
If I were you I'd check the compilation/linker flags - specifically the defaults. It is very common for different toolchains to have different default ABIs or FP conventions. It might even be compiling using an instruction set extension that isn't supported by your CPU.
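A quick way to compare the two toolchains' defaults (the arm-elf- prefix is a guess; use whatever your makefile actually invokes):

arm-elf-gcc -v
arm-elf-gcc -dumpspecs

The first prints the configure options each compiler was built with; the second dumps the built-in spec strings, including default CPU/ABI settings. Diffing that output between the WinARM and Yagarto compilers should expose any changed defaults.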
I'm trying to migrate a project which uses Boost (particularly boost::thread and boost::asio) to VxWorks.
I can't get boost to compile using the vxworks gnu compiler. I figured that this wasn't going to be an issue as I'd seen patches on the boost trac that purport to make this possible, and since the vxworks compiler is part of the gnu tool chain I should be able to follow the directions in the boost docs for cross compilation.
I'm building on Windows for a PPC VxWorks target.
I changed the user-config.jam file as specified in the boost docs, and used the target-os=linux option to bjam, but bjam appears to hang before it can compile. Closer inspection of the commands issued by bjam (by invoking it using the -n option) reveal that it's trying to compile with boost::thread's win32 files. This can't be right, as vxworks uses pthreads.
My bjam command:
.\bjam --with-thread toolset=gcc-ppc target-os=linux
gcc-ppc is set in user-config.jam to point to the g++ppc VxWorks cross compiler.
What am I doing wrong? I believe I have followed the docs to the letter.
If it's #including win32 headers instead of the pthread ones, there could be a discrepancy between the set of macros your compiler defines and the macros the boost headers check for. I had a problem like that with the smart pointer headers, which in an older version of boost would check for __ppc while my compiler defined __ppc__ (or vice versa, I can't remember).
touch empty.cpp
ccppc -dD -E empty.cpp
That will show you what macros are predefined by your compiler.
I never tried to compile boost for VxWorks, since I only needed a few of the headers.
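If the macro dump confirms a mismatch, a small shim before the boost includes can paper over it (the macro names here are just my example from above):

/* hypothetical compatibility shim: define the spelling boost checks for
   when the compiler only provides the other variant */
#if defined(__ppc__) && !defined(__ppc)
#define __ppc __ppc__
#endif
#include <boost/shared_ptr.hpp>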
Try also adding
threadapi=pthread
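to the bjam command line, so the full invocation would look something like:
.\bjam --with-thread toolset=gcc-ppc target-os=linux threadapi=pthread
That explicitly selects the pthread backend of Boost.Thread instead of letting the build guess win32 from the host platform.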
The documentation you mention is for Boost.Build - which is a standalone build tool - and the above flag is specific to the Boost.Thread library. What do you mean by "hang"? Because the Boost libraries are huge, it sometimes takes a long time to scan dependencies prior to the build.
If it actually hangs, can you catch bjam in a debugger and produce a backtrace? Also, a log of any output would help.