After I declared a variable in this way:
#include <thread>
namespace thread_space
{
thread_local int s;
} //etc.
I tried to compile my code using g++ -std=c++0x -pthread [sourcefile], and I get the following error:
example.C:6:8: error: thread-local storage is unsupported for the current target
static thread_local int s;
^
1 error generated.
If I compile the same code on Linux with GCC 4.8.1 with the same flags, I get a working executable. I'm using clang-503.0.40 (the one that comes with Xcode 5.1.1) on a MacBook Pro running OS X 10.9.3. Can anybody explain what I'm doing wrong?
Thank you!!
Try clang++ -stdlib=libc++ -std=c++11. OS X's outdated libstdc++ doesn't support TLS.
Edit
OK, this works for the stock Clang release but not for the Xcode one.
I diffed Apple's Clang (503.0.38) against the stock release and found the following difference:
.Case("cxx_thread_local",
- LangOpts.CPlusPlus11 && PP.getTargetInfo().isTLSSupported() &&
- !PP.getTargetInfo().getTriple().isOSDarwin())
+ LangOpts.CPlusPlus11 && PP.getTargetInfo().isTLSSupported())
So I think this is a bug in Apple's Clang version (or they kept it in there on purpose - but that would still be odd, because -v says it's based on 3.4).
Alternatively, you can use compiler extensions such as __thread (GCC/Clang) or __declspec(thread) (Visual Studio).
Wrap it in a macro and you can easily port your code across different compilers and language versions:
#if HAS_CXX11_THREAD_LOCAL
#define ATTRIBUTE_TLS thread_local
#elif defined (__GNUC__)
#define ATTRIBUTE_TLS __thread
#elif defined (_MSC_VER)
#define ATTRIBUTE_TLS __declspec(thread)
#else // !C++11 && !__GNUC__ && !_MSC_VER
#error "Define a thread local storage qualifier for your compiler/platform!"
#endif
...
ATTRIBUTE_TLS static unsigned int tls_int;
The clang compiler included in the Xcode 8 Beta and GM releases supports the C++11 thread_local keyword with both -std=c++11 and -std=c++14 (as well as the GCC variants).
Earlier versions of Xcode apparently supported C-style thread local storage using the keywords __thread or _Thread_local, according to the WWDC 2016 video "What's New in LLVM" (see the discussion beginning at 5:50).
It seems you may need to set the minimum OS X version you target (the deployment target) to 10.7 or higher.
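For what it's worth, a minimal sketch of the kind of test case and build line discussed above; the file name thread_local_demo.cpp and the exact flag combination (libc++, C++11, 10.7 deployment target) are only illustrative:
// thread_local_demo.cpp -- minimal check that C++11 TLS compiles and links.
// Suggested build, per the answers above:
//   clang++ -stdlib=libc++ -std=c++11 -mmacosx-version-min=10.7 thread_local_demo.cpp
#include <thread>
namespace thread_space
{
    thread_local int s = 0; // each thread gets its own copy of 's'
}
int main()
{
    std::thread t([] { thread_space::s = 1; }); // modifies the worker thread's copy only
    t.join();
    return thread_space::s;                     // the main thread's copy is still 0
}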
Related
I have a compilation error on macOS v11 (Big Sur) with the standard Clang compiler (version 13.0.0).
I am trying to include sys/sysmacros.h to use the makedev() function, which surprisingly is mentioned on the Apple developer website and is supposedly compatible with macOS 15.5+.
Including sys/types.h also gives me an error; however, sys/stat.h works. Sadly, it still doesn't give me the makedev(), major(), and minor() functions that I need.
The Linux manual page for makedev states that there were some changes in the glibc library, but as far as I know, macOS does not use glibc.
There should be a simple way of installing glibc on macOS using Homebrew (brew), as described here, but I get the same error that was mentioned in this question. So apparently there is currently no proper way of doing it, and I am not sure it would solve my problem anyway.
Is there a solution to this?
The makedev macro is defined in sys/types.h, so just add #include <sys/types.h> to your files. sys/types.h is a header from Kernel.framework; you should pass the framework in the Clang invocation, like clang -framework Kernel ....
You can also define these macros as they are defined in sys/types.h:
#define major(x) ((int32_t)(((u_int32_t)(x) >> 24) & 0xff))
#define minor(x) ((int32_t)((x) & 0xffffff))
#define makedev(x, y) ((dev_t)(((x) << 24) | (y)))
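A small sketch of how these macros end up being used; the device numbers are purely illustrative, and the round trip just shows that major() and minor() recover what makedev() packed:
#include <sys/types.h>
#include <stdio.h>

int main(void)
{
    dev_t dev = makedev(1, 3);   /* pack major 1, minor 3 into a dev_t */
    printf("major=%d minor=%d\n", major(dev), minor(dev));
    return 0;
}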
I am using the function statfs64 to obtain the mount point from a path on macOS via the f_mntonname field.
This works fine when building against the SDK 10.x for the architecture x86_64.
However, when building for arm64 (and SDK 11), the function is not available.
I can use statfs as a fallback, which seems to be available, but it has limits on the path length.
I know there is the NSFileManager API (attributesOfFileSystemForPath), but unfortunately it has no property for the mount path.
Does anyone know how to do this on the new SDK/platform?
Thank you and regards,
Dominik
statfs64 and fstatfs64 have been deprecated since macOS 10.6 in favour of "versioned symbols".
If you're building for macOS 10.6 or higher, simply switch to statfs and fstatfs, and add this at the top of your source files (before the includes):
#define _DARWIN_USE_64_BIT_INODE
Or add a compiler flag, if changing many source files is too tedious:
-D_DARWIN_USE_64_BIT_INODE
For arm64 targets, this is already set, so it has no effect.
For x86_64 targets, this causes the linker to emit a dependency on _statfs$INODE64 (which is equivalent to _statfs64) rather than _statfs.
If your x86_64 slice does indeed need to support macOS 10.5, then you'll have to resort to some preprocessing:
#define _DARWIN_USE_64_BIT_INODE
#if __ENVIRONMENT_MAC_OS_X_VERSION_MIN_REQUIRED__ < 1060
#define STATFS statfs64
#define FSTATFS fstatfs64
#else
#define STATFS statfs
#define FSTATFS fstatfs
#endif
And if you need to support macOS 10.4 or lower, you're out of luck anyway because there is no 64-bit inode support back there.
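For completeness, a hedged sketch of the original use case (querying the mount point for a path) using the _DARWIN_USE_64_BIT_INODE approach described above; the path "/usr/bin" is just an example:
#define _DARWIN_USE_64_BIT_INODE   /* must come before the includes; no-op on arm64 */
#include <sys/param.h>
#include <sys/mount.h>
#include <stdio.h>

int main(void)
{
    struct statfs fs;
    if (statfs("/usr/bin", &fs) == 0)             /* any path on the file system of interest */
        printf("mounted on: %s\n", fs.f_mntonname);
    else
        perror("statfs");
    return 0;
}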
I cannot compile any MATLAB MEX code due to the following error:
In file included from /Applications/MATLAB_R2013a.app/extern/include/mex.h:58:
In file included from /Applications/MATLAB_R2013a.app/extern/include/matrix.h:294:
/Applications/MATLAB_R2013a.app/extern/include/tmwtypes.h:819:9: error: unknown type name 'char16_t'
typedef char16_t CHAR16_T;
The only thing that has changed on my machine as far as I can remember is that Xcode was updated to version 5.1 (5B130a).
Any fix for the time being to compile MEX code in MATLAB?
[Running on OS 10.9.2 with Apple LLVM version 5.1 (clang-503.0.38) (based on LLVM 3.4svn)]
By default, the upgraded Clang doesn't define char16_t, which MATLAB's headers require.
Quick fix
This works for C or C++ code but needs to be done on each mex command line.
>> mex -Dchar16_t=uint16_t ...
Other solutions below put this definition into the mex configuration or enable C++11.
Permanent solution
Options:
Add -std=c++11 to CXXFLAGS in your mex configuration file AND compile .cpp files instead of .c. The mex config file is mexopts.sh (pre-R2014a) or the .xml file indicated by mex -setup (R2014a+). This is what worked for OP, but the next option works too. Be sure to edit the active/installed config, not the system-wide reference. Try the next solution if you can't tell.
Use a #define or typedef to create char16_t before including mex.h (see "other workaround" below).
In some future version of MATLAB this may be fixed, in which case re-running mex -setup will have MATLAB reconfigure it for you. As of R2014a, this doesn't do the trick.
As a last resort, you can always modify the MATLAB installation, hacking MATLAB's tmwtypes.h as Dennis suggests, but I strongly suggest NOT modifying the MATLAB installation.
Note: If you are using C and cannot or don't want to change to C++, follow the solution in this other answer, OR see the alternative workaround below.
The other workaround
If for some reason you are not able to enable the C++11 standard, you can use the preprocessor to define char16_t. Either put #define char16_t uint16_t before #include "mex.h", or set it with the compiler command line:
-Dchar16_t=uint16_t
Alternatively, use a typedef, again before including mex.h:
typedef uint16_t char16_t;
If these solutions don't work, try changing uint16_t to UINT16_T. Beyond that, some have reported that simply including uchar.h brings in the type, but others don't have that header.
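As an illustration, a minimal MEX gateway using the preprocessor workaround; the file name and returned value are only examples, and the define must appear before mex.h:
/* char16_fix_demo.c -- build with: mex char16_fix_demo.c */
#define char16_t uint16_t   /* work around the missing char16_t */
#include <stdint.h>         /* ensures uint16_t is declared before mex.h pulls in tmwtypes.h */
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    (void)nrhs; (void)prhs;                   /* inputs unused in this sketch */
    if (nlhs > 1)
        mexErrMsgIdAndTxt("demo:nlhs", "Too many output arguments.");
    plhs[0] = mxCreateDoubleScalar(42.0);     /* trivially return a scalar */
}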
I experienced the same error, also directly after upgrading to Xcode 5.1.
The relevant lines (818-824) in the file tmwtypes.h, which cause the error, are:
#if defined(__STDC_UTF_16__) || (defined(_HAS_CHAR16_T_LANGUAGE_SUPPORT) && _HAS_CHAR16_T_LANGUAGE_SUPPORT)
typedef char16_t CHAR16_T;
#elif defined(_MSC_VER)
typedef wchar_t CHAR16_T;
#else
typedef UINT16_T CHAR16_T;
#endif
A solution is to simply change the line
typedef char16_t CHAR16_T;
into
typedef UINT16_T CHAR16_T;
I must admit that I don't know if this affects any function or behaviour of MEX files, but at least I'm able to compile my C files again using mex.
Please see other answers if this method doesn't work.
I upgraded my gcc/g++ compilers using Homebrew to version 4.8 (gcc-4.8 and g++-4.8).
After that I changed the following lines in the mexopts.sh file:
CXXFLAGS="-fno-common -fexceptions -arch $ARCHS -isysroot $MW_SDKROOT -mmacosx-version-min=$MACOSX_DEPLOYMENT_TARGET -std=c++11"
In my mexopts.sh, this is line 150. I only added the -std=c++11 flag which is what I guess chappjc meant.
EDIT: This is covered in the update by chappjc!
Let me just add my own experience (C++ only). The
#define char16_t uint16_t
was causing problems in other parts of the MEX file. In fact, further into my MEX file, char16_t was properly defined. By tracing the chain of includes, I found that the proper char16_t type is set in a file named __config:
typedef __char16_t char16_t;
which is also the first file included from <algorithm>. So the hack consists of including <algorithm> before mex.h:
#include <algorithm>
#include "mex.h"
and the proper definitions are in place, still in a cross-platform manner and without changing anything in the build configuration.
Include uchar.h before including mex.h; that works fine. Also, the answer above (adding -std=c++11) only works for C++, not C.
#include <uchar.h>
#include "mex.h"
As of Xcode 5.1.1, char16_t is defined in __config, which is included from <typeinfo>.
You can add
#include <typeinfo>
before
#include "mex.h"
to have char16_t defined.
This post might help: http://www.seaandsailor.com/matlab-xcode6.html
It was easier than I thought. Just replace all 10.x with your OS X version and add -Dchar16_t=UINT16_T to CLIBS in the mexopts.sh file.
It worked on OS X 10.9 Mavericks with Xcode 6 installed.
When I compile the following code, which uses C++11 features, on Windows 7 x64 (MSVS 2012 + Nsight 2.0 + CUDA 5.5), I get no errors and everything compiles and works well:
#include <thrust/device_vector.h>
int main() {
    thrust::device_vector<int> dv(10);
    auto iter = dv.begin();
    return 0;
}
But when I try to compile it under Linux x64 (Debian 7 Wheezy + Nsight Eclipse from CUDA 5.5), I get errors:
../src/CudaCpp11.cu(5): error: explicit type is missing ("int" assumed)
../src/CudaCpp11.cu(5): error: no suitable conversion function from "thrust::detail::normal_iterator>" to "int" exists
2 errors detected in the compilation of "/tmp/tmpxft_00001520_00000000-6_CudaCpp11.cpp1.ii".
make: * [src/CudaCpp11.o] Error 2
When I added the flag -std=c++11
in Properties -> Build -> Settings -> Tool Settings -> Build Stages -> Preprocessor options (-Xcompiler),
I get more errors:
/usr/lib/gcc/x86_64-linux-gnu/4.8/include/stddef.h(432): error: identifier "nullptr" is undefined
/usr/lib/gcc/x86_64-linux-gnu/4.8/include/stddef.h(432): error: expected a ";"
...
/usr/include/c++/4.8/bits/cpp_type_traits.h(314): error: namespace "std::__gnu_cxx" has no member "__normal_iterator"
/usr/include/c++/4.8/bits/cpp_type_traits.h(314): error: expected a ">"
nvcc error : 'cudafe' died due to signal 11 (Invalid memory reference)
make: * [src/CudaCpp11.o] Error 11
Only when I use thrust::device_vector<int>::iterator iter = dv.begin(); under Linux/GCC do I avoid the error. But on Windows with MSVS 2012, all the C++11 features work fine!
Can I use C++11 in .cu files (CUDA 5.5) on Windows 7 x64 (MSVC) and Linux x64 (GCC 4.8.2)?
You will probably have to split the main.cpp from your others.cu like this:
others.hpp:
void others();
others.cu:
#include "others.hpp"
#include <boost/typeof/std/utility.hpp>
#include <thrust/device_vector.h>
void others() {
    thrust::device_vector<int> dv(10);
    BOOST_AUTO(iter, dv.begin()); // regular C++
}
main.cpp:
#include "others.hpp"
int main() {
    others();
    return 0;
}
This particular answer shows that compiling with an officially supported gcc version (as Robert Crovella stated correctly) should work out, at least for C++11 code in the main.cpp file:
g++ -std=c++0x -c main.cpp
nvcc -arch=sm_20 -c others.cu
nvcc -lcudart -o test main.o others.o
(tested on Debian 8 with nvcc 5.5 and gcc 4.7.3).
To answer your underlying question: I am not aware that one can use C++11 in .cu files with CUDA 5.5 in Linux (and I was not aware the shown example with host-side C++11 gets properly de-cluttered under MSVC). I even filed a feature request for constexpr support which is still open.
The CUDA programming guide for CUDA 5.5 states:
For the host code, nvcc supports whatever part of the C++ ISO/IEC 14882:2003 specification the host c++ compiler supports.
For the device code, nvcc supports the features illustrated in Code Samples with some restrictions described in Restrictions; it does not support run time type information (RTTI), exception handling, and the C++ Standard Library.
Anyway, it is possible to emulate some C++11 features like auto, e.g. with BOOST_AUTO as shown above.
Looking ahead, other C++11 features like threads are quite unlikely to end up in CUDA, and I have heard of no official plans for them yet (as of Supercomputing 2013).
Shameless plug: If you are interested in more of these tweaks, feel free to have a look at our library libPMacc, which provides multi-GPU grid and particle abstractions for simulations. We implemented lambdas, an STL-like access concept for 1-3D matrices, and other useful things there.
All the best,
Axel
Update: Since CUDA 7.0, C++11 support in kernels has been officially added. As BenC pointed out correctly, parts of this feature were already silently added in CUDA 6.5.
According to Jared Hoberock (Thrust developer), it seems that C++11 support has been added to CUDA 6.5 (although it is still experimental and undocumented). This may make things easier when starting to use C++11 in very large C++/CUDA projects, since splitting everything can be quite cumbersome for large projects when you use CMake for instance.
OS X 10.6.8, XCode 3.2.6, Base SDK 10.5, Intel Compiler 11.1
I am getting a weird message when I try to compile that says:
catastrophic error: could not open source file "stdarg.h"
I am using a PCH. I did find this: Xcode Intel compiler icc cannot find #include <algorithm>,
which is a similar issue, and I think the source file type is set to .c.c instead of .c.
From what I can see, stdarg.h is:
/* This file is public domain. */
/* GCC uses its own copy of this header */
#if defined(__GNUC__)
#include_next <stdarg.h>
#elif defined(__MWERKS__)
#include "mw_stdarg.h"
#else
#error "This header only supports __MWERKS__."
#endif
so __GNUC__ must be defined, obviously.
Can anyone help me figure out how to get this to compile, since it works without changes under GCC 4.0? Is there a global way to have Xcode re-evaluate the source file type so it is not .c.c or .cpp.cpp? I am not even sure how this would happen.
Also, is there a #define I can check to see whether the Intel compiler is being used, so I can make special cases if I need to?
I looked at a few of the files referenced in the build results, and the source file type in Xcode says source.c.c; I think that if I change that to source.c, the compiler error goes away.
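Regarding the #define question: the usual approach is to test the __INTEL_COMPILER predefined macro; the macro name MY_USING_INTEL below and the placement are only illustrative:
#if defined(__INTEL_COMPILER)
/* Intel icc/icpc: __INTEL_COMPILER expands to the version, e.g. 1110 for 11.1 */
#define MY_USING_INTEL 1
#else
/* note: icc also defines __GNUC__ for compatibility, so test __INTEL_COMPILER first */
#define MY_USING_INTEL 0
#endif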