I would like to take a string representation of a C++ lambda function like this:
string fun = "[](int x) { return x + 5;}";
string llvm_ir = clang.get_llvm_ir(fun); // does something like this exist?
and convert it to LLVM IR using Clang from within C++. Is there a way to do this directly using Clang's internal API?
To the best of my knowledge, there is no stable, officially supported API to do this. The Clang C API (libclang) only exposes front-end, source-level information, and Clang's tooling libraries don't provide this either.
You do have good options, though. The easiest is to just invoke the Clang front-end as a subprocess: clang -cc1 -emit-llvm ...<other options>. This will produce an LLVM IR file which you can then read. This is actually fairly common practice in compilers; the Clang driver itself works this way, invoking the front-end and a bunch of other tools (like the linker) depending on the specific compilation task.
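For illustration, here is a minimal C++ sketch of that subprocess approach, assuming the clang++ driver is on PATH and the lambda takes and returns an int; the file names and the call() wrapper are made up for the example:
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>

// Hypothetical helper: wrap the lambda string in a function that uses it (so
// its body is actually emitted), run the clang driver as a subprocess, and
// read the resulting .ll file back into a string.
std::string get_llvm_ir(const std::string& lambda) {
    std::ofstream src("lambda_tmp.cpp");
    src << "int call(int x) { return (" << lambda << ")(x); }\n";
    src.close();

    // The plain driver is simpler than a raw -cc1 invocation because it fills
    // in include paths and target options for you.
    if (std::system("clang++ -S -emit-llvm -o lambda_tmp.ll lambda_tmp.cpp") != 0)
        return {};  // compilation failed

    std::ifstream ll("lambda_tmp.ll");
    std::ostringstream ir;
    ir << ll.rdbuf();
    return ir.str();
}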
Alternatively, if you feel you must have a programmatic API for this, you can dig into the code of the Clang front-end (the -cc1 invocation mentioned above) to see how it accomplishes this, and collect bits and pieces of code. Be prepared to write a huge amount of scaffolding, though, because these APIs were not designed to be used externally.
To reiterate, it is possible using internal APIs, but there's no easy or recommended way following this path.
I am a newbie at OpenCL.
What is the best way to compile an OpenCL project?
Using a supported compiler (GCC or Clang):
When we use a compiler like gcc or clang, how do we control these options? Do they have to be set inside the source code, or can we pass them on the command line as in the normal compilation flow? Looking at the Khronos Manual 1.2, there are a few optimization options that can be passed to clBuildProgram:
gcc|clang -O3 -I<INCLUDES> OpenCL_app.c -framework OpenCL OPTION -lm
Actually, I tried this and received an error:
gcc: error: unrecognized command line option '<OPTION>'
Alternatively, using openclc:
I have seen people using openclc to compile via a Makefile.
I would like to know which is the best way (if there are actually two separate ways), and how we control the usage of different compile-time options.
You might already be aware of this, but it is important to reiterate: the OpenCL standard contains two things:
the OpenCL C language and programming model (I think recent standards include some C++)
the OpenCL host library to manage devices
gcc and clang are compilers for the host side of your OpenCL project, so there is no way to provide compiler options for OpenCL device code compilation through a host compiler; the host compiler is not even aware of any OpenCL.
The exception is clang, which accepts OpenCL device code directly (a .cl file containing the kernels). That way you can use clang and also provide the flags and options, if I remember correctly, but the output will be LLVM IR or SPIR rather than a device-executable object. You can then load the SPIR object onto a device using the device's run-time environment (OpenCL drivers).
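Roughly, such an invocation might look like this (the exact flags and supported targets vary by clang version; -Xclang -finclude-default-header pulls in the OpenCL built-in declarations, and the file name is just an example):
clang -c -emit-llvm -target spir64 -cl-std=CL1.2 -Xclang -finclude-default-header kernel.cl -o kernel.bc
This produces LLVM bitcode for the kernels, not a device binary.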
You can check out these links:
Using Clang to compile kernels
Llvm IR generation
SPIR
Another alternative is to use the tools provided by your target platform. Each vendor that claims to support OpenCL should have a run-time environment. Usually they have separate CLI tools to compile OpenCL device code. In your case (I guess) you have drivers from Apple, and therefore you have openclc.
Intel CLI as an example
Now to your main question (the best way to compile OpenCL): it depends on what you want to do. You didn't specify what kind of requirements you have, so I had to speculate.
If you want off-line compilation without a host program, the considerations above will help you. Otherwise, you have to use the OpenCL library and compile your kernels on-line; this is generally preferred for products that need portability, since if you compile all your kernels at the start of your program, you directly use the provided environment and you don't need to provide libraries for each target platform.
Therefore, if you have an OpenCL project, you have to decide how to compile. If you really want to use the generic flags and not rely on third-party tools, I suggest you have a class that builds your kernels and provides the flags you want, as sketched below.
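A rough sketch of what such a helper class could look like (names are illustrative, error handling kept to a minimum):
#include <CL/cl.h>  // <OpenCL/opencl.h> on macOS
#include <stdexcept>
#include <string>
#include <utility>

// Owns the build flags so every kernel in the project is compiled the same way.
class KernelBuilder {
public:
    KernelBuilder(cl_context ctx, cl_device_id dev, std::string flags)
        : ctx_(ctx), dev_(dev), flags_(std::move(flags)) {}

    cl_program build(const std::string& source) const {
        const char* src = source.c_str();
        const size_t len = source.size();
        cl_int err = CL_SUCCESS;
        cl_program prog = clCreateProgramWithSource(ctx_, 1, &src, &len, &err);
        if (err != CL_SUCCESS)
            throw std::runtime_error("clCreateProgramWithSource failed");

        // The flags string is where options like -cl-finite-math-only go.
        err = clBuildProgram(prog, 1, &dev_, flags_.c_str(), nullptr, nullptr);
        if (err != CL_SUCCESS)
            throw std::runtime_error("clBuildProgram failed");
        return prog;
    }

private:
    cl_context ctx_;
    cl_device_id dev_;
    std::string flags_;
};
You would construct it once with the flags you care about (for example "-cl-finite-math-only -cl-mad-enable") and reuse it for every kernel source string.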
...how do we control these options? Do they have to be set inside the source code, or can we pass them on the command line as in the normal compilation flow?
Options can be set inside the source code. For example:
const char options[] = "-cl-finite-math-only -cl-no-signed-zeros";
/* Build program */
err = clBuildProgram(program, 1, &device, options, NULL, NULL);
I have never seen OpenCL options being specified on the command line, and I'm not aware of whether that is possible.
I'm interested in learning about compilers and their creation, so I've been looking into various tools such as LLVM. It seems like a great framework to work with, but I'm a little confused about how you can access native APIs with it.
Specifically, I'm interested in creating a language that has a GUI or at least a windowing system built in. LLVM doesn't seem to wrap that functionality, so would I need to manually write assembly that calls the APIs provided by each system (e.g. Win32)?
For example, the Red language claims to have a "Cross-platform native GUI system" built in. I assume they manually wrote the backend for that, using different system calls depending on the current system, or piggybacked on Rebol, which did that instead.
Is such a thing possible or viable when using LLVM, which does a lot of the backend abstraction for you?
LLVM does not have an API geared toward abstracting the OS's native APIs. What you CAN do is write a runtime library for your language, and then use LLVM to generate calls into that runtime as needed. I have done some experimentation and found that I preferred to write the runtime in C++ and then create some C bindings for it. The C bindings are necessary because C++ name mangling makes it very difficult to link against your runtime library, whereas with C the name of a symbol in a shared library is the same as the name of the function.
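As an illustration, a single runtime routine might look like this (the names here are made up; the point is the predictable, unmangled symbol that generated code can call):
#include <cstdio>
#include <string>

namespace myrt {
// The actual runtime logic lives in ordinary C++.
void print_line(const std::string& s) { std::printf("%s\n", s.c_str()); }
}

// C binding with a stable, unmangled symbol name; this is what your code
// generator would emit a call to (e.g. declare the external function in the
// module and call it via IRBuilder::CreateCall).
extern "C" void myrt_print_line(const char* s) {
    myrt::print_line(s);
}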
Why isn't gcc used for that? What is the difference between them, and why does almost every autocomplete plugin require clang?
The simple answer is that clang was designed to support completion while gcc was not.
Clang has a command line option that prints out possible completions at a given point in a source file, which makes it easy to use in scripts: Just shell out to clang, parse its output, done. Gcc has nothing comparable.
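For reference, that option is -code-completion-at, passed through to the front-end; an invocation looks roughly like this (the file name and position are just examples):
clang -fsyntax-only -Xclang -code-completion-at=main.cpp:12:7 main.cpp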
As for why, see this list of differences between gcc and clang:
[...]
Clang is designed as an API from its inception, allowing it to be reused by source analysis tools, refactoring, IDEs (etc) as well as for code generation. GCC is built as a monolithic static compiler, which makes it extremely difficult to use as an API and integrate into other tools. Further, its historic design and current policy makes it difficult to decouple the front-end from the rest of the compiler.
Now that Swift has been released by Apple, I have been thinking about the possibility of using GObject as a runtime for existing languages such as Rust or even Swift.
My main concern is that while Vala does this, it compiles to C first and needs language bindings even if the library you are trying to use already uses GObject; and even then, Vala somehow can't use features that C doesn't support, such as function overloading. Objective-C doesn't support it either, yet Swift does and can still be used with it.
On the good side, both runtime systems have many similarities, such as using reference counting, having signals, and being more dynamic than average.
You can view GObject (and in extension GLib and the ecosystem of GLib based libraries) as a common language runtime for several languages:
Vala / Genie
Gjs (the GNOME version of ECMAScript / JavaScript)
C
C++
Python (through PyGObject)
Probably others, really any language that can talk to the C API
Actually it really is an extension to the C runtime (which is the core common language runtime of most operating systems) that adds OOP support.
There are other such technologies, like the Java JVM and the .NET CLR, and as you describe, Apple is now using the Objective-C runtime for multiple languages as well.
There is (in principle) nothing that prevents someone from writing a Rust or Swift compiler that does something similar to Vala (emits C code and uses GObject as its object system).
About your concern:
Vala could just as well emit object code directly (without the intermediate "compile to C" step).
There are some advantages to the way valac works at the moment, though:
You can take the emitted C files and use them in a C program without the need to have valac installed
It's much easier for Vala to consume C files
The foreign function interface (called VAPI in Vala) was designed to make it easy to consume C libraries and abstract from common C idioms (like zero terminated strings, passing array with a length parameter, etc.)
The generated C code can be optimized by the C compiler
Standard C tools can be used to inspect the generated C code
You can actually read the C code to see what Vala does internally (big plus for people that already know C)
Vala uses C as a higher level "assembly" language.
I'm installing mingw-w64 on Windows and there are two options: win32 threads and posix threads. I know the difference between win32 threads and pthreads, but I don't understand the difference between these two options. I wonder whether choosing posix threads will prevent me from calling WinAPI functions like CreateThread.
It seems that this option specifies which threading API will be used by some program or library, but by which one? By GCC, libstdc++, or by something else?
I found this:
Whats the difference between thread_posixs and thread_win32 in gcc port of windows?
In short, for this version of mingw, the threads-posix release will use the posix API and allow the use of std::thread, and the threads-win32 will use the win32 API, and disable the std::thread part of the standard.
OK, if I select win32 threads then std::thread will be unavailable, but win32 threads will still be used. But used by what?
GCC comes with a compiler runtime library (libgcc) which it uses for (among other things) providing a low-level OS abstraction for multithreading-related functionality in the languages it supports. The most relevant example is libstdc++'s C++11 <thread>, <mutex>, and <future>, which do not have a complete implementation when GCC is built with its internal Win32 threading model. MinGW-w64 provides winpthreads (a pthreads implementation on top of the Win32 multithreading API), which GCC can then link in to enable all the fancy features.
I must stress this option does not forbid you to write any code you want (it has absolutely NO influence on what API you can call in your code). It only reflects what GCC's runtime libraries (libgcc/libstdc++/...) use for their functionality. The caveat quoted by #James has nothing to do with GCC's internal threading model, but rather with Microsoft's CRT implementation.
To summarize:
posix: enables C++11/C11 multithreading features. Makes libgcc depend on libwinpthreads, so that even if you don't directly call the pthreads API, you'll be distributing the winpthreads DLL. There's nothing wrong with distributing one more DLL with your application.
win32: No C++11 multithreading features.
Neither choice has any influence on user code calling Win32 APIs or pthreads APIs. You can always use both.
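For example, this plain C++11 snippet builds out of the box with a posix-threads MinGW-w64 GCC, while a win32-threads build ships a libstdc++ without std::thread and std::mutex (code calling CreateThread or pthreads directly is unaffected either way):
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    std::mutex m;
    // Only libstdc++'s own threading support depends on the chosen model.
    std::thread t([&m] {
        std::lock_guard<std::mutex> lock(m);
        std::cout << "hello from a std::thread\n";
    });
    t.join();
    return 0;
}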
Parts of the GCC runtime (the exception handling, in particular) are dependent on the threading model being used. So, if you're using the version of the runtime that was built with POSIX threads, but decide to create threads in your own code with the Win32 APIs, you're likely to have problems at some point.
Even if you're using the Win32 threading version of the runtime you probably shouldn't be calling the Win32 APIs directly. Quoting from the MinGW FAQ:
As MinGW uses the standard Microsoft C runtime library which comes with Windows, you should be careful and use the correct function to generate a new thread. In particular, the CreateThread function will not setup the stack correctly for the C runtime library. You should use _beginthreadex instead, which is (almost) completely compatible with CreateThread.
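A minimal sketch of that pattern (error handling omitted):
#include <windows.h>
#include <process.h>
#include <cstdio>

// Thread entry point with the signature _beginthreadex expects.
unsigned __stdcall worker(void* arg) {
    std::printf("worker got %d\n", *static_cast<int*>(arg));
    return 0;
}

int main() {
    int value = 42;
    // _beginthreadex initializes the CRT's per-thread state, unlike a raw
    // CreateThread call.
    HANDLE h = reinterpret_cast<HANDLE>(
        _beginthreadex(nullptr, 0, worker, &value, 0, nullptr));
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}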
Note that it is now possible to use some of the C++11 std::thread functionality in the win32 threading mode. These header-only adapters worked out of the box for me:
https://github.com/meganz/mingw-std-threads
From the revision history it looks like there is some recent attempt to make this a part of the mingw64 runtime.
#rubenvb's answer is fully correct: use the MinGW posix compiler if you want to use std::thread, std::mutex, etc. For everybody who is using CMake, here is an example:
set(CMAKE_CXX_STANDARD 17) # or 20 if you want..
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(THREADS_PREFER_PTHREAD_FLAG ON)
set(TOOLCHAIN_PREFIX x86_64-w64-mingw32)
set(CMAKE_C_COMPILER ${TOOLCHAIN_PREFIX}-gcc-posix)
set(CMAKE_CXX_COMPILER ${TOOLCHAIN_PREFIX}-g++-posix)
set(CMAKE_RC_COMPILER ${TOOLCHAIN_PREFIX}-windres)
set(CMAKE_FIND_ROOT_PATH
/usr/${TOOLCHAIN_PREFIX}
)
Ideal for cross-compiling Linux apps to Windows.
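Note that THREADS_PREFER_PTHREAD_FLAG only takes effect when the project actually uses CMake's Threads package; in the project's CMakeLists.txt that would look something like this (my_app is a placeholder target name):
find_package(Threads REQUIRED)
target_link_libraries(my_app PRIVATE Threads::Threads)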
Hint: for people who are using GTK3 and want to cross-compile their GTK application to Windows, you may want to download the MinGW Windows GTK bundle, downloaded and packaged from msys2.org, so you don't need to do that yourself: https://gitlab.melroy.org/melroy/gtk-3-bundle-for-windows