I want to compile C# to LLVM IR, so translating compiled CIL to LLVM IR seems like one approach I could try.
There are some tools I could use, such as VMKit and mono-llvm.
Is anybody using these tools? Or how else can I translate CIL to LLVM IR?
The answer depends on your goals. Why do you want to translate C# to LLVM?
VMKit was designed as a framework for building virtual machine implementations. I believe it had some support for the CLR at one point, but that support has since stagnated in favor of its JVM implementation. Its purpose is to make it easier to build a VM from scratch.
Mono-llvm is a project that replaces the Mono JIT backend with an LLVM backend. Its goal is to improve the performance of JITed code on Mono.
If your goal is to use Mono with better performance, mono-llvm is a good choice.
If you want to build an entire VM from scratch, then VMKit might work.
If you are just looking to implement an ahead-of-time compiler that produces executables with no CLR dependencies, you can just download the LLVM core libraries from:
http://llvm.org/
Basically, your compiler would translate the CIL into a textual representation of LLVM IR and then use the LLVM APIs to compile it to native machine code.
I don't know if LLVM will generate object files for you. You may have to generate them yourself, but that's pretty easy. It's basically just stuffing the machine code into a data structure, building up string, section, and symbol tables, and then serializing everything to disk.
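For what it's worth, current LLVM releases can emit object files directly, both from the llc command-line tool and through the same code-generation APIs, so you may not need to write your own object writer. A minimal sketch, with illustrative file names:
llvm-as program.ll -o program.bc        # optional: verifies the textual IR
llc -filetype=obj program.ll -o program.o
gcc program.o -o program                # link with gcc/clang/ld as usual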
To get LLVM IR code from CIL you can use the tool Il2Bc (also known as C# Native), which you can download from http://csnative.codeplex.com/.
You just need to perform some simple steps.
Il2Bc.exe <Your DLL>.dll
If you want to generate an executable from it, you need to compile the generated .ll file (LLVM IR code).
For example, say you have your "Hello World" app.
Compile it (this will generate a helloworld.ll file):
Il2Bc.exe helloworld.cs /corelib:CoreLib.dll
Generate the LLVM IR file for the core library (this will generate a CoreLib.ll file):
Il2Bc.exe CoreLib.dll
Finally, generate the EXE file (this will produce a .exe):
llc -filetype=obj -mtriple=i686-w64-mingw32 CoreLib.ll
llc -filetype=obj -mtriple=i686-w64-mingw32 helloworld.ll
g++ -o helloworld.exe helloworld.obj CoreLib.obj -lstdc++ -lgc-lib -march=i686 -L .
I understand the question to be that you want to use LLVM to compile C# ahead of time, the same way GCC can compile Java using gcj?
LLVM used to have an option to output CIL directly from whatever front end you used (so in theory you could go from C/C++ to CIL). The following command:
llc -march=msil
would output CIL from (in theory) any supported LLVM front end. Be aware, though, that this MSIL backend was always experimental and has since been removed from LLVM.
Going from C# or CIL to LLVM IR hasn't been done yet (or at least never finished); you'd need a C# front end.
VMKit had some kind of C# front-end scaffolding, but support was never feature-complete and interest has since faded; they've moved to supporting just Java. You might dig through their source repository and see whether any remnants of their early C# work can be reworked into a full C# front end.
Also note that you can write your own C#-to-LLVM-IR compiler in C# (using Mono or whatever) and use P/Invoke to call into the LLVM libraries to create LLVM IR. There is some good information out there, such as Writing Your Own Toy Compiler Using Flex, Bison and LLVM.
This area is also getting interesting now that the compiler as a service (Roslyn) project has had its first couple of CTP releases, and Mono has its Mono.CSharp project. Though I think Roslyn is a bit more feature-rich.
I'm currently working on a rather generic communication stack. It gets bytes in on one end, parses the packet and calls a callback.
I want to have this stack in a static library (i.e. libcommstack.a).
The library is aimed towards embedded ARM Cortex-M devices. At the moment we have specified that at least a Cortex-M3 should be used (but it should also work for an M4 or M33).
Right now I'm integrating it into another application to verify that linking it is possible. In the future the idea is that we will ship this .a file to customers so they can build their application around it, without having direct access to our sources (to encapsulate our IP).
We are using GCC ARM v7.2.1 to compile both the library and the application that is linked to it.
The application I'm trying to integrate it with is compiled for a Cortex-M33 with -mfloat-abi=hard -mfpu=fpv5-sp-d16.
The code for the library does not use any floating point and is compiled with -march=armv7-m (both builds use the -mthumb flag).
Linking seemed to go well, until I actually called a function from the lib. At that point the linker started to complain:
application.elf uses VFP register arguments, libcommstack.a(somefile.c.obj) does not
failed to merge target specific data of file libcommstack.a(somefile.c.obj)
Since I'm not using floating point in the library, and I don't know up front whether the target application has an FPU (or even uses floats), I'm not sure how to approach this.
I figured there would be two approaches:
Compile a single version of the lib, using an instruction set that all of the microcontrollers understand. I was hoping that this would be the case with ARMv7 (although I'm not yet 100% confident that the M23/M33 also support this).
Compile a lot of different libs for the different flavors based on the different architectures, FPU, etc.
As you can imagine, I would prefer to keep it simple and go for option 1, but I'm not sure how to "convince" the linker to link these two (or perhaps how to convince the compiler NOT to care about floating point for the lib).
Does anyone know if option 1 is feasible and how it can be achieved?
If it is not feasible, what would be the variables to keep in mind to determine the different build flavors?
Does anyone know if option 1 is feasible
Well, feasible, probably.
how it can be achieved?
Get all the processors you want to support, determine the instruction sets available on all of them, and then compile for that common subset.
But please don't; that is a workaround.
If it is not feasible, what would be the variables to keep in mind to determine the different build flavors?
GCC has something like "multilib profiles"; see the output of arm-none-eabi-gcc --print-multi-lib. If you have newlib installed, you can go to /usr/arm-none-eabi/lib/thumb/ and look at the directories there: newlib is compiled once per profile, and a separate library is installed for each one, with the right copy picked up depending on the compile configuration. Compile your library for each of those profiles, then package it by putting the variants in the proper /usr/arm-none-eabi/lib/proper/directory/here, and the compiler will pick them up by itself (see the gcc -v output for the library search paths). For an example of where this happens, search the newlib sources; I couldn't find the exact spot, but here's my example. With CMake as the build backend, you could compile and install all profiles as follows:
arm-none-eabi-gcc --print-multi-lib |
while IFS=';' read -r dir opts; do
    # '@' in the options field stands for the leading '-' of each switch,
    # with no spaces in between, so translate "@mthumb@march=..." to "-mthumb -march=..."
    flags="${opts//@/ -}"
    cmake -B "builddir/$dir" -DCMAKE_C_FLAGS="$flags" -DCMAKE_INSTALL_LIBDIR="lib/$dir"
    cmake --build "builddir/$dir"
    cmake --install "builddir/$dir" --prefix "/usr/arm-none-eabi/"
done
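For reference, each line that --print-multi-lib emits has the documented form dir;@opt@opt..., which is what the ${opts//@/ -} translation in the loop above undoes. So for a hypothetical profile line such as thumb/v7-m;@mthumb@march=armv7-m, the equivalent manual build of that one flavor would be:
arm-none-eabi-gcc -mthumb -march=armv7-m -c somefile.c -o somefile.c.obj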
I'm trying to use LLVM to implement a compiler for a toy language. Something like the Kaleidoscope Tutorial. I'm using Visual Studio on 64 bit Windows.
I've managed to build LLVM and Clang using VS, but now I want to use the LLVM libraries in my own project. It seems like a silly question, but how do I do this? What compiler options do I need? What libraries should I link with, etc.?
As far as I can see this isn't covered anywhere in the LLVM documentation although I could have easily missed it.
I discovered llvm-config, which is designed to solve exactly the problems I'm having. It often seems to give incorrect information (for instance, llvm-config --includedir is wrong), but it at least gives me a list of libraries to link with.
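For example, on a Unix-style shell the typical pattern is the one below (core stands in for whatever LLVM components your project actually uses; from a VS command prompt you would paste the printed flags into the project settings instead):
clang++ $(llvm-config --cxxflags) -c mytool.cpp
clang++ mytool.o $(llvm-config --ldflags --libs core) -o mytool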
I suppose I could also use CMake to generate project files, but CMake seems to be difficult to learn from free resources.
Ubuntu 16.04 comes with GCC 5.4, which does support C++11 and is the default compiler. However, C++11 is not enabled by default in that particular version of GCC.
My intent is to use some of the binary libraries (not header-only) from its repository (e.g. Boost). In my own projects I will enable C++11.
How were the C++ libraries from the repository compiled? Is it possible to use them with C++11 enabled? I know that C++ libraries can be called from different languages (Java, Python, C#, etc.) by hiding all the C++ machinery behind a plain C interface, but with Boost that is not the case: if a certain function returns a string, a vector, or anything else from the STL, then there is a problem. AFAIK the binary representation of STL objects depends on compiler flags (e.g. -std=c++11).
Thank you.
Which exact libraries are you talking about?
If you are talking about the standard library: libstdc++ is part of GCC, and it is always okay to link against it no matter which standard you compile at. GCC also made a decision to include ABI tags, so that it can remain ABI-compatible between code compiled as C++11 and code compiled as pre-C++11. See for instance TC's really nice answer to a question I asked here:
Is this simple C++ program using <locale> correct?
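Concretely, GCC 5's libstdc++ exposes this tagged dual ABI through the _GLIBCXX_USE_CXX11_ABI macro. For instance, if you ever need to match a prebuilt library that was built against the old ABI, you can compile your own code with the macro forced to 0 (file name illustrative):
g++ -std=c++11 -D_GLIBCXX_USE_CXX11_ABI=0 -c myfile.cpp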
If by
How were the C++ libraries from the repository compiled?
you mean, how were all of the C++ libraries in the Ubuntu repositories compiled, the answer is: it may be different for each one.
For instance, if you want to use libfreetype6-dev or libsdl2-dev: these are C libraries, and they will be okay to link to no matter what standard you target.
If you want to use libsilly-dev from CEGUI, that is a C++ library, and it is usually best to use the exact same compiler for your project and the C++ lib you are linking to. If it appears in the Ubuntu repository, you can assume it was built with the default g++ version that Ubuntu ships. If you need to use a different compiler, it's probably best to build the C++ lib yourself -- in general, C++ is not ABI-stable across different compilers, or even across different versions of the same compiler.
If you want to use compiled Boost libraries, it's probably best to use the libs they give you with the compiler they give you. If you only use header-only Boost, the compiler doesn't matter, since you don't actually have to link against anything they built, so you have more flexibility with respect to compilers.
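If you do decide to build Boost yourself so that the compiler and flags match your project exactly, the usual invocation is roughly the following, run from the Boost source root (options vary between Boost versions, so treat it as a sketch):
./bootstrap.sh
./b2 toolset=gcc cxxflags="-std=c++11"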
Often, if you need to use C++ libraries, it's best to integrate their build system into yours so that everything can easily be rebuilt from source and you only have to configure the compiler once (at least in my experience). This can save a lot of time when you decide to upgrade compilers later. If you use cmake then it's often feasible, though sometimes it can be hard, especially if you have a lot of C++ dependencies. If you don't use cmake, well, many libraries use cmake and it won't be that easy to integrate them this way. cmake is still kind of a pain anyway, so this might not be such a loss.
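As a minimal sketch of that approach for a CMake-based dependency (paths and names are placeholders), an out-of-source build pinned to the same compiler as the rest of your project looks like:
cmake -B build-dep -DCMAKE_CXX_COMPILER=g++ -DCMAKE_BUILD_TYPE=Release path/to/dependency
cmake --build build-dep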
I am currently working on the toolchain for a processor that has been developed at my university. The processor is closely based on OpenRISC (orpsocv2 has been used as a baseline). Building programs for that platform requires that some custom instructions are added to the binary. I already implemented tools that modify assembly code accordingly (utilizing regular expressions). However, I am looking for a way to integrate it with the GNU toolchain of OpenRISC.
A regular toolchain consists of the following tools:
preprocessor -> compiler -> assembler -> linker
I need my adaptations to be integrated somewhere after compilation (because I require information about the basic blocks that will be present in the binary) and before linking (because afterwards things get messy when you try to change addresses).
Now my question: Is there an easy way to add another tool between the compiler and the assembler of the GNU toolchain?
I don't want to do that manually in the Makefile, because I would like the tools to stay as compatible as possible with existing software projects.
So far, I haven't been able to find anything directly related in the GCC documentation or on the web.
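The closest lead I have is GCC's -B option: the driver searches any directory passed with -B for its subprograms, including the assembler, so a wrapper script named as placed there should be able to run my tool first. A rough sketch (rewrite-bb stands in for my rewriting tool, the or1k-elf- prefix is assumed for the OpenRISC toolchain, and I assume the tool can rewrite a file in place):
#!/bin/sh
# Wrapper named "as", found via: or1k-elf-gcc -B/path/to/wrapper-dir/ -c foo.c
# Rewrites each assembly input in place, then runs the real assembler
# with the original argument list.
for a in "$@"; do
  case "$a" in
    *.s) rewrite-bb "$a" -o "$a" ;;
  esac
done
exec /usr/bin/or1k-elf-as "$@"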
I have a C++11 project that uses Google Test, and it builds great on Linux. On a Mac, I am having more difficulty integrating it into my code base. The issue seems to be that while my code uses C++11, the Google code uses TR1. As a result, TR1 types such as tuple and unordered_set are included differently.
The Google Test samples build and run perfectly as provided. The samples also build just fine if I use clang++ instead of g++. (My code works only on clang++, so I use that to build.) Finally, Google's code also builds and runs if I use -std=c++11.
However, Google Test does not build with clang++ on my Mac if I use -stdlib=libc++. It reports that it cannot find tr1/tuple, which, of course, is true. This is a problem, because my code does not build if I use -stdlib=libstdc++ (or no stdlib argument at all).
Of course, I could switch all of my code over to the older standard. This, however, is extremely yuck. Is there a way to make these code bases live happily side-by-side on the Mac?
My code builds happily with Google Test using g++ 4.6.3 on an Ubuntu 12.04 machine. The Mac is running OSX 10.8.3, with g++ 4.2.1 and clang++ 4.2.
Thanks for any help,
David
PS: I am somewhat new to C++, so forgive me if this is a foolish question.
Edit: Changed "OS/X" to "OSX." (Yes, I am that old.)
You can instruct Google Test to use its own implementation of tr1::tuple.
In CMake, I use the following line to compile with "old" compilers:
add_definitions(-DGTEST_HAS_TR1_TUPLE=0)
I'm sure you can add it to your build system; it's a simple preprocessor definition.
You can look at include/gtest/internal/gtest-port.h for more options; GTEST_USE_OWN_TR1_TUPLE may be useful. Most of the options have sensible defaults.
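Outside CMake, the same thing is just a preprocessor flag on the compile line; for example, with clang++ on the Mac (file names illustrative):
clang++ -std=c++11 -stdlib=libc++ -DGTEST_HAS_TR1_TUPLE=0 -Igtest/include mytests.cpp libgtest.a -o mytests
Note that the Google Test library itself has to be built with the same definition, or the two sides will disagree about which tuple type is in use.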