I'm trying to get a cross compiler for Windows RT besides Visual Studio. Unfortunately there is no MinGW-ARM, so I have to do it myself. I pretty much know what I want: PE32+, Thumb-2 code always. I'm pretty sure GCC can handle both, so I thought it would be easy.
Apparently I was wrong: I can't find a working target. Since I'd like to have MinGW, I chose "arm-w64-mingw32" as my target, but the GAS configure script told me it doesn't know what the output format is. I also tried "arm-windows-pep", which GCC didn't recognize.
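Roughly what I'm running (the prefix and directory layout are just placeholders, not my real paths):

# placeholder paths; these are the two attempts described above
../binutils-src/configure --target=arm-w64-mingw32 --prefix=/opt/arm-mingw    # GAS configure rejects the output format
../gcc-src/configure --target=arm-windows-pep --prefix=/opt/arm-mingw         # GCC doesn't recognize this target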
I took a look at both configure scripts, but I can hardly follow the logic in them, since there is no easy way to debug shell scripts under Windows. Can somebody tell me the steps to add a new target?
In just one month of working with MPLAB X 5.5 + XC32 3.01 I've already had 3 separate instances where code compiled incorrectly, causing my program to fail after either the stack pointer or the frame pointer began using an incorrect address. I would like to dump these tools and try something else, as tracking down compiler bugs is sucking up too much of my time. Is there anything else available that I can use to work with a PIC32MM? Even access to a different compiler than XC32 might help.
I would like to do the same thing. Maybe we can collect the best options for how to get there, because after many, many tries I haven't yet been successful. As one starting point, I'd also like to be able to recompile xc32-gcc from source to understand exactly what it's doing, and to be able to compile xc32 binaries from other architectures (as insane as it may sound, I'd like to compile code for the PIC32MM platform with clang or gcc running on a Raspberry Pi).
I would love to be able to even just compile xc32-gcc from source. I know this is possible, but I've not been successful. Some links and starting points:
https://github.com/zeha/xc32
This seems to be the most recent grouping of source I've found, but I haven't yet figured out how to compile it.
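If the tree follows the usual GNU toolchain layout, I'd expect the build to look roughly like the following, though I haven't verified any of it against that repo; the target triple, prefix, and language set are pure guesses on my part:

# unverified sketch: standard out-of-tree GNU-style build; triple, prefix, and flags are guesses
mkdir build && cd build
../configure --target=mips-elf --prefix=/opt/xc32-from-source --enable-languages=c,c++ --disable-nls
make && make install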
chipKIT is cited a lot, but I haven't gotten to the bottom of getting that to build for me either. There are numerous projects here, and I'm not sure how they all fit together yet:
https://github.com/chipKIT32
I suspect somebody (maybe someone who will see this post) knows the formula or script or Dockerfile, or whatever, to make this simple.
https://gitlab.com/spicastack/pic32-parts-free
This project seems close to what we're talking about, but the recommended way to install is with podman and Gentoo. I'm not a Gentoo person (yet?), and the Docker version failed for me. It's probably a simple fix to the Dockerfile for a Gentoo person, but I didn't get there yet. (I did try installing Gentoo and started down that path, but holy cow, talk about being down a rabbit hole when what I'm trying to do is get a PIC cross-compiler working: when emerge on my new Gentoo install failed with a Python error, I gave up.)
https://github.com/andeha/Twinbeam
This project also says some of the "right things" about building PIC32 code using LLVM, and it references llvm2pic32 from this project: https://github.com/andeha/Sprinkle
I haven't yet managed to get this to produce viable Intel HEX files that I can use on a PIC, but there's promise.
Use clang/LLVM to generate code. I think it will compile C and generate MIPS out of the box, and I've gotten that far, but I can't get it to link and produce a valid hex file yet. The linker scripts from Microchip seem sort of OK, but the hex files end up putting the code in the wrong place, I think. I should probably put together a blinky-light example, try to push it further, and share it with others to figure out what the deal is (see the rough sketch below). But even stepping one step back, just getting a super simple MIPS assembly program linked and uploadable to a PIC32MM part would be a great success to me.
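For concreteness, here's roughly the pipeline I have in mind; the target triple, -march value, linker script name, and file names are placeholders, and I haven't verified any of this against a real PIC32MM part:

# unverified sketch: compile C to MIPS objects with clang, link with a custom script, convert to Intel HEX
clang --target=mipsel-none-elf -march=mips32r2 -ffreestanding -fno-builtin -c blinky.c -o blinky.o
clang --target=mipsel-none-elf -nostdlib -fuse-ld=lld -Wl,-T,pic32mm.ld blinky.o -o blinky.elf
llvm-objcopy -O ihex blinky.elf blinky.hex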
Maybe others have better references and links?
I am trying to convert a C program to VB.NET. (I know, just accept I have my reasons and leave it at that).
Anyway, I wasn't sure if the source code I had was any good, so after about a week of pulling my hair out I finally downloaded Code::Blocks, grabbed the MinGW directory, and now I can make a working, successful build from the command line using mingw32-make. Now I want to use Code::Blocks to build it, because ultimately I want to be able to step through the code; however, when I try, I just get a bunch of errors and no exe. I am not sure what I am doing wrong and just need help. For that matter, is there any way to just have MinGW create some kind of debug info that would be useful in Visual Studio? I am not a C programmer, so sorry if this seems obvious.
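The most I've come up with from the command line is something like the following; whether the makefile actually honours CFLAGS is a guess on my part, and from what I've read the resulting debug info is something gdb understands rather than Visual Studio:

rem sketch only: whether CFLAGS is passed through depends entirely on the project's makefile
mingw32-make CFLAGS="-g -O0"
rem the -g output (DWARF/stabs) can be inspected with gdb, which ships with many MinGW setups
gdb myprogram.exe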
I understand it is somehow making a connection so that the compiler, when invoked, connects the source code to whatever libraries it needs.
But what is going on at a more technical level? Or, better put, what do I need to know in order to confidently compile code?
I'm working with C++ and MinGW, and have started to look into build files and such for Sublime Text 2 (I have learned mostly under Unix, or Java + Eclipse, so far). But what I don't understand is: what does adding a compiler to your path do for you?
Do I need to add it for every folder I want to compile from? Or is it system-wide? I'm really learning this stuff for the first time; we were never shown how to set up development environments or even deploy code on other systems.
You probably mean include paths and library paths for the compiler:
include paths: where the compiler will look for headers; and
library paths: where the linker, invoked by the compiler, will look for binary libraries to finish building your project.
If that is the case, look here for a gentle explanation.
Basically, what is happening is that the compiler looks in certain places for symbols defined by the operating system and other libraries installed system-wide.
In addition to those paths, you need to tell the compiler where to find the symbols defined in your own project.
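For example (the paths, file names, and library name here are made up for illustration):

# tell the compiler where extra headers live, then tell the linker where extra libraries live
g++ -I/path/to/project/include -c main.cpp -o main.o
g++ main.o -L/path/to/project/lib -lmylib -o myapp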
You may also mean something related to installing the compiler itself or configuring the editor to use it.
In that case, what is happening is that you need to tell the build system where to find the executable for the compiler.
Basically, what is probably happening is that your editor wants to know where the compiler is so that it can provide real time feedback on your code. Adding the compiler to the system path will usually, but not always, solve your problem.
In more detail:
A C++ build is a rather complex tool chain, involving determining dependencies, preprocessing, compiling, and linking. There are tools that automate that tool chain, and those tools are in turn wrapped into the functionality of modern IDEs like Eclipse, Visual C++, or Sublime Text 2. You may need to tell your editor where to find the tools it uses to provide you with those services.
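On Windows with MinGW, for example, it usually comes down to something like this (assuming MinGW was installed to C:\MinGW; adjust the path to your actual install):

:: affects only the current console session; use the Environment Variables dialog to make it permanent
set PATH=%PATH%;C:\MinGW\bin
:: if this prints a version, anything launched from this session can find the compiler by name
g++ --version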
There is a bug in RHEL5's gcc-4.3.2 that we are stuck with. As a work-around, we have extracted the missing symbol into an object file. Adding this object file to every link makes the problem go away.
While adding it directly to LDFLAGS seems like a good solution, this doesn't work, since e.g. libtool cannot cope with non-.la files in there.
A slightly more portable solution seems to be to directly patch the gcc specs to add this to every link. I came up with:
*startfile:
+ %{shared-libgcc:%{O*:%{!O0:/PATH/TO/ostream-inst.o}}}
where ostream-inst.o is added to the list of startfiles used in the link when compiling a shared library with optimizations.
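For concreteness, I currently apply it with something like this (the spec file name is just a placeholder I picked, and the flags are only there to trigger the %{shared-libgcc:...} and %{O*:%{!O0:...}} conditions):

# placeholder file name; any link that satisfies the conditions above pulls in ostream-inst.o
g++ -specs=/PATH/TO/ostream-inst.specs -O2 -shared -shared-libgcc foo.o -o libfoo.so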
Trying to compile Boost with this spec gives some errors, though, since its build directly passes some objects inside ld's --start-group/--end-group.
How should I update that spec to cover that case as well, or even better, all cases?
Go through these: Specifying subprocesses and the switches to pass to them, and GCC Command Options.
If this helps you, that's great.
I know this is not the answer you want to hear (since you specified otherwise in your question), but you are running into trouble here and are likely to run into more since your compiler is buggy. You should find a way of replacing it, since you'll find yourself writing even more work-around code the next time some obscure build system comes along. There's not only bjam out there.
Sorry I can't help you more. You might try simply writing a .lo file by hand (it's a two-liner, after all) and inserting it into your LDFLAGS.
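For example, a hand-written ostream-inst.lo could look roughly like this (the paths are placeholders for wherever you keep the object):

# ostream-inst.lo - a hand-written libtool object file
# Name of the PIC object.
pic_object='.libs/ostream-inst.o'
# Name of the non-PIC object.
non_pic_object='ostream-inst.o'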
If it is a bug in GCC 4.3, did you try building a newer GCC from source and using that? GCC 4.6.2 is coming out right now. Did you consider using it?
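A rough sketch of what building a newer GCC into its own prefix usually looks like (the version, paths, and language set are only examples, and the GMP/MPFR/MPC prerequisites have to be available first):

# example only: build out of the source tree and install into a private prefix
tar xf gcc-4.6.2.tar.bz2
mkdir gcc-build && cd gcc-build
../gcc-4.6.2/configure --prefix=/opt/gcc-4.6.2 --enable-languages=c,c++
make && make install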
I'm porting an application to Mac OS X, but the original developer's build system uses NMAKE, and ideally they'd like to keep it instead of switching to a new one.
I've managed to get NMAKE running under OS X using Wine (built using MacPorts), added Objective-C support to the build files, and created a Unix-linked PE wrapper, 'run.exe', which Wine can load but which uses POSIX to call back into things like gcc and ld, as described in various places online as a way of escaping from Wine back into Unix.
However, I'm having a few specific issues. They're minor enough that I can get on with the port, but they do mean I sometimes need to run builds a few times because of timing.
Basically, when wine.exe calls back into the shell, and thus into gcc, the link between child processes seems to be broken. gcc and ld never return an error code, even on failure, because they can't get the exit code from their spawned children. ar actually prints that it can't find its child and returns immediately, causing problems when ld tries to link object files against libraries that are still being put together.
Has anyone else tried anything similar and seen the same problem, on OSX or elsewhere? Is there an obvious solution?
The Microsoft .NET Rotor (SSCLI) project includes source code intended to be built on OS X and elsewhere. The Rotor source code includes the source code for NMake. So get Rotor working, then use its NMake. Even if you prefer to continue using your Wine-based NMake, you could probably learn from Rotor's use of NMake on Unix, its use of GCC, etc.
If there is nothing too odd or inconsistent about the original developers' build system, could you write an automatic conversion of their makefiles back into Unix make, and keep your builds 'native'?
(Builds being fraught enough anyway, without extra complications)