Hello World Boot Loader

I'm trying to do a hello world for a boot loader in assembly.
I'm following this tutorial:
http://www.osdever.net/tutorials/view/hello-world-boot-loader
I searched, and it seems people say gcc doesn't work for assembling this kind of code, so I looked around and found flat assembler (FASM). When I try to assemble the example, it fails on the first line, [BITS 16], with the error 'Illegal instruction'.
What type of assembler does this code require?

I don't know whether the GCC toolchain can assemble this (and, if it can, whether it can emit 16-bit code), but the site you refer to recommends NASM. Did you try it with NASM?

I simply commented that line out and it worked in FASM. FASM seems to default to 16-bit code automatically. After assembling, it generated a BIN file with the same base name. I renamed it with an IMG extension, assigned it as a floppy disk image in VirtualBox to test it, and it booted fine.
As long as you write this BIN/IMG file to the first sector of the drive it seems to work okay. I used the tutorials on the above website as well.

Try removing the square brackets around BITS 16 in case you didn't. Note that BITS 16 is NASM syntax; the FASM equivalent is the use16 directive.
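For reference, a minimal NASM boot sector along the lines of that tutorial might look like the sketch below (the message text and label names are mine, not the tutorial's). Assemble with nasm -f bin boot.asm -o boot.img:

```nasm
[BITS 16]               ; 16-bit real mode (NASM directive)
[ORG 0x7C00]            ; BIOS loads the boot sector at this address

start:
    xor ax, ax
    mov ds, ax          ; DS = 0 so lodsb reads from the right segment
    mov si, msg
.print:
    lodsb               ; next byte of the string into AL
    or al, al           ; a zero byte marks the end
    jz .hang
    mov ah, 0x0E        ; BIOS teletype output
    int 0x10
    jmp .print
.hang:
    jmp .hang           ; spin forever

msg db 'Hello, world!', 0

times 510-($-$$) db 0   ; pad to 510 bytes
dw 0xAA55               ; boot signature in the last two bytes
```

The trailing 0xAA55 signature is what makes the BIOS accept the sector as bootable, which is why the image only works when written to the first sector.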

Is it possible to use any program as a library?

I'm trying to create some debug scripts around compiled programs: I prepare my variables in some code I generate, then jump into another program.
Is there a way to do that? For example, from some C code, jumping to a label or address in another executable. For now I'm focusing on ELF programs, but if something exists on Windows I'm also interested!
Thanks!
I've tried disassembling the ELF back into a .s for GCC and recompiling, but this doesn't seem to work well for all ELF files (e.g. non-PIE binaries). I've also looked for tools that would produce such a .s, but they are either buggy, incomplete, or both.

How to develop for PIC32MM without either MPLABX or XC32

While working for just one month with MPLAB X 5.5 + XC32 3.01, I've already had 3 separate instances where code compiled incorrectly, causing my program to fail after either the stack or frame pointer took on an incorrect address. I would like to dump these tools and try something else, as tracking down compiler bugs is eating too much of my time. Is there anything else available that I can use to work with a PIC32MM? Even access to a different compiler than XC32 might help.
I would like to do the same thing. Maybe we can collect the best options for how to get there; after many, many tries, I haven't yet been successful. As one starting point, I'd also like to be able to recompile xc32-gcc from source, both to understand exactly what it's doing and to be able to build xc32 binaries on other hosts (as insane as it may sound, I'd like to compile code for the PIC32MM platform with clang or gcc running on a Raspberry Pi).
Some links and starting points:
https://github.com/zeha/xc32
This seems to be the most recent grouping of source I've found, but I haven't yet figured out how to compile it.
chipKIT is cited a lot, but I haven't gotten to the bottom of getting that to build for me either. There are numerous projects here, and I'm not sure how they all fit together yet:
https://github.com/chipKIT32
I suspect somebody (maybe someone who will see this post) knows the formula or script or docker file, or whatever to make this simple.
https://gitlab.com/spicastack/pic32-parts-free
This project seems close to what we're talking about, but the recommended way to install is with podman and Gentoo. I'm not a Gentoo person (yet?), and the docker version failed for me. It's probably a simple fix to the Dockerfile for a Gentoo person, but I didn't get there yet. (I did try installing Gentoo and started down the path, but holy cow, talk about being down a rabbit hole when all I'm trying to do is get a PIC cross-compiler working; when emerge on my new Gentoo install failed with a Python error, I gave up.)
https://github.com/andeha/Twinbeam
This project also says some of the "right things" about building pic32 code using llvm, and has references to llvm2pic32 in this project: https://github.com/andeha/Sprinkle
I've not yet managed to get this to produce viable Intel HEX files that I can use on a PIC, but there's promise.
Use clang/llvm to generate code. I think it will compile C and generate MIPS out of the box, and I've gotten that far, but I can't get it to link and produce a valid hex file yet. The linker scripts from Microchip seem sort of OK, but the hex files end up putting the code in the wrong place, I think. I should probably put together a blinky-light example, try to push it further, and share it with others to figure out what the deal is. Even stepping back and just getting a super simple MIPS assembly program linked and uploadable to a PIC32MM part would be a great success to me.
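On the "code in the wrong place" point: the PIC32 memory map puts program flash at physical 0x1D000000, addressed by code through the cached KSEG0 window at virtual 0x9D000000, with the reset vector in boot flash at 0xBFC00000 and RAM at KSEG1 0xA0000000. A bare-bones GNU ld sketch of that layout looks like the following; the section names and the 256K/32K sizes are my assumptions (a 256K-flash PIC32MM part), so check them against the datasheet rather than treating this as a drop-in replacement for Microchip's scripts:

```
/* Minimal sketch of a PIC32MM link layout (not a working Microchip script). */
MEMORY
{
  kseg0_flash (rx) : ORIGIN = 0x9D000000, LENGTH = 256K  /* program flash, KSEG0 */
  boot_flash  (rx) : ORIGIN = 0xBFC00000, LENGTH = 6K    /* reset vector lives here */
  kseg1_ram  (rwx) : ORIGIN = 0xA0000000, LENGTH = 32K   /* data RAM, KSEG1 */
}

SECTIONS
{
  .reset : { KEEP(*(.reset)) } > boot_flash   /* entry stub at 0xBFC00000 */
  .text  : { *(.text*) *(.rodata*) } > kseg0_flash
  .data  : { *(.data*) } > kseg1_ram AT > kseg0_flash   /* load in flash, run in RAM */
  .bss   : { *(.bss*) *(COMMON) } > kseg1_ram
}
```

If the hex files "put the code in the wrong place", the virtual-vs-physical distinction is the usual suspect: objcopy emits the load addresses the script gives it, and a HEX tool expecting physical 0x1D000000 addresses will balk at KSEG0 virtual ones.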
Maybe others have better references and links?

Does liblldb-core.a really need to be 763MB in size?

This definitely takes the cake in terms of being the largest single piece of executable code I have ever seen.
Now, it was a bit easier to get this whole thing built on my Mac here (I have been trying to build LLDB on Linux as well, where I'm currently fighting with linking to Python), for which I am thankful, but this astoundingly large file has me second-guessing myself... Did I do something wrong? What is inside this monstrous archive?
I did run this:
% otool -TV liblldb-core.a
It produces 1159 lines of output, which puts it at around 350 object files. That sounds about right: I saw the Xcode project work its way through about 350 source files.
I guess my question is why does LLDB work this way, why isn't it a bit more lightweight and why doesn't it just link to LLVM and Clang code rather than doing whatever this is? OR, are the contents of this archive already ALL LLDB-specific code? I recognize building a debugger is quite the undertaking, but this is honestly just mind-boggling.
I am aware that compiling at -O3 likely bloats executable file size. I'm not about to go back and recompile this monster though (the computer nearly melted with smcFanControl reporting CPU core temps as high as 106 degrees C).
Update: I sort of chronicled some further learning I just did over here... I'm still not able to find a monstrous liblldb-core.a or anything of the sort inside Xcode.app, and I'm still a bit confused about how this whole thing works.
It's the debug info that's the real issue with liblldb-core.a. Around 720MB of the 750MB is DWARF. (You can test this yourself - ar x liblldb-core.a into a directory, then strip -S *.o and you'll have about 32MB of .o files.) DWARF without real type uniquing has some nasty bloat with C++ programs.
I can make a .tar.gz of the (stripped) LLDB framework and lldb driver program and they come in at around 10MB or so with that compression -- and this is after linking in all of the llvm and clang bits needed by lldb. Nothing particularly outrageous here, even if the intermediate steps can look crazy.
I should have noticed that I took that first screenshot inside the Debug build directory which indicates that it is huge potentially because of the build configuration I had used.
Edit: Nope, that isn't quite the answer. The Release directory actually contains a liblldb-core.a that is slightly bigger than the other one.
It does appear that this huge 700MB archive is some sort of "side effect" file. When I archived the project it produced a 369MB .xcarchive file. I'm sure this is a better representation of the "content" here. I am still basically learning this stuff by stumbling over it in the dark...
Update:
Oh, okay, looking at this situation it makes a bit more sense:
I took these files out of the Release directory after building in Xcode (and minimally tweaking it to use -O3 for Release). I can see here that the ~350MB dSYM bundle containing debug information takes up the majority of that .xcarchive from earlier, and that lldb's executable code actually totals under 40MB, most of which resides in the framework in an executable named LLDB.
This is a lot more reasonable to me. Now that I've gotten it out of Xcode and into my Documents directory where I can be more assured no external program would delete or modify it without my knowing, I can feel good about symlinking it from here to /usr/lib/lldb.

Codewarrior debugger not showing C source after compiling some code and data into a new ELF section

In our project, we are building an ELF file and a partially linked file (PLF) which is converted to a proprietary format and loaded into memory after the ELF is loaded. We use Codewarrior to run and debug, which has been working just fine (the C++ source code is always available to step through when debugging).
I've recently made a change where some code and data are compiled into a different section in the PLF file (.init, which was previously empty). Now, when debugging, a majority of the files are available only in assembler. When I re-build, no longer using .init, we can step through C++ source code again.
Does anyone know why this would be the case?
why this would be the case
One reason could be that CodeWarrior is not expecting to find code in the .init section.
You are unlikely to get a good answer here; try the CodeWarrior support forums.
I got this working by switching the order of the sections using the linker command file (.lcf) so that the .init section comes second after .text. I guess as Employed Russian suggests, CodeWarrior is surprised by having code in .init and craps out. Changing the order of the sections seems to have no ill effects and now debugging works as expected again.
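In GNU ld terms (CodeWarrior's .lcf syntax differs in detail, but the idea is the same), the reordering amounts to something like this; the input-section patterns are my sketch, not the project's actual linker command file:

```
SECTIONS
{
    .text : { *(.text) }   /* C++ code first, where the debugger expects it */
    .init : { *(.init) }   /* moved to second; having it first confused CodeWarrior */
    /* remaining sections unchanged */
}
```

Since .init is only reordered rather than removed, the PLF conversion still picks up its contents; the change only affects where the section lands in the output.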

Issues compiling libf2c w/ latest mingw-get-inst (3.16.11), gcc

I'm trying to port some very old fortran code to windows. I'd like to use mingw and f2c, which has no problem converting the code to usable C on OS X and Ubuntu. I used f2c.exe as distributed by netlib on a fresh install of mingw, and it translated the code fine. I have a "ported" version of libf2c that seems to still contain some unresolved references -- mostly file i/o routines (do_fio, f_open, s_wsfe, e_wsfe) and, peculiarly, one arithmetic routine (pow_dd). To resolve these issues, I tried to build libf2c from source, but ran into an issue during the make process. The make proceeds to dtime_.c, but then fails due to a dependency on sys/times.h, which is no longer a part of the mingw distro. There appears to be a struct defined in times.h that defines the size of a variable in dtime_.c, specifically t and t0 on lines 53 and 54 (error is "storage size of 't' isn't known"; same for t0).
The makefile was modified to use gcc, and make invoked with no other options passed.
Might anyone know of a workaround for this issue? I feel confident that once I have a properly compiled libf2c, I'll be able to link it with gcc and the code will work like it does on Linux and OS X.
FOLLOW-UP: I was able to build libf2c.a by commenting out the time-related files in the makefile (my code does not call any time-related functions, so I don't think it will matter). I copied it to a non-POSIX search directory as shown in -print-search-dirs, specifically C:\MinGW\lib\gcc\mingw32\3.4.5. That seems to have fixed the unresolved references, although the need to eliminate the time files does concern me. While my code is now working, the original question stands: how should one handle makefiles that call for sys/times.h on MinGW?
Are you sure the MinGW installation went correctly? As far as I can tell the sys/times.h header is still there, in the package mingwrt-3.18-mingw32-dev.tar.gz. I'm not familiar with the GUI installer, but perhaps you have to tick a box for the mingwrt dev component.
