Can the Visual Studio ARM Assembler produce binaries that don't require an OS?

I'll admit upfront that I don't know a whole lot about ARM development, so I may well have my information wrong here.
Visual Studio comes with an ARM assembler (armasm.exe), which is very convenient because I use the tools included with VS for basically everything, and I'm not keen on paying another vendor for an ARM assembler that comes bundled with a C compiler I'll never use.
Now, my understanding is that ARM binaries meant to run on the bare metal need to be in a raw binary format rather than something like ELF or PE. Is ARMASM capable of outputting binaries that can run without an operating system? The MSDN documentation for ARMASM appears to be lacking when it comes to that kind of information.
If not, can you recommend a free ARM assembler that provides macro support and doesn't come bundled with a bunch of extra fluff?

The assembler just produces object files. It's up to the linker to produce the final executable file. I'm fairly sure Microsoft uses essentially its usual linker, which produces PE format executables (a COFF variant, in case you care). Offhand, I don't know of a linker/locator that will take MS-COFF object files and produce a pure binary output file (though that hardly means one doesn't exist -- I've never really looked for one).
Also note that running on the bare metal usually means burning your file to some variant of ROM. That means you don't really need a pure binary output file -- what you really need is a file suitable for a ROM burner. That usually means Motorola S-records or Intel hex format (quite a few ROM burners accept both).
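Just to make that concrete (this is my own illustration, not something armasm or the MS linker produces directly): an Intel hex file is nothing but lines of ASCII records, each carrying a byte count, a load address, a record type, the data bytes, and a checksum.

:0400000001020304F2    <- four data bytes (01 02 03 04) targeted at address 0x0000
:00000001FF            <- end-of-file record

The checksum is the two's complement of the sum of all the other bytes in the record, so the burner (or a bootloader) can verify each line as it reads it. S-records are the same idea with a different syntax.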
I know that doesn't give you a "final answer", but it should at least give you a few terms suitable for Googling to get more relevant information...

Related

gdb, how to step into c runtime? Where is crt_c.c?

When I step into the program being debugged, gdb says that it can't find the crt/crt_c.c file. I have the sources of gcc 6.3.0 downloaded, but where is crt_c.c in there?
Also, how can I find the source code for printf and rand in there? I'd like to step through them in the debugger.
The IDE is Code::Blocks, if that's important.
Edit: I'm doing this because I'm trying to decrease the size of my executable. Going straight to a freestanding build leaves me with a lot of missing functions, so I intend to study and replace them one by one. The goal is to make my program a little smaller and faster, and to make the assembly output easier to study.
Also, I forgot to mention that I'm on Windows, msys2. But the answer is still helpful.
How can I find source code for printf and rand in there?
They (printf, rand, etc.) are part of your C standard library, which (on Linux) is outside of the GCC compiler. But crt0 is provided by GCC (though it is often not compiled with debug information), and some C files there are generated in the build tree during compilation of GCC.
(On Windows, most of the C standard library is proprietary -- inside some DLL provided by Microsoft -- and you are probably forbidden to look into the implementation or to reverse-engineer it; AFAIK EU law might provide some exception related to interoperability, but then you need to consult a lawyer, and I am not a lawyer.)
Look into GNU glibc (or perhaps musl-libc) if you want to study its source code. libc generally works by making system calls (listed in syscalls(2)) provided by the Linux kernel.
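To give a feel for what one of those replacements looks like in practice, here is a minimal sketch of my own (POSIX-flavoured, so it also builds under msys2) of a puts-style routine that bypasses stdio and calls the write system call directly; this is roughly the kind of rewrite every libc function needs if you go freestanding:

/* tiny_puts.c -- illustrative sketch only */
#include <stddef.h>   /* size_t */
#include <unistd.h>   /* write() */

static size_t my_strlen(const char *s)
{
    size_t n = 0;
    while (s[n] != '\0')
        n++;
    return n;
}

static int tiny_puts(const char *s)
{
    /* fd 1 is stdout; one call for the text, one for the newline */
    if (write(1, s, my_strlen(s)) < 0)
        return -1;
    if (write(1, "\n", 1) < 0)
        return -1;
    return 0;
}

int main(void)
{
    return tiny_puts("hello, no printf involved") < 0;
}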
I'd like to step through them in debugger.
In practice you won't be able to do that easily, because the libc is provided by your distribution and has generally been compiled without debug information in DWARF format.
Some Linux distributions provide a debuggable variant of libc, perhaps as some libc6-dbg package.
(your question lacks motivation and smells like some XY problem)
I intend to study and replace them one by one.
This is very unrealistic (particularly on Windows, whose system call interface is not well documented) and could take you many years (or perhaps more than a lifetime). Do you have that much time?
Read also Operating Systems: Three Easy Pieces and look into OsDev wiki.
I'm trying to do so because I'm trying to decrease size of my executable.
Wrong approach. A debugger needs debug info (e.g. in DWARF) which will increase the size of the executable (but could later be stripped). BTW standard C functions are in some common shared library (or DLL on Windows) which is used by many processes.
I'm on windows, msys2.
Bad choice. Windows is proprietary. Linux is made of free software (more than ten billion lines of source code, if you count all the useful packages in a typical Linux distribution), whose source code you could study (even if it would take several lifetimes).

Suggestions on how to write a debug format conversion tool

I'm looking to write a tool that converts debug symbols from one format to another format that can be used under GDB. This seems like a tedious and potentially complex project, so I'm not exactly sure how to tackle it.
Initially I'm aiming to convert the Turbo Debugger symbol table (TDS) emitted by Borland compilers into something like the stabs or DWARF format (DWARF seems to be preferred, from my research). Ideally, though, I want to design my tool to be easy to extend so it could convert other formats too later on, e.g. CodeView4 or maybe even PDB.
My primary motivations for creating this are:
Interoperability. If I can convert a foreign debug format into a form gdb can work with, then source-level debugging becomes possible on binaries produced by compilers other than gcc. Any frontend debugging interface that uses gdb as a backend will then work as well.
No other tools exist. I did some Googling for similar tools, and the closest I've found is tds2dbg. But it doesn't quite do what I'm looking for.
What I have to work with at the moment:
I already have a debug hook API that can understand the TDS debug format. I can use that to help me get at the needed information from the source format I'm converting from.
For the scope of this project, I'm mainly interested in getting this to work under the win32 environment. Other platforms and tools I'm not really concerned about.
The target DWARF debug format I'm converting to. This one I'm really not familiar with at all. I have used GCC-based compilers like MinGW before and debugged their output with gdb using the DWARF format, but I don't have any idea how this format is implemented on Windows.
The last point is the one I'm concerned about. I'm reading through the DWARF spec documentation, but I'm having trouble really understanding and comprehending how it works. There's a great deal of detail, yet at the same time it says nothing about how DWARF is laid onto object files and image files on a platform that doesn't use ELF natively -- namely the PE-COFF format that Windows uses. The documentation is also a very dry read; long sentences make it hard to follow, and diagrams and illustrations are sparse. I came across a library called libdwarf that should take most of the parsing work out of interpreting DWARF. The problem is I'm still trying to get it to build, and I don't know yet how it will work out.
I haven't written any code yet, since I don't fully understand what it is I need to build. I have a feeling the biggest hurdle will be figuring out how to work with DWARF, due to its complexity. Googling for information on how DWARF works under Windows hasn't turned up anything helpful either. For example, there's no information about the 'glue' that's needed to contain DWARF within a PE executable image file. How are the DWARF sections laid out? Is there any header information for each section? GDB clearly doesn't just take a 'raw' DWARF debug file and use it as is, so what kind of format does gdb expect the debug file to be in for it to work with it?
My question is, how can I start on such a project? More importantly, where can I turn to for help when I inevitably get stuck on a problem?
Affinic Assembler for Windows
Affinic Assembler is an x86/x86-64 assembler for Windows that takes GAS-syntax assembly source with DWARF debug information and generates corresponding CodeView-format sections in the object file, in order to make the linked program debuggable in Visual Studio. This program is useful for Cygwin and MinGW users who want to port Linux code to Windows.
http://www.affinic.com/?page_id=48
You are asking several questions here :-)
I think you are heading in the right direction, using libdwarf.
BUT, have you taken a look at objcopy to see if this tool can do some of the work for you? It probably doesn't support borland, pdb or codeview4, but it might be worth looking into. (Another approach may be to extend objcopy to support the formats you are trying to convert between.)
I have used the dwarf-discuss mailing list sometimes when I have become stuck.
http://lists.dwarfstd.org/listinfo.cgi/dwarf-discuss-dwarfstd.org
As for the questions on DWARF, split them into separate questions and I will do my best to answer them. :-)

What are available executable binary formats and emulators?

For fun, I'm working on a compiler for a small language, and I'm targeting the ARM instruction set first due to its ease. Currently, I'm able to compile the code so I have ARM machine code for the body of each method. At this point I need to start tying a few things together:
What format should I persist my machine code to so I can...
Run it in what debugger?
Currently there's no I/O support, etc., so debugging will be heavily keyed to my ability to step through the disassembly and view processor registers/memory.
I'm running Windows and my compiler only runs in Windows, so having some sort of emulator on Windows would be preferable.
Edit: It appears I can use the Visual Studio Windows Mobile 6 emulator. For now, I might be able to simply save the results in a simple binary format and load it into emulator memory via a tiny C++ console application, then jump into it with a function pointer. Later, it appears I would need to support the ELF and PE formats.
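As a sketch of that "tiny console application" idea (my own illustration; "image.bin" and an entry point at offset 0 are assumptions, and for the Windows Mobile emulator you would build it with the Windows CE toolchain), the loader only has to read the raw image into executable memory and call it through a function pointer:

/* loader.c -- illustrative sketch only; error handling kept minimal */
#include <windows.h>
#include <stdio.h>

typedef int (*entry_fn)(void);

int main(void)
{
    FILE *f = fopen("image.bin", "rb");       /* hypothetical file name */
    if (f == NULL) {
        printf("cannot open image.bin\n");
        return 1;
    }
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);

    /* allocate a buffer the CPU is allowed to execute from */
    void *mem = VirtualAlloc(NULL, size, MEM_COMMIT | MEM_RESERVE,
                             PAGE_EXECUTE_READWRITE);
    if (mem == NULL) {
        fclose(f);
        return 1;
    }
    fread(mem, 1, size, f);
    fclose(f);

    /* assumes the image's entry point is its very first byte */
    entry_fn entry = (entry_fn)mem;
    printf("code returned %d\n", entry());

    VirtualFree(mem, 0, MEM_RELEASE);
    return 0;
}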
Regarding file formats... the simplest would be:
Motorola S-record
Intel hex file
Those formats can record the binary data and the target address range(s) for the data to be loaded. That's about it.
A more capable format to contain more information:
ELF
for maximum information, include DWARF debug information
ELF is fairly widely supported, and not too complex. DWARF allows you to record very expressive debug information for debugging of complex language constructs. However, to achieve that expressiveness, it can be a very complex format to write.

Is There a Way to Tell What Language Was Used for a Program?

I have a desktop program I downloaded and installed. It runs from an .exe file.
Is there some way from the .exe file to tell what programming language was used to write the program?
Are there any tools available to help with this?
What languages can be determined and which ones cannot?
Okay, here are two examples of the sort of thing I'm looking for:
Tips to Determine Whether an App is Written in Delphi or Not
This "IsDelphi" program by Bruce McGee will find all applications built with Delphi, Delphi for .Net or C++ Builder that are on your hard drive.
I use WinDowse (a small freeware utility written in Delphi) to spy on the windows of the program. For example, if you look at the "Class" tab you can discover the "Class" name of the control.
For example:
TFormXX, TEditYY, TPanelZZZ for Delphi apps
WindowsForms10.XXXX.yyy for .NET apps
wxWindowsXXX for wxWindows apps
AfxWndXX for MFC/VC++ apps (I think)
I think this is the fastest way (although not the most accurate) to find information about an app.
I understand your curiosity.
You can identify Delphi and C++ Builder apps and their SKU by looking for a couple of specific resources that the linker adds. Specifically RC Data\DVCLAL and RC DATA\PACKAGEINFO. The XN Resource Editor makes this a lot easier, but it might choke on compressed EXEs.
EXE compressors complicate things a little. They can hide or scramble the contents of the resources. Programs compressed with UPX are easy to identify with a hex editor, because the first two sections in the PE header are named UPX0 and UPX1. You can use UPX itself to decompress these.
Applications compiled with .Net aren't difficult to detect. Recent versions of Delphi even include an IsAssembly function, or you could do a little spelunking in the PE header. Check out the IsManaged function in IsDelphi.
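For the "spelunking in the PE header" part, the managed check boils down to whether the CLR runtime header data directory is populated. A rough C sketch of the idea (my illustration, not IsDelphi's code; for brevity it assumes the target's PE32/PE32+ flavour matches this build's IMAGE_NT_HEADERS, where a robust tool would check OptionalHeader.Magic):

/* is_managed.c -- rough sketch only */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

static BOOL IsManagedImage(const BYTE *base)
{
    const IMAGE_DOS_HEADER *dos = (const IMAGE_DOS_HEADER *)base;
    if (dos->e_magic != IMAGE_DOS_SIGNATURE)              /* "MZ" */
        return FALSE;

    const IMAGE_NT_HEADERS *nt =
        (const IMAGE_NT_HEADERS *)(base + dos->e_lfanew);
    if (nt->Signature != IMAGE_NT_SIGNATURE)              /* "PE\0\0" */
        return FALSE;

    /* a .NET assembly has a non-empty CLR (COM descriptor) directory */
    const IMAGE_DATA_DIRECTORY *clr =
        &nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR];
    return clr->VirtualAddress != 0 && clr->Size != 0;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        printf("usage: is_managed <file.exe>\n");
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (f == NULL)
        return 1;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    BYTE *buf = malloc(size);
    fread(buf, 1, size, f);
    fclose(f);

    printf("%s\n", IsManagedImage(buf) ? "managed (.NET)" : "native");
    free(buf);
    return 0;
}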
Telling which .NET language was used is trickier. By default, VB.NET includes a reference to Microsoft.VisualBasic, and VCL.NET apps included Borland-specific references. However, VCL.NET is defunct in favour of Delphi Prism, and you can add a reference to the VB assembly from any managed language.
I haven't looked at the apps that use signatures to identify the compiler, so I don't know how well they work.
I hope this helps.
First, look to see what run time libraries it loads. A C program won't normally load Visual Basic's library.
Also, examine the executable for telltale strings. In most executables, this is near the end. If the program uses string constants, there might be a clue in how they are stored.
A good disassembler, plus of course an excellent understanding of the underlying CPU architecture, can often help you identify the runtime libraries that are in play. Unless the exe has been carefully "stripped" of symbols and/or otherwise masked, the names of symbols seen in runtime libraries will often provide you with programming-language hints, because different languages' standards specify different names, and vendors of compilers and accompanying runtime libraries usually respect those standards pretty closely.
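As a small illustration (generic examples of my own, not taken from any particular binary), the same function int func(void) shows up under quite different symbol names depending on the compiler and language:

_func          C, typical 32-bit Windows compilers (plain name, leading underscore)
?func@@YAHXZ   C++ built with MSVC (Microsoft name decoration)
_Z4funcv       C++ built with GCC/Clang (Itanium C++ ABI mangling)

An import table full of plain names like printf and malloc similarly points at the C runtime, while heavily decorated imports point at C++.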
Of course, you won't get there without knowledge of the various possible languages and their library standards -- and if the code's author was intent to mask the information, that's not too hard for them to do, either.
If you have available a large set of samples from known compilers, I should think this would be an excellent application for machine learning. I believe so-called "supervised learning" is relevant here. Unfortunately I know next to nothing about the topic—only that I have heard some impressive results presented at conferences.
You might dig through the proceedings of the Working Conference on Reverse Engineering to see if anyone else is interested in this problem.
Assuming this is an application for Windows...
Does Reflector recognize it as a .NET assembly? Then it's MSIL, 99% either VB or C#, but you'll likely never know which, nor does it matter.
Does it need an interpreter (like Java)? Then it's Java (or whatever the interpreter is).
Check what runtime DLLs it requires.
Does it require the VB runtime DLLs? Congratulations, VB from Visual Studio 6.0 or earlier.
Does it require the Delphi DLLs? Congratulations, Delphi.
Did you make it this far? C/C++. Assume C++ unless it requires msys or cygwin DLLs, in which case C has maybe a 25% chance.
Congratulations, this should come out correct for the vast majority of Windows software. This probably doesn't actually help you though, as a lot of the same things can be done in all of these languages.
IDA Pro Free (http://www.hex-rays.com/idapro/idadownfreeware.htm) may be helpful. Even if you don't understand assembly language, if you load the EXE into IDA Pro then its initial progress output might (if there are any telltale signs) include its best guess as to which compiler was used.
Start with various options to dumpbin. The symbol names, if not carefully erased, will give you all kinds of hints as to whether it is C, C++, CLR, or something else.
Other tools use signatures to identify the compiler used to create the executable, like PEiD, CFF Explorer and others.
They normally match the code at the executable's entry point against their signature database.
Signature Explorer from CFF Explorer can give you an understanding of how one signature is constructed.
It looks like the VC++ linker from version 6 onwards adds a signature to the PE header which you can parse.
I suggest PEiD (freeware, closed source). It has signatures for all the Delphi for Win32 versions, and can also tell you which packer was used (if any).

Lua compiled scripts on Mac OS X - Intel vs PPC

Been using Lua 5.0 in a Mac OS X universal binary app for some years. Lua scripts are compiled using luac and the compiled scripts are bundled with the app. They have worked properly in Tiger and Leopard, Intel or PPC.
To avoid library problems at the time, I simply added the Lua src tree to my Xcode project and compiled as is, with no problems.
It was time to update to a more modern version of Lua so I replaced my source tree with that of 5.1.4. I rebuilt luac using make macosx (machine is running Leopard on Intel).
Uncompiled scripts work properly in Tiger and Leopard, Intel and PPC, as always.
However, now compiled scripts fail to load on PPC machines.
So I rebuilt luac with the 'ansi' flag, and recompiled my scripts. Same error. Similarly, a build flag of 'generic' produced no joy.
Can anyone please advise on what I can do next?
Lua's compiled scripts are pretty much the raw bytecode dumped out after a short header. The header documents some of the properties of the platform used to compile the bytecode, but the loader only verifies that the current platform has the same properties.
Unfortunately, this creates problems when loading bytecode compiled on another platform, even if compiled by the very same version of Lua. Of course, scripts compiled by different versions of Lua cannot be expected to work, and since the version number of Lua is included in the bytecode header, the attempt to load them is caught by the core.
The simple answer is to just not compile scripts. If Lua compiles the script itself, you only have to worry about possible version mismatches between Lua cores in your various builds of your application, and that isn't hard to deal with.
Actually supporting full cross-compatibility for compiled bytecode is not easy. In a message on the Lua mailing list, Mike Pall identified the following issues:
Endianness: swap on output as needed.
sizeof(size_t), affects huge string constants: check for overflow when downgrading.
sizeof(int), affects MAXARG_Bx and MAXARG_sBx: check for overflow when downgrading.
typeof(lua_Number): easy in C, but only when the host and the target follow the same FP standard; precision loss when upgrading (rare case); warn about non-integer numbers when downgrading to int32.
From all the discussions that I've seen about this issue on the mailing list, I see two likely viable approaches, assuming that you are unwilling to consider just shipping the uncompiled Lua scripts.
The first would be to fix the byte order as the compiled scripts are loaded. That turns out to be easier to do than you'd expect, as it can be done by replacing the low-level function that reads the script file without recompiling the core itself. In fact, it can even be done in pure Lua, by supplying your own chunk reader function to lua_load(). This should work as long as the only compatibility issue over your platforms is byte order.
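To make the "supply your own chunk reader" route concrete, here is a minimal C sketch of my own against the Lua 5.1 API ("script.luac" is just an example name, and the byte-order translation itself is left as a placeholder):

/* load_chunk.c -- illustrative sketch only */
#include <stdio.h>
#include <stdlib.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

struct chunk { const char *data; size_t size; int done; };

/* lua_Reader: hand the whole chunk to the core in one piece, then signal EOF */
static const char *reader(lua_State *L, void *ud, size_t *size)
{
    struct chunk *c = (struct chunk *)ud;
    (void)L;
    if (c->done) return NULL;
    c->done = 1;
    *size = c->size;
    return c->data;
}

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);

    /* slurp the precompiled file into memory */
    FILE *f = fopen("script.luac", "rb");
    if (f == NULL) return 1;
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *buf = malloc(n);
    fread(buf, 1, n, f);
    fclose(f);

    /* ...this is where a byte-order/size translation pass would run... */

    struct chunk c = { buf, (size_t)n, 0 };
    if (lua_load(L, reader, &c, "script") || lua_pcall(L, 0, 0, 0))
        fprintf(stderr, "error: %s\n", lua_tostring(L, -1));

    free(buf);
    lua_close(L);
    return 0;
}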
The second is to patch the core itself to use a common representation for compiled scripts on all platforms. This has been described as possible by Luiz Henrique de Figueiredo:
....
I'm convinced that the best route to byte order or cross-compiling is third-party dump/undump pairs. The files ldump.c and lundump.c are completely replaceable; they export a single, well-defined, entry point. The format of precompiled chunks is not sacred at all; you can use any format, as long as ldump.c and lundump.c agree about it. (For instance, Rici Lake is considering writing a text format for precompiled chunks.)
....
Personally, I'd recommend giving serious consideration to not pre-compiling the scripts and thus avoid the platform portability issues entirely.
Edit: I've updated my description of the bytecode header thanks to lhf's comment. I hadn't read this part of the Lua source yet, and I probably should have checked it before being quite so assertive about what information is or is not present in the header.
Here is the fragment from lundump.c that forms a copy of the header matching the running platform for comparison to the bytecode being loaded. It is simply compared with memcmp() for an exact match to the header from the file, so any mismatch will cause the stock loader (luaU_undump()) to reject the file.
/*
** make header
*/
void luaU_header (char* h)
{
 int x=1;
 memcpy(h,LUA_SIGNATURE,sizeof(LUA_SIGNATURE)-1);
 h+=sizeof(LUA_SIGNATURE)-1;
 *h++=(char)LUAC_VERSION;
 *h++=(char)LUAC_FORMAT;
 *h++=(char)*(char*)&x;                 /* endianness */
 *h++=(char)sizeof(int);
 *h++=(char)sizeof(size_t);
 *h++=(char)sizeof(Instruction);
 *h++=(char)sizeof(lua_Number);
 *h++=(char)(((lua_Number)0.5)==0);     /* is lua_Number integral? */
}
As can be seen, the header is 12 bytes long and contains a signature (4 bytes, "<esc>Lua"), version and format codes, a flag byte for endianness, sizes of the types int, size_t, Instruction, and lua_Number, and a flag indicating whether lua_Number is an integral type.
This allows most platform distinctions to be caught, but doesn't attempt to catch every way in which platforms can differ.
I still stand by the recommendations made above: first, ship compilable sources; or second, customize ldump.c and lundump.c to store and load a common format, with the additional note that any custom format should redefine the LUAC_FORMAT byte of the header so as to not be confused with the stock bytecode format.
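As a trivial illustration of that last note (the value here is arbitrary, my own example): the stock Lua 5.1 header uses LUAC_FORMAT 0, the "official" format, so a custom dump/undump pair only needs to agree on any other value:

/* in both your replacement ldump.c and lundump.c -- illustrative value only */
#undef  LUAC_FORMAT
#define LUAC_FORMAT 77   /* hypothetical "portable dump" format id */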
You may want to use a patched bytecode loader that supports different endianness.
See this.
I would have commented on RBerteig's post, but I apparently don't have enough reputation yet to be able to do so. In working on bringing LuaRPC up to speed with Lua 5.1.x AND making it work with embedded targets, I've been modifying the ldump.c and lundump.c sources to make them both a bit more flexible. The embedded Lua project (eLua) already had some of the patches you can find on the Lua list, but I've added a bit more to make lundump a little more friendly to scripts compiled on different architectures. There's also cross-compilation support provided so that you can build for targets differing from the host system (see luac.c in the same directory as the links below).
If you're interested in checking out the modifications, you can find them in the eLua source repository:
http://svn.berlios.de/wsvn/elua/trunk/src/lua/lundump.c
http://svn.berlios.de/wsvn/elua/trunk/src/lua/lundump.h
http://svn.berlios.de/wsvn/elua/trunk/src/lua/ldump.c
Standard Disclaimer:
I make no claim that the modifications are perfect or work in every situation. If you use it and find anything broken, I'd be glad to hear about it so that it can be fixed.
Lua bytecode is not portable. You should ship source scripts with your application.
If download size is a concern, they are generally shorter than the bytecode form.
If intellectual property is a concern, you can use a code obfuscator, and keep in mind that disassembling Lua bytecode is anything but difficult.
If loading time is a concern, you can precompile the sources locally in your installation script.
I conjecture that you compiled the scripts on an Intel box.
Compiled scripts are wildly unportable. If you really want to precompile scripts, you'll need to include two versions of each compiled script: one for Intel and one for PPC. Your app will have to interrogate which program it's running on and use the correct compiled script.
I don't have enough reputation to comment, so I have to provide this as an answer instead even though it's not an appropriate answer to the question asked. Sorry.
There is a Lua obfuscator available here:
http://www.capprime.com/CapprimeLuaObfuscator/CapprimeLuaObfuscator.aspx
Full disclosure: I am the author of the obfuscator and I am aware it is not perfect. Feedback is welcome and encouraged (there is a feedback page available from the above page).
