I somehow managed to produce a Mac executable of half the usual size using Lazarus, but nobody knows how. I've inspected the executables with nm, but found no relevant differences:
exactly the same maximum address
nearly exactly the same number of lines (123316 vs 123318)
randomly chosen lines that I inspected were either identical or differed only slightly in address
I wonder how I can find out more. I know very little about the Mac, and the same goes for binary formats.
I want to produce a binary file to load onto a MIPS target (bare metal, no OS) that will contain only one jump instruction and will be linked at a specific address. How do I do that? With all my attempts I get quite big files that apparently contain linked libraries (which are unnecessary for a single jump instruction). I believe I should get a file of just a few bytes in length. How can I do it?
Thanks.
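For illustration, here is a minimal sketch of the kind of source and build steps that should give an image of only a few bytes. The toolchain prefix (mips-elf-), the link address (0xBFC00000) and the jump target (0x80001000) are placeholder assumptions, not values from the question:

/* jump.c - nothing but a single jump, written as top-level inline
 * assembly so that no startup files or libraries get involved.
 * 0x80001000 is a placeholder jump target.
 */
__asm__(
    "    .section .text\n"
    "    .set noreorder\n"
    "    .globl _start\n"
    "_start:\n"
    "    j   0x80001000\n"
    "    nop\n"            /* branch delay slot */
);

/* Possible build steps, assuming a GNU MIPS cross toolchain:
 *   mips-elf-gcc -c -ffreestanding jump.c -o jump.o
 *   mips-elf-ld -Ttext=0xBFC00000 -e _start -o jump.elf jump.o
 *   mips-elf-objcopy -O binary jump.elf jump.bin
 * Invoking ld directly means no default libraries or startup code
 * are pulled in, and objcopy -O binary strips the ELF headers, so
 * jump.bin should end up being just 8 bytes (the jump plus its
 * delay-slot nop).
 */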
I'm working on a large project in C++ using Visual Studio, but it very regularly either produces a duff build (the executable it generates doesn't match the code, resulting in random crashes or the inability to set breakpoints) or refuses to give any debug info for many of the types. For example, a vector of very simple structs stored by value will be displayed as "size: attempt to divide by zero". You can't drill down into the entries of the vector to see the values, and you get a similar thing for lists only you see a bunch of question marks instead of the divide by zero error.
This doesn't just affect standard library containers, but they are some of the worst culprits because they so often behave in this way. Doing a complete rebuild of the code will maybe rectify the problem 10% of the time, but it's completely unpredictable. I have found that writing shorter C++ files (I literally mean the file size, nothing to do with the objects themselves) can sometimes help, but I suspect that's just down to luck. It really doesn't make much sense that it could be relevant, anyway.
I work as part of a team on the same project, and only two of us seem to run into these kinds of gnarly problems on a daily basis.
If anyone has any suggestions as to how I might be able to get the VS debugger to behave, I would be incredibly grateful.
OK, I have a problem: I do not know the correct terms to find what I am looking for on Google, so I hope someone here can help me out.
When developing real-time programs on embedded devices you might have to iterate a few hundred or thousand times until you get the desired result. When using e.g. ARM devices, you wear out the internal flash quite quickly. So typically you develop your programs to reside in the RAM of the device and all is fine. This is done using GCC's ability to split the code into various sections.
Unfortunately, the RAM of most devices is much smaller than the flash. So at some point your program gets too big to fit in RAM with all its variables etc. (You choose the device size on the assumption that the whole code will fit in flash later.)
Classical shared objects do not work, as there is nothing like a dynamic linker in my environment. There is no OS or anything of the sort.
My idea was the following: for the controller it is no problem to execute code from both RAM and flash, and when the functions are compiled with the correct attributes it is also no big problem for the compiler to put part of the program in RAM and part in flash.
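Roughly like this (a minimal sketch; the section name .ramfunc is just an example, and I assume a linker script that places that output section in RAM):

/* Stays in flash: ends up in the normal .text section. */
int stable_helper(int x)
{
    return 2 * x;
}

/* Goes to RAM: the attribute moves it into the .ramfunc section,
 * which the (assumed) linker script places in RAM - e.g. loaded
 * there directly by the debugger, so the flash is never rewritten.
 */
__attribute__((section(".ramfunc"), noinline))
int volatile_work(int x)
{
    return stable_helper(x) + 1;
}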
When I have some functionality running successfully, I create a library from it and put it in flash. The main development is then done in the 'volatile' part in RAM, so the flash gets preserved.
The problem here is: I need to make sure that the library always gets linked to the exact same location as long as I do not reflash. So a single function must always be at the same address in flash for each compile cycle. When something in flash is missing, it must be placed in RAM or a linking error must be thrown.
I thought about putting together a real library and linking against that. Here I am a bit lost: I need to tell GCC/LD to link against a prelinked file (and to create such a prelinked file in the first place).
It should be possible to put all the library objects together and link them into flash. Then the addresses could be extracted and the main program (to be run from RAM) could be linked against them. But how do I do these steps?
On the internet I found the term prelink, as well as a matching program for Linux, which is intended to speed up load times. I do not know whether this program might help me as a side effect; I doubt it, but I do not understand the internals of how it works.
Do you have a good idea of how to reach this goal?
You are solving a non-problem. Embedded flash usually has a MINIMUM endurance of 10,000 write cycles. So even if you flash it 20 times a day, it will last a year and a half. An ST Nucleo is $13, so that's less than 3 pennies a day :-). The TYPICAL endurance is even higher, at about 100,000 cycles. It will be a long time before you wear them out.
Now if you are using them for dynamic storage, that might be a concern, depending on the usage patterns.
But to answer your question: you can build your code into a library (.a file) easily enough. However, GCC does not guarantee that it links the object code in any particular order; it depends on the optimization level. Furthermore, only the objects from a library that are actually referenced get pulled in, so if your function calls change, the linker may pull in more or fewer library functions.
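If you still want the fixed-address setup, one possible approach (a sketch, not a drop-in recipe) is to link the flash part once on its own and let later RAM builds resolve symbols against that image via GNU ld's --just-symbols option. The file names, section name, linker scripts and toolchain prefix below are assumptions for illustration:

/* flash_lib.c - stable functions, pinned into flash */
__attribute__((section(".flash_text"), noinline))
int stable_add(int a, int b)
{
    return a + b;
}

/* Sketch of the two link steps (flash.ld and ram.ld are hypothetical
 * linker scripts; flash.ld places .flash_text at the flash base):
 *
 *   # link and flash the library image once
 *   # (plus whatever startup/entry code the flash image needs)
 *   arm-none-eabi-gcc -nostdlib -Wl,-T,flash.ld flash_lib.c -o flash_lib.elf
 *
 *   # later RAM builds reuse only the *addresses* of that image
 *   arm-none-eabi-gcc -nostdlib -Wl,-T,ram.ld ram_main.c \
 *       -Wl,--just-symbols=flash_lib.elf -o ram_image.elf
 *
 * --just-symbols makes calls from the RAM image resolve to the
 * already-flashed addresses without relinking the flash code; a
 * symbol that exists in neither image still produces an ordinary
 * "undefined reference" link error.
 */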
I've written some code in C for an ATmega128, and I'd like to know how the changes I make in the code influence the program memory.
To be more specific, let's consider code similar to this:
d=fun1(a,b);
c=fun2(c,d);
The change I make in the code is that I call the same functions more times, e.g.:
d=fun1(a,b);
c=fun2(c,d);
h=fun1(k,l);
n=fun2(p,m);
etc...
I build the solution in Atmel Studio 6.1 and I see the changes in the program memory.
Is there any way to foresee, without building the solution, how the changes in the code will affect the program memory?
Thanks!!
Generally speaking, this is next to impossible with C/C++ (meaning the effort does not pay off).
In your simple case (the number of calls increases), you can determine the number of instructions for each call and multiply by the number of calls. This will only be correct if the compiler does not inline the calls and does not apply higher-level optimizations.
These calculations might also become wrong if you upgrade to a newer GCC version.
So normally you only get exact numbers when you compare two builds (same compiler version, same optimisations). avr-size and avr-nm give you all the information you need, for example to compare functions by size. You can automate this task (by converting the output into .csv files) and use a spreadsheet or diff to look for changes.
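For example, one rough way to measure the cost of an extra call pair is to compare two otherwise identical functions with avr-nm. Here fun1/fun2 stand in for the functions from the question; the file name and flags are an assumed setup:

int fun1(int a, int b);
int fun2(int c, int d);

int once(int a, int b)            /* one call pair */
{
    int d = fun1(a, b);
    return fun2(a, d);
}

int twice(int a, int b)           /* two call pairs */
{
    int d = fun1(a, b);
    int c = fun2(a, d);
    int h = fun1(c, d);
    return fun2(c, h);
}

/* Compile with your usual flags and compare the symbol sizes:
 *   avr-gcc -mmcu=atmega128 -Os -c size_probe.c
 *   avr-nm --size-sort size_probe.o
 * The difference between twice() and once() approximates the flash
 * cost of one extra call pair - but only as long as the compiler
 * does not inline or merge the calls, as described above.
 */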
This method normally only pays off if you have to squeeze a program into a smaller device (from 4k flash into 2k, for example - you already have 128k flash, which is quite a lot).
This process is frustrating, because applying the same design pattern in C with small differences can lead to different sizes: from C/C++ you cannot really predict what is going to happen.
I have some knowledge of operating systems (really little).
I would like to learn a lot specifically about the Windows OS (e.g. Windows 7).
I know it's the most dominant OS out there, and there is an enormous amount of work I'll have to do.
Where do I start? What are beginner/intermediate books/articles/websites that I should read?
The first thing I wonder about is that the compiler turns my C programs into binary code, yet when I open the resulting (.exe) files, I see something other than 0s and 1s.
I can't point you in a direction as far as books go, but I can clarify this:
The first thing I wonder about is that the compiler turns my C programs into binary code, yet when I open the resulting (.exe) files, I see something other than 0s and 1s.
Your programs are in fact compiled to binary. Everything on your computer is stored in binary.
The reason you do not see ones and zeros is the way character encodings work. A byte is stored as eight bits, each of which can have the value 0 or 1. A lot of programs and character encodings represent one byte as one character (with the caveat of non-ASCII Unicode characters, but that's not terribly important in this discussion).
So what's going on is that the program you are using to open the file interprets each sequence of eight bits and turns it into one character. Each character you see when you open the file is, in fact, eight ones and zeros. The most basic mapping between bytes and characters is ASCII. The character "A", for example, is represented in binary as 01000001, so when the program you use to open the file sees that bit sequence, it will display "A" in its place.
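If you want to see the raw bits for yourself, here is a small illustrative C program (not something built into Windows) that prints the first bytes of any file both as bits and as characters:

#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }
    int ch;
    /* dump the first 16 bytes: eight bits each, then the character */
    for (int i = 0; i < 16 && (ch = fgetc(f)) != EOF; i++) {
        for (int bit = 7; bit >= 0; bit--)
            putchar(((ch >> bit) & 1) ? '1' : '0');
        printf("  %c\n", (ch >= 32 && ch < 127) ? ch : '.');
    }
    fclose(f);
    return 0;
}

Run on a text file that starts with "A", the first output line is 01000001 followed by A.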
A nice book to read if you are interested in the Microsoft Windows operating system is The Old New Thing by Microsoft legend Raymond Chen. It is very easy reading if you are a Win32 programmer, and even if you are not (even if you are not a programmer at all!) many of the chapters are still readily accessible.
Otherwise, to understand the Microsoft Windows OS, you need to understand the Windows API. You learn this by writing programs for the Windows (native) platform, and the official documentation, which is very good, is at MSDN.
There are a series of books titled "Windows Internals" that could probably keep you busy for the better part of a couple years. Also, Microsoft has been known to release source code to universities to study...
Well, if you study the Win32 API you will learn a lot about the OS at a high level
(Petzold is the king, and his book is not about Win 7, just Win32...).
If you want to study the low level, study the processor's assembly language.
There are a ton of resources out there for learning operating systems in general, many of which don't really focus on Windows because, as John pointed out, it's very closed and not very useful academically. You may want to look into something like Minix, which is very useful academically. It's small, light, and made pretty much for the sole purpose of education.
From there you can branch out into other OSes (even Windows, as far as not being able to look under the hood can take you), armed with a greater knowledge of what an OS is and does, as well as more knowledge of the inner workings of the computer itself. (For example, opening executable code in what I assume was a text editor, such as Notepad, to try to see the 1s and 0s, which, as cdhowie eloquently pointed out, is not doing what you think it's doing.)
I would personally look into the ReactOS project - a working Windows clone.
The code can give some idea of how Windows is implemented...
Here is the site: www.reactos.org