I stopped the make command mid-way, will the previous build be affected? - makefile

I built a very big project, which had a number of sub-projects, using the make command. It took me 3 hours. Then by mistake (without cleaning the previous build) I re-executed the make command for a few minutes and then stopped it.
Have I ruined my previous build? How does make actually work behind the scenes? Are object files built in an atomic and safe manner?
Note: I cannot really run any of my binary files to see if they are broken, since that's another lengthy process. I just want to know whether I am fine or whether I have to re-run make and let it finish.

If you want to publish this binary as a production version of your commercial product, then I would not rely on it; always be 100% sure that you are using a successfully built version of a fully saved and committed code base.
On the other hand, if you need this for debugging purposes, then you could use it! Why? Because make overwrites the output binary only once it finishes compiling all the object files, and only if it detects changes that require a relink of the binary:
After recompiling whichever object files need it, make decides whether to relink edit. This must be done if the file edit does not exist, or if any of the object files are newer than it. If an object file was just recompiled, it is now newer than edit, so edit is relinked.
From GNU make: How Make Works
So if you haven't changed your code base, the linker will not relink the binary, leaving it as it was created by the successful build.
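If you just want to confirm that nothing would be relinked, make can tell you without rebuilding anything. A minimal sketch, assuming your top-level target is named all (substitute your real target):
# Ask make whether anything is out of date, without building (exit status 0 = up to date):
make -q all && echo "up to date" || echo "something would be rebuilt"
# Or list the commands make would run, without running them:
make -n all
One reassuring detail: when GNU make is interrupted while updating a target, it deletes that partially written target (unless the target is marked .PRECIOUS), precisely so that a half-finished file is not mistaken for a good one on the next run.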

Trick gmake build / Copying files to preserve time stamps and avoid complete rebuild

short background info:
For some rather convoluted reasons I am trying to trick my project's build system. We are working with Code Composer Studio on Windows from Texas Instruments and the gmake implementation that was included with it. We use the standard Debug build option pretty much unmodified, as far as I am aware (my understanding of make and of our implementation of it is unfortunately limited). When developing in Code Composer the build works as one would expect: only the changed files, and whatever depends on them, are rebuilt. Sometimes, however, I need to regenerate all code from a code generator that we use, which triggers a complete rebuild.
To trick my way around this complete rebuild I wrote a Python script that moves all the original files out of the project, waits for the code generator, then compares the files and replaces the new identical files with the old ones. Thus the creation timestamp and the last-modified timestamp should be preserved, and this does seem to hold when examining those properties. However, a complete rebuild of the project is always triggered.
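For what it's worth, the core of that script, in shell form rather than Python, would be something like this minimal sketch (the old/ and new/ directory names are illustrative):
# For each regenerated file that is byte-identical to its old copy,
# put the old copy back so its original modification time survives:
for f in new/*; do
    name=$(basename "$f")
    if cmp -s "old/$name" "$f"; then
        cp -p "old/$name" "$f"    # -p preserves mode, ownership and timestamps
    fi
done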
Core of the Problem:
Having already built the project in Code Composer, building again concludes that nothing has changed and that there is nothing to do, as one would expect.
I move a file out of the folder and back in again, preserving both the creation time stamp and the last-modified time stamp as far as I can see (in Windows Explorer -> Properties).
When now rebuilding the project, it will rebuild said file and its dependents, despite the file being identical to before and having the same timestamps.
What is gmake looking at? Does it detect that the folder has changed? Is it looking at some other hidden timestamp? Is it a Windows problem? Shouldn't it be possible to do this, i.e. insert a file with an old timestamp?
What am I missing? Does anyone have a clue?
Having examined this, I concluded that the problem does not lie with gmake or the makefile but with Code Composer Studio's/Eclipse's makefile generation.
We use the built-in makefile generation in Code Composer Studio (which is an Eclipse derivative). It seems to keep track of whether files have been removed or moved out of the project's workspace. If they have, it removes the dependencies for those files during the makefile generation, triggering a rebuild.
Examples:
Case 1, works as I expect:
Rebuilding an unchanged project in Code Composer Studio. Nothing is recompiled since nothing is changed.
Case 2, works as I expect:
Changing something in a single file and then rebuilding results in recompilation of only that single file, as one expects.
Case 3, does not work as I expect:
Taking an unchanged project, moving one source code file (xyz.cpp) out of the project, not changing it in any way, and then moving it back results in a recompilation of xyz.obj.
Code Composer/Eclipse seems to keep track of the file structure. Despite the file being put back into the project exactly the way it was, Code Composer/Eclipse deletes xyz.obj during the makefile generation, triggering a rebuild.
On the other hand: closing Code Composer, moving the file back and forth, reopening Code Composer and regenerating the makefile/rebuilding works as one expects. Nothing is rebuilt and no .obj files are removed.
Since this didn't have anything to do with gmake and the makefile, I guess this question was invalid/irrelevant to begin with. I will collect my thoughts and reformulate it as a new question framed strictly as a Code Composer/Eclipse matter.
If anyone has found themselves in the same situation or has relevant information at hand, please do share!

Fastest way to rebuild the Linux kernel using rpmbuild

I just built the Linux kernel for CentOS using the instructions found here: https://wiki.centos.org/HowTos/Custom_Kernel
Now I have made my changes and I would like to rebuild the kernel and test it with them. How do I do that, but:
1. Without having to recompile everything. The build process should reuse whatever object files from the first build won't need to be regenerated.
2. Without having to build the other packages that are built with the kernel (e.g., debuginfo, tools, debug-devel, etc.).
Thanks.
You cannot. The paradigm of rpmbuild is to always start from a clean slate to ensure reproducibility and predictability. The subpackages would also be invalidated, because they depend on the exact output of your kernel build, e.g. locations within the binary images where certain symbols are defined, which may have changed when you rebuilt it.

How can I build bootimage without going through all Makefiles

I am currently working on the Linux kernel for an Android phone. My workflow is:
Make changes to kernel code
Build with make bootimage
Flash with fastboot flash boot
This works fine. However, building takes unnecessarily long because make bootimage first goes through the entire tree and includes all Android.mk files. This takes longer than actually compiling the kernel and creating the boot image. Including these files should not be necessary, since nothing in them has changed. To decrease the turnaround time of my workflow, I would like to speed up the build step.
When building other projects, there are ways to skip building dependencies and thus skip reading all the Android.mk files (for instance mm).
There is a make target bootimage-nodeps which seems to do the right thing: it makes a new boot image without going through all the Android.mk files. Unfortunately the dependencies it skips also include the kernel itself (which therefore does not get rebuilt even though there are changes).
My question is: is there a way to build the kernel and create a boot image without having to read all the Android.mk files?
In case you're still looking into it, try using the showcommands goal with make, for example:
make bootimage showcommands
The showcommands goal will show all the commands needed to build the kernel and the bootimage. Some of the commands, including the one that creates the bootimage, have $(hide) in front and are normally suppressed; showcommands makes them visible.
Once you know the commands and their arguments, the next time you need to make the bootimage you can run those commands manually (without using make bootimage and without including all the makefiles).
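A small sketch of that workflow, assuming a standard AOSP checkout (the mkbootimg arguments below are only illustrative; copy the exact invocation from your own showcommands output):
# Capture the full command log once:
make bootimage showcommands 2>&1 | tee bootimage-commands.log
# Later, after rebuilding just the kernel, replay the captured mkbootimg
# call by hand; it will have roughly this shape:
# out/host/linux-x86/bin/mkbootimg --kernel <your-kernel> \
#     --ramdisk out/target/product/<device>/ramdisk.img \
#     --output out/target/product/<device>/boot.img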
I have exactly the same problem and this is the only working solution I've found.
I am not sure you can save time this way (this solution needs to zip/unzip multiple times, which on my machine takes longer than the search for all the Android.mk files), but since your question is:
Is there a way to build the kernel and create a boot image without having to read all Android.mk files?
you could give this a try:
Preparations:
call make dist once
unzip the target_files.zip in out/dist/
now create a script that does the following for you (a rough sketch follows this list):
overwrite the kernel in your unpacked target_files with your newly built kernel
zip your target_files with your new kernel
use the python script img_from_target_files from build/tools/releasetools/ with the extra parameter -z. Example: img_from_target_files -z out/dist/target_files.zip new_imgs.zip
inside the newly created new_imgs.zip you will find your new boot.img with your new kernel
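A hedged sketch of such a script, assuming target_files.zip was unpacked into out/dist/target_files/ and that the freshly built kernel sits in the usual out/target/product/<device>/ location (both paths are illustrative; inside a target-files package the kernel lives under BOOT/):
# Overwrite the kernel, re-zip the package, and regenerate the images:
cp out/target/product/<device>/kernel out/dist/target_files/BOOT/kernel
(cd out/dist/target_files && zip -r ../target_files-new.zip .)
build/tools/releasetools/img_from_target_files -z out/dist/target_files-new.zip new_imgs.zip
# new_imgs.zip now contains the boot.img with the new kernel.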
You can also try make with ONE_SHOT_MAKEFILE if you know the path to your Android.mk:
m -j8 ONE_SHOT_MAKEFILE=build/target/board/Android.mk bootimage
This worked pretty well for me in the Android L/M/N releases.

Xcode force touch

How can I force Xcode (4.5.x) to "touch" a file (foo.m) every time I hit the build button?
I would like that file to be compiled every time even though nothing has changed (It contains a DATE & TIME macro in case you are curious).
That's a poor solution. Better would be to run a build script that will generate a source file containing the current date and time. You can then compile and link this generated source file with the final binary.
This will also make it easier to manage with git, as foo.m won't be seen to change even though no functional changes have occurred.
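For the record, a minimal sketch of such a build script as an Xcode "Run Script" phase, assuming a generated file named BuildDate.m and a variable named kBuildDate (both names are made up for the example; SRCROOT is Xcode's standard source-root build setting):
# Regenerate BuildDate.m on every build so the current date/time is compiled in.
cat > "${SRCROOT}/BuildDate.m" <<EOF
// Auto-generated at build time; do not edit or commit.
const char *kBuildDate = "$(date '+%Y-%m-%d %H:%M:%S')";
EOF
Adding the generated file to .gitignore keeps it from ever showing up as a change, which is exactly the git benefit mentioned above.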

How do you verify that 2 copies of a VB 6 executable came from the same code base?

I have a program under version control that has gone through multiple releases. A situation came up today where someone had somehow managed to point to an old copy of the program and thus was encountering bugs that have since been fixed. I'd like to go back and just delete all the old copies of the program (keeping them around is a company policy that dates from before version control was common and should no longer be necessary), but I need a way of verifying that I can generate the exact same executable, something better than saying "the old one came out of this commit, so this one should be the same."
My initial thought was to simply MD5 hash the executable, store the hash file in source control, and be done with it but I've come up against a problem which I can't even parse.
It seems that every time the executable is generated (method: Open Project. File > Make X.exe) it hashes differently. I've noticed that Visual Basic messes with files in seemingly random ways every time the project is opened, but I didn't think that would make it into the executable, nor do I have any evidence that that is indeed what's happening. To guard against it I tried generating the executable multiple times within the same IDE session and checking the hashes, but they continued to be different every time.
So that's:
Generate Executable
Generate MD5 Checksum: md5sum X.exe > X.md5
Verify MD5 for current executable: md5sum -c X.md5
Generate New Executable
Verify MD5 for new executable: md5sum -c X.md5
Fail verification because computed checksum doesn't match.
I'm not understanding something about either MD5 or the way VB 6 is generating the executable but I'm also not married to the idea of using MD5. If there is a better way to verify that two executables are indeed the same then I'm all ears.
Thanks in advance for your help!
That's going to be nearly impossible. Read on for why.
The compiler will win this game, every time...
Compiling the same project twice in a row, even without making any changes to the source code or project settings, will always produce different executable files.
One of the reasons for this is that the PE (Portable Executable) format that Windows uses for EXE files includes a timestamp indicating the date and time the EXE was built, which is updated by the VB6 compiler whenever you build the project. Besides the "main" timestamp for the EXE as a whole, each resource directory in the EXE (where icons, bitmaps, strings, etc. are stored in the EXE) also has a timestamp, which the compiler also updates when it builds a new EXE. In addition to this, EXE files also have a checksum field that the compiler recalculates based on the EXE's raw binary content. Since the timestamps are updated to the current date/time, the checksum for the EXE will also change each time a project is recompiled.
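You can see both effects for yourself, assuming binutils and coreutils are available (e.g. via MSYS or Cygwin on Windows; objdump can read PE headers):
# Build twice from identical source, then compare header timestamps and hashes:
objdump -p build1/X.exe | grep -i 'Time/Date'
objdump -p build2/X.exe | grep -i 'Time/Date'
md5sum build1/X.exe build2/X.exe    # the digests differ despite identical source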
But, but...I found this really cool EXE editing tool that can undo this compiler trickery!
There are EXE editing tools, such as PE Explorer, that claim to be able to adjust all the timestamps in an EXE file to a fixed time. At first glance you might think you could just set the timestamps in two copies of the EXE to the same date, and end up with equivalent files (assuming they were built from the same source code), but things are more complicated than that: the compiler is free to write out the resources (strings, icons, file version information, etc.) in a different order each time you compile the code, and you can't really prevent this from happening. Resources are stored as independent "chunks" of data that can be rearranged in the resulting EXE without affecting the run-time behavior of the program.
If that wasn't enough, the compiler might be building up the EXE file in an area of uninitialized memory, so certain parts of the EXE might contain bits and pieces of whatever was in memory at the time the compiler was running, creating even more differences.
As for MD5...
You are not misunderstanding MD5 hashing: MD5 will always produce the same hash given the same input. The problem here is that the input in this case (the EXE file) keeps changing.
Conclusion: Source control is your friend
As for solving your current dilemma, I'll leave you with this: associating a particular EXE with a specific version of the source code is more a matter of policy, which has to be enforced somehow, than anything else. Trying to figure out which EXE came from which version without any context is just not going to be reliable. You need to track this with the help of other tools, for example by ensuring that each build produces a different version number for your EXEs, and that that version can be easily paired with a specific revision/branch/tag/whatever in your version control system. To that end, a "free-for-all" situation where some developers use source control and others use "that copy of the source code from 1997 that I'm keeping in my network folder because it's my code and source control is for sissies anyway" won't make this any easier. I would get everyone drinking the source control Kool-Aid and adhering to a standard policy for creating builds right away.
Whenever we build projects, our build server (we use Hudson) ensures that the compiled EXE version is updated to include the current build number (we use the Version Number Plugin and a custom build script to do this), and when we release a build, we create a tag in Subversion using the version number as the tag name. The build server archives release builds, so we can always get the specific EXE (and setup program) that was given to a customer. For internal testing, we can choose to pull an archived EXE from the build server, or just tell the build server to rebuild the EXE from the tag we created in Subversion.
We also never, ever, ever release any binaries to QA or to customers from any machine other than the build server. This prevents "works on my machine" bugs, and ensures that we are always compiling from a "known" copy of the source code (it only pulls and builds code that is in our Subversion repository), and that we can always associate a given binary with the exact version of the code that it was created from.
I know it has been a while, but since there is now a VB Decompiler app, you might consider bulk-decompiling your VB6 apps and then feeding the decompilation results to an AI/statistical anomaly detector run across the various code bases. Given that the problem you face doesn't have an exact solution, the results are unlikely to be 100% accurate, but as you feed in more data the detection should become more and more accurate.
