Buildroot 'make <pkg>-rebuild' same as 'make <pkg>'? - makefile

On a Buildroot-managed project I had just checked out, I ran make <pkg>-rebuild by mistake, instead of make all followed by make <pkg>-rebuild.
When I noticed my mistake, I was surprised that the build had gone through smoothly, without errors, just as if I had run make all followed by make <pkg>-rebuild as suggested in the documentation available in my company for this package.
I'm totally new to Buildroot, and I was wondering: is this the expected behavior for all Buildroot-managed projects? In other words, can I keep running just make <pkg>-rebuild instead of make all followed by make <pkg>-rebuild?
EDIT: in other words, for a package never built before, should make <pkg>-rebuild have the very same effect as make <pkg>?
Note: I didn't find the answer in the Buildroot user manual.

make <pkg> builds:
- all the dependencies of <pkg> that have not been built yet
- <pkg> itself, if it has not been built yet
So, if make <pkg> is executed twice in a row, the second call will do nothing.
make <pkg>-rebuild builds:
- all the dependencies of <pkg> that have not been built yet (same as above)
- the build and the following steps for <pkg>, no matter whether they have already been done
So, if make <pkg>-rebuild is executed twice in a row, the second call will not run the extract, patch and configure steps, but it will execute the build and install steps.
make <pkg>-rebuild is used, for example, when you edit the package build recipe in <pkg>/<pkg>.mk and want to build the package again with the new rules.
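A minimal sketch of that workflow, assuming a hypothetical in-tree package named foo; the -reconfigure and -dirclean targets are the related, more aggressive variants Buildroot provides:
# 'foo' is a hypothetical package name following the standard in-tree layout
$EDITOR package/foo/foo.mk    # edit the build recipe
make foo-rebuild              # re-runs the build and install steps of foo
make foo-reconfigure          # would additionally re-run the configure step
make foo-dirclean             # would wipe the package build directory completely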

Related

How can I prepare a Linux source tree so an external module can be compiled against it?

I am keeping a Wi-Fi driver alive by patching compilation errors for new kernel versions. I can build it against a source tree, so I do not have to boot the kernel for which I want to fix it.
Unfortunately, for this I have to compile the entire kernel. I know how to build a smaller version by using make localmodconfig, but that still takes very long.
Recently, I learned about the prepare target. This allows me to "compile" the module, so I learn about compilation problems. However, it fails in the linking phase, which prevents using make prepare in a Git bisect run. I also had the impression that it requires cleaning the source tree from time to time due to spurious problems.
The question is: What is the fastest way to prepare a source tree so I can compile a Wi-Fi module against it?
The target you are looking for is modules_prepare. From the doc:
An alternative is to use the "make" target "modules_prepare." This will make sure the kernel contains the information required. The target exists solely as a simple way to prepare a kernel source tree for building external modules.
NOTE: "modules_prepare" will not build Module.symvers even if CONFIG_MODVERSIONS is set; therefore, a full kernel build needs to be executed to make module versioning work.
If you run make -j modules_prepare (-j is important to execute everything in parallel) it should run pretty fast.
So what you need is basically something like this:
# Prepare kernel source
cd '/path/to/kernel/source'
make localmodconfig
make -j modules_prepare
# Build your module against it
cd '/path/to/your/module/source'
make -j -C '/path/to/kernel/source' M="$(pwd)" modules
# Clean things up
make -j -C '/path/to/kernel/source' M="$(pwd)" clean
cd '/path/to/kernel/source'
make distclean
The last clean-up step is needed if you are in a bisect run, before proceeding to the next bisection step; otherwise you may leave behind unwanted object files that can make later builds fail.
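For the bisect use case, a sketch of a helper script for git bisect run could look like the following; the paths are the same placeholders as above, and exit code 125 tells bisect to skip a commit that cannot even be prepared:
#!/bin/sh
# test-module.sh -- hypothetical helper for 'git bisect run ./test-module.sh'
KSRC='/path/to/kernel/source'
MODSRC='/path/to/your/module/source'

cd "$KSRC" || exit 125
make localmodconfig && make -j modules_prepare || exit 125

# A non-zero status here marks the commit as "bad" for the bisect
make -j -C "$KSRC" M="$MODSRC" modules
status=$?

# Always clean up so the next bisection step starts from a pristine tree
make -j -C "$KSRC" M="$MODSRC" clean
make distclean

exit "$status"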

How to re-run ./configure with the same parameters?

A few months ago I installed an application from source by executing
./configure --whatever
make
sudo make install
I reinstalled my OS recently and now I would like to install this application again, but I don't remember what compile flags I used back then. However, since I kept all files from my home, I still have the source code and my original build. Is there a way to recover the ./configure parameters I used originally?
Just re-run make? After all, I think you are mainly after re-running the installation phase rather than the configure step.
However, to answer your question specifically: autotools record your configure invocation in the config.status file.
You can call it to re-run your configure phase "under the same conditions":
./config.status --recheck
(This is mainly needed internally by autotools: if you hack your Makefile.am or your configure.ac, automake will rebuild the build system so the changes take effect; in order to do this correctly, it must invoke configure in the same way as it was originally called by the user.)
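If you only want to see which flags were recorded rather than re-run configure, both config.status and config.log keep a copy of the original command line; a quick sketch, assuming Autoconf 2.62 or newer for the --config option:
./config.status --config    # prints the options configure was called with (Autoconf >= 2.62)
head config.log             # the original invocation is also recorded near the top of config.log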

"go build" became very slow after installing a new version of Go

After upgrading from Go 1.2.1 to 1.3 (Windows 7 64 bit) "go build" execution time has increased from around 4 to over 45 seconds. There were no other changes except the go version update. Switching off the virus scanner seems to have no effect. Any clues?
You probably have dependencies that are being recompiled each time. Try go install -a mypackage to rebuild all dependencies.
Removing $GOPATH/pkg also helps to ensure you don't have old object files around.
Building with the -x flag will show you if the toolchain is finding incompatible versions.
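As a concrete sketch of those three suggestions (mypackage is a placeholder import path; this assumes a pre-modules GOPATH workspace):
go install -a mypackage      # rebuild mypackage and all of its dependencies
rm -rf "$GOPATH/pkg"         # drop stale .a files left over from the previous Go version
go build -x mypackage        # -x prints every tool invocation, so you can see what gets recompiled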
I had the exact same problem; running this command solved it:
go get -u -v github.com/mattn/go-sqlite3
Another tip: http://kokizzu.blogspot.co.id/2016/06/solution-for-golang-slow-compile.html
Using Go 1.6, simply run go build -i.
It will compile all the dependencies and store them under $GOPATH/pkg/*/* as .a files.
Later, when you run go run main.go, everything is much faster.
What's really great is that if you use vendored dependencies (i.e. a vendor folder in your project), the deps are built appropriately within $GOPATH/pkg/**/yourproject/vendor/**.
So you don't have to go get/install anything globally and end up with a mix of vendored and global dependencies.
I suspect you have to rebuild the .a files after a dependency update (glide update or something like that), but I have not tested that yet.
Since Go 1.10, you just need to type go build; you no longer need go build -i.
From the draft Go 1.10 release notes:
Build & Install
The go build command now detects out-of-date packages purely based on the content of source files, specified build flags, and metadata stored in the compiled packages. Modification times are no longer consulted or relevant. The old advice to add -a to force a rebuild in cases where the modification times were misleading for one reason or another (for example, changes in build flags) is no longer necessary: builds now always detect when packages must be rebuilt. (If you observe otherwise, please file a bug.)
...
The go build command now maintains a cache of recently built packages, separate from the installed packages in $GOROOT/pkg or $GOPATH/pkg. The effect of the cache should be to speed builds that do not explicitly install packages or when switching between different copies of source code (for example, when changing back and forth between different branches in a version control system). The old advice to add the -i flag for speed, as in go build -i or go test -i, is no longer necessary: builds run just as fast without -i. For more details, see go help cache.
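On Go 1.10 or later, you can also inspect or reset that cache directly; a small sketch:
go env GOCACHE    # show where the build cache is stored
go build ./...    # rebuilds only what changed; no -i needed
go clean -cache   # wipe the cache entirely if you suspect it is stale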
I just experienced the same problem when updating from 1.4 to 1.5. It seems that the old versions are somehow incompatible or are being rebuilt every time, as go build -x shows. Executing go get -v invalidates all packages or re-fetches them (I am not quite sure which), and afterwards go build -x shows much less output.
You can build sqlite3 like this:
cd ./vendor/github.com/mattn/go-sqlite3/
go install
After that, your project will be built much faster.
If you have tried everything the other answers suggest and it still doesn't work, I suggest removing the $GOPATH directory and fetching everything again:
sudo rm -rf $GOPATH
cd yourproject
go get -d
go get -u -v github.com/mattn/go-sqlite3

Should a Makefile delete itself on 'make clean'?

I have a configure script that writes a Makefile (from Makefile.in). The clean target currently removes everything created from within the makefile, but it doesn't delete the makefile itself. (I'm not using Autotools as you can probably tell)
My question therefore: Should the makefile also remove itself, requiring the developer to run ./configure again?
On the one hand, I want the clean target to properly clean up the source tree. But, on the other hand, I'd like to be able to type make clean test to check that everything's working as it should before committing; running the configure script again seems weird somehow.
This is a stylistic question, rather than a technical question. The best place to go for answers is the automake manual, which will tell you:
`make clean'
Erase from the build tree the files built by `make all'.
`make distclean'
Additionally erase anything `./configure' created.
So, no, make clean should not delete Makefile. make distclean should delete Makefile, since it's created by configure, not by make all.
One of the best things about autotools is that they are consistent and standard. It's best to not irritate your users by flouting those standards.
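For a hand-written configure script and Makefile.in like the asker's, a minimal sketch of that clean/distclean split (file and target names here are purely illustrative) could look like:
# Makefile.in sketch; file names are illustrative
clean:
	rm -f *.o myprog         # everything 'make all' produced

distclean: clean
	rm -f Makefile config.h  # everything './configure' produced

.PHONY: clean distclean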
I'd probably have a separate target for that. So clean would leave them able to build again, but distclean or realclean or allclean or something similar would force a reconfigure. You could check which autotools clean target (if any) has similar behaviour.
The purpose of the clean target is usually to remove intermediate files so that you can start your compile from scratch. For instance, a common makefile target is "clean", which generally performs actions that clean up after the compiler: removing object files and the resulting executable.

How to force final target in autotools

I have an autotools project. In one of its directories, I would like to run a script after the make process is done. In other words, I'd like to have a "phony" target that is executed last. Alternatively, I could use a dedicated m4 macro (if only I knew which one...).
Any ideas?
Thanks
I'm assuming that by "autotools", you're using Automake as well as Autoconf. I can see two ways of doing this.
You can make a -hook rule in your Makefile.am. However, this can only be done for certain default targets: install-data, install-exec, uninstall, dist and distcheck. So, to make a rule that will be run immediately after install-exec, call it install-exec-hook. Then just run the script in the recipe for that rule.
Based on the wording of your question, though, it seems that you want to run the script after building. If that's the case, you can customize the all target with an all-local target and then run the script in the recipe for this target. Note that, according to the Automake documentation,
With the '-local' targets, there is no particular guarantee of
execution order; typically, they are run early, but with parallel make,
there is no way to be sure of that.
However, since the all target is phony, it shouldn't run until everything is built. Nevertheless, if you can run the script after installation, I would recommend that approach, since its execution order is guaranteed.
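A minimal Makefile.am sketch showing both variants, assuming a hypothetical post-build.sh script shipped in the source tree:
# Makefile.am sketch; 'post-build.sh' is a hypothetical script
all-local:
	$(srcdir)/post-build.sh    # runs as part of 'make all' (ordering not strictly guaranteed)

install-exec-hook:
	$(srcdir)/post-build.sh    # runs after 'make install-exec' has completed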
