Which operating systems support passing -j options to sub-makes? - makefile

From the GNU make manual:
The ‘-j’ option is a special case (see Parallel Execution). If you set
it to some numeric value ‘N’ and your operating system supports it
(most any UNIX system will; others typically won’t), the parent make
and all the sub-makes will communicate to ensure that there are only
‘N’ jobs running at the same time between them all. Note that any job
that is marked recursive (see Instead of Executing Recipes) doesn’t
count against the total jobs (otherwise we could get ‘N’ sub-makes
running and have no slots left over for any real work!)
If your operating system doesn’t support the above communication, then
‘-j 1’ is always put into MAKEFLAGS instead of the value you
specified. This is because if the ‘-j’ option were passed down to
sub-makes, you would get many more jobs running in parallel than you
asked for. If you give ‘-j’ with no numeric argument, meaning to run
as many jobs as possible in parallel, this is passed down, since
multiple infinities are no more than one.
Which common operating systems support or don't support this behavior?
And how can you tell if your OS supports it?

To tell if your make supports this, run this command from your shell prompt:
echo 'all:;@echo $(filter jobserver,$(.FEATURES))' | make -f-
If it prints 'jobserver', then you have support. If it prints nothing, you do not have support. Or, if your OS doesn't support echo or pipelines, create a small makefile containing:
all:;@echo $(filter jobserver,$(.FEATURES))
then run make with that makefile.
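
The same feature test works from inside a makefile, too. A minimal sketch (GNU make 3.81 or later, which is where .FEATURES and $(info) were introduced):

# Report at parse time whether this make can coordinate -jN with sub-makes.
ifneq ($(filter jobserver,$(.FEATURES)),)
$(info jobserver supported: -jN will be shared with sub-makes)
else
$(info jobserver not supported: sub-makes will be limited to -j1)
endif

all: ;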

Related

How to view commands that `make` executes?

I have a Makefile which fails at some point, with a git error. How can I view the git command that causes the whole make process to fail? More precisely, I am looking for a list of commands (including the ones that start with @) that I can run on an identical setup, to achieve the same effect as what make does.
I know that for a script, you would add the -x flag to the #! /bin/bash line, and that would display all the commands before their execution. How do I do the same thing for make?
I am looking for a list of commands (including the ones that start with @) that I can run on an identical setup, to achieve the same effect as what make does.
By default, make echoes all recipe commands it runs, except those prefixed with @. The POSIX specifications for make do not describe a way to override that effect of @ (but see below). It is conceivable that your make has an extension for that, but the make implementations you are most likely to be using (GNU make or BSD make, since you seem to assume that your standard shell is bash) do not.
Additionally, in POSIX-conforming make implementations, including the two mentioned above, the special target .SILENT can be used to suppress echoing the commands of some or all targets, and the -s command-line option can be used to suppress echoing for all targets.
You can print recipe commands prefixed with @ if you run make with the -n (no-op) flag. That will print the commands for out-of-date targets without running them, except that those prefixed with a + are run even in no-op mode. Commands prefixed with @ are included among those printed. Under some circumstances, the fact that most commands are not actually run in this mode can affect the output, but all the cases I can think of at the moment involve recursive make, and I think they are fairly unlikely.
POSIX seems to indicate that -n does not override -s or .SILENT, so if you have to deal with those then you may have no alternative but to modify your makefile. If you happen to be using GNU make, however, you will find that -n does override .SILENT and -s in that implementation. The same may be true of other makes.
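
A toy makefile illustrating the interaction (the target and messages are made up):

all:
	@echo this command line is hidden by the @ prefix
	echo this command line is echoed before it runs

Running make executes both commands but echoes only the second; running make -n prints both command lines, including the @-prefixed one, and executes neither.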

Running "<" command between two different directories

I'm working on a small JS project and trying to get a script to run, which compiles some source files that are written in our own "language x".
To run the compiler normally you would use the command ./a.out < source.x and it would print out success or compilation errors etc.
In this case, I'm trying to work between two directories, using this command:
sudo ~/Documents/server/xCompiler/./a.out < ~/Documents/server/xPrograms/source.x
But this produces no output in the terminal at all and doesn't affect the output files. Is there something I'm doing wrong with the use of <? I'm planning to use it in child_process.exec within a node server later.
Any help would be appreciated, I'm a bit stumped.
Thanks.
Redirection operators (<, >, and others like them) describe operations to be performed by the shell before your command is run at all. Because these operations are performed by the shell itself, it's extremely unlikely that they would be broken in a way specific to an individual command: When they're performed, the command hasn't started yet.
There are, however, some more pertinent ways your first and second commands differ:
The second (non-working) one uses a fully-qualified path to the compiler itself. That means that the directory the compiler is found in and the current working directory where the compiler runs can differ. If the compiler looks for files in its current working directory, or in locations relative to it, this can cause a failure.
The second uses sudo to escalate privileges to run the compiler. This means you're running as a different user, with most environment variables cleared or modified during the switch (unless explicitly whitelisted in /etc/sudoers); that has widespread potential to break things, depending on details of your compiler's expectations about its runtime environment, beyond what we can reasonably be expected to diagnose here.
That first one, at least, is amenable to a solution. In shell:
xCompile() {
    (cd ~/Documents/server/xCompiler && exec ./a.out "$@")
}
xCompile < ~/Documents/server/xPrograms/source.x
Using exec is a performance optimization: it balances the cost of creating a new subshell (with the parentheses) by consuming that subshell to launch the compiler, rather than launching the compiler as a child process of the subshell.
When calling node's child_process.exec(), you can simply pass the desired working directory in the cwd option, so no shell function is necessary.

make -jXXX : how can I get XXX

I have a parallel test suite (perl prove -j XXX). If a user types make -j 8 all, I'd like the test suite to be run with the same parameter: prove -j 8 t. If not, then I'd like it to be run single-threaded. Since I know the test suite is top level and depends on all of the binary targets, I'd like to, very simply, pass through the user's specified parallelism argument.
Is there something in GNU make that allows for getting the command-line arguments used to run make? Or will the user have to do something like make -j 8 PLL=8 all?
According to the manual, -j is passed down to sub-makes (via MAKEFLAGS) only in some circumstances, and is not (apparently) present in MAKEFLAGS in the top-level make. So, unfortunately, I don't see any way to get this information directly.
You could, however, have the user pass the value only through a variable assignment (PLL=8) and add it to MAKEFLAGS yourself (MAKEFLAGS += -j$(PLL)), with appropriate guarding so that you only do this in the top-level make, and only when some other -j value isn't already in MAKEFLAGS (in case that can actually happen somehow). I believe this will work correctly as far as make's jobserver behaviour is concerned.
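
A minimal sketch of that idea, assuming the user invokes make PLL=8 all (the test target and its prove invocation follow the question; the guards are the ones just described):

# Only the top-level make (MAKELEVEL is 0) adds -j, and only if no -j
# option is already present in MAKEFLAGS.
ifeq ($(MAKELEVEL),0)
ifdef PLL
ifeq ($(filter -j%,$(MAKEFLAGS)),)
MAKEFLAGS += -j$(PLL)
endif
endif
endif

test:
	prove $(if $(PLL),-j $(PLL)) t

With PLL unset, the test target simply runs prove t single-threaded.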

How to forward command line params to the "collateral" make?

I have a makefile which looks roughly like this:
debug:
	make -C build-debug

release:
	make -C build-release
Now I run the "main" make:
make -j4 debug
How do I forward -j4 to the collateral make? Note that I don't want to hardcode it, I want to forward whatever was passed to main make.
From the manual:
If you set [-j] to some numeric value ‘N’ and your operating system
supports it (most any UNIX system will; others typically won't), the
parent make and all the sub-makes will communicate to ensure that
there are only ‘N’ jobs running at the same time between them all...
If your operating system doesn't support the above communication, then
‘-j 1’ is always put into MAKEFLAGS instead of the value you
specified.
If you really want to override this behavior, it's probably not too difficult...
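
One caveat worth adding: GNU make only sets up that parent/sub-make communication for commands it recognizes as recursive, so the recipes should invoke $(MAKE) (or be prefixed with +) rather than a literal make; otherwise the sub-make warns "jobserver unavailable: using -j1". A corrected sketch of the makefile above:

debug:
	$(MAKE) -C build-debug

release:
	$(MAKE) -C build-release

With this, make -j4 debug forwards the parallelism automatically, with no extra plumbing.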

Best/conventional method of dealing with PATH on multiple UNIX environments

The main question here is: is there a standard method of writing UNIX shell scripts that will run on multiple UNIX platforms?
For example, we have many hosts running different flavours of UNIX (Solaris, Linux) and at different versions all with slightly different file system layouts. Some hosts have whoami in /usr/local/gnu/bin/, and some in /usr/bin/.
All of our scripts seem to deal with this in a slightly different way. Some have case statements on the architecture:
case "`/script/that/determines/arch`" in
sunos-*) WHOAMI=`/usr/local/gnu/bin/whoami` ;;
*) WHOAMI=`/usr/bin/whoami` ;;
esac
With this approach you know exactly what binary is being executed, but it's pretty cumbersome if there are lots of commands being executed.
Some just set the PATH (based on the arch script above) and call commands by name alone. This is convenient, but you lose control over which command you run; e.g., if you have:
/bin/foo
/bin/bar
/other/bin/foo
/other/bin/bar
You wouldn't be able to use both /bin/foo and /other/bin/bar.
Another approach I could think of would be to have a local directory on each host with symlinks to each binary that is needed on that host. E.g.:
Solaris host:
/local-bin/whoami -> /usr/local/gnu/bin/whoami
/local-bin/ps -> /usr/ucb/ps
Linux host:
/local-bin/whoami -> /usr/bin/whoami
/local-bin/ps -> /usr/ps
What other approaches do people use? Please don't just say write the script in Python... there are some tasks where bash is the most succinct and practical means of getting a simple task accomplished.
I delegate all this to my .profile, which has an elaborate series of internal functions that try likely directories to add to the PATH. Except on OSX, where I believe this is basically impossible because Darwin, Fink, and Ports each want to control your PATH, this approach works well enough.
If I cared about ambiguity (multiple instances of foo in different directories on my PATH), I would modify the functions so as to identify all ambiguous commands and require manual resolution. But for my environment, this has never been an issue. My main concern has been to have a single .profile that runs on Debian, Red Hat, Solaris, BSD, and so on. The 'try every directory that could possibly work' approach works well enough.
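
A minimal sketch of that kind of .profile helper (the function name and the directory list are illustrative, not my actual setup):

# Append a directory to PATH only if it exists and is not already present.
add_dir_to_path() {
    [ -d "$1" ] || return 0
    case ":$PATH:" in
        *":$1:"*) ;;            # already in PATH; nothing to do
        *) PATH="$PATH:$1" ;;
    esac
}

add_dir_to_path /usr/local/gnu/bin   # GNU tools on Solaris
add_dir_to_path /usr/ucb             # BSD-flavoured tools on Solaris
add_dir_to_path /usr/local/bin
export PATH

Because directories are only appended when they actually exist, the same .profile runs unmodified on every host; the order of the calls decides which duplicate command wins.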
To set PATH to POSIX-compliant directories you can do the following at the beginning of your Bash scripts:
unset PATH
PATH="$(PATH=/bin:/usr/bin getconf PATH)"
export PATH
If you know you can use Bash across different Unix systems, you may use shell builtins instead of external commands to improve portability as well. Example:
help type
type -a type
type -P ls # replaces: which ls
To disable alias / function lookup for commands such as find, ls, ... in Bash you may use the command builtin. Example:
help command
command ls -l
If you want to be 100% sure to execute a specific command located in a specific directory, using the full executable path seems the way to go. First match wins in PATH lookup!