[MAKE]: How to run function based on passed arg - makefile

I am currently working on a project with multiple services, so I figured that creating a Makefile to manage them would be a nice thing. Since I'm new to make, I am wondering how to achieve the following.
Currently I have a couple of scripts to view logs from a given service's containers. Here is an example command:
logs-back:
	docker-compose -f docker-compose.dev.yaml logs --follow backend
Since there are about 12 services, I had to create 12 separate commands to view logs from specific ones.
What I would like to know is how to create a more generic command that would view logs from a specific service, using the service alias as an argument, something like this:
logs:
	docker-compose -f docker-compose.dev.yaml logs --follow $(arg)
and use it like this:
make logs back
where back would be the argument.

You can do this:
logs-%:
	docker-compose -f docker-compose.dev.yaml logs --follow $*
Now you can run this:
make logs-back
and make will find that pattern rule matches the target on the command line, and use it with the automatic variable $* set to the string that matches the stem (the part matching %).
See the GNU make manual for the meaning of all the terms in the previous paragraph :)
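For example, with that rule in place, running the following (db is just a hypothetical service alias):
make logs-db
would set the stem to db and execute
docker-compose -f docker-compose.dev.yaml logs --follow db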

The simple answer:
make logs arg=back
This would create and populate a variable in the makefile named arg, such that $(arg) would expand to back.
Using the bareword back on the command line tells make to build a target named back. This leads to the next possibility:
LOGS := back front side top bottom
.PHONY: $(LOGS)
$(LOGS):
	docker-compose -f docker-compose.dev.yaml logs --follow $@
where you would run make back to follow the logs for the back service.
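For example, running make front with the list above would execute
docker-compose -f docker-compose.dev.yaml logs --follow front
Note that the names in LOGS here are placeholders; they would need to match the service aliases actually defined in docker-compose.dev.yaml.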

Related

AFL-Fuzz - Odd, Check Syntax! - How to add command line arguments to binary?

I am attempting to fuzz a proprietary binary with no source code that accepts a config file. So the typical use case would be:
./File --config file.config
The config is a bunch of different parameters that are required to run the rest of the program, and the program runs fine if I run it by itself. Additionally, the config file is within the input directory.
I am attempting to fuzz it utilizing the following command with AFL:
./afl-fuzz -Q -i input/ -o output/ -m 400 ./File --configfile
However, once I run the command, everything looks fine, but as soon as I get to the first iteration of 'havoc', I get an 'odd, check syntax!' error. If I add @@ at the end, AFL gives me a timeout error. I'm assuming that once afl-fuzz starts to mutate that input file, it breaks the binary, but I'm not sure, and I'm not sure what else to try - any ideas? Thanks!
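For reference (not stated in the question): afl-fuzz substitutes the path of the mutated test case wherever @@ appears on the target command line, and feeds the test case on stdin when no @@ is given. So an invocation for a binary that takes a config file path might look like this (flags and paths copied from the question, the --config form is an assumption):
./afl-fuzz -Q -i input/ -o output/ -m 400 -- ./File --config @@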

Bash completion for command line arguments and options

I'm writing a CLI for my app in go and I would like to provide users with bash completions for options and arguments.
The API is the following: myapp [-f file] argument, and I would like to provide completion for both the -f option and the argument.
I wonder how I can distinguish between the following two scenarios:
myapp -f some_path<tab><tab>
myapp -f some_file <tab><tab>
In the first case the user is still typing the option value and would like to see candidates for the file option. In the second, he/she has finished typing and would like to see the candidates for the argument.
The problem is that in both cases the value of os.Args is identical and I can't distinguish between the two scenarios. Is there a way to access the invocation string?
One solution would be to use a separator like -- between the option and the argument but I would like to know if there is a cleaner solution before I go down this path.
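Worth noting as background (not from the question itself): when a completion command is registered with complete -C, bash exports the raw command line and the cursor position in the COMP_LINE and COMP_POINT environment variables, which is effectively the invocation string being asked about. A rough sketch of a completer script, with hypothetical names and candidates:
#!/bin/bash
# Registered with:  complete -C /usr/local/bin/myapp-complete myapp
# COMP_LINE holds the whole line typed so far, COMP_POINT the cursor offset,
# so a trailing space before the cursor means the previous word is finished.
line=${COMP_LINE:0:COMP_POINT}
if [[ $line == *" " ]]; then
    echo "some_argument"          # word finished: suggest the positional argument
else
    compgen -f "${line##* }"      # still typing: suggest file paths for -f
fi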

Bash: Getting a command's completion output programmatically (e.g., in a variable)

How to tap into the completion of another command programmatically?
Supposing my current directory has files a1, a2, and a3, then how can I make my command invoke the autocompletion of ls a to get back a1 a2 a3?
Is this possible?
Clarification and justification:
I chose ls because people can relate to it. It is a contrived example that is intentionally simple so that people can understand the question without distractions, but unfortunately such examples sometimes take us on tangents. :)
Let me try to exemplify the value of this feature. I have a command called build which, given a directory, can autocomplete to the targets that can be built in that directory. Those targets may not correspond to the files from that directory, and so glob completion (* and other wildcard characters) will not work. The targets might be mined by the build command from a build file that I don't want to be parsing. In other words:
build path/to/dir/TABTAB
Might give:
path/to/dir/a_target
path/to/dir/b_target
Again, a_target and b_target are not files or directories.
build is a pre-existing command, not something I can go ahead and modify to suit my purposes. And the manner in which it comes up with the valid completions is something I certainly don't want to know or reinvent.
Now suppose I have an entire repository of buildable projects, and most of my work and therefore most of my build work happens in only one project. In other words, I always build targets under my/project/directory.
So far so good.
I want to write a wrapper around the build command that doesn't require me to feed it the directory path each time I run it. I want it to know my preferred project directory (or directories, why not) and let me reference the targets without qualifying them:
So under the assumption that I have:
my/project/directory/a_target
my/project/directory/b_target
I want this:
mybuild TABTAB
to give me:
a_target
b_target
Basically I want to decorate and simplify the behavior of build to suit my particular needs.
I will need to write my own completion code for mybuild, but I want it to rely on the completion for build, because I can't ask the developers of build to code a build listtargets command just to make me happy. (Although that would be much more robust.) The build command already has code somewhere that given a prefix can return all the matching targets. It's in the completion for build, and I need to tap into it.
(Of course, when I run mybuild a_target, I will make sure that it knows to run build my/project/directory/a_target. I know how to implement this and it is not in scope for this question.)
I hope this illustrates why I need to tap into the completion of the build command and invoke it as a black box.
This is a bit of an odd thing to do, and the command you need to execute depends on the number of files in the directory - none, one, or more than one. But this command works for the example case:
echo echo a$'\t'$'\t' | bash -i 2>&1 | head -3 | tail -1
The command being autocompleted is
echo a
so send that as a character stream, followed by two tab characters, into an interactive bash shell. bash produces the autocompletion output on stderr, so redirect that to stdout and pipe that through head and tail to select one line of output from the whole. That produces, in this case, the one-line output
a1 a2 a3
But, as others say, just using
echo a*
might be easier!
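Applying the same trick to the build example from the question (build and the path are the question's own hypotheticals), it could be wrapped in a small function; as noted above, the head/tail line selection depends on how much output the interactive shell produces and may need adjusting:
# send "build <prefix>" plus two tab characters into an interactive bash and
# capture what the completion machinery prints on stderr
get_build_completions() {
    echo "build $1"$'\t'$'\t' | bash -i 2>&1 | head -3 | tail -1
}
get_build_completions path/to/dir/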
Bash has something similar to this called globbing.
So for example in your case you could run the command
echo a*
Which would produce:
a1 a2 a3
This is very useful where you have spaces in the names of your files as you can say
for i in a*
do
	echo "$i"
done
And it would work for a1 as well as for a file named a 1 (with a space in its name).
Auto-completion is a feature provided by your shell (e.g. bash). The shell will try to offer auto-complete suggestions based on the context of the command you're trying to run and the environment. E.g. it knows that some commands work on files and can offer completion based on file paths. But the command itself is not aware of how the arguments it was run with were specified, whether by the user or with the help of auto-completion.
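As an aside, the completion wiring itself can be inspected from the shell: complete -p prints the completion specification registered for a command (if one exists), which shows which function or program bash calls to produce the candidates. Using the question's build command as the example:
complete -p build
# might print something like:  complete -F _build build
declare -f _build   # would then show the body of that completion function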

Running a variable script with parameters through a script

I am sure there is an easier way to do this, but I have yet to figure out what to try next. We are running some JBoss applications and I wish to be able to restart these with an input parameter. As I wish to restart more than one application at a time, I figured a list would be good. This is comma separated. This is how far I have gotten thus far.
IFS=',';
while read mLine
do
	for i in $mLine
	do
		sh jboss-{$mLine} restart
	done;
done < /tmp/apps
In general it works if I just write "sh jboss-abcdef restart", but not when I write "sh jboss-${mLine} restart". The latter will return a response from the script (which is the right script according to the input values) asking for the parameter, which as you can see is in the sh command of this script. The former starts the correct script just like the latter, but unlike the latter, the former actually restarts the server in question.
One could argue that I could put one line for each application as well, but since not all applications need to be restarted every time, that would leave me with a lot of ifs to figure out which lines would have to be run, thus defeating the purpose of keeping it neat and simple ...
Any ideas would be appreciated as I'm willing to try most to find a solution.
If you do this:
#!/bin/sh
for app in "$@"; do
	sh "jboss-$app" restart
done
Then you can pass a space-separated list of app names to the script
./restart-apps app1 app2 app3 ...
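If the app names still need to come from a comma-separated file such as /tmp/apps (as in the question), the commas could be turned into spaces on the way in, assuming the names themselves contain no spaces:
./restart-apps $(tr ',' ' ' < /tmp/apps)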

Is there a smarter alternative to "watch make"?

I ran into this useful tip that if you're working on files a lot and you want them to build automatically you run:
watch make
And it re-runs make every couple of seconds, and things get built.
However ... it seems to swallow all the output all the time. I think it could be smarter - perhaps show a stream of output but suppress "Nothing to be done for 'all'" so that if nothing is built the output doesn't scroll.
A few shell script approaches come to mind using a loop and grep ... but perhaps something more elegant is out there? Has anyone seen something?
Using classic gnu make and inotifywait, without interval-based polling:
watch:
	while true; do \
		$(MAKE) $(WATCHMAKE); \
		inotifywait -qre close_write .; \
	done
This way make is triggered on every file write in the current directory tree. You can specify the target by running
make watch WATCHMAKE=foo
This one-liner should do it:
while true; do make --silent; sleep 1; done
It'll run make once every second, and it will only print output when it actually does something.
Here is a one-liner:
while true; do make -q || make; sleep 0.5; done
Using make -q || make instead of just make will only run the build if there is something to be done and will not output any messages otherwise.
You can add this as a rule to your project's Makefile:
watch:
	while true; do $(MAKE) -q || $(MAKE); sleep 0.5; done
And then use make watch to invoke it.
This technique will prevent Make from filling a terminal with "make: Nothing to be done for TARGET" messages.
It also does not retain a bunch of open file descriptors like some file-watcher solutions, which can lead to ulimit errors.
How about
# In the makefile:
.PHONY: continuously
continuously:
	while true; do make 1>/dev/null; sleep 3; done
?
This way you can run
make continuously
and only get output if something is wrong.
Twitter Bootstrap uses the watchr ruby gem for this.
https://github.com/twbs/bootstrap/blob/v2.3.2/Makefile
https://github.com/mynyml/watchr
Edit:
After two years the watchr project seems not to be maintained anymore. Please look for another solution among the answers. Personally, if the goal is only to have better output, I would recommend the answer from wch here.
I do it this way in my Makefile:
watch:
	(while true; do make build.log; sleep 1; done) | grep -v 'make\[1\]'

build.log: ./src/*
	thecompiler | tee build.log
So, it will only build when my source code is newer than my build.log, and the "grep -v" stuff removes some unnecessary make output.
This shell script uses make itself to detect changes with the -q flag, and then does a full rebuild if and only if there are changes.
#!/bin/sh
while true;
do
	if ! make -q "$@";
	then
		echo "#-> Starting build: `date`"
		make "$@";
		echo "#-> Build complete."
	fi
	sleep 0.5;
done
It does not have any dependencies apart from make.
You can pass normal make arguments (such as -C mydir) to it as they are passed on to the make command.
As requested in the question it is silent if there is nothing to build but does not swallow output when there is.
You can keep this script handy as e.g. ~/bin/watch-make to use across multiple projects.
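For example, after saving it and making it executable, it could be invoked much like make itself (directory and target names here are hypothetical):
chmod +x ~/bin/watch-make
~/bin/watch-make -C mydir all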
There are several automatic build systems that do this and more - basically when you check a change into version control they will make/build - look for Continuous Integration
Simple ones are TeamCity and Hudson
@Dobes Vandermeer -- I have a script named "mkall" that runs make in every subdirectory. I could assign that script as a cron job to run every five minutes, or one minute, or thirty seconds. Then, to see the output, I'd redirect gcc results (in each individual makefile) to a log in each subdirectory.
Could something like that work for you?
It could be pretty elaborate so as to avoid makes that do nothing. For example, the script could save the modify time of each source file and run make only when one of them changes.
You could try using something like inotify-tools. It will let you watch a directory and run a command when a file is changed or saved or any of the other events that inotify can watch for. A simple script that does a watch for save and kicks off a make when a file is saved would probably be useful.
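A minimal sketch of that idea using inotifywait from inotify-tools (watching the current directory tree is an assumption):
#!/bin/sh
# rebuild whenever any file below the current directory is written
while inotifywait -qre close_write .; do
	make
done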
You could change your make file to output a growl (OS X) or notify-send (Linux) notification. For me in Ubuntu, that would show a notification bubble in the upper-right corner of my screen.
Then you'd only notice the build when it fails.
You'd probably want to set watch to only cycle as fast as those notifications can display (so they don't pile up).
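A sketch of what such a rule might look like (the target name and message are assumptions; notify-send is only called when the build fails):
notify-build:
	$(MAKE) all || notify-send "make" "Build failed"
This could then be combined with the watch approaches above, e.g. watch make notify-build.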
Bit of archaeology, but I still find this question useful. Here is a modified version of @otto's answer, using fswatch (for the mac):
TARGET ?= foo
all:
	@fswatch -1 . | read i && make $(TARGET)
	@make -ski TARGET=$(TARGET)

%: %.go
	@go build $<
	@./$@
