This might be a dumb question, and it's probably more about shell scripting anyway. How do I subscribe to a 'die' event and capture the container name? I've got this, which sort of works:
docker events --filter 'event=die' | while read event
do
    echo event
done
However, it doesn't output anything useful.
OK, so this seems to work, unless there's a better way to do this:
while IFS= read -r result
do
    echo "$result"
done < <(docker events --filter 'event=die')
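If you are mainly after the container name, newer Docker versions let docker events format its output with a Go template, which avoids parsing the raw event line. A minimal sketch (assuming your Docker supports --format; the echo text is just for illustration):
docker events --filter 'event=die' --format '{{.Actor.Attributes.name}}' |
    while IFS= read -r name
    do
        echo "container died: $name"
    done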
I am trying to tail dynamically created files in bash using the command
tail -f /data/logs*.log
But it's not tailing the files created at runtime.
For example, if two files, logs1.log and logs2.log, already exist and logs3.log is created later at runtime, it does not tail logs3.log.
What is the way to tail such dynamically created files?
This does not work because the bash wildcard * is resolved only once: it produces the list of files that exist at that moment, and the list is never updated afterwards. Your whole line tail -f /data/logs*.log is replaced by something like tail -f /data/logs1.log /data/logs2.log, and that command is then executed. In general, think of wildcard expansion as a preprocessing step that happens before the command runs.
What you want needs a bit more effort. Your command already works for files that already exist, which is good so far, but you need more. So you must send your tail command into the background by adding an &. Try it:
tail -f /data/logs*.log &
sleep 2s
echo something more
But instead of printing "something more" you want to listen for new files and tail -f them as well. One way to do that is described here: https://unix.stackexchange.com/questions/24952/script-to-monitor-folder-for-new-files
Over time you will accumulate more and more background tail processes. Assuming you normally get only a few new files, this won't be a problem; but if you expect hundreds or thousands of new files, you will have to put more effort into your solution.
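As a rough sketch of that combination (assuming inotifywait from inotify-tools is installed; the paths are taken from the question):
# Tail the files that already exist
tail -f /data/logs*.log &

# Start one extra background tail per matching file created later
inotifywait -m -q -e create --format '%w%f' /data |
    while IFS= read -r newfile
    do
        case "$newfile" in
            /data/logs*.log) tail -f "$newfile" & ;;
        esac
    done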
You can try something like this, but it has some issues (see below):
#!/bin/bash

pid=
dir="$1"

# Kill the background tail before exiting
handle_sigint() {
    kill $pid 2>/dev/null
    exit
}

trap handle_sigint SIGINT SIGTERM

while true; do
    # Tail all current *.log files in the background
    tail -n1 -f "$dir"/*.log &
    pid=$!
    # Block until a new file is created in the directory...
    inotifywait -q -e create "$dir"
    # ...then kill the old tail; the loop restarts it with the new file list
    kill $pid 2>/dev/null
done
Run it with the directory to watch as the first parameter.
Sadly, even if you remove the -n1 argument from the tail command, you may miss some log lines, notably if the new files have many lines written immediately on creation.
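One way to avoid losing those lines is to keep the original tails running and start a separate tail from line 1 for each newly created file, at the cost of accumulating background processes. A sketch of that variant (again assuming inotifywait is available):
# Existing files are already being tailed; only handle new ones,
# each from its very first line so nothing is missed
inotifywait -m -q -e create --format '%w%f' "$dir" |
    while IFS= read -r f
    do
        tail -n +1 -f "$f" &
    done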
I have a bash script that asks the user for single-key inputs to select from some menu screens. I'm using read -n 1 -s -r -p '' and then doing stuff based on the user input passed through a small set of if statements.
I need to be able to prevent the user from accidentally dragging a file into the terminal and having its path act as input to the read command. I'm very open to replacing the read command to prevent this user action from disrupting the process, but I need it to be compatible with most shells.
Right now, as one would expect, dragging in a file enters the file path in the terminal, and the shell treats it as if the user had actually pressed those keys.
I do not want the user to be able to use copy/paste or dragging in a file as a way to respond to the read prompt.
Is this possible? I don't mind if it's complex and probably not worth it; I'd implement it anyway I'm sure :D
I'm not familiar with official shell names, but I want it to be at least compatible with the macOS Terminal and Ubuntu.
I do not believe there is a way to prevent copy/pasting (or the equivalent dragging in of files). This service is provided by the window manager and is not controlled by applications.
That said, consider implementing the flushing in pure bash: flush the buffer BEFORE prompting the user for input, and again after taking the data. This can be done by forcing a timeout on the read.
This approach will also prevent feeding the program input from a pipe or a file. I'll leave it to the OP to decide whether that is desired.
# Read and discard all pending input
while read -r -t 0.1; do :; done
# Read the actual input
read -n 1 -s -r -p ''
# Consume any remaining input (e.g. the rest of a pasted line)
while read -r -t 0.1; do :; done
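Wrapped up as a reusable helper (a sketch; the function name read_one_key is mine, and read populates the global $REPLY since no variable name is given):
read_one_key() {
    while read -r -t 0.1; do :; done    # drain anything already buffered
    read -n 1 -s -r -p ''               # read exactly one keypress into $REPLY
    while read -r -t 0.1; do :; done    # drain anything typed or pasted after it
}

read_one_key
case "$REPLY" in
    q|Q) exit 0 ;;
esac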
PARTIAL ANSWER
(stops the user dragging in a file or directory, but not copy/pasting)
There is probably a more reliable and functional method, but I don't know of one.
Inspired by @user1934428's suggestion about emptying the keyboard buffer, I came across a reddit post with a Perl command that clears the buffer. After some trial and error, I found that this works perfectly:
read -n 1 -s -r -p '' && perl -e 'use POSIX; tcflush(0, TCIFLUSH);'
while [[ "$REPLY" == *"/"* || -z "$REPLY" ]]; do
    read -n 1 -s -r -p '' && perl -e 'use POSIX; tcflush(0, TCIFLUSH);'
done
After clearing the buffer, $REPLY conveniently ends up containing a "/" when a file was dragged in, and ends up empty when a directory was dragged in.
This lets me use the while loop to wait until the user has not dragged in a file or directory, and to act differently depending on which it was.
This does not fix the copy/paste issue however.
I'm having a hard time understanding an anomaly with grep's return value.
As noted in the grep man page, the return value is zero in case of a match and non-zero in case of no match or an error.
In this code (bash):
inotifywait -m ./logdir -e create -e moved_to |
    while read path action file; do
        if grep -a -q "String to match" "$path/$file"; then
            : # do something
        fi
    done
It returns non-zero when matched.
In this code (bash):
search_file()
{
    if grep -a -q "String to match" "$1"; then
        : # do something
    fi
}

inotifywait -m ./logdir -e create -e moved_to |
    while read path action file; do
        search_file "$path/$file"
    done
It returns zero when matched.
Can someone explain to me what is going on?
EDIT:
Let me be clear once more: if I run the first code on a file that contains the string, the if branch runs; if I run the second code on the same file, the if branch fails and does not run.
I support @John1024's conjecture, which he posted as a comment.
The "anomaly" is likely due to a slight timing difference between the two versions of your script. In the case of a create event the file is initially empty, so grep starts scanning a partially written file. Calling grep through a function introduces a small delay, which increases the chance that the searched-for data has appeared in the file by the time grep opens it.
The solution to this race condition depends on a couple of assumptions/requirements:
Can you assume that pre-existing files in the watched directory will not be modified?
Do you want to identify every new matching file as soon as possible, or can you afford to delay its processing until the file is closed? If the latter, see the sketch below.
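If you can afford to wait until the writer closes the file, watching close_write instead of create sidesteps the race entirely, because grep then only ever sees complete files. A minimal sketch based on the code from the question:
inotifywait -m ./logdir -e close_write -e moved_to |
    while read path action file; do
        if grep -a -q "String to match" "$path/$file"; then
            echo "match in $path$file"    # do something
        fi
    done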
I'm trying to create a status bar that runs during an rsync process. This is the code I tried, but it just keeps printing a dotted line that never ends. I thought it would end when the rsync ended?
while (rsync -r /Volumes/foo /Volumes/bar) ; do
    echo -n "."
done
The while loop cannot work the way you hope: the loop condition (the entire rsync command) runs to completion before each iteration, so you get one dot per finished rsync run, and since a successful rsync keeps returning zero, the loop restarts it forever. It is worth learning in which order the expressions of a program are executed and how control structures work.
Fortunately rsync has its own way to show progress. Use this:
rsync --progress -r /Volumes/foo /Volumes/bar
I shouldn't really do this, because you ought to learn programming, but here is a hack to achieve what you asked for:
rsync --progress -r /Volumes/foo /Volumes/bar | awk '{printf "."}'
I'm using awk to replace every line of rsync's progress output with a dot. This is not exact, as there can be more than one output line per file, but it should do a good job unless you are counting the dots. You can try to refine it (for learning :).
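If you really want the dotted status bar, the usual pattern is to run rsync in the background and print dots while it is still alive. A minimal sketch (the one-second interval is arbitrary):
rsync -r /Volumes/foo /Volumes/bar &    # run rsync in the background
pid=$!

while kill -0 "$pid" 2>/dev/null; do    # loop while rsync is still running
    printf "."
    sleep 1
done
wait "$pid"                             # pick up rsync's exit status
echo " done"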
I am trying to automate the set up of site creation for our in-house development server.
Currently, this consists of creating a system user, a MySQL user, a database, and an Apache config. I know how I can do everything in a single bash file, but I wanted to ask if there was a way to generate the Apache config more cleanly.
Essentially what I want to do is generate a conf file based on a template, similar to using printf. I could certainly use printf, but I thought there might be a cleaner way using sed or awk.
The reason I don't just want to use printf is that the Apache config is about 20 lines long and would take up most of the bash script, as well as make it harder to read.
Any help is appreciated.
Choose a way of marking parameters. One possibility is :parameter:, but any similar pair of markers that won't be confused with legitimate text in the template file(s) is good.
Write a script (in sed, awk, perl, ...) similar to the following:
sed -e "s/:param1:/$param1/g" \
-e "s/:param2:/$param2/g" \
-e "s/:param3:/$param3/g" \
httpd.conf.template > $HTTPDHOME/etc/httpd.conf
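One caveat: if a parameter's value can contain a /, it collides with the s/// delimiter, but sed accepts other delimiters. A hypothetical example (the parameter values are made up):
param1=dev.example.com
param2=/var/www/dev                   # contains slashes
sed -e "s/:param1:/$param1/g" \
    -e "s|:param2:|$param2|g" \
    httpd.conf.template > httpd.conf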
If you get to a point where you sometimes need to edit something and sometimes don't, you may find it easier to write the relevant sed commands into a command file and then execute that:
{
    echo "s/:param1:/$param1/g"
    echo "s/:param2:/$param2/g"
    echo "s/:param3:/$param3/g"
    if [ "$somevariable" = "somevalue" ]
    then echo "s/normaldefault/somethingspecial/g"
    fi
} > /tmp/sed.$$

sed -f /tmp/sed.$$ httpd.conf.template > "$HTTPDHOME/etc/httpd.conf"
Note that you should use a trap to ensure the temporary file doesn't outlive its usefulness:
tmp=/tmp/sed.$$ # Consider using more secure alternative schemes
trap "rm -f $tmp; exit 1" 0 1 2 3 13 15 # aka EXIT HUP INT QUIT PIPE TERM
...code above...
rm -f $tmp
trap 0
This ensures that your temporary file is removed when the script exits for most plausible signals. You can preserve a non-zero exit status from previous commands and use exit $exit_status after the trap 0 command.
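For the "more secure alternative schemes" hinted at above, mktemp is the usual choice; a sketch of the same pattern with it:
tmp=$(mktemp) || exit 1                   # unpredictable name, file created atomically
trap "rm -f $tmp; exit 1" 0 1 2 3 13 15   # as before: EXIT HUP INT QUIT PIPE TERM
...code above...
rm -f $tmp
trap 0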
I'm surprised nobody has mentioned here-documents. This is probably not what the OP wants, but it is certainly a way to improve the legibility of the script you started out with. Just take care to escape, or parametrize away, any constructs on which the shell will perform substitutions.
#!/bin/sh
# For example's sake, a weird value
# This is in single quotes, to prevent substitution
literal='$%"?*=`!!'
user=me
cat <<HERE >httpd.conf
# Not a valid httpd.conf
User=${user}
Uninterpolated=${literal}
Escaped=\$dollar
HERE
In this context I would recommend ${variable} over the equivalent $variable for clarity and to avoid any possible ambiguity.
Use sed, for example:
sed "s/%foo%/$foo/g" template.conf > "$newdir/httpd.conf"
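As a hypothetical illustration (the template line and destination directory are made up): if template.conf contains the line ServerName %foo%, then:
foo=dev.example.com
newdir=/tmp/site                      # hypothetical destination directory
mkdir -p "$newdir"
sed "s/%foo%/$foo/g" template.conf > "$newdir/httpd.conf"
# "$newdir/httpd.conf" now contains: ServerName dev.example.com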