How to detect in a bash script where the stdout and stderr logs go?

I have a bash script that is called from cron multiple times with different parameters, redirecting the output of each run to a different log, approximately like this:
* * * * * /home/bob/bin/somescript.sh someparameters >> /home/bob/log/param1.log 2>&1
I need the script to obtain, in some variable, the value "/home/bob/log/param1.log" in this case. The log file name could just as well contain a calculated date instead of "param1". The main reason is to reuse the same script for similar purposes and to be able to tell a user, via a monitored folder, where to look for more info, i.e. to put the log file name into some warning file.
How do I detect which log the output (&1, or both &1 and &2) goes to?

If you are running Linux, you can read the information from the proc file system. Assume you have the following program in stdout.sh.
#! /bin/bash
# Print the file that file descriptor 1 (stdout) currently points to, on stderr
readlink -f /proc/$$/fd/1 >&2
Interactively it shows your terminal.
$ ./stdout.sh
/dev/pts/0
And with a redirection it shows the destination.
$ ./stdout.sh > nix
/home/ceving/nix
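With the cron-style redirection from the question (both streams appended to the same log), the same readlink call resolves to that log file, so a script can capture its own log path in a variable. A minimal sketch, assuming Linux; the variable name is just illustrative:
#!/bin/bash
# Resolve where this script's stdout (fd 1) currently points; Linux /proc only
LOGFILE=$(readlink -f /proc/$$/fd/1)
echo "More info can be found in $LOGFILE" >&2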

At runtime, /usr/bin/lsof or /usr/sbin/lsof lists the file open on a given descriptor:
lsof -p $$ -a -d 1
lsof -p $$ -a -d 2
filename1=$(lsof -p $$ -a -d 1 -F n)
filename1=${filename1#*$'\n'n}   # strip everything up to the name field's "n" prefix
filename2=$(lsof -p $$ -a -d 2 -F n)
filename2=${filename2#*$'\n'n}
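With -F n, lsof prints prefixed fields one per line (the PID on a p line, the file name on an n line), and the parameter expansions above strip everything up to the name. Either value can then be handed to the user via the monitored folder mentioned in the question; a small sketch with a hypothetical warning-file path:
# Hypothetical monitored folder from the question; adjust to the real location
echo "See $filename1 for details" > /home/bob/monitored/warning.txt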

Related

Script doesn't prompt a message if called from another script

I have the following example:
run_docker_script
#!/bin/bash
argument=$1
if [ argument==c1 ]; then
DOCKERNAME=container1
else
DOCKERNAME=container2
fi
docker run -it --rm --entrypoint /bin/bash $DOCKERNAME -c 'read -rp "username:" user'
This works fine if I call it like ./run_docker_script.sh (meaning I am asked to give a username).
If I call this script from another one and redirect the output to a file, nothing will be prompted to the console! The script sits there waiting for the input but the user doesn't see anything:
#!/bin/bash
LOG_DIR=results
mkdir -p $LOG_DIR
./run_docker_script.sh c1 >"$LOG_DIR"/result.txt
Any hints?
You are redirecting the prompt to the log file. Probably use tee instead of a plain redirection.
#!/bin/bash
LOG_DIR=results
mkdir -p "$LOG_DIR" # notice quoting
./run_docker_script.sh arg1 arg2 | tee "$LOG_DIR"/result.txt
You will still probably have some issues with buffering. I'm thinking passing the input as an argument to the Docker container would be a better design.
#!/bin/bash
# ^ notice fixed spacing
if [ "$argument" = c1 ]; then
#    ^^^^^^^^^^^ ^ notice the variable reference and fixed spacing
DOCKERNAME=debian
else
DOCKERNAME=ubuntu
fi
read -r -p "username: " username
docker run -it --rm --entrypoint /bin/bash "$DOCKERNAME" -c "user=$username"
It's slightly weird that Docker outputs the standard error from the shell within the container to standard output, too, but that's what it does. I don't think there is an easy way to change that.
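As a quick way to see which stream the prompt actually travels on, the wrapper can be called with the two streams split into separate files; the file names below are only examples:
# Diagnostic only: check whether "username:" ends up in out.log, err.log, or on the terminal
./run_docker_script.sh c1 > out.log 2> err.log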
As I said, the script works well if I don't redirect the output to a file, meaning the user is asked to provide some input in the console.
But if I redirect the output to the file, the text "username:" is redirected to the file as well and the user doesn't see anything.

How to check if a specific executable has a live process

I want to write a script that checks periodically if a specific executable has a live process, something like this:
psping [-c ###] [-t ###] [-u user-name] exe-name
-c - limit amount of pings, Default is infinite
-t - define alternative timeout in seconds, Default is 1 sec
-u - define user to check process for. The default is ANY user.
For example, psping java will list all processes that are currently invoked by the java command.
The main goal is to count and echo the number of live processes, for a given user, whose executable file is exe-name (java in the above example).
I wrote a function:
perform_ping(){
    ps aux | grep "${EXE_NAME}" | awk '{print $2}' | while read PID
    do
        echo $PID # -> This will echo the correct PID
        # How to find if this PID was executed by ${EXE_NAME}?
    done
    sleep 1
}
I'm having a hard time figuring out how to check if a specific executable file has a live process.
To list all processes that have a file open, we can use the lsof command. Because an executable must be opened in order to be run, we can just use lsof for this purpose.
The next problem is that when we run a java program, we simply type java some_file, and if we issue lsof java it will coldly say lsof: status error on java: No such file or directory, because java is actually /usr/bin/java.
To convert from java to /usr/bin/java we can use which java, so the command would be:
lsof $(which $EXE_FILE)
The output may look like this:
lsof: WARNING: can't stat() tracefs file system /sys/kernel/debug/tracing
Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
python3 26969 user txt REG 8,1 4526456 15409 /usr/bin/python3.6
In this case I searched for python3 with lsof $(which python3). It reports the PID in the second field. But when there is another user that invokes python3 too, lsof will issue a warning on stderr like the first two lines, because it cannot read the other users' info. Therefore, we modify the command to:
lsof $(which python3) 2> /dev/null
to suppress the warning. Then we're almost there:
lsof $(which python3) 2> /dev/null | awk 'NR > 1 { print $2 }'
Then you can use read to catch the PID.
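For example, the PIDs can be fed into a while read loop; a small sketch, assuming python3 is the executable being watched:
lsof "$(which python3)" 2> /dev/null | awk 'NR > 1 { print $2 }' |
while read -r pid; do
    echo "live process: $pid"
done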
Edit: how to list all processes for all users?
By default lsof selects processes by a specific file rather than by command name, but after further reading of man lsof I found that there are options that meet your needs.
-a causes list selection options to be ANDed.
-c c selects the listing of files for processes executing the command that begins with the characters of c. Multiple commands may be specified, using multiple -c options.
-u s selects the listing of files for the user whose login names or user ID numbers are in the comma-separated set s.
Therefore, you can use
lsof -c java
to list the open files of all processes whose command begins with java. And to restrict the listing to a specific user, add the -u option:
lsof -a -c java -u user
-a is needed for the AND operation. If you run this command you will see multiple entries per process; to deduplicate them, run
lsof -c java 2> /dev/null | sed 1d | sort -uk2,2
Also please notice that users may run their own java in their path and therefore you have to decide which one to monitor: java or /usr/bin/java.
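Tying this back to the goal of counting live processes, here is a sketch of perform_ping built on those options; EXE_NAME and USER_NAME are assumed to be filled in by psping's option parsing, with an empty USER_NAME meaning any user:
perform_ping(){
    # EXE_NAME and USER_NAME are hypothetical variables set elsewhere by the option parsing
    local count
    if [ -n "$USER_NAME" ]; then
        # -a ANDs the command filter with the user filter
        count=$(lsof -a -c "$EXE_NAME" -u "$USER_NAME" 2> /dev/null | sed 1d | sort -uk2,2 | wc -l)
    else
        count=$(lsof -c "$EXE_NAME" 2> /dev/null | sed 1d | sort -uk2,2 | wc -l)
    fi
    echo "$count live process(es) whose command begins with $EXE_NAME"
    sleep 1
}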

Assign output to variable for command run under different user on OSX

From bash, I run a python command as the current user, like this:
su $USER -c 'python3 -m site --user-site'
This works properly and prints the following:
/Users/standarduser7/Library/Python/3.6/lib/python/site-packages
I want to assign this output to a variable, so I'm using "$(command)":
target="$(su $USER -c 'python3 -m site --user-site')"
At this point, the OSX terminal hangs and has to be killed. Using backticks instead of "$(command)" leads to the same result.
However, if I run the command without su, everything works as it should:
target="$(python3 -m site --user-site)"
echo "$target"
output: /Users/standarduser7/Library/Python/3.6/lib/python/site-packages
How can I assign the output from a command run as the current user to a variable?
I don’t think it’s hanging; I think it’s showing a blank (prompt-less) command-line and is waiting for input. When I key in the user password, it returns this result:
Password:/Users/CK/Library/Python/2.7/lib/python/site-packages
and this is what ends up being stored in the target variable. A quick parameter substitution can rectify this anomalous output:
target=${target#*:}
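For instance, the substitution removes the shortest prefix ending in a colon, which is exactly the stray Password: label:
target="Password:/Users/CK/Library/Python/2.7/lib/python/site-packages"
target=${target#*:}
echo "$target"   # -> /Users/CK/Library/Python/2.7/lib/python/site-packages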
The other solution (credit given to this answer) is to create a file descriptor as a copy of stdout, then tee the command's output to the copy, which allows stdout to be piped to grep in order to process the output:
exec 3>&1 # create a copy of stdout
target=$(su $USER -c "python -m site --user-site" | tee /dev/fd/3 | grep -o '\/.*')
exec 3>&- # close copy

How to redirect stdin to a FIFO with bash

I'm trying to redirect stdin to a FIFO with bash. This way, I will be able to use this stdin in another part of the script.
However, it doesn't seem to work the way I want.
script.bash
#!/bin/bash
rm /tmp/in -f
mkfifo /tmp/in
cat >/tmp/in &
# I want to be able to reuse /tmp/in from an other process, for example :
xfce4-terminal --hide-menubar --title myotherterm --fullscreen -x bash -i -c "less /tmp/in"
Here I would expect, when I run ls | ./script.bash, to see the output of ls, but it doesn't work (e.g. the script exits without outputting anything).
What am I misunderstanding?
I am pretty sure that less needs the additional -f flag when reading from a pipe:
test_pipe is not a regular file (use -f to see it)
If that does not help, I would also recommend changing the order of the last two lines of your script:
#!/bin/bash
rm /tmp/in -f
mkfifo /tmp/in
xfce4-terminal --hide-menubar --title myotherterm --fullscreen -x bash -i -c "less -f /tmp/in" &
cat /dev/stdin >/tmp/in
In general, I avoid the use of /dev/stdin, because I get a lot of surprises about what exactly /dev/stdin is, especially when using redirects.
However, what I think you're seeing is that less finishes before your terminal has completely started. When less ends, so does the terminal, and you won't get any output.
As an example:
xterm -e ls
will also not really display a terminal, because ls finishes immediately and the window closes right away.
A solution might be tail -f, as in, for example,
#!/bin/bash
rm -f /tmp/in
mkfifo /tmp/in
xterm -e "tail -f /tmp/in" &
while :; do
date > /tmp/in
sleep 1
done
because the tail -f remains alive.
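If the goal is still to forward the calling pipeline's stdin (as in ls | ./script.bash), the two answers can be combined; a sketch, assuming xterm is available as in the example above:
#!/bin/bash
rm -f /tmp/in
mkfifo /tmp/in
xterm -e "tail -f /tmp/in" &   # the viewer stays alive because of tail -f
cat > /tmp/in                  # forward this script's stdin into the FIFO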

Piping and redirecting with cat

Looking over the Dokku source code, I noticed two uses of pipe and redirect that I am not familiar with.
One is: cat | command
Example: id=$(cat | docker run -i -a stdin progrium/buildstep /bin/bash -c "mkdir -p /app && tar -xC /app")
The other is cat > file
Example: id=$(cat "$HOME/$APP/ENV" | docker run -i -a stdin $IMAGE /bin/bash -c "mkdir -p /app/.profile.d && cat > /app/.profile.d/app-env.sh")
What is the purpose of the pipe and the redirection in these two cases?
Normally, both usages are completely useless.
cat without arguments reads from stdin, and writes to stdout.
cat | command is equivalent to command.
&& cat >file is equivalent to >file, assuming the previous command processed the stdin input.
Looking at it more closely, the sole purpose of that cat command in the second example is to read from stdin. Without it, you would redirect the output of mkdir to the file. So the command first makes sure the directory exists, then writes to the file whatever you feed to it through the stdin.
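A docker-free illustration of the second pattern makes this easier to see; the paths below are only examples. The outer command supplies data on stdin, mkdir runs first, and cat then writes that same stdin into the file:
printf 'export APP_ENV=production\n' |
bash -c 'mkdir -p /tmp/app/.profile.d && cat > /tmp/app/.profile.d/app-env.sh'
cat /tmp/app/.profile.d/app-env.sh   # -> export APP_ENV=production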
