for-Loop in screen does not work - bash

I would like to use screen to stay attached to a loop command on a ssh session, which is most likely going to run for a couple of hours. I am using screen because I fear that my terminal will get disconnected while the command is still running. This is the loop-command:
for i in *; do echo $i/share/sessions/*; done
(echo will be replaced by rm -rf).
I have tried multiple variants of screen 'command ; command ; command', but never got it working. How can I fix this? Alternatively, could you suggest a workaround for my problem?

Screen can be used for long-running commands like this:
$ screen -S session_name
# Inside the screen session
$ <run long-running command>
# Press Ctrl+a, then d, to detach from the screen session
# Outside the screen session, reattach to it later with:
$ screen -r session_name
(Use screen -x instead if the session is still attached elsewhere.) For more details, see the screen man page.
Another application that works in a similar way and is very popular is tmux.

I assume that you're trying to run:
screen 'for i in *; do echo $i/share/sessions/* ; done'
This results in a Cannot exec [your-command-here]: No such file or directory because screen doesn't implicitly start a shell; rather, it calls an execv-family syscall to directly invoke the program named in its argument. There is no program named for i in *; do echo $i/share/sessions/*; done, and no shell running which might interpret that as a script, so this fails.
You can, however, explicitly start a shell:
screen bash -c 'for i in *; do echo $i/share/sessions/* ; done'
By the way: running one copy of rm per directory you want to clear is going to be quite inefficient. Consider using xargs to spawn the smallest possible number of instances:
# avoid needing to quote and escape the code to run by encapsulating it in a function
screenfunc() { printf '%s\0' */share/sessions/* | xargs -0 rm -rf; }
export -f screenfunc # ...and exporting that function so subprocesses can access it.
screen bash -c screenfunc
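The key point of `bash -c` is that the whole loop travels as a single argument, so an execv-style caller like screen only has to find `bash`. A minimal sketch over a throwaway directory tree (not your real data):

```shell
# Build a toy tree shaped like */share/sessions/*
tmp=$(mktemp -d)
mkdir -p "$tmp/app1/share/sessions" "$tmp/app2/share/sessions"
touch "$tmp/app1/share/sessions/s1" "$tmp/app2/share/sessions/s2"

# The loop is a single argv element; bash parses and runs it.
out=$(cd "$tmp" && bash -c 'for i in *; do echo $i/share/sessions/*; done')
echo "$out"

rm -rf "$tmp"
```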

There is really no need for screen here.
nohup rm -vrf */share/sessions/* >rm.out 2>&1 &
will run the command in the background, with output to rm.out. I added the -v option so you can see in more detail what it's doing by examining the tail of the output file. Note that the file won't be updated completely in real time due to buffering.
Another complication is that the invoking shell will do a significant amount of work with the wildcard when it sets up this job. You can delegate that to a subshell, too:
nohup sh -c 'rm -rvf */share/sessions/*' >rm.out 2>&1 &
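As a sketch of the same pattern with a harmless command substituted for rm (so you can try it safely):

```shell
# Run a throwaway job under nohup, logging to a file.
log=$(mktemp)
nohup sh -c 'echo removing a; echo removing b' >"$log" 2>&1 &
wait $!          # in real use you would simply log out instead of waiting

# Later, inspect progress from another session:
tail -n 1 "$log"
```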


Clear last bash command from history from executing bash script

I have a bash script, which uses expect package to ssh into a remote server.
My script looks like this:
#!/bin/bash
while getopts "p:" option; do
  case "${option}" in
    p) PASSWORD=${OPTARG};;
  esac
done
/usr/bin/expect -c "
spawn ssh my.login.server.com
expect {
  \"Password*\" {
    send \"$PASSWORD\r\"
  }
}
interact
"
I run it like ./login.sh -p <my-confidential-password>
Once I run it, log in successfully, and exit from the remote server, I can press the up-arrow key and still see the command with the password in the terminal; simply running history also shows it. Once I exit the terminal, it appears in .bash_history as well.
I need something within my script that could clear it from history and leave no trace of the command I ran (or password) anywhere.
I have tried:
Clearing it using history -c && history -r, this doesn't work as the script creates its own session.
Also, echo $HISTCMD returns 1 within script, hence I cannot clear using history -d <tag>.
P.S. I am using macOS
You could disable command history for a command:
set +o history
echo "$PASSWORD"
set -o history
Or, if your HISTCONTROL Bash variable includes ignorespace, you can indent the command with a space and it won't be added to the history:
$ HISTCONTROL=ignorespace
$ echo "Hi"
Hi
$  echo "Invisible" # Extra leading space!
Invisible
$ history | tail -n2
7 echo "Hi"
8 history | tail -n2
Notice that this isn't secure, either: the password would still be visible in any place showing running processes (such as top and friends). Consider reading it from a file with 400 permissions, or use something like pass.
You could also wrap the call into a helper function that prompts for the password, so the call containing the password wouldn't make it into command history:
runwithpw() {
  IFS= read -rsp 'Password: ' pass   # -s keeps the password off the screen
  ./login.sh -p "$pass"
}
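For the file-with-400-permissions approach mentioned above, a rough sketch (the path and password are made up for illustration):

```shell
# One-time setup: store the password where only you can read it.
pwfile=$(mktemp)                 # in practice a fixed path, e.g. ~/.login-pw
printf '%s\n' 'hunter2' > "$pwfile"
chmod 400 "$pwfile"

# In the script: load it without typing it on any command line.
PASSWORD=$(cat "$pwfile")
# ./login.sh -p "$PASSWORD"      # note: still visible in ps; see caveat above
```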

Bind keyboard shortcut to bash function to interrupt sleep?

I have a collection of .jpg background images that I want to use as backgrounds for my i3-gaps desktop. Currently, I have these two lines in my i3 config file for my wallpapers.
exec --no-startup-id randomwallpaper
bindsym $mod+i exec --no-startup-id feh --bg-scale --randomize /home/user/Pictures/bgart/*.jpg
This is my randomwallpaper script. It uses feh to set an image and wal to create a colorscheme based off of it.
#!/bin/bash
cd /home/user/Pictures/bgart
for file in $(ls); do
  shopt -s nullglob
  for i in *.jpg; do
    feh --bg-scale --randomize /home/user/Pictures/bgart/$i
    wal -q -i $i
    sleep 300
  done
done
On startup, randomwallpaper starts and every 5 minutes the wallpaper changes along with the colorscheme. However, I can also press Win+I to manually switch to a random wallpaper. Is it possible to add a trigger of some sort to interrupt the cycle? Maybe have the script as a function and add a key to call it? That way, I can have the above script running and if I get bored of the wallpaper, I can switch to another with Win+I and still have it change 5 minutes later.
Unless you modified your bash with a built-in sleep, you can kill the sleep command. The script will then proceed to the next command as if sleep terminated normally. The only tricky part is to identify the correct process to kill. Here I assume that there is only one randomwallpaper process running on your system:
exec --no-startup-id randomwallpaper
bindsym $mod+i exec --no-startup-id sh -c 'pkill -P $(pgrep -ox randomwallpaper) sleep'
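The `pkill -P` trick can be demonstrated in isolation; here a toy stand-in for randomwallpaper has its sleep killed and immediately moves on to its next command:

```shell
# Toy stand-in: would normally idle for 30 s before writing its marker.
marker=$(mktemp)
sh -c "sleep 30; echo woke-early > '$marker'" &
parent=$!

sleep 1                    # give the child time to start its sleep
pkill -P "$parent" sleep   # kill only the sleep, not the script itself
wait "$parent"             # returns almost immediately
```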
By the way, your script could use some improvement. For instance, the variable file is unused, and --randomize has no effect since you only supply one picture.
#!/bin/bash
shopt -s nullglob
cd /home/user/Pictures/bgart
while true; do
  i=$(shuf -en1 ./*.jpg)
  if [ -n "$i" ]; then
    feh --bg-scale "$i"
    wal -q -i "$i"
  fi
  sleep 300
done

Continuously watch a socket file, run command when it doesn't exist?

It seems that when my screen is locked for some period of time, my S.gpg-agent.ssh disappears, and so in order to continue using my key I have to re-initialise it.
Obviously, this is a relatively frequent occurrence, so I've written a function for my shell to kill gpg-agent, restart it, and reset the appropriate environment variables.
This may be a bit of an 'X-Y problem', X being above this line, but I think Y below is more generally useful to know anyway.
How can I automatically run a command when an extant file no longer exists?
The best I've come up with is:
nohup echo "$file" | entr $command &
at login. But entr runs a command when files change, not just deletion, so it's not clear to me how that will behave with a socket.
According to your comment, the cron daemon does not fit.
Watch socket file deletion
Try auditd
# auditctl -w /var/run/<your_socket_file> -p wa
$ tail -f /var/log/audit/audit.log | grep 'nametype=DELETE'
How to run a script when the event occurs
If you want to run a script on socket file deletion, you can pipe the audit log into a while loop, e.g.:
tail -Fn0 /var/log/audit/audit.log | \
grep --line-buffered 'name=<your_socket_file>' | \
grep --line-buffered 'nametype=DELETE' | \
while IFS= read -r line; do
  # your script here
done
(--line-buffered keeps grep from holding output back when writing into a pipe.)
Thanks to Tom Klino and his answer.
You don't mention the OS you're using, but if it's Linux, you can use inotifywait from the inotify-tools package:
#!/bin/sh
while inotifywait -qq -e delete_self /path/to/S.gpg-agent.ssh; do
echo "Socket was deleted!"
# Recreate it.
done
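If inotify-tools isn't available, a plain polling loop (my own rough fallback, not from either answer) does the same job with more latency:

```shell
sock=$(mktemp)        # stand-in for /path/to/S.gpg-agent.ssh
rm -f "$sock"         # simulate the socket vanishing

while :; do
    if [ ! -e "$sock" ]; then
        restarted=yes # real use: kill and re-initialise gpg-agent here
        break
    fi
    sleep 1           # poll interval
done
```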

How to redirect stdin to a FIFO with bash

I'm trying to redirect stdin to a FIFO with bash. This way, I will be able to use that stdin in another part of the script.
However, it doesn't seem to work the way I want.
script.bash
#!/bin/bash
rm /tmp/in -f
mkfifo /tmp/in
cat >/tmp/in &
# I want to be able to reuse /tmp/in from an other process, for example :
xfce4-terminal --hide-menubar --title myotherterm --fullscreen -x bash -i -c "less /tmp/in"
Here I would expect, when I run ls | ./script.bash, to see the output of ls, but it doesn't work (e.g. the script exits without outputting anything).
What am I misunderstanding?
I am pretty sure that less needs the additional -f flag when reading from a pipe:
test_pipe is not a regular file (use -f to see it)
If that does not help, I would also recommend swapping the order of the last two lines of your script:
#!/bin/bash
rm /tmp/in -f
mkfifo /tmp/in
xfce4-terminal --hide-menubar --title myotherterm --fullscreen -x bash -i -c "less -f /tmp/in" &
cat /dev/stdin >/tmp/in
In general, I avoid the use of /dev/stdin, because I get a lot of surprises from what is exactly /dev/stdin, especially when using redirects.
However, what I think you're seeing is that less finishes before your terminal is completely started. When less ends, so will the terminal, and you won't get any output.
As an example:
xterm -e ls
will also not really display a terminal.
A solution might be tail -f, as in, for example,
#!/bin/bash
rm -f /tmp/in
mkfifo /tmp/in
xterm -e "tail -f /tmp/in" &
while :; do
date > /tmp/in
sleep 1
done
because the tail -f remains alive.
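The open-ordering behaviour of FIFOs can be seen without any terminal emulator at all (a minimal sketch):

```shell
fifo=$(mktemp -u)     # just a unique path; mkfifo creates the actual node
out=$(mktemp)
mkfifo "$fifo"

cat "$fifo" > "$out" &       # background reader, standing in for less/tail
echo "hello fifo" > "$fifo"  # opening for write blocks until a reader exists
wait                         # cat exits once the writer closes the FIFO

rm -f "$fifo"
```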

Quit less when pipe closes

As part of a bash script, I want to run a program repeatedly, and redirect the output to less. The program has an interactive element, so the goal is that when you exit the program via the window's X button, it is restarted via the script. This part works great, but when I use a pipe to less, the program does not automatically restart until I go to the console and press q. The relevant part of the script:
while :
do
program | less
done
I want to make less quit itself when the pipe closes, so that the program restarts without any user intervention. (That way it behaves just as if the pipe was not there, except while the program is running you can consult the console to view the output of the current run.)
Alternative solutions to this problem are also welcome.
Instead of exiting less, could you simply aggregate the output of each run of program?
while :
do
program
done | less
Having less exit when program does would be at odds with one useful feature of less: it can buffer the output of a program that exits before you finish reading its output.
UPDATE: Here's an attempt at using a background process to kill less when it is time. It assumes that the only program reading the output file is the less to kill.
while :
do
( program > /tmp/$$-program-output; kill $(lsof -Fp /tmp/$$-program-output | cut -c2-) ) &
less /tmp/$$-program-output
done
program writes its output to a file. Once it exits, the kill command uses lsof to find out which process is reading the file, then kills it. Note that there is a race condition; less needs to start before program exits. If that's a problem, it can probably be worked around, but I'll avoid cluttering the answer otherwise.
You may try to kill the process group program and less belong to instead of using kill and lsof.
#!/bin/bash
trap 'kill 0' EXIT
while :
do
# The script command gives sh -c its own process group id (so only sh -c cmd gets killed, not the entire script!)
# FreeBSD script command
script -q /dev/null sh -c '(trap "kill -HUP -- -$$" EXIT; echo hello; sleep 5; echo world) | less -E -c'
# GNU script command
#script -q -c 'sh -c "(trap \"kill -HUP -- -$$\" EXIT; echo hello; sleep 5; echo world) | less -E -c"' /dev/null
printf '\n%s\n\n' "you now may ctrl-c the program: $0" 1>&2
sleep 3
done
While I agree with chepner's suggestion, if you really want individual less instances, I think this item from the man page will help you:
-e or --quit-at-eof
Causes less to automatically exit the second time it reaches end-of-file. By default,
the only way to exit less is via the "q" command.
-E or --QUIT-AT-EOF
Causes less to automatically exit the first time it reaches end-of-file.
You can make this option visible to less via the LESS environment variable:
export LESS="-E"
while : ; do
program | less
done
IHTH
