I have a collection of .jpg background images that I want to use as backgrounds for my i3-gaps desktop. Currently, I have these two lines in my i3 config file for my wallpapers.
exec --no-startup-id randomwallpaper
bindsym $mod+i exec --no-startup-id feh --bg-scale --randomize /home/user/Pictures/bgart/*.jpg
This is my randomwallpaper script. It uses feh to set an image and wal to create a colorscheme based on it.
#!/bin/bash
cd /home/user/Pictures/bgart
for file in $(ls); do
    shopt -s nullglob
    for i in *.jpg; do
        feh --bg-scale --randomize /home/user/Pictures/bgart/$i
        wal -q -i $i
        sleep 300
    done
done
On startup, randomwallpaper starts and every 5 minutes the wallpaper changes along with the colorscheme. However, I can also press Win+I to manually switch to a random wallpaper. Is it possible to add a trigger of some sort to interrupt the cycle? Maybe have the script as a function and add a key to call it? That way, I can have the above script running and if I get bored of the wallpaper, I can switch to another with Win+I and still have it change 5 minutes later.
Unless you modified your bash with a built-in sleep, you can kill the sleep command. The script will then proceed to the next command as if sleep terminated normally. The only tricky part is to identify the correct process to kill. Here I assume that there is only one randomwallpaper process running on your system:
exec --no-startup-id randomwallpaper
bindsym $mod+i exec --no-startup-id sh -c 'pkill -P $(pgrep -ox randomwallpaper) sleep'
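Here pgrep -o picks the oldest matching process, -x requires an exact name match, and pkill -P restricts the match to children of the given parent PID, so only the sleep spawned by your randomwallpaper loop is killed.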
By the way, your script could use some improvement. For instance, the variable file is unused, and --randomize has no effect since you only supply one picture.
#!/bin/bash
shopt -s nullglob
cd /home/user/Pictures/bgart
while true; do
    i=$(shuf -en1 ./*.jpg)
    if [ -n "$i" ]; then
        feh --bg-scale "$i"
        wal -q -i "$i"
    fi
    sleep 300
done
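For reference, shuf -e treats its arguments as input lines and -n1 prints one of them at random, so i holds a single random .jpg path; nullglob makes the glob expand to nothing when the directory contains no .jpg files, which is what the -n test guards against.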
I'm trying to use tmux in a script, so that it runs a command that takes some time (let's say 'ping -c 5 8.8.8.8', for example) in a new hidden pane, while blocking the current script itself until the ping ends.
By "hidden pane", I mean running the command in a new pane that would be sent in background, and is still accessible by switching panes in order to monitor and/or interact with it (not necessarily ping).
(cf. EDIT)
Here is some pseudo bash code to show more clearly what I'm trying to do:
echo "Waiting for ping to finish..."
echo "Ctrl-b + p to switch pane and see running process"
tmux new-window -d 'ping -c 5 8.8.8.8' # run command in new "background" window
tmux wait-for # display "Done!" only when ping command has finished
echo "Done!"
I know the tmux commands here don't really have any sense like this, but this is just to illustrate.
I've looked at different solutions in order to either send a command to the background, or wait until a process has finished in another pane, but I still haven't found a way to do both correctly.
EDIT
Thanks to Nicholas Marriott for pointing out the -d option exists when creating a new window to avoid switching to it automatically. Now the only issue is to block the main script until the command ends.
I tried the following, hoping it would work, but it doesn't work either (the script doesn't resume):
tmux new-window -d 'ping -c 5 8.8.8.8; tmux wait -S ping' &
tmux wait $!
Maybe there is a way by playing with processes (using fg,bg...), but I still haven't figured it out.
Similar questions:
[1] Make tmux block until programs complete
[2] Bash - executing blocking scripts in tmux
[3] How do you hide a tmux pane
[4] how to wait for first command to finish?
You can use wait-for but you need to give it a channel and signal that channel when your process is done, something like:
tmux neww -d 'ping blah; tmux wait -S ping'
tmux wait ping
echo done
If you think you might run the script several times in parallel, I suggest making a channel name using mktemp or similar (and removing the file when wait-for returns).
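A minimal sketch of that suggestion (untested; the channel is simply named after a throwaway temporary file):
tmpfile=$(mktemp)                # unique file per script run
channel=$(basename "$tmpfile")   # use its name as the wait-for channel
tmux neww -d "ping -c 5 8.8.8.8; tmux wait-for -S $channel"
tmux wait-for "$channel"         # blocks until the window signals the channel
rm -f "$tmpfile"                 # remove the temp file once wait-for returns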
wait-for can't automatically wait for things like panes or windows exiting, silence in a pane, and so on, but I would like to see that implemented at some point.
The other answers only work if you're already within a tmux session.
If you are outside of one, you have to use something like this:
tmux new-session -d 'vi /etc/passwd' \; split-window -d 'vi /etc/group' \; attach
If you want to call this within a script, you should check whether or not "$TMUX" is set (or just unset it to force a nested tmux session).
#!/bin/sh
com1="vi /etc/passwd"
com2="vi /etc/group"
if [ -z "$TMUX" ]
then
    # Outside tmux: start a detached session running com1, split for com2, then attach.
    tmux new-session -d "$com1" \; split-window -d "$com2" \; attach
else
    # Already inside tmux: just split the current window for both commands.
    tmux split-window -d "$com1" \; split-window -d "$com2"
fi
My solution was to make a named pipe and then wait for input using read:
#!/bin/sh
rm -f /wait
mkfifo /wait                    # named pipe used as a signalling channel
tmux new-window -d '/bin/sh -c "ping -c 5 8.8.8.8; echo . > /wait"'
read -t 10 WAIT <>/wait         # <> opens the pipe read-write so read doesn't block on open
[ -z "$WAIT" ] &&
    echo 'The operation failed to complete within 10 seconds.' ||
    echo 'Operation completed successfully.'
I like this approach because you can set a timeout and, if you wanted, you could extend this further with other tmux controls to kill the ongoing process if it doesn't end the way you want.
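For instance, a sketch of that extension (assuming the window is given a name with -n so it can be targeted later):
tmux new-window -d -n ping '/bin/sh -c "ping -c 5 8.8.8.8; echo . > /wait"'
read -t 10 WAIT <>/wait
if [ -z "$WAIT" ]; then
    tmux kill-window -t ping    # timed out: kill the still-running window
fi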
I would like to use screen to stay attached to a loop command in an SSH session, which is most likely going to run for a couple of hours. I am using screen because I fear that my terminal will get disconnected while the command is still running. This is the loop command:
for i in *; do echo $i/share/sessions/*; done
(echo will be replaced by rm -rf).
I have tried multiple variants of screen 'command ; command ; command', but never got it working. How can I fix this? Alternatively, could you suggest a workaround for my problem?
screen can be used for long-running commands like this:
$ screen -S session_name
# Inside the screen session:
$ <run long-running command>
# Press Ctrl-a then d to detach from the screen session.
# Later, outside the screen session, re-attach to it:
$ screen -x session_name
For more details, look at the man page of screen.
Another very popular application that works in a similar way is tmux.
I assume that you're trying to run:
screen 'for i in *; do echo $i/share/sessions/* ; done'
This results in a Cannot exec [your-command-here]: No such file or directory because screen doesn't implicitly start a shell; rather, it calls an execv-family syscall to directly invoke the program named in its argument. There is no program named for i in *; do echo $i/share/sessions/*; done, and no shell running which might interpret that as a script, so this fails.
You can, however, explicitly start a shell:
screen bash -c 'for i in *; do echo $i/share/sessions/* ; done'
By the way: running one copy of rm per file you want to delete is going to be quite inefficient. Consider using xargs to spawn the smallest possible number of instances:
# avoid needing to quote and escape the code to run by encapsulating it in a function
screenfunc() { printf '%s\0' */share/sessions/* | xargs -0 rm -rf; }
export -f screenfunc # ...and exporting that function so subprocesses can access it.
screen bash -c screenfunc
There is really no need for screen here.
nohup rm -vrf */share/sessions/* >rm.out 2>&1 &
will run the command in the background, with output to rm.out. I added the -v option so you can see in more detail what it's doing by examining the tail of the output file. Note that the file won't be updated completely in real time due to buffering.
Another complication is that the invoking shell will do a significant amount of work with the wildcard when it sets up this job. You can delegate that to a subshell, too:
nohup sh -c 'rm -rvf */share/sessions/*' >rm.out 2>&1 &
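You can then follow the progress from the same or another terminal, for example:
tail -f rm.out    # follow the verbose rm output as files are removed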
I have a .sh script that works fine if I run it in Terminal using "/Volumes/MEDIA/SERVER/SYNC.sh"
But I cannot get it to run the same way in AppleScript Editor using:
do shell script "/Volumes/MEDIA/SERVER/SYNC.sh"
I also tried the above with bash or sh in front.
The shell script (SYNC.sh)
#!/bin/bash
login="uhh"
pass="uhh"
host="uhh.com"
remote_dir='~/private/sync'
local_dir="/Volumes/MEDIA/_/SYNCING"
base_name="$(basename "$0")"
lock_file="/tmp/$base_name.lock"
trap "rm -f $lock_file" SIGINT SIGTERM
if [ -e "$lock_file" ]
then
echo "$base_name is running already."
exit
else
touch "$lock_file"
lftp -p 22 -u "$login","$pass" sftp://"$host" << EOF
set sftp:auto-confirm yes
set mirror:use-pget-n 5
mirror -c -P5 --Remove-source-files --log="/Volumes/MEDIA/SERVER/LOGS/$base_name.log" "$remote_dir" "$local_dir"
quit
EOF
# MOVE FINISHED FILES INTO DIRECTORY FOR CONVERSION
mv /Volumes/MEDIA/_/SYNCING/movies/* /Volumes/MEDIA/SEEDBOX/MOVIES
mv /Volumes/MEDIA/_/SYNCING/tvshows/* /Volumes/MEDIA/SEEDBOX/TVSHOWS
mv /Volumes/MEDIA/_/SYNCING/books/* /Volumes/MEDIA/SEEDBOX/BOOKS
mv /Volumes/MEDIA/_/SYNCING/music/* /Volumes/MEDIA/SEEDBOX/MOVIES
# SHOW COMPLETED NOTIFICIATION
osascript -e 'display notification "Sync completed" with title "SEEDB0X"'
rm -f "$lock_file"
trap - SIGINT SIGTERM
exit
fi
By 'not the same', I mean that only the
osascript -e 'display notification "Sync completed" with title "SEEDB0X"'
line is run. When the script runs through Terminal, that notification only appears once syncing is done.
Thanks for any help!
Did you install lftp yourself? I don't have a Mac handy to check if it's in Mac OS X by default or not. If you installed it, then it probably isn't in the PATH of the AppleScript environment and the bash script can't find it when run from there.
If this is the case, then you'll have to either:
Fully qualify the path to 'lftp' (e.g., "/usr/local/bin/lftp" or wherever it actually is)
or
Append to the PATH environment variable as used by AppleScript (or early in your bash script).
I think I'd go for option 1. Option 2 is overkill and more likely to adversely affect other things at other times.
PS. If you don't know where 'lftp' is installed, type 'which lftp' in the terminal.
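For example, assuming which lftp reports /usr/local/bin/lftp (an assumption; use whatever path it actually prints), the lftp line in SYNC.sh would become:
/usr/local/bin/lftp -p 22 -u "$login","$pass" sftp://"$host" << EOF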
So I have a shell script that does some long operations, and while they run I want to output a series of dots (.) to show that it's still working.
I'm using pkill to test that the process is running, and as long as it is, it outputs another dot. This works very well in nearly every place I need it. However, one part of the process involves removing a directory, and that is where it breaks down.
Here is my code:
ERROR=$(rm -rf "$1" 2>&1 >/dev/null)
while pkill -0 rm; do
    printf "."
    sleep 1
done
printf "\n"
I'm using pkill to test the rm process, but when I do, this is the output I get:
pkill: signalling pid 192: Operation not permitted
pkill: signalling pid 326: Operation not permitted
.pkill: signalling pid 61: Operation not permitted
My script runs up until the dot-output code, including the folder deletion, but then it stops and just outputs those three lines over and over again until I forcibly kill the process.
Anyone have any ideas what's going on? I feel like it's not able to work with the rm operation, but I'm not sure.
Thanks in advance.
The problem is that pkill -0 sends signal 0 (i.e., kill(pid, 0)) to every process matched by the regex pattern rm. For some of the matched processes (the PIDs shown), you don't have sufficient permission to send the signal that checks the process status.
You can use the -x (--exact) option (no regex) to match only processes whose name is exactly rm (given that no other user is running an rm of their own):
pkill -0 -x rm
or use pgrep
pgrep -x rm
Better, also restrict the match to your own username:
pkill -0 -x -u username rm
pgrep -x -u username rm
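To preview what those patterns would match before sending any signal, pgrep's -l option lists the matching PIDs together with the process names:
pgrep -lx -u username rm    # list matches instead of signalling them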
Your script is not putting rm into the background, so by the time pkill runs, the only matching processes it finds are presumably owned by other users, and it fails to signal them because you cannot signal another user's process unless you are root.
Since you are spawning the process within the script, if you correctly background the rm then you can get the PID of the rm job from $! and use kill instead of pkill.
You should run the rm command and properly background it. The following untested code should do what you're trying to do:
rm -rf "$1" >/dev/null 2>&1 &
RMPID=$!                               # PID of the backgrounded rm
while kill -0 "$RMPID" 2>/dev/null; do # loop while rm is still running
    printf "."
    sleep 1
done
printf "\n"
wait "$RMPID"                          # collect rm's exit status
RESULT=$?
if (( RESULT != 0 )); then
    printf "Error when deleting %s\n" "$1"
    exit 1
fi
You can read the bash documentation for more details on wait and $! and $?
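As a tiny standalone illustration of those specials:
false &          # start a background job that exits with status 1
pid=$!           # $! holds the PID of the most recent background job
wait "$pid"      # block until that job finishes
echo "$?"        # $? now holds its exit status; prints 1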
I want to run xterm -e file.sh without it terminating.
In the file, I'm sending commands to the background and when the script is done, they are still not finished.
What I'm doing currently is:
(cd /myfolder; /xterm -ls -geometry 115x65 -sb -sl 1000)
and then after the window pops up
sh file.sh
exit
What I want to do is something like:
(cd /myfolder; /xterm -ls -geometry 115x65 -sb -sl 1000 -e sh file.sh)
without it terminating, waiting until the commands in the background finish.
Anyone know how to do that?
Use the -hold option:
xterm -hold -e file.sh
-hold: Turn on the hold resource, i.e., xterm will not immediately destroy its window when the shell command completes. It will wait until you use the window manager to destroy/kill the window, or until you use the menu entries that send a signal, e.g., HUP or KILL.
I tried -hold, and it leaves xterm in an unresponsive state that requires closing through non-standard means (the window manager, a kill command). If you would rather have an open shell from which you can exit, try adding that shell to the end of your command:
xterm -e "cd /etc; bash"
I came across the answer on Super User.
Use the wait built-in in your shell script. It'll wait until all the background jobs have finished.
Working Example:
#!/bin/bash
# Script to show usage of wait
sleep 20 &
sleep 20 &
sleep 20 &
sleep 20 &
sleep 20 &
wait
The output
sgulati@maverick:~$ bash test.sh
[1]   Done                    sleep 20
[2]   Done                    sleep 20
[3]   Done                    sleep 20
[4]-  Done                    sleep 20
[5]+  Done                    sleep 20
sgulati@maverick:~$
Building on a previous answer, if you specify $SHELL instead of bash, it will use the user's preferred shell.
xterm -e "cd /etc; $SHELL"
With respect to creating the separate shell, you'll probably want to run it in the background so that you can continue to execute more commands in the current shell, independent of the separate one. In that case, just add the & operator:
xterm -e "cd /etc; bash" &
PID=$!
<"do stuff while xterm is still running">
wait $PID
The wait command at the end will prevent your primary shell from exiting until the xterm shell does. Without the wait, your xterm shell will still continue to run even after the primary shell exits.