ssh user@myserver.com <<EOF
cd ../../my/path/
sh runscript.sh
wait
cd ../../temp/path
sh secondscript.sh
EOF
The first script runs and asks me the questions in that script, but before I'm even able to start typing an answer, the second script starts running. From what I'm reading, this shouldn't be happening even without the wait.
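What is most likely happening: the heredoc becomes the remote shell's standard input, so when runscript.sh reads its answers from stdin, it consumes the remaining heredoc lines (cd ../../temp/path and so on) instead of waiting for the keyboard. One way around this, as a sketch (it assumes the scripts only need an interactive terminal), is to drop the heredoc and force a TTY:

ssh -t user@myserver.com 'cd ../../my/path/ && sh runscript.sh && cd ../../temp/path && sh secondscript.sh'

With -t, ssh allocates a pseudo-terminal, so the scripts' prompts read from your keyboard rather than from the command stream.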
I have a simple bash script which uses the inotifywait command and, based on the modification that was detected, runs an rsync command. This is done in an infinite loop.
The script is run in a "screen" session, but unfortunately it crashes from time to time (no error as to why).
I've been searching for a way to "monitor" that specific screen/script and restart it when it crashes, but I'm struggling to find a solution to that.
The script is run with "screen -AmdS script ./script.sh".
Script example:
#!/usr/bin/bash
while inotifywait --exclude '\.log' -r -e modify,create,delete /home/backups/
do
    # On each filesystem event, mirror the tree to the target host (SSH port elided as in the original)
    rsync -avz --update --rsh='ssh -pxxxxx' /home/backups/* user@target:/location/ --delete --force
done
So my question basically is: is there a way to monitor the screen session and start a new one if it stops, or is there another way to keep this script running constantly and restart it when it dies (possibly without using screen)?
You can rerun your failing script in a loop until it succeeds. You can do this with
screen -AmdS script bash -c 'until ./script.sh; do echo "Crashed with exit code $?. Restarting."; sleep 1; done'
Every time your script fails, this will print that your script crashed and what exit code it had, pause for 1 second, and then rerun your script. As soon as your script succeeds (i.e., ./script.sh terminates with an exit code of zero), the loop will terminate.
Note that if your script never succeeds, this is an infinite loop.
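If that matters for your use case, a variation (just a sketch) gives up after a fixed number of attempts:

until ./script.sh; do
    rc=$?                               # capture the script's exit code before it is overwritten
    attempts=$(( ${attempts:-0} + 1 ))
    if [ "$attempts" -ge 5 ]; then
        echo "Giving up after $attempts attempts (last exit code $rc)." >&2
        break
    fi
    echo "Crashed with exit code $rc. Restarting." >&2
    sleep 1
done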
Edit: attempt to be clearer
You could just issue
until ./script.sh ; do echo crashed ; sleep 1 ; done
in your terminal. That will restart the script whenever it exits with a non-zero status.
I am trying to run a series of tests on a remote Linux server to which I am connecting via ssh.
I don't want to have to stay logged in the ssh session during the runs -> nohup(?)
I don't want to have to keep checking if one run is done -> for loop(?)
Because of licensing issues, I can only run a single testing process at a time -> sequential
I want to keep working while the test set is being processed -> background
Here's what I tried:
#!/usr/bin/env bash
# Assembling a list of commands to be executed sequentially
TESTRUNS="";
for i in `ls ../testSet/*`;
do
MSG="running test problem ${i##*/}";
RUN="mySequentialCommand $i > results/${i##*/} 2> /dev/null;";
TESTRUNS=$TESTRUNS"echo $MSG; $RUN";
done
#run commands with nohup to be able to log out of ssh session
nohup eval $TESTRUNS &
But it looks like nohup doesn't fare too well with eval.
Any thoughts?
nohup is needed if you want your scripts to keep running even after the shell is closed, so yes.
And the & is not necessary inside RUN, since you already execute the whole command line with &.
Now your script builds the command in the for loop, but doesn't execute it. It means you'll have only the last file running. If you want to run all of the files, you need to execute the nohup command as part of your loop. BUT - you can't run the commands with & because this will run commands in the background and return to the script, which will execute the next item in the loop. Eventually this would run all files in parallel.
Move the nohup eval $TESTRUNS inside the for loop, but again, you can't run it with &. What you need to do is run the script itself with nohup, and the script will loop through all files one at a time, in the background, even after the shell is closed.
You could take a look at screen, an alternative to nohup with additional features. For testing the screen solution, I will replace your test script with while [ 1 ]; do printf "."; sleep 5; done.
The screen -ls commands are optional; they just show what is going on.
prompt> screen -ls
No Sockets found in /var/run/uscreens/S-notroot.
prompt> screen
prompt> screen -ls
prompt> while [ 1 ]; do printf "."; sleep 5; done
# You don't get a prompt. Use "CTRL-a d" to detach from your current screen
prompt> screen -ls
# do some work
# connect to screen with batch running
prompt> screen -r
# Press ^C to terminate the batch (script printing dots)
prompt> screen -ls
prompt> exit
prompt> screen -ls
Google for screenrc to see how you can customize the interface.
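As a starting point, a minimal ~/.screenrc might look like this (the values are purely illustrative; both directives are standard screen options):

# ~/.screenrc
startup_message off                              # skip the copyright splash on startup
hardstatus alwayslastline "%H | %d/%m %c | %w"   # hostname, date, time, window list in the bottom line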
You can change your script into something like
#!/usr/bin/env bash
# Assembling a list of commands to be executed sequentially
for i in ../testSet/*; do
echo "Running test problem ${i##*/}"
mySequentialCommand "$i" > "results/${i##*/}" 2> /dev/null
done
The above script can be started with nohup scriptname & when you do not use screen, or with a simple scriptname inside screen.
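For example, assuming the loop above is saved as scriptname (the log file name here is just an illustration):

nohup ./scriptname > testruns.log 2>&1 &

nohup keeps the script alive after you log out, the redirection collects all output in one log, and the trailing & returns your prompt immediately while the tests run one at a time.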
I am using guestcontrol with VirtualBox, with a Windows host and a Linux (RHEL7) guest. I want to do some configuration on the guest from the host by running a shell script on the guest (from a .bat file on the host). This works and the script runs; however, it hangs when I call reboot (I believe because nothing is returned). So when the following .sh is called:
#!/bin/bash
echo "here"
exit
The .bat file shows "here" and then exits (or, if I use pause, gives the correct message). However, when I add the reboot, the .bat never processes anything past where it calls the script. I think this is because the guest never tells the host that the script is complete.
I have tried things like:
#!/bin/bash
{ sleep 1; reboot; } >/dev/null &
exit
or even:
#!/bin/bash
do_reboot(){
sleep 1
reboot
}
do_reboot &
exit
but the .bat never gets past the line where it runs the .sh
How can I tell the host that the .sh script (on the guest) is complete so it can continue with the .bat script?
We need to make sure no child processes are left attached to the session, so we detach the reboot using the nohup ("no hangup") command. The script simply becomes this:
#!/bin/bash
nohup reboot &> /tmp/nohup.out </dev/null &
exit
The inherited stdin and stdout were causing the issue, so this just sends them into the void, leaving no open streams for the host to keep waiting on.
If you have any issues with this script, you could do something like:
#!/bin/bash
nohup /path/to/reboot_delay.sh &> /tmp/nohup.out </dev/null &
exit
And then in /path/to/reboot_delay.sh you would have:
#!/bin/bash
sleep 10 # or however many seconds you need to wait for something to happen
reboot
This way you could even allow some time for something to finish etc, yet the host machine (or ssh or wherever you are calling this from) would still know the script had finished and do what it needs to do.
I hope this can help people in the future.
I'm trying to have a desktop shortcut that executes one command (without a script; I'm just wondering if that is possible). That command requires root privileges, so I use gksu on Ubuntu; after I finish typing my password correctly, I want the second command to run a file. I have this command:
xterm -e "gksu cp /opt/Popcorn-Time/backup/* /opt/Popcorn-Time; /opt/Popcorn-Time/Popcorn-Time"
But Popcorn-Time opens without waiting for me to finish typing my password (correctly). I want to do this without a separate script, if possible.
How should I do this?
EDIT: Ah! I see what is going on now. You've all been helping me make Popcorn-Time wait for gksu to finish, but Popcorn-Time isn't going to run without the files in backup, and those are a bit heavy (7 MB total), so it takes a second for them to finish copying; Popcorn-Time is already open by the time the files are copied. Is there a way to make Popcorn-Time wait for the cp command to finish?
I also changed my command above to what I have now.
EDIT #2: Everything I said up to now is no longer relevant, as the problem with Popcorn-Time wasn't what I thought. I didn't need to copy the files over; I just needed to run it as root for it to work. Thanks to everyone who tried to help.
Thanks.
If you want the /opt/popcorntime/Popcorn-Time command to wait until the first command finishes, you can separate the commands with && so that the second only executes on successful completion of the first. This is called a compound command. E.g.:
command1 && command2
With gksu in order to run multiple commands with only a single password entry, you will need:
gksu -- bash -c 'command1 && command2'
In your case:
gnome-terminal -e "gksu -- bash -c 'cp /opt/popcorntime/backup/* /opt/popcorntime && /opt/popcorntime/Popcorn-Time'"
(you may have to adjust quoting to fit your expansion needs)
You can use the or operator in a similar fashion so that the second command only executes if the first fails. E.g.:
command1 || command2
In a console you would do:
gksu cp /opt/popcorntime/backup/* /opt/popcorntime; /opt/popcorntime/Popcorn-Time
In order to use it as Exec in the .desktop file wrap it like this:
bash -c "gksu cp /opt/popcorntime/backup/* /opt/popcorntime; /opt/popcorntime/Popcorn-Time"
The problem is that gnome-terminal only sees the gksu command as the value of the -e argument, not the Popcorn-Time command.
gnome-terminal forks and returns immediately, so Popcorn-Time runs immediately.
The solution is to quote the entire command string (both commands) so that, combined, they form the single argument to -e.
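For example, a sketch with one workable quoting arrangement (using the generic command1/command2 placeholders from above):

gnome-terminal -e "gksu -- bash -c 'command1 && command2'"

Everything between the outer double quotes reaches -e as a single argument, so both commands run inside the one gksu invocation.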
I am writing my first ever bash script, so excuse the noobie-ness.
It's called hello.bash, and this is what it contains:
#!/bin/bash
echo Hello World
I did
chmod 700 hello.bash
to give myself permissions to execute.
Now, when I type
exec hello.bash
My PuTTY terminal instantly shuts down. What am I doing wrong?
From the man page for exec:
If command is supplied, it replaces the shell without creating a new process. If no command is specified, redirections may be used to affect the current shell environment.
So your script runs in place of your shell, and when it exits, so does your terminal session. Just execute it instead:
./hello.bash
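If you want to see what exec does without losing your session, one safe way (a sketch) is to sacrifice a throwaway subshell instead:

bash                # start a disposable subshell
exec ./hello.bash   # replaces that subshell: prints "Hello World", then the subshell is gone

After the script exits you are back in your original login shell, which was never replaced.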