Execute one command after another one finishes under gksu - bash

I'm trying to create a desktop shortcut that executes one command after another (without a script; I'm just wondering if that is possible). The first command requires root privileges, so I use gksu on Ubuntu. After I finish typing my password correctly, I want the second command to run a file. I have this command:
xterm -e "gksu cp /opt/Popcorn-Time/backup/* /opt/Popcorn-Time; /opt/Popcorn-Time/Popcorn-Time"
But Popcorn-Time opens without waiting for me to finish typing my password (correctly). I want to do this without a separate script, if possible.
How should I do this?
EDIT: Ah! I see what is going on now. You've all been helping me make Popcorn-Time wait for gksu to finish, but Popcorn-Time isn't going to run without the files in backup, and those are a bit heavy (7 MB total), so the transfer takes a moment and Popcorn-Time is already open by the time the files are copied. Is there a way to make Popcorn-Time wait for the cp command to finish?
I also changed my command above to what I have now.
EDIT #2: Everything I said above is no longer relevant, as the problem with Popcorn-Time wasn't what I thought. I didn't need to copy the files over; I just needed to run it as root for it to work. Thanks to everyone who tried to help.
Thanks.

If you want the /opt/popcorntime/Popcorn-Time command to wait until the first command finishes, you can separate the commands with && so that the second only executes on successful completion of the first (bash calls this an AND list). E.g.:
command1 && command2
With gksu, in order to run multiple commands with only a single password entry, you will need:
gksu -- bash -c 'command1 && command2'
In your case:
gnome-terminal -e "gksu -- bash -c 'cp /opt/popcorntime/backup/* /opt/popcorntime && /opt/popcorntime/Popcorn-Time'"
(you may have to adjust quoting to fit your expansion needs)
You can use the OR operator (||) in a similar fashion, so that the second command only executes if the first fails. E.g.:
command1 || command2

In a console you would do:
gksu cp /opt/popcorntime/backup/* /opt/popcorntime; /opt/popcorntime/Popcorn-Time
In order to use it as the Exec value in a .desktop file, wrap it like this:
bash -e "gksu cp /opt/popcorntime/backup/* /opt/popcorntime; /opt/popcorntime/Popcorn-Time"

The problem is that gnome-terminal sees only the gksu command as the value of the -e argument, not the Popcorn-Time command.
gnome-terminal forks and returns immediately, so Popcorn-Time runs right away.
The solution is to quote the entire command string (both commands) so that, combined, they are the single argument to -e.
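For concreteness, a minimal sketch using the xterm command from the question (paths as in the question; only the quoting and the explicit shell change):
# xterm -e passes the remaining arguments straight to exec, so hand it
# an explicit shell; the quoted string is then a single argument to
# bash -c, and the semicolon is interpreted by that inner shell:
xterm -e bash -c 'gksu cp /opt/Popcorn-Time/backup/* /opt/Popcorn-Time; /opt/Popcorn-Time/Popcorn-Time'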

Related

Bash script is waiting to open second file in gedit until I close the first one [duplicate]


How can I tell if a script was run in the background and with nohup?

I've got a script that takes quite a long time to run, as it has to handle many thousands of files. I want to make this script as foolproof as possible. To this end, I want to check whether the user ran the script using nohup and '&', e.g.:
me#myHost:/home/me/bin $ nohup doAlotOfStuff.sh &
I want to make 100% sure the script was run with nohup and '&', because it's a very painful recovery process if the script dies in the middle for whatever reason.
How can I check those two key parameters inside the script itself? And if they are missing, how can I stop the script before it gets any farther and complain to the user that they ran the script wrong? Better yet, is there a way I can force the script to run with nohup and '&'?
Edit: the server environment is AIX 7.1
The ps utility can get the process state. The process state code will contain the character + when the process is running in the foreground; absence of + means it is running in the background.
However, it will be hard to tell whether a background script was invoked using nohup. It's also almost impossible to rely on the presence of nohup.out, as output can be redirected by the user elsewhere at will.
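One extra heuristic worth noting (my addition, not part of the original answer): nohup redirects stdout away from the terminal, so you can supplement the ps check with a test of whether stdout is still attached to a tty:
# Sketch only: [ -t FD ] is the POSIX test for "file descriptor FD is
# open and refers to a terminal". Under nohup, stdout (fd 1) has been
# redirected to nohup.out, so this test fails.
if [ -t 1 ]; then
    echo "stdout is a terminal: probably NOT started with nohup" >&2
fi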
There are two ways to accomplish what you want to do: either bail out and warn the user, or automatically restart the script in the background.
#!/bin/bash
mypid=$$
# the stat field contains "+" while the process runs in the foreground
if [[ $(ps -o stat= -p "$mypid") =~ "+" ]]; then
    echo Running in foreground.
    exec nohup "$0" "$@" &
    exit
fi
# the rest of the script
...
In this code, if the process state code contains +, the script prints a warning and then restarts itself in the background. If the process was started in the background, it just proceeds to the rest of the code.
If you prefer to bail out and just warn the user, you can remove the exec line. Note that because of the trailing &, the exec happens in a forked subshell, so the exit is still needed to stop the foreground parent.
One good way to find out whether a script is logging to nohup.out is to first check that the file exists, then echo a marker to stdout and make sure you can read it back from the file. For example:
echo "complextag"
if ( $(cat nohup.out | grep "complextag" ) != "complextag" );then
# various commands complaining to the user, then exiting
fi
This works because if the script's stdout is going to nohup.out (or whatever output file you specified), where it should be going, then when you echo that phrase it will be appended to the file. If it doesn't appear there, then the script was not run using nohup, and you can scold the user, perhaps by using a wall command on a temporary broadcast file (I can elaborate on that if you want).
As for being run in the background, if the script isn't detached properly, the nohup check above should already reveal it.

How to run shell script on VM indefinitely?

I have a script on a VM that I want to keep running indefinitely. The server is always running, but I want the script to keep running after I log out. How would I go about doing so? By creating a cron job?
In general the following steps are sufficient to convince most Unix shells that the process you're launching should not depend on the continued existence of the shell:
run the command under nohup
run the command in the background
redirect all file descriptors that normally point to the terminal to other locations
So, if you want to run command-name, you should do it like so:
nohup command-name >/dev/null 2>/dev/null </dev/null &
This tells the process that will execute command-name to send its stdout and stderr to nowhere (instead of to your terminal) and also to read stdin from nowhere (instead of from your terminal). Of course, if you actually have locations to write to or read from, you can certainly use those instead; anything except the terminal is fine:
nohup command-name >outputFile 2>errorFile <inputFile &
See also the answer in Petur's comment, which discusses this issue a fair bit.

How can I condition that a command stop executing before the bash continue with the next one?

I have a bash script with a lot of lines using the command gnome-terminal, so it can open several terminals and execute programs. The problem is that some of the programs depend on the execution of one in particular that takes some time, so I need to be sure that that line stops running before bash continues with the next one.
One way to do it is to insert a wait with sleep, estimating how much time that program needs to complete; but does someone know a more efficient way?
Thank you.
Instead of
xterm -e program1 &
xterm -e program2 &
use
program1
program2
or if you absolutely need them to run in an xterm,
xterm -e sh -c 'program1; program2'
The saner solution is to factor the xterms out of the actual script and do
xterm -e path/to/yourscript &
when you want your script to run in an xterm.
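For instance, path/to/yourscript could simply contain the commands in sequence (program names here are placeholders):
#!/bin/sh
# yourscript: runs the programs one after another, so program2
# does not start until program1 has finished
program1
program2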

Is it possible for bash commands to continue before the result of the previous command?

When running commands from a bash script, does bash always wait for the previous command to complete, or does it just start the command then go on to the next one?
i.e.: If you run the following two commands from a bash script, is it possible for things to fail?
cp /tmp/a /tmp/b
cp /tmp/b /tmp/c
Yes; if you do nothing else, commands in a bash script are serialized. You can tell bash to run a bunch of commands in parallel and then wait for them all to finish by doing something like this:
command1 &
command2 &
command3 &
wait
The ampersands at the end of each of the first three lines tell bash to run the command in the background. The fourth command, wait, tells bash to wait until all the child processes have exited.
Note that if you do things this way, you won't automatically see the exit status of the child commands (and set -e won't apply to them), so you won't be able to tell in the usual way whether they succeeded or failed; you can, however, recover each status with wait PID, as sketched below.
The bash manual has more information (search for wait, about two-thirds of the way down).
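A minimal sketch of recovering those exit statuses (command1 and command2 are the placeholders from the example above):
command1 & pid1=$!
command2 & pid2=$!
# wait PID returns the exit status of that particular child
wait "$pid1"; status1=$?
wait "$pid2"; status2=$?
echo "command1 exited with $status1, command2 with $status2"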
Add '&' at the end of a command to run it in parallel.
However, it is strange, because in your case the second command depends on the final result of the first one. Either use sequential commands or copy to b and c from a, like this:
cp /tmp/a /tmp/b &
cp /tmp/a /tmp/c &
Unless you explicitly tell bash to start a process in the background, it will wait until the process exits. So if you write this:
foo args &
bash will continue without waiting for foo to exit. But if you don't explicitly put the process in the background, bash will wait for it to exit.
Technically, a process can effectively put itself in the background by forking a child and then exiting. But since that technique is used primarily by long-lived processes, this shouldn't affect you.
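For illustration, a rough shell equivalent of that fork-and-exit trick (sleep merely stands in for a long-lived process):
# The inner & forks the child; the enclosing subshell then exits
# immediately, leaving the child running with no parent waiting on it:
( sleep 60 & )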
In general, unless explicitly sent to the background or forking themselves off as a daemon, commands in a shell script are serialized.
They wait until the previous one is finished.
However, you can write two scripts and run them in separate processes, so they can be executed simultaneously. It's a wild guess, really, but I think you might get an access error if a process tries to write to a file that's being read by another process.
I think what you want is the concept of a subshell. Here's one reference I just googled: http://www.linuxtopia.org/online_books/advanced_bash_scripting_guide/subshells.html
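A quick sketch of the subshell idea (job1 and job2 are hypothetical programs): parentheses create a child shell, and with a trailing & the two groups run simultaneously:
( cd /tmp && ./job1 ) &
( cd /var/log && ./job2 ) &
wait    # block until both subshells have finished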
