Process not running in background - applescript

I am trying to make an AppleScript that launches Alacritty and tmux. I have all the parts working, except that the script keeps running for as long as Alacritty is open; I would like it to exit shortly after launching (in both cases: when a tmux session already exists and when it doesn't).
set t to (time of (current date))
do shell script "nohup /Applications/Alacritty.app/Contents/MacOS/alacritty -e /usr/local/bin/tmux attach || tmux new -s general > /dev/null 2>&1 &"
if (time of (current date)) < t + 1 then
do shell script "nohup /Applications/Alacritty.app/Contents/MacOS/alacritty -e /usr/local/bin/tmux new -s general > /dev/null 2>&1 &"
end if
This works as I'd hoped when there isn't a tmux session, but it runs until I quit Alacritty.
I think the solution will be related to job control in AppleScript, but I can't figure it out. I essentially need a way to wait a second or two and then test whether it is still running.
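For what it's worth, the "test if it is running" part can be done directly by asking tmux whether the session exists rather than by timing. A rough shell sketch (which could be wrapped in a do shell script call; it assumes the session name general from above):
# Sketch: branch on whether the tmux session already exists
if /usr/local/bin/tmux has-session -t general 2>/dev/null; then
  nohup /Applications/Alacritty.app/Contents/MacOS/alacritty -e /usr/local/bin/tmux attach -t general >/dev/null 2>&1 &
else
  nohup /Applications/Alacritty.app/Contents/MacOS/alacritty -e /usr/local/bin/tmux new -s general >/dev/null 2>&1 &
fi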

Okay I figured out another way:
do shell script "nohup /Applications/Alacritty.app/Contents/MacOS/alacritty -e /usr/local/bin/tmux new -A -s general > /dev/null 2>&1 &"
This combines both cases: with -A, tmux new attaches to the session if it already exists and creates it otherwise.

Related

How can I write a command into a terminal that is running a program I opened from a script?

I already know how to open a terminal from a bash script with gnome-terminal and execute a program:
gnome-terminal -e ./OpenBTSCLI
But I also need to write another command in the new terminal once that program is open.
When I tried to use echo, the message appeared in the terminal where I ran the bash script.
I tried gnome-terminal -e "bash -c './OpenBTSCLI && echo message'", which I found online, but it's not working; it only does the first part.
Anyone have an idea of how to resolve this? Thank you
I think it does the second command as well, but the new terminal closes as soon as the command's finished, so you don't see it. I reversed the order of quotes and added a 1s sleep at the end to allow seeing the echo.
gnome-terminal -e 'bash -c "./OpenBTSCLI && echo message && sleep 1"'
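If you would rather leave the new terminal usable afterwards instead of sleeping, a variant of the same idea (a sketch) is to end with an interactive shell:
gnome-terminal -e 'bash -c "./OpenBTSCLI && echo message; exec bash"'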

run xterm -e without terminating

I want to run xterm -e file.sh without terminating.
In the file, I'm sending commands to the background and when the script is done, they are still not finished.
What I'm doing currently is:
(cd /myfolder; /xterm -ls -geometry 115x65 -sb -sl 1000)
and then after the window pops up
sh file.sh
exit
What I want to do is something like:
(cd /myfolder; /xterm -ls -geometry 115x65 -sb -sl 1000 -e sh file.sh)
without terminating and wait until the commands in the background finish.
Anyone know how to do that?
Use the -hold option:
xterm -hold -e file.sh
-hold: Turn on the hold resource, i.e., xterm will not immediately destroy its window when the shell command completes. It will wait until you use the window manager to destroy/kill the window, or if you use the menu entries that send a signal, e.g., HUP or KILL.
I tried -hold, and it leaves xterm in an unresponsive state that requires closing through non-standard means (the window manager, a kill command). If you would rather have an open shell from which you can exit, try adding that shell to the end of your command:
xterm -e "cd /etc; bash"
I came across the answer on Super User.
Use the wait built-in in your shell script. It will wait until all the background jobs have finished.
Working Example:
#!/bin/bash
# Script to show usage of wait
sleep 20 &
sleep 20 &
sleep 20 &
sleep 20 &
sleep 20 &
wait
The output:
sgulati@maverick:~$ bash test.sh
[1] Done sleep 20
[2] Done sleep 20
[3] Done sleep 20
[4]- Done sleep 20
[5]+ Done sleep 20
sgulati@maverick:~$
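Applied to the xterm question above, the idea would be for file.sh itself to end with wait, so the window stays open until the background jobs finish. A sketch (job_1 and job_2 stand in for whatever the script actually backgrounds):
#!/bin/sh
# file.sh: start jobs in the background, then block until they all finish
job_1 &
job_2 &
wait
Then xterm -e sh file.sh will not return until those jobs are done.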
Building on a previous answer, if you specify $SHELL instead of bash, it will use the user's preferred shell.
xterm -e "cd /etc; $SHELL"
With respect to creating the separate shell, you'll probably want to run it in the background so that you can continue to execute more commands in the current shell - independent of the separate one. In which case, just add the & operator:
xterm -e "cd /etc; bash" &
PID=$!
<"do stuff while xterm is still running">
wait $PID
The wait command at the end will prevent your primary shell from exiting until the xterm shell does. Without the wait, your xterm shell will still continue to run even after the primary shell exits.

bash && operator prevents backgrounding over ssh

After trying to figure out why a Capistrano task (which tried to start a daemon in the background) was hanging, I discovered that using && in bash over ssh prevents a subsequent program from running in the background. I tried it on bash 4.1.5 and 4.2.20.
The following will hang (i.e. wait for sleep to finish) in bash:
ssh localhost "cd /tmp && nohup sleep 10 >/dev/null 2>&1 &"
The following won't:
ssh localhost "cd /tmp ; nohup sleep 10 >/dev/null 2>&1 &"
Neither will this:
cd /tmp && nohup sleep 10 >/dev/null 2>&1 &
Both zsh and dash will execute it in the background in all cases, regardless of && and ssh. Is this normal/expected behavior for bash, or a bug?
One easy solution is to use:
ssh localhost "(cd /tmp && nohup sleep 10) >/dev/null 2>&1 &"
(this also works if you use braces, see second example below).
I did not experiment further, but I am reasonably convinced it has to do with open file descriptors hanging around. Perhaps zsh and dash bind the && so that the command means what would have to be spelled in bash as:
{ cd /tmp && nohup sleep 10; } >/dev/null 2>&1
Nope, a quick experiment in dash shows that echo foo && echo bar >file only redirects the latter. Still, it has to have something to do with lingering open fds causing ssh to wait for more output; I've run into this a lot in the past.
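For reference, that quick experiment is just:
dash -c 'echo foo && echo bar >file'
foo still appears on the terminal; only bar goes into the file.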
One more trick, not needed if you use the parentheses or braces for this particular case, but possibly useful in a more general context where the set of commands joined with && is more complex. Since bash seems to be hanging on to the file descriptor inappropriately with && but not with ;, you can turn a && b && c into a || exit 1; b || exit 1; c. This works with the test case:
ssh localhost "true || exit 1; echo going on; nohup sleep 10 >/dev/null 2>&1 &"
Replace true with false and the echo of "going on" is omitted.
(You can also set -e, although sometimes that is a bigger hammer than desired.)
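With set -e, the same test case would look something like this (an untested sketch of the idea):
ssh localhost "set -e; cd /tmp; echo going on; nohup sleep 10 >/dev/null 2>&1 &"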
This seems to work:
ssh localhost "(exec 0>&- ; exec 1>&-; exec 2>&-; cd /tmp; sleep 20&)"

How to run gnome-terminal child processes in different terminals

I am writing a shell script. I want three scripts to run in different terminals. I wrote this in the shell script:
gnome-terminal -x 1.sh
gnome-terminal -x 2.sh
gnome-terminal -x 3.sh
Then the parent terminal waits for gnome-terminal -x 1.sh to finish executing. It won't proceed to the next script while the first one is running. If I run these 3 scripts as background processes, they run in 3 different terminal windows, but I am not able to kill those 3 processes.
I have to manually find their process ids and kill them. I don't want to do this. Is there a better way to do it?
You can get their process id from the command line that launched them:
gnome-terminal -x 1.sh & pid1=$!
gnome-terminal -x 2.sh & pid2=$!
gnome-terminal -x 3.sh & pid3=$!
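You can then stop them later using those ids, e.g.:
kill $pid1 $pid2 $pid3
(This assumes each gnome-terminal invocation keeps running as the process whose id was captured; newer gnome-terminal versions hand windows off to a single terminal server process, in which case $! may not point at the window's process.)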

Getting ssh to execute a command in the background on target machine

This is a follow-on question to the How do you use ssh in a shell script? question. If I want to execute a command on the remote machine that runs in the background on that machine, how do I get the ssh command to return? When I try to just include the ampersand (&) at the end of the command it just hangs. The exact form of the command looks like this:
ssh user@target "cd /some/directory; program-to-execute &"
Any ideas? One thing to note is that logins to the target machine always produce a text banner and I have SSH keys set up so no password is required.
I had this problem in a program I wrote a year ago -- turns out the answer is rather complicated. You'll need to use nohup as well as output redirection, as explained in the Wikipedia article on nohup, copied here for your convenience.
Nohuping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:
nohup myprogram > foo.out 2> foo.err < /dev/null &
This has been the cleanest way to do it for me:
ssh -n -f user@host "sh -c 'cd /whereever; nohup ./whatever > /dev/null 2>&1 &'"
The only thing running after this is the actual command on the remote machine.
Redirect fd's
Output needs to be redirected with &>/dev/null which redirects both stderr and stdout to /dev/null and is a synonym of >/dev/null 2>/dev/null or >/dev/null 2>&1.
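For instance, these two lines behave the same in bash:
command &>/dev/null &
command >/dev/null 2>&1 &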
Parentheses
The best way is to use sh -c '( ( command ) & )' where command is anything.
ssh askapache 'sh -c "( ( nohup chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
Nohup Shell
You can also use nohup directly to launch the shell:
ssh askapache 'nohup sh -c "( ( chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
Nice Launch
Another trick is to use nice to launch the command/shell:
ssh askapache 'nice -n 19 sh -c "( ( nohup chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
If you don't/can't keep the connection open you could use screen, if you have the rights to install it.
user@localhost $ screen -t remote-command
user@localhost $ ssh user@target # now inside of a screen session
user@remotehost $ cd /some/directory; program-to-execute &
To detach the screen session: ctrl-a d
To list screen sessions:
screen -ls
To reattach a session:
screen -d -r remote-command
Note that screen can also create multiple shells within each session. A similar effect can be achieved with tmux.
user@localhost $ tmux
user@localhost $ ssh user@target # now inside of a tmux session
user@remotehost $ cd /some/directory; program-to-execute &
To detach the tmux session: ctrl-b d
To list tmux sessions:
tmux list-sessions
To reattach a session:
tmux attach -t <session name>
The default tmux control key, 'ctrl-b', is somewhat difficult to use but there are several example tmux configs that ship with tmux that you can try.
I just wanted to show a working example that you can cut and paste:
ssh REMOTE "sh -c \"(nohup sleep 30; touch nohup-exit) > /dev/null &\""
You can do this without nohup:
ssh user@host 'myprogram >out.log 2>err.log &'
The quickest and easiest way is to use the 'at' command:
ssh user@target "at now -f /home/foo.sh"
I think you'll have to combine a couple of these answers to get what you want. If you use nohup in conjunction with the semicolon, and wrap the whole thing in quotes, then you get:
ssh user#target "cd /some/directory; nohup myprogram > foo.out 2> foo.err < /dev/null"
which seems to work for me. With nohup, you don't need to append the & to the command to be run. Also, if you don't need to read any of the output of the command, you can use
ssh user#target "cd /some/directory; nohup myprogram > /dev/null 2>&1"
to redirect all output to /dev/null.
This worked for me many times:
ssh -x remoteServer "cd yourRemoteDir; ./yourRemoteScript.sh </dev/null >/dev/null 2>&1 & "
You can do it like this...
sudo /home/script.sh -opt1 > /tmp/script.out &
It appeared quite convenient for me to have a remote tmux session using the tmux new -d <shell cmd> syntax like this:
ssh someone@elsewhere 'tmux new -d sleep 600'
This will launch a new session on the elsewhere host, and the ssh command on the local machine will return to the shell almost instantly. You can then ssh to the remote host and tmux attach to that session. Note that no local tmux is involved, only the remote one!
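For example, attaching later from the local machine can be as simple as this (a sketch; -t forces tty allocation so tmux can take over the terminal):
ssh -t someone@elsewhere tmux attach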
Also, if you want your session to persist after the job is done, simply add a shell launcher after your command, but don't forget to enclose in quotes:
ssh someone@elsewhere 'tmux new -d "~/myscript.sh; bash"'
Actually, whenever I need to run a complicated command on a remote machine, I like to put the command in a script on the destination machine and just run that script using ssh.
For example:
#!/bin/bash
# simple_script.sh (located on remote server)
cat /var/log/messages | grep <some value> | awk -F " " '{print $8}'
And then I just run this command on the source machine:
ssh user@ip "/path/to/simple_script.sh"
If you run the remote command without allocating a tty, redirecting stdout/stderr works; nohup is not necessary.
ssh user@host 'background command &>/dev/null &'
If you use -t to allocate a tty in order to run an interactive command along with a background command, and the background command is the last command, like this:
ssh -t user@host 'bash -c "interactive command; nohup background command &>/dev/null &"'
it's possible that the background command doesn't actually start. There's a race here:
1. bash exits after nohup starts. As the session leader, bash's exit results in a HUP signal being sent to the nohup process.
2. nohup ignores the HUP signal.
If 1 completes before 2, the nohup process will exit and won't start the background command at all. We need to wait for nohup to start the background command. A simple workaround is to just add a sleep:
ssh -t user@host 'bash -c "interactive command; nohup background command &>/dev/null & sleep 1"'
The question was asked and answered years ago; I don't know if OpenSSH behavior has changed since then. I was testing on:
OpenSSH_8.6p1, OpenSSL 1.1.1g FIPS 21 Apr 2020
I was trying to do the same thing, but with the added complexity that I was trying to do it from Java. So on one machine running Java, I was trying to run a script on another machine, in the background (with nohup).
From the command line, here is what worked (you may not need the "-i keyFile" part if you don't need a key to ssh to the host):
ssh -i keyFile user@host bash -c "\"nohup ./script arg1 arg2 > output.txt 2>&1 &\""
Note that on my command line there is one argument after the "-c", all in quotes. But for it to work on the other end it still needs the quotes, so I had to put escaped quotes inside it.
From Java, here is what worked:
ProcessBuilder b = new ProcessBuilder("ssh", "-i", "keyFile", "user@host", "bash", "-c",
"\"nohup ./script arg1 arg2 > output.txt 2>&1 &\"");
Process process = b.start();
// then read from process.getInputStream() and close it.
It took a bit of trial & error to get this working, but it seems to work well now.
YOUR-COMMAND &> YOUR-LOG.log &
This should run the command in the background and write its output to the log. You can simply tail -f YOUR-LOG.log to see results written to it as they happen, and you can log out at any time and the process will carry on.
If you are using zsh, then program-to-execute &! is a zsh-specific shortcut to both background and disown the process, such that exiting the shell will leave it running.
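For example, over ssh that might look like this (a sketch, assuming the remote login shell is zsh, since &! is zsh syntax):
ssh user@host 'myprogram >out.log 2>err.log &!'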
A follow-on to @cmcginty's concise working example, which also shows how to alternatively wrap the outer command in double quotes. This is how the template would look if invoked from within a PowerShell script (which can only interpolate variables from within double quotes and ignores any variable expansion when wrapped in single quotes):
ssh user@server "sh -c `"($cmd) &>/dev/null </dev/null &`""
Inner double-quotes are escaped with back-tick instead of backslash. This allows $cmd to be composed by the PowerShell script, e.g. for deployment scripts and automation and the like. $cmd can even contain a multi-line heredoc if composed with unix LF.
First follow this procedure:
Log in on A as user a and generate a pair of authentication keys. Do not enter a passphrase:
a@A:~> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa):
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/a/.ssh/id_rsa.
Your public key has been saved in /home/a/.ssh/id_rsa.pub.
The key fingerprint is:
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4 a@A
Now use ssh to create a directory ~/.ssh as user b on B. (The directory may already exist, which is fine):
a@A:~> ssh b@B mkdir -p .ssh
b@B's password:
Finally append a's new public key to b@B:.ssh/authorized_keys and enter b's password one last time:
a@A:~> cat .ssh/id_rsa.pub | ssh b@B 'cat >> .ssh/authorized_keys'
b@B's password:
From now on you can log into B as b from A as a without a password:
a@A:~> ssh b@B
Then this will work without entering a password:
ssh b@B "cd /some/directory; program-to-execute &"
I think this is what you need:
First, you need to install sshpass on your machine.
Then you can write your own script:
while read pass port user ip; do
sshpass -p$pass ssh -p $port $user@$ip <<ENDSSH1
COMMAND 1
.
.
.
COMMAND n
ENDSSH1
done <<____HERE
PASS PORT USER IP
. . . .
. . . .
. . . .
PASS PORT USER IP
____HERE
