I've read this and several other articles but I can't figure this out. This is what I've put together:
sudo nohup or nohup sudo (I've tried both) the command, without the trailing &, so I can enter the password before detaching it
Enter the password
^Z then bg
disown the pid (doesn't work for some reason)
Log out
Here's the output:
[user@localhost ~]$ nohup sudo dd if=/dev/zero of=/dev/sda bs=1M
nohup: ignoring input and appending output to ‘nohup.out’
[sudo] password for user:
^Z
[1]+ Stopped nohup sudo dd if=/dev/zero of=/dev/sda bs=1M
[user@localhost ~]$ bg
[1]+ nohup sudo dd if=/dev/zero of=/dev/sda bs=1M &
[user@localhost ~]$ sudo ps | grep dd
2458 pts/32 00:00:00 dd
[user@localhost ~]$ disown 2458
-bash: disown: 2458: no such job
[user@localhost ~]$ logout
Connection to [server] closed.
$ remote
Last login: Mon Feb 3 11:32:59 2014 from [...]
[user@localhost ~]$ sudo ps | grep dd
[sudo] password for user:
[user@localhost ~]$
So the process is gone. I've tried a few other combinations with no success. Any ideas to make this work?
Use the -b option to sudo to instruct it to run the given command in the background. Job control doesn't work because the nohup process is not a child of the current shell, but of the sudo process.
sudo -b nohup dd if=/dev/zero of=/dev/sda bs=1M
You are not using disown correctly: the argument should be the jobspec ('%' followed by the job number).
In your example you should have disown %1 instead of disown 2458.
To list your current shell's jobs, you can use the bash builtin jobs.
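For illustration, the corrected sequence from the transcript above would look something like this (a sketch; the job number is assumed to be 1, and the password is entered before pressing ^Z):
[user@localhost ~]$ nohup sudo dd if=/dev/zero of=/dev/sda bs=1M
[sudo] password for user:
^Z
[1]+ Stopped nohup sudo dd if=/dev/zero of=/dev/sda bs=1M
[user@localhost ~]$ bg
[1]+ nohup sudo dd if=/dev/zero of=/dev/sda bs=1M &
[user@localhost ~]$ jobs
[1]+ Running nohup sudo dd if=/dev/zero of=/dev/sda bs=1M &
[user@localhost ~]$ disown %1
After disown %1 the job no longer belongs to the shell, so logging out should not kill it.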
Related
I need to script a way to do the following (note all is done on the local machine as root):
runuser -l user1 -c 'ssh localhost' &
runuser -l user1 -c 'systemctl --user list-units'
The first command should be run as root. The end goal is to log in as "user1" so that if anyone runs who, "user1" appears in the list. Notice how the first command is backgrounded before the next command is run.
The next command should be run as root as well, NOT user1.
Problem: these commands run fine when run separately, but when run in a script, "user1" never shows up in who. Here is my script:
#!/bin/bash
echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
echo
sleep 1
echo "[+] Running systemctl --user commands as root."
runuser -l user 1 -c 'systemctl --user list-units'
echo "[+] Killing active ssh sessions."
kill $(ps aux | grep ssh | grep "^user1.*" | grep localhost | awk '{print$2}') 2>/dev/null
echo "[+] Done."
When running the script, it looks like it is able to ssh into the system, but who does not show the user logged in, nor does ps aux show any ssh session. Note: I commented out the kill line to check whether the process stays around, and I do not see it at all.
How do I make the bash script fork two processes? Process 1's goal is to log in as "user1" and wait; process 2 then performs commands as root while user1 is logged in.
My goal is to run systemctl --user commands as root via script. If you're familiar with the systemctl --user domain, there is no way to manage systemctl --user units without the user being logged in via traditional methods (ssh, direct terminal, or GUI). I cannot su - user1 as root either. So I want to force an ssh session as root to the vdns11 user via runuser commands. Once the user is authenticated and shows up via who, I can run systemctl --user commands. How can I keep the ssh session active in my code?
With this additional info, the question essentially boils down to 'How can I start and background an interactive ssh session?'.
You could use script for that. It can be used to trick applications into thinking they are being run interactively:
echo "[+] Starting SSH session in background"
runuser -l user1 -c "script -c 'ssh localhost'" &>/dev/null &
pid=$!
...
echo "[+] Killing active SSH session"
kill ${pid}
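script allocates a pseudo-terminal, so ssh believes it is running interactively and keeps the session (and therefore the who entry) alive. To verify that it worked, something like this should do (a sketch; it assumes user1 can ssh to localhost without a password prompt):
runuser -l user1 -c "script -c 'ssh localhost'" &>/dev/null &
pid=$!
sleep 2              # give ssh a moment to establish the session
who | grep user1     # the session should now appear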
Original answer before OP provided additional details (for future reference):
Let's dissect what is going on here.
I assume you start your script as root:
echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
So root runs runuser -l user1 -c '...', which itself runs ssh -q localhost 2>/dev/null as user1. All this takes place in the background due to &.
ssh will print Pseudo-terminal will not be allocated because stdin is not a terminal. (hidden due to 2>/dev/null) and immediately exit. That's why you don't see anything when running who or when running ps.
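You can reproduce this message directly by running ssh with stdin detached from a terminal (a sketch):
$ ssh localhost < /dev/null
Pseudo-terminal will not be allocated because stdin is not a terminal.
In a non-interactive script, a backgrounded command gets its stdin from /dev/null, which is exactly the situation above.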
Your echo says [+] Becoming user1, which is quite different from what's happening.
sleep 1
The script sleeps for a second. Nothing wrong with that.
echo "[+] Running systemctl --user commands as root."
#runuser -l user 1 -c 'systemctl --user list-units'
# ^ typo!
runuser -l user1 -c 'systemctl --user list-units'
Ignoring the typo, root again runs runuser, which itself runs systemctl --user list-units as user1 this time.
Your echo says [+] Running systemctl --user commands as root., but actually you are running systemctl --user list-units as user1 as explained above.
echo "[+] Killing active ssh sessions."
kill $(ps aux | grep ssh | grep "^user1.*" | grep localhost | awk '{print$2}') 2>/dev/null
This would kill the ssh process that had been started at the beginning of the script, but it already exited, so this does nothing. As a side note, this could be accomplished a lot easier:
echo "[+] Becoming user1"
runuser -l user1 -c 'ssh -q localhost 2>/dev/null' &
pid=$!
...
echo "[+] Killing active ssh sessions."
kill $(pgrep -P $pid)   # pgrep -P lists the children of $pid, i.e. the ssh process runuser started
So this should give you a better understanding of what the script actually does, but between the goals you described and the conflicting echoes within the script, it's really hard to figure out where this is supposed to be going.
Can anyone help me solve the following issue?
I need to ssh to another server as, e.g., the ubuntu user, which definitely has permission to run sudo su, and then execute pm2 restart commands.
The full command looks like this:
#!/bin/sh
CMD="sudo su; pm2 restart 0; pm2 restart 1; exit;"
ssh -i somepemfile.pem ubuntu@1.1.1.1 $CMD
For example, I can normally run any command with sudo:
CMD="sudo /etc/init.d/apache2 restart"
but in the sudo su case it somehow hangs and does not respond.
Unless you have an unusual setup, you can't normally string su together with other commands like that. I would imagine it runs sudo su, then hangs in the root environment/session, because it's waiting for you to exit before proceeding to the pm2 commands. Instead, I would consider something along the lines of this, using the -c option:
CMD="sudo su -c 'pm2 restart 0; pm2 restart 1'"
ssh -i somepemfile.pem ubuntu@1.1.1.1 "$CMD"
As suggested in another answer, it would also probably be useful to encapsulate the $CMD variable in quotes in the ssh call.
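Put together, the whole script might read like this (a sketch, reusing the file names and pm2 unit numbers from the question):
#!/bin/sh
CMD="sudo su -c 'pm2 restart 0; pm2 restart 1'"
ssh -i somepemfile.pem ubuntu@1.1.1.1 "$CMD"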
su normally puts you in a subshell, which you can see by echoing the current PID (process ID):
$ echo $$
94260
$ sudo echo $$
94260
$ sudo su
$ echo $$
94271
(Note that sudo echo $$ still prints the original PID because $$ is expanded by the current shell before sudo even runs.) To get around this, you can pipe the commands you want to run to su, like this:
$ echo "whoami" | sudo su
root
And we can run multiple commands:
$ echo "uptime;whoami" | sudo su
11:29 up 8 days, 19:20, 4 users, load averages: 4.55 2.96 2.65
root
Now to make this work with ssh:
$ ssh wderezin@localhost 'echo "uptime;whoami" | sudo su'
sudo: no tty present and no askpass program specified
Darn it, we need to allocate a tty for the su command. Add the -t option, which allocates a TTY during remote execution.
$ ssh -t wderezin@localhost 'echo "uptime;whoami" | sudo su'
11:36 up 8 days, 19:26, 5 users, load averages: 2.97 2.97 2.76
root
Your command would then look like this:
ssh -i somepemfile.pem ubuntu@1.1.1.1 'echo "pm2 restart 0; pm2 restart 1" | sudo su'
Use the -c option of su to specify the command.
From man su:
In particular, an argument of -c will cause the next argument to be treated as a command by most command interpreters. The command will be executed by the shell specified in
/etc/passwd for the target user.
CMD="sudo su -c \"pm2 restart 0; pm2 restart 1;\""
You need to quote the expansion so that the entire string is parsed on the remote end.
ssh -i somepemfile.pem ubuntu@1.1.1.1 "$CMD"
Otherwise, the expansion is subject to word splitting, and ssh receives the separate words sudo, su;, pm2, restart, 0;, pm2, restart, 1;, and exit;, which it joins back into a single string for the remote shell.
However, that doesn't solve the problem of running pm2 in the shell started by sudo; that is addressed in ramki's answer.
I am trying to kill a nohup process in an EC2 instance but so far have been unsuccessful. I am trying to grab the process ID (PID) and then use it with the kill command in the terminal, like so:
[ec2-user@ip-myip ~]$ ps -ef |grep nohup
ec2-user 16580 16153 0 19:50 pts/0 00:00:00 grep --color=auto nohup
with the columns (I believe) being:
UID PID PPID C STIME TTY TIME CMD
ec2-user 16580 16153 0 19:50 pts/0 00:00:00 grep --color=auto nohup
However, each time I try to kill the process, I get an error saying that the PID doesn't exist, seemingly because the PID changed. Here is a sequence I am running into in my command line:
// first try, grab the PID and kill
[ec2-user@ip-myip ~]$ ps -ef |grep nohup
ec2-user 16580 16153 0 19:50 pts/0 00:00:00 grep --color=auto nohup
[ec2-user@ip-172-31-41-213 ~]$ kill 16580
-bash: kill: (16580) - No such process
// ?? - check for correct PID again, and try to kill again
[ec2-user@ip-myip ~]$ ps -ef |grep nohup
ec2-user 16583 16153 0 19:50 pts/0 00:00:00 grep --color=auto nohup
[ec2-user@ip-172-31-41-213 ~]$ kill 16583
-bash: kill: (16583) - No such process
// try 3rd time, kill 1 PID up
[ec2-user@ip-myip ~]$ ps -ef |grep nohup
ec2-user 16584 16153 0 19:50 pts/0 00:00:00 grep --color=auto nohup
[ec2-user@ip-myip ~]$ kill 16585
-bash: kill: (16585) - No such process
This is quite a struggle for me right now, since I need to kill/restart this nohup process. Any help is appreciated!
EDIT - I tried this approach to killing the process because it was posted as an answer in this thread (Prevent row names to be written to file when using write.csv) and was the 2nd highest rated answer.
You are misreading the ps output: you are trying to kill your own grep process...
ec2-user 16580 16153 0 19:50 pts/0 00:00:00 grep --color=auto nohup
The command is grep --color=auto nohup
Also, you can't really kill nohup itself: nohup launches your command in a particular way, but after launching it, the nohup process dies, so there is no long-lived nohup process left to kill.
If you want to grep the ps output without matching the grep process itself:
ps -ef | grep '[n]ohup'
or
pgrep -fl nohup
These work because the pattern [n]ohup still matches the text nohup in other processes' command lines, but the grep command line itself now contains [n]ohup, so grep no longer finds its own process (and pgrep never matches itself). Your kill failed because the PID you grabbed was not a nohup PID but the grep itself...
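Since nohup itself exits right away, search for the command it started instead. A sketch, assuming the long-running job was launched as nohup myscript.sh & (myscript.sh is a placeholder for your actual command):
$ pgrep -fl myscript.sh    # find the surviving process by its command line
12345 myscript.sh
$ kill 12345               # the PID shown here is illustrative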
This one works:
$ cat /etc/shells
# List of acceptable shells for chpass(1).
# Ftpd will not allow users to connect who are not using
# one of these shells.
/bin/bash
/bin/csh
/bin/ksh
/bin/sh
/bin/tcsh
/bin/zsh
But this one does not:
sudo -s 'echo /usr/local/bin/zsh >> /etc/shells'
/bin/bash: echo /usr/local/bin/zsh >> /etc/shells: No such file or directory
sudo takes the string as the name of a single command, so there is no shell to interpret the >> redirection. You should use a shell to interpret your command, like this:
sudo sh -c 'echo /usr/local/bin/zsh >> /etc/shells'
This executes sh with root privileges, and sh interprets the string as a shell command including >> as output redirection.
The only thing you really need sudo for is to open the protected file for writing. You can use the tee command to append to the file.
echo /usr/local/bin/zsh | sudo tee -a /etc/shells > /dev/null
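Here tee -a appends its stdin to /etc/shells with root privileges, while > /dev/null merely discards the copy tee echoes to stdout. Either way, you can confirm the line was appended (a sketch):
$ tail -1 /etc/shells
/usr/local/bin/zsh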
I have a running docker ubuntu container with just a bash script inside. I want to start my application inside that container with docker exec, like this:
docker exec -it 0b3fc9dd35f2 ./main.sh
Inside the main script I want to run another application with nohup, as it is a long-running application:
#!/bin/bash
nohup ./java.sh &
#with this strange sleep the script is working
#sleep 1
echo `date` finish main >> /status.log
The java.sh script is as follows (for simplicity it is a dummy script):
#!/bin/bash
sleep 10
echo `date` finish java >> /status.log
The problem is that java.sh is killed immediately after docker exec returns. The question is why?
The only solution I have found is to add a dummy sleep 1 to the first script after nohup is started; then the second process runs fine. Do you have any ideas why that is?
[EDIT]
A second solution is to add an echo or trap command to the java.sh script just before the sleep; then it works fine. Unfortunately, I cannot use this workaround, because instead of this dummy script I really have a java process.
This is not an answer, but I still don't have the required reputation to comment.
I don't know why the nohup doesn't work. But I found a workaround that worked, using your ideas:
docker exec -ti running_container bash -c 'nohup ./main.sh &> output & sleep 1'
Okay, let's join the two answers above :D
First, rcmgleite is exactly right: use the -d option to run the command as a 'detached' background process.
And second (the most important part!): if you run a detached process, you don't need nohup!
deploy_app.sh
#!/bin/bash
cd /opt/git/app
git pull
python3 setup.py install
python3 -u webui.py >> nohup.out
Execute this inside a container
docker exec -itd container_name bash -c "/opt/scripts/deploy_app.sh"
Check it
$ docker attach container_name
$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 11768 1940 pts/0 Ss Aug31 0:00 /bin/bash
root 887 0.4 0.0 11632 1396 pts/1 Ss+ 02:47 0:00 /bin/bash /opt/scripts/deploy_app
root 932 31.6 0.4 235288 32332 pts/1 Sl+ 02:47 0:00 python3 -u webui.py
I know this is a late response but I will add it here for documentation reasons.
When using nohup in bash and running the script with docker exec on a container, you should use:
$ docker exec -d 0b3fc9dd35f2 /bin/bash -c "./main.sh"
The -d option means:
-d, --detach    Detached mode: run command in the background
For more information about docker exec, see:
https://docs.docker.com/engine/reference/commandline/exec/
This should do the trick.
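For example, with the scripts from the question, something like this should let java.sh survive (a sketch; the container ID and /status.log are taken from the question):
docker exec -d 0b3fc9dd35f2 /bin/bash -c "./main.sh"
sleep 11                                         # java.sh sleeps for 10 seconds
docker exec 0b3fc9dd35f2 cat /status.log         # both 'finish' lines should now appear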