So I've tried googling and reading a few questions on here as well as elsewhere and I can't seem to find an answer.
I'm using Jenkins and executing a shell script to scp a .jar file to a server and then sshing in, running the build, and then exiting out of the server. However, I cannot get out of that server for the life of me. This is what I'm running, minus the sensitive information:
ssh root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod &; exit'
I've tried doing && exit and exit;, but none of it will get me out of the server, and Jenkins just spins forever. So the Jenkins build never actually finishes.
Any help would be sweet! I appreciate it.
So I just took off the exit and ran an ssh -f root@x.x.x.x before the command, and it worked. The -f just runs the ssh command in the background, so Jenkins isn't sitting around waiting.
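For anyone else hitting this, the final working command looked roughly like this (reconstructed from the description above, so the exact quoting may differ):
ssh -f root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod &'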
The usual way of starting a command and sending it to the background is nohup command &
Try this; it works for me. Read the source below for more information.
nohup some-background-task &> /dev/null &  # No space between & and > !
Example:
ssh root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod &> /dev/null &'
There is no need for the exit keyword.
Source: https://blog.jakubholy.net/2015/02/17/fix-shell-script-run-via-ssh-hanging-jenkins/
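Putting it together, the whole Jenkins "Execute shell" step could look something like this (the local path target/project.jar is just an assumed example):
# copy the freshly built jar to the server (local path is an assumption)
scp target/project.jar root@x.x.x.x:/root/project.jar
# restart it with output detached so the ssh command can return immediately
ssh root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod &> /dev/null &'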
Related
I have a Unix script that invokes another script on a remote Unix server.
Amongst other commands, I am stopping a service. The stop command essentially translates to:
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "service aem stop"'
The service is getting stopped, but when I start the service back up it just creates the .pid file and does not perform the startup. When I run the start command, i.e.
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "service aem start"'
it does not show any error. On going to the server and checking the status with
service aemauthor status
the below message is displayed:
aem dead but pid file exists
Also, when starting the service by logging in to the server directly, it works as expected, along with the message:
Removing stale pidfile (pid: 8701)
Starting aem
We don't know the details of the service script of aem.
I guess the problem is related to the SIGHUP signal. When we log off from a shell or disconnect from ssh, the OS sends the HUP signal to all processes started in that terminated shell. If a process doesn't handle the HUP signal, it exits by default.
When we run a command remotely via ssh, the process started by that command will receive the HUP signal after the ssh session is terminated.
We can use the nohup command to ignore the HUP signal.
You can try:
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "nohup service aem start"'
If it works, you can use nohup command to start aem in the service script.
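We don't know what the aem init script actually looks like, but as a rough sketch (the jar name and paths are assumptions), its start section could wrap the process like this:
# hypothetical start() inside /etc/init.d/aem
start() {
    echo "Starting aem"
    # detach from the terminal so an ssh logoff cannot HUP the process
    nohup java -jar /opt/aem/app/aem-quickstart.jar > /dev/null 2>&1 &
    echo $! > /var/run/aem.pid
}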
As mentioned at the stale pidfile syndrome, there are different reasons for pidfiles going stale, like for instance issues with the way your script handles its removal when the process exits... but considering you're only experiencing this when running remotely, I would guess it might be related to what is (or isn't) being loaded by your profile... check the top-voted answer at the post below for some insights:
Why Does SSH Remote Command Get Fewer Environment Variables
As described in the comments of the mentioned post, you can try sourcing /etc/profile or ~/.bash_profile before executing your script to test it, or even execute env locally and remotely to compare which variables are being set.
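For instance, both suggestions could be tested roughly like this (using the same variables as the question):
# force the login profile before starting the service
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "source /etc/profile; service aem start"'
# compare the environment the remote command sees with the local one
ssh ${AEM_USER}@${SERVERIP} env | sort > remote.env
env | sort > local.env
diff local.env remote.env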
I'd like to have a Jenkins job which kills all processes on port 5000 (bash).
The easy solution
fuser -k 5000/tcp
works fine when I execute this command in the terminal, but Jenkins ("Execute shell") marks the build as a failure.
I have tried also
kill $(lsof -i -t:5000)
but again, while it works in a regular terminal, on Jenkins I get:
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
Build step 'Execute shell' marked build as failure
Any ideas how to fix this?
I had the same problem. It did not work when the process was not running: bash just carried on, but Jenkins failed.
You can add an || true to your Jenkins job to tell Jenkins to proceed with the job even if the bash command fails.
So it's:
fuser -k 5000/tcp || true
see also don't fail jenkins build if execute shell fails
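So the whole "Execute shell" build step can be as small as this (port 5000 taken from the question):
# kill whatever is listening on port 5000, but don't fail the build if nothing is
fuser -k 5000/tcp || true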
Try running the commands with their full paths:
/usr/bin/kill $(/usr/sbin/lsof -i -t:5000)
If the user running the Jenkins service is not the same as the user owning the process on port 5000, you won't be able to kill the process. Maybe you will need to run this with sudo.
Try this:
su - jenkins  # or whichever user runs Jenkins
/usr/bin/kill $(/usr/sbin/lsof -i -t:5000)
Maybe the jenkins user can't see the processes because of privileges, so the expansion of $(lsof ..) is empty.
The error output may not be complete, because if the lsof call fails there will be a message on stderr.
The problem is that $ is a special character in Jenkins commands: it means you are referring to an environment variable.
You should try writing the command wrapped in single quotes.
I was facing the same issue, and in many cases standard input/output is disabled (especially when you ssh to the target machine). What you can do is create an executable shell file on the target server and execute that file.
So, the steps would look something like this:
Step 1 -> create the shell file
cat > process_killer.sh << EOF
target_pids=\$(lsof -i:\${1} -t)
echo "Killing process(es) on port \${1} :: \${target_pids}"
kill -9 \${target_pids}
EOF
Step 2 -> make it executable
chmod +x process_killer.sh
Step 3 -> execute the shell script, passing the port number
./process_killer.sh 3005
Hope this helps.
I am building an executable jar file using Jenkins and copying it to a remote server. After the copying, I need to run the jar file on the remote server. I am using the SSH Plugin for executing the remote script.
The remote script looks like this:
startServer.sh
pkill -f MyExecutable
nohup java -jar /home/administrator/app/MyExecutable.jar &
Jenkins is able to execute the script file, but it is not stopping the job after the execution. It keeps the process running and keeps showing its log in the Jenkins console. This is creating problems, since these never-ending jobs block other jobs from executing.
How can I stop the job once the script is executed?
Finally, I was able to fix the problem. I am posting it for others' sake.
I used ssh -f user@server ....
This solved my problem:
ssh -f root@${server} sh /home/administrator/bin/startServer.sh
I ran into a similar issue using the Publish Over SSH Plugin. For some reason Jenkins wasn't stopping after executing the remote script. Ticking the below configuration fixed the problem.
SSH Publishers > Transfers > Advanced > Exec in pty
Hope it helps someone else.
I got the solution for you, my friend.
Make sure to add usePty: true in the pipeline step you are using, which will enable the execution of sudo commands that require a tty (and possibly help in other scenarios too).
sshTransfer(
sourceFiles: "target/*.zip",
removePrefix: "target",
remoteDirectory: "'/root/'yyyy-MM-dd",
execTimeout: 300000,
usePty: true,
verbose: true,
execCommand: '''
pkill -f MyExecutable
nohup java -jar /home/administrator/app/MyExecutable.jar &
echo $! >> /tmp/jenkins/jenkins.pid
sleep 1
'''
)
I ssh to another server and run a shell script like this: nohup ./script.sh 1>/dev/null 2>&1 &
Then I type exit to exit from the server. However, it just hangs. The server is Solaris.
How can I exit properly without it hanging?
Thanks.
I assume that this script is a long-running one. In that case you need to detach the process from the terminal that you wish to close when you terminate your ssh session.
Actually, you have already done most of the work by redirecting both stdout and stderr to /dev/null; however, you didn't do that for stdin.
I used the test case of:
ssh localhost
nohup sleep 10m &> /dev/null &
^D
# hangs
While
ssh localhost
nohup sleep 10m &> /dev/null < /dev/null &
^D
# exits
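Applied to the original command from the question, that would be something like the following (written with > /dev/null 2>&1 rather than &> because the login shell on Solaris may not be bash):
nohup ./script.sh > /dev/null 2>&1 < /dev/null &
exit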
I second the recommendation to use the excellent GNU screen, which will do this for you, among other things.
Oh, and have you considered running the script directly and not within a shell? I.e.:
ssh user@host script.sh
If you're trying to leave a command running remotely after you close your SSH link, I strongly recommend you use screen and learn to detach the screen. That's much better than leaving background processes around; it also lets you reconnect and see what the process is up to.
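A typical screen workflow for this would be roughly (the session name is arbitrary):
ssh user@host
screen -S myjob        # start a named screen session on the remote machine
./script.sh            # run the long job inside it
# press Ctrl-A then d to detach, then exit the ssh session normally
# later: ssh user@host and run "screen -r myjob" to reattach and check on it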
Since you haven't provided us with script.sh, I don't think we can know for sure why the command is hanging.
You can use the command:
~.
This command closes the ssh session.
sh -c ./script.sh &
In a bash script I execute a command on a remote machine through ssh. If the user breaks the script by pressing Ctrl+C, it only stops the script - not even the ssh client. Moreover, even if I kill the ssh client, the remote command is still running...
How can I make bash kill the local ssh client and the remote command invocation on Ctrl+C?
A simple script:
#!/bin/bash
ssh -n -x root@db-host 'mysqldump db' -r file.sql
Eventually I found a solution like this:
#!/bin/bash
ssh -t -x root@db-host 'mysqldump db' -r file.sql
So - I use '-t' instead of '-n'.
Removing '-n', or using a different user than root, does not help.
When your ssh session ends, your shell will get a SIGHUP (hang-up signal). You need to make sure it sends that on to all processes started from it. For bash, try shopt -s huponexit; your_command. That may not work, because the man page says huponexit only applies to interactive shells.
I remember running into this with users running jobs on my cluster, and whether they had to use nohup or not (to get the opposite behaviour of what you want) but I can't find anything in the bash man page about whether child processes ignore SIGHUP by default. Hopefully huponexit will do the trick. (You could put that shopt in your .bashrc, instead of on the command line, I think.)
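Applied to the script from the question, the suggestion would look like this (with the caveat above that huponexit may only take effect in interactive shells):
#!/bin/bash
shopt -s huponexit    # ask bash to SIGHUP its jobs when it exits
ssh -t -x root@db-host 'mysqldump db' -r file.sql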
Your ssh -t should work, though, since when the connection closes, reads from the terminal will get EOF or an error, and that makes most programs exit.
Do you know what the options you're passing to ssh do? I'm guessing not. The -n option redirects input from /dev/null, so the process you're running on the remote host probably isn't seeing SIGINT from Ctrl-C.
Now, let's talk about how bad an idea it is to allow remote root logins:
It's a really, really bad idea. Have a look at HOWTO: set up ssh keys for some suggestions on how to securely manage remote process execution over ssh. If you need to run something with privileges remotely, you'll probably want a solution involving an ssh public key with an embedded command and a script that runs as root courtesy of sudo.
trap "some_command" SIGINT
will execute some_command locally when you press Ctrl+C. help trap will tell you about its other options.
Regarding the ssh issue, I don't know much about ssh. Maybe you can make it call ssh -n -x root@db-host 'killall mysqldump' instead of some_command to kill the remote command?
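For example, a sketch of the script from the question with such a trap (the remote killall is just the cleanup suggested above, so treat it as an assumption about what needs cleaning up):
#!/bin/bash
# on Ctrl+C, kill the remote dump before exiting locally
trap 'ssh -n -x root@db-host "killall mysqldump"; exit 130' INT
ssh -n -x root@db-host 'mysqldump db' -r file.sql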
What if you don't want to require using "ssh -t" (for those as forgetful as I am)?
I stumbled upon looking at the parent PID, because Ctrl+C from the initiating session results in the ssh-launched process on the remote host exiting, although its child process continues. By way of example, here's my script on the remote server:
#!/bin/bash
Answer=(Alive Dead)
Index=0
while [ ${Index} -eq 0 ]; do
    # kill -0 sends no signal; it only tests whether the parent PID still exists
    if ! kill -0 ${PPID} 2> /dev/null ; then Index=1; fi
    echo "Parent PID ${PPID} is ${Answer[$Index]} at $(date +%Y%m%d%H%M%S%Z)" > ~/NowTime.txt
    sleep 1
done
I then invoke it with "ssh remote_server ./test_script.sh"
"watch cat ~/NowTime.txt" on the remote server shows the timestamp in the file increasing and declaring that the parent process is alive; once I hit CTRL/C in the launching process, the script on the remote server notes that its parent process has died, and the script exits.