I am building an executable jar file using Jenkins and copying it to a remote server. After the copy, I need to run the jar file on the remote server. I am using the SSH Plugin to execute the remote script.
The remote script looks like this:
startServer.sh
pkill -f MyExecutable
nohup java -jar /home/administrator/app/MyExecutable.jar &
Jenkins is able to execute the script file, but it does not stop the job after the execution. It keeps the process running and keeps showing its log in the Jenkins console. This is creating problems, since these lingering jobs block other jobs from executing.
How can I stop the job once the script is executed?
Finally, I was able to fix the problem. I am posting it for others' sake.
I used ssh -f user@server ....
This solved my problem:
ssh -f root@${server} sh /home/administrator/bin/startServer.sh
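One thing worth double-checking in startServer.sh itself: if the background java process keeps the SSH session's stdout/stderr open, the session (and therefore the Jenkins job) can also hang. A minimal sketch of the script with the output redirected (the app.log path is just an assumption):
#!/bin/sh
# Stop any running instance, then start it detached from the SSH session.
pkill -f MyExecutable
# Redirecting stdout/stderr keeps the background process from holding the
# session's streams open; app.log is a placeholder path.
nohup java -jar /home/administrator/app/MyExecutable.jar > /home/administrator/app/app.log 2>&1 &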
I ran into a similar issue using the Publish Over SSH Plugin. For some reason Jenkins wasn't stopping after executing the remote script. Ticking the configuration below fixed the problem.
SSH Publishers > Transfers > Advanced > Exec in pty
Hope it helps someone else.
I got the solution for you, my friend.
Make sure to add usePty: true in the pipeline that you are using. This will enable the execution of sudo commands that require a tty (and possibly help in other scenarios too).
sshTransfer(
    sourceFiles: "target/*.zip",
    removePrefix: "target",
    remoteDirectory: "'/root/'yyyy-MM-dd",
    execTimeout: 300000,
    usePty: true,
    verbose: true,
    execCommand: '''
        pkill -f MyExecutable
        nohup java -jar /home/administrator/app/MyExecutable.jar &
        echo $! >> /tmp/jenkins/jenkins.pid
        sleep 1
    '''
)
Related
I have a Unix script that invokes another script on a remote Unix server.
Amongst other commands, I am stopping a service. The stop command essentially translates to
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "service aem stop"'
The service is getting stopped, but when I start the service back up it just creates the .pid file and does not perform the startup. When I run the command for start, i.e.
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "service aem start"'
it does not show any error. On going to the server and checking the status
service aemauthor status
the below message is displayed:
aem dead but pid file exists
Also when starting the service by logging in to the server, it works as expected along with the message
Removing stale pidfile (pid: 8701)
Starting aem
We don't know the details of the service script of aem.
I guess the problem is related to the SIGHUP signal. When we log off from a shell or disconnect from ssh, the OS sends the HUP signal to all processes that were started in that terminated shell. If a process doesn't handle the HUP signal, it exits by default.
When we run a command remotely via ssh, the process started by this command receives the HUP signal once the ssh session is terminated.
We can use the nohup command to ignore the HUP signal.
You can try
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "nohup service aem start"'
If it works, you can use the nohup command to start aem in the service script.
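If that turns out to be the issue, a minimal sketch of what the start branch of such a service script could look like with nohup added (the actual aem script is unknown here, so the start command and pid file below are placeholders):
# Hypothetical start() of an init-style script; adjust to the real aem script.
start() {
    # Detach the daemon from the controlling terminal and ignore HUP.
    nohup /opt/aem/crx-quickstart/bin/start > /dev/null 2>&1 &
    echo $! > /var/run/aem.pid
}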
As mentioned at the stale pidfile syndrome, there are different reasons for pidfiles getting stale, for instance issues with the way your script handles its removal when the process exits... but considering you're only experiencing this when running remotely, I would guess it might be related to what is or isn't being loaded by your profile... check the most-voted answer at the post below for some insights:
Why Does SSH Remote Command Get Fewer Environment Variables
As described in the comments of the mentioned post, you can try sourcing /etc/profile or ~/.bash_profile before executing your script to test it, or even try executing env locally and remotely to compare which variables are or aren't being sourced.
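For example, a quick way to compare the two environments (reusing the same variables as above):
# Environment of a login shell vs. a plain remote command.
ssh ${AEM_USER}@${SERVERIP} 'bash -l -c "env | sort"' > env_login.txt
ssh ${AEM_USER}@${SERVERIP} 'env | sort' > env_remote.txt
# Variables that only show up in env_login.txt are the ones your remote script is missing.
diff env_login.txt env_remote.txt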
I've downloaded and set up elasticsearch on an EC2 instance that I use to run Jenkins. I'd like to use Jenkins to run some unit tests that use the local elasticsearch.
My problem is that I haven't found a way to start Elasticsearch locally and run the tests afterwards, since the script doesn't proceed after starting ES because that process is never killed or detached.
I can do this by starting ES manually through SSH and then building a project with only the unit tests. However, I'd like to automate the ES launching.
Any suggestions on how I could achieve this? I've now tried using a single "Execute shell" block and two "Execute shell" blocks.
It is happening because you are starting the elasticsearch command in a blocking way. It means the command will wait until the Elasticsearch server shuts down, and Jenkins just keeps waiting.
You can use the following command:
./elasticsearch 2>&1 >/dev/null &
or
nohup ./elasticsearch 2>&1 >/dev/null &
It will run the command in a non-blocking way.
You can also add a small delay to allow the Elasticsearch server to start:
nohup ./elasticsearch 2>&1 >/dev/null & sleep 5
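Putting it together, a single "Execute shell" block could then look roughly like this (the Elasticsearch path, the port, and mvn test are assumptions; replace them with whatever matches your job):
# Start Elasticsearch detached so the shell step can continue.
nohup /opt/elasticsearch/bin/elasticsearch > /dev/null 2>&1 &

# Poll until it answers on port 9200 instead of relying on a fixed sleep.
for i in $(seq 1 30); do
    curl -s http://localhost:9200 > /dev/null && break
    sleep 2
done

# Run the unit tests that talk to the local Elasticsearch.
mvn test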
I'd like to have a Jenkins job which kills all processes on port 5000 (bash).
The easy solution
fuser -k 5000/tcp
works fine when I execute this command in the terminal, but on Jenkins ("Execute shell") it marks the build as a failure.
I have also tried
kill $(lsof -i -t:5000)
but again, while it works in a regular terminal, on Jenkins I get
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
Build step 'Execute shell' marked build as failure
Any ideas how to fix this?
I had the same problem. It did not work when the process was not running: bash just carried on, but Jenkins failed.
You can add an || true to your Jenkins job to tell Jenkins to proceed with the job even if the bash command fails.
So it's:
fuser -k 5000/tcp || true
see also don't fail jenkins build if execute shell fails
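An alternative to swallowing every failure with || true is to only attempt the kill when something is actually listening on the port, for example:
# fuser exits non-zero when nothing uses the port, so guard the kill with it;
# the step's exit status stays 0 either way.
if fuser 5000/tcp > /dev/null 2>&1; then
    fuser -k 5000/tcp
fi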
Try putting the commands with their full paths:
/usr/bin/kill $(/usr/sbin/lsof -i -t:5000)
If the user running the jenkins service is not the same as the user with the process on port 5000 you won't be able to kill the process. Maybe you will need to run this with sudo.
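If it does turn out to be a permissions problem, something along these lines might work, assuming the jenkins user has been granted a matching sudoers entry (note the lsof flags are reordered to -t -i:5000 here):
# Assumes a rule added with visudo such as:
#   jenkins ALL=(ALL) NOPASSWD: /usr/sbin/lsof, /usr/bin/kill
pids=$(sudo /usr/sbin/lsof -t -i:5000)
if [ -n "$pids" ]; then
    sudo /usr/bin/kill -9 $pids
fi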
Try this
su - jenkins  # or the user who runs Jenkins
/usr/bin/kill $(/usr/sbin/lsof -i -t:5000)
Maybe the jenkins user can't see the processes because of privileges, so the expansion of $(lsof ..) is empty.
The error output may not be complete, because if the lsof call fails there will be a message on stderr.
The problem is that $ is a special character in Jenkins commands. It means you are referring to an environment variable.
You should try writing the command wrapped with single quotes.
I was facing the same issue, and in many cases standard input/output is disabled (especially when you have SSHed to the target machine). What you can do is create an executable shell file on the target server and execute that file.
So, the steps would look something like below:
Step 1 -> create the shell file
cat > process_killer.sh << EOF
target_port_num=\`lsof -i:\${1} -t\`;
echo "Kill process at port is :: \${target_port_num}"
kill -9 \${target_port_num}
EOF
Step 2 -> make it executable
chmod +x process_killer.sh
Step 3 -> execute the script and pass the port number
./process_killer.sh 3005
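With the file in place on the target server, the Jenkins step only needs to invoke it over SSH, something like (user, host and path are placeholders):
# Run the helper remotely; the port number becomes $1 inside the script.
ssh user@target-server '/home/user/process_killer.sh 3005'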
Hope this helps.
So I've tried googling and reading a few questions on here as well as elsewhere and I can't seem to find an answer.
I'm using Jenkins and executing a shell script to scp a .jar file to a server and then SSHing in, running the build, and then exiting the server. However, I cannot get out of that server for the life of me. This is what I'm running, minus the sensitive information:
ssh root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod &; exit'
I've tried doing && exit and exit;, but none of it will get me out of the server, and Jenkins just spins forever. So the Jenkins build never actually finishes.
Any help would be sweet! I appreciate it.
So I just took off the exit and put ssh -f root@x.x.x.x in front of the command, and it worked. The -f just runs the ssh command in the background so Jenkins isn't sitting around waiting.
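For completeness, the whole command then looks something like this (the redirect is an extra precaution so the background java process doesn't keep the session's output streams open):
ssh -f root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod > /dev/null 2>&1 &'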
The usual way of starting a command and sending it to the background is nohup command &
Try this. This is working for me. Read the source for more information.
nohup some-background-task &> /dev/null &   # No space between & and > !
Example:
ssh root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod &> /dev/null &'
No need for the exit keyword.
source : https://blog.jakubholy.net/2015/02/17/fix-shell-script-run-via-ssh-hanging-jenkins/
How can I kill a Hudson job from a bash script when the log file doesn't change? (Hudson is frozen.)
Context: I have a bash script that checks whether a log file has changed after X seconds, and I want to modify it so that if the timeout is reached and there is no error in the console, this means the Hudson job is frozen, and I want to be notified about it.
It might be easier to use the Build Timeout plugin.
Finally the solution was to use the following command:
#!/bin/bash
#if the log file does not change
if [ "something" ]; then
kill -9 $(pidof eclipse)
fi
This kills the Eclipse instance (which is what calls Hudson) and continues with the build of the other elements, and that is OK for my task.
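For reference, the "log file did not change" check itself can be written by comparing the file's modification time against a timeout; a sketch using GNU stat (the log path and timeout are placeholders):
#!/bin/bash
LOG=/var/log/hudson/build.log   # placeholder: the log file being watched
TIMEOUT=600                     # placeholder: seconds without changes

last_change=$(stat -c %Y "$LOG")   # mtime as epoch seconds (GNU coreutils)
now=$(date +%s)

if [ $((now - last_change)) -ge "$TIMEOUT" ]; then
    # Log is stale: assume Hudson is frozen and kill the Eclipse instance that drives it.
    kill -9 $(pidof eclipse)
fi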