Kill all processes on specific port via Jenkins - bash

I'd like to have a Jenkins job which kills all processes on port 5000 (bash).
The easy solution
fuser -k 5000/tcp
works fine when I execute the command in a terminal, but in Jenkins the "Execute shell" build step marks the build as a failure.
I have also tried
kill $(lsof -i -t:5000)
but again, while it works in a regular terminal, on Jenkins I get
kill: usage: kill [-s sigspec | -n signum | -sigspec] pid | jobspec ... or kill -l [sigspec]
Build step 'Execute shell' marked build as failure
Any ideas how to fix this?

I had the same problem. It did not work when no process was running on the port: Bash simply returned a non-zero status, but Jenkins treated that as a failure.
You can add an || true to your Jenkins job to tell Jenkins to proceed with the job even if the bash command fails.
So it's:
fuser -k 5000/tcp || true
See also: don't fail Jenkins build if execute shell fails
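For a freestyle job, a small guard in the "Execute shell" step keeps the build green whether or not something is listening on the port. This is only a sketch under the assumptions of the question (port 5000, and the Jenkins user being allowed to signal the process):
PORT=5000
# -t prints only PIDs; the guard avoids calling kill with an empty argument list
PIDS=$(lsof -t -i tcp:"$PORT" || true)
if [ -n "$PIDS" ]; then
    echo "Killing process(es) on port $PORT: $PIDS"
    kill $PIDS || true
else
    echo "Nothing listening on port $PORT"
fi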

Try putting the commands with their full paths:
/usr/bin/kill $(/usr/sbin/lsof -t -i:5000)
If the user running the Jenkins service is not the same as the user who owns the process on port 5000, you won't be able to kill the process. You may need to run this with sudo.
Try this:
su - jenkins   # or the user who runs Jenkins
/usr/bin/kill $(/usr/sbin/lsof -t -i:5000)
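If sudo does turn out to be necessary, one hedged way to set it up (assuming the service account is literally called jenkins and fuser lives in /usr/bin; adjust to your system) is to whitelist just that one command in sudoers and call it from the build step:
# /etc/sudoers.d/jenkins-fuser, edited with visudo (assumed file name):
#   jenkins ALL=(root) NOPASSWD: /usr/bin/fuser
# Jenkins "Execute shell" step:
sudo -n /usr/bin/fuser -k 5000/tcp || true   # -n fails instead of prompting for a password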

Maybe the Jenkins user can't see the processes because of privileges, so the expansion of $(lsof ...) is empty.
The error output may also be incomplete: if the lsof call fails, there will be a message on stderr.

The problem is that $ is a special character in Jenkins commands: it means you are referring to an environment variable.
You should try wrapping the command in single quotes.

I was facing the same issue, and in many cases standard input/output is disabled (especially when you have to ssh to the target machine). What you can do is create an executable shell file on the target server and execute that file.
So the steps would look something like this:
Step 1 -> create the shell file
cat > kill_process.sh << EOF
target_port_num=\`lsof -i:\${1} -t\`;
echo "Kill process at port is :: \${target_port_num}"
kill -9 \${target_port_num}
EOF
Step 2 -> make it executable
chmod +x kill_process.sh
Step 3 -> execute the shell file and pass the port number
./kill_process.sh 3005
Hope this helps.
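Since the scenario above involves ssh to the target machine, the same file can be shipped and invoked remotely from the Jenkins step. A rough sketch, where the user, host, and remote path are placeholders:
scp kill_process.sh deploy@target-host:/tmp/kill_process.sh
ssh deploy@target-host '/bin/bash /tmp/kill_process.sh 3005'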

Related

jobs command result is empty when process is run through script

I need to run rsync in the background through a shell script, but once it has started, I need to monitor the status of those jobs from the shell.
The jobs command returns nothing when it is run in the shell after the script exits. ps -ef | grep rsync shows that the rsync is still running.
I can check the status from within the script, but I need to run the script multiple times so it uses a different ip.txt file to push, so I can't keep the script running just to check the jobs status.
Here is the script:
for i in `cat $ip.txt`; do
rsync -avzh $directory/ user@"$i":/cygdrive/c/test/$directory 2>&1 > /dev/null &
done
jobs; #shows the jobs status while in the shell script.
exit 1
Output of jobs command is empty after the shell script exits:
root@host001:~# jobs
root@host001:~#
What could be the reason, and how could I get the status of the jobs while rsync is running in the background? I can't find an article online related to this.
Since your shell (the one from which you execute jobs) did not start rsync, it doesn't know anything about it. There are different approaches to fixing that, but it boils down to starting the background processes from your shell. For example, you can run the script you have with the Bash source command instead of executing it in a separate process. Of course, you'd then have to remove the exit 1 at the end, because otherwise it exits your shell.
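A minimal sketch of that approach, assuming the loop above lives in a file called push_rsync.sh with the exit 1 removed:
# Run the script in the current shell so the rsync jobs become this shell's children
source ./push_rsync.sh
# The background jobs are now visible here
jobs -l    # list the jobs together with their PIDs
wait       # optionally block until all rsync transfers have finished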

Script invoked from remote server unable to run service correctly

I have a Unix script that invokes another script on a remote Unix server.
Amongst other commands, I am stopping a service. The stop command essentially translates to
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "service aem stop"'
The service is getting stopped, but when I start the service back up it just creates the .pid file and does not perform the startup. When I run the command for start, i.e.
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "service aem start"'
it does not show any error. On going to the server and checking the status
service aemauthor status
Below message is displayed
aem dead but pid file exists
Also when starting the service by logging in to the server, it works as expected along with the message
Removing stale pidfile (pid: 8701)
Starting aem
We don't know the details of the service script of aem.
I guess the problem is related to the SIGHUP signal. When we log off from a shell or disconnect from ssh, the OS sends a HUP signal to all processes that were started in that terminated shell. If a process doesn't handle the HUP signal, it exits by default.
When we run a command via ssh remotely, the process started by that command receives a HUP signal after the ssh session is terminated.
We can use the nohup command to ignore the HUP signal.
You can try
ssh -t -t -q ${AEM_USER}@${SERVERIP} 'bash -l -c "nohup service aem start"'
If it works, you can use nohup to start aem inside the service script.
As mentioned at the stale pidfile syndrome, there are different reasons for pidfiles going stale, for instance issues with the way your service script handles removing it when the process exits... but considering you are only experiencing this when running remotely, I would guess it might be related to what is or is not being loaded by your profile. Check the most voted answer at the post below for some insights:
Why Does SSH Remote Command Get Fewer Environment Variables
As described in the comments of the mentioned post, you can try sourcing /etc/profile or ~/.bash_profile before executing your script to test it, or even try executing env locally and remotely to compare which variables are or are not being set.
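A quick way to make that comparison, reusing the variables already defined in the calling script:
# Environment of a local login shell
env | sort > local.env
# Environment the remote command actually sees when run non-interactively
ssh -q ${AEM_USER}@${SERVERIP} 'env' | sort > remote.env
# Variables that differ or are missing on the remote side
diff local.env remote.env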

I would like to find the process id of a Jenkins job

I would like to find a way to find the process id of a Jenkins job, so I can kill the process if the job hangs. The Jenkins instance is on Ubuntu. Sometimes we are unable to stop a job via the Jenkins interface. I am able to stop a job by killing its process id if I run a Jenkins job that contains a simple shell script where I manually collect the process id, such as:
#!/bin/bash
echo "Process ID: $$"
for i in {1..10000}
do
sleep 10;
echo "Welcome $i times"
done
In the command shell, I can run sudo kill -9 [process id] and it successfully kills the job.
The problem is, most of our jobs have multiple build steps and we have multiple projects running on this server. Many of our build steps are shell scripts, Windows batch files, and a few of them are Ant scripts. I'm wondering how to find the process id of the Jenkins job, which is the parent process of all of the build steps. As of now, I have to wait until all other builds have completed and restart the server. Thanks for any help!
On a *nix OS you can review the environment variables of a running process by inspecting /proc/$pid/environ and looking for Jenkins-specific variables like BUILD_ID, BUILD_URL, etc.
cat /proc/$pid/environ | tr '\0' '\n' | grep BUILD_URL   # entries in environ are NUL-separated
You can do this if you already know the $pid, or loop through the running processes.
This is an update to my question. For killing hung (zombie) jobs, I believe that this will only work for cases where Jenkins is running from the same server as its jobs. I doubt this would work if you are trying to kill a hung process running on a Jenkins slave.
#FIND THE PROCESS ID BASED ON JENKINS JOB
user@ubuntu01x64:~$ sudo egrep -l -i 'BUILD_TAG=jenkins-Wait_Job-11' /proc/*/environ
/proc/5222/environ
/proc/6173/environ
/proc/self/environ
# ONE OF THE PROCESSES LISTED IN THE EGREP OUTPUT IS THE 'EGREP' COMMAND ITSELF,
# SO LOOP THROUGH THE PROCESS IDS TO DETERMINE WHICH ONE IS
# STILL RUNNING
user@ubuntu01x64:~$ if [[ -e /proc/6173 ]]; then echo "yes"; fi
user@ubuntu01x64:~$ if [[ -e /proc/5222 ]]; then echo "yes"; fi
yes
# KILL THE PROCESS
sudo kill -9 5222
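Putting the pieces above together, a rough helper script could automate the lookup and the kill. The BUILD_TAG value is a placeholder (jenkins-<job name>-<build number>), and it assumes the build runs on the Jenkins master itself:
#!/bin/bash
TAG="jenkins-Wait_Job-11"   # placeholder: jenkins-<job name>-<build number>
# grep -l lists every /proc/<pid>/environ that contains the tag
for env_file in $(sudo grep -l "BUILD_TAG=${TAG}" /proc/*/environ 2>/dev/null); do
    pid=$(basename "$(dirname "$env_file")")
    [ "$pid" = "self" ] && continue      # skip the /proc/self entry (the grep itself)
    [ -d "/proc/$pid" ] || continue      # skip processes that have already exited
    echo "Killing PID $pid"
    sudo kill -9 "$pid"
done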

Jenkins job not stopping after running a remote script

I am building an executable jar file using Jenkins and copying it to a remote server. After the copying, I need to run the jar file on the remote server. I am using the SSH Plugin for executing the remote script.
The remote script looks like this:
startServer.sh
pkill -f MyExecutable
nohup java -jar /home/administrator/app/MyExecutable.jar &
Jenkins is able to execute the script file, but it does not stop the job after the execution. It keeps the process attached and keeps streaming its log in the Jenkins console. This is creating problems, since these lingering jobs block other jobs from executing.
How can I stop the job once the script is executed?
Finally, I was able to fix the problem. I am posting it for others' sake.
I used ssh -f user@server ....
This solved my problem:
ssh -f root@${server} sh /home/administrator/bin/startServer.sh
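If the remote script still writes to stdout/stderr, it can also help to redirect its output so the ssh session has nothing left to stream back to the Jenkins console; a variant of the same call (paths as in the question) would be:
ssh -f root@${server} 'sh /home/administrator/bin/startServer.sh > /dev/null 2>&1'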
I ran into a similar issue using the Publish Over SSH Plugin. For some reason Jenkins wasn't stopping after executing the remote script. Ticking the below configuration fixed the problem.
SSH Publishers > Transfers > Advanced > Exec in pty
Hope it helps someone else.
I got the solution for you, my friend.
Make sure to add usePty: true in the pipeline step that you are using, which will enable the execution of sudo commands that require a tty (and possibly help in other scenarios too).
sshTransfer(
    sourceFiles: "target/*.zip",
    removePrefix: "target",
    remoteDirectory: "'/root/'yyyy-MM-dd",
    execTimeout: 300000,
    usePty: true,
    verbose: true,
    execCommand: '''
        pkill -f MyExecutable
        nohup java -jar /home/administrator/app/MyExecutable.jar &
        echo $! >> /tmp/jenkins/jenkins.pid
        sleep 1
    '''
)

Run SSH command nohup then exit from server via Jenkins

So I've tried googling and reading a few questions on here as well as elsewhere and I can't seem to find an answer.
I'm using Jenkins and executing a shell script to scp a .jar file to a server and then sshing in, running the build, and then exiting out of the server. However, I cannot get out of that server for the life of me. This is what I'm running, minus the sensitive information:
ssh root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod &; exit'
I've tried doing && exit and exit;, but none of it gets me out of the server, and Jenkins just spins forever, so the Jenkins build never actually finishes.
Any help would be sweet! I appreciate it.
So I just took off the exit and ran ssh -f root@x.x.x.x before the command, and it worked. The -f just runs the ssh command in the background so Jenkins isn't sitting around waiting.
The usual way of starting a command and sending it to the background is nohup command &.
Try this. It is working for me; read the source for more information.
nohup some-background-task &> /dev/null &   # No space between & and >!
Example:
ssh root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod &> /dev/null &'
There is no need for the exit keyword.
source : https://blog.jakubholy.net/2015/02/17/fix-shell-script-run-via-ssh-hanging-jenkins/
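Note that &> is a Bash extension; if the remote login shell is a plain POSIX sh, the portable form of the same fix is:
ssh root@x.x.x.x 'killall -9 java; nohup java -jar /root/project.jar -prod > /dev/null 2>&1 &'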
