Jenkins job to kill process (Tomcat) over ssh - shell

I am using a Jenkins job to run a few simple shell commands (over ssh, via the Jenkins SSH Plugin); the commands are supposed to shut down a running Tomcat server:
sudo /opt/tomcat/bin/catalina.sh stop
ps xu | awk '/[t]omcat/{print $2}' | xargs -r kill -9
The job executes fine and does shut Tomcat down, but unfortunately it is also marked as failed; the full output is:
[SSH] executing pre build script:
sudo /opt/tomcat/bin/catalina.sh stop
ps xu | awk '/[t]omcat/{print $2}' | xargs kill -9
[SSH] exit-status: -1
Finished: FAILURE
Any idea why the exit code of the command is -1? I have tried several variations without any luck.
Thanks.

You should examine the output of ps xu. Since kill terminates the processes sequentially, it may be that when ps xu yields multiple tomcat processes, the remaining ones terminate automatically once the first one is killed. kill then attempts to signal processes that no longer exist and reports a failure.

I suspect that Jenkins doesn't like the "no process killed" message that the kill command prints when it has nothing to kill. Try redirecting stdout to /dev/null.
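Putting both observations together, here is a hedged sketch of a more forgiving pre-build script (the -r flag, || true, and explicit exit 0 are defensive additions, not part of the original job):
sudo /opt/tomcat/bin/catalina.sh stop
# -r: skip kill entirely if awk finds no PIDs; || true: ignore "no such process" errors
ps xu | awk '/[t]omcat/{print $2}' | xargs -r kill -9 || true
exit 0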

The question is a bit old, but since I stumbled upon it, here's another suggestion.
ps xu | awk '/[t]omcat/{print $2}'
returns the running tomcat AND the awk process itself; see here:
<user> 2370 0.0 0.0 26144 1440 pts/7 R+ 10:51 0:00 awk /[t]omcat/{print $2}
The awk process ends immediately, before xargs runs on its PID, so one of the kill invocations made by xargs exits with a code unequal to 0.
Try running killall tomcat

Can you just do pkill tomcat?
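If the exit status is the concern, a minimal variant of that idea for the Jenkins script (assuming pkill exists on the target host):
# pkill exits 1 when no process matched; don't let that fail the job
pkill tomcat || true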


kthreaddi process is causing a high CPU usage

I'm using Red Hat 7.6 as the OS on my server, and the top command shows high CPU usage from a kthreaddi process without any obvious reason. Now my solution is down, and so is my Fusion Middleware (I'm using Oracle as the database). Any solutions?
I experienced this problem a few days ago; the quickest solution I could find was the following (a consolidated sketch follows these steps):
1. Find where the process's executable lives:
ls -l /proc/<PID_of_process>/exe
This command shows the path of the binary the process is running from.
2. Create a folder at that path (removing the file first, if it is still there), so the miner cannot recreate its binary.
3. Remove all the permissions for all users:
chmod ugo-rwx /tmp/<name_of_folder>
4. Then kill the process:
kill -9 <PID_of_process>
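A minimal sketch tying these steps together; the PID and folder name are placeholders carried over from the steps above, not real values:
#!/bin/bash
# Sketch only: fill in the PID and the path reported by ls -l /proc/<PID>/exe
pid=<PID_of_process>
dir=/tmp/<name_of_folder>
ls -l /proc/$pid/exe   # confirm where the binary lives
mkdir -p "$dir"        # recreate the path as a folder so the binary cannot come back
chmod ugo-rwx "$dir"   # strip all permissions for all users
kill -9 "$pid"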
Kindly run the one-liner bash command below from crontab every minute and it will kill the virus in a scripted way.
kill -9 $( (ps aux | grep -i 'kdevtmpfsi\|kinsing\|kthreaddi') 2>/dev/null | grep -v grep | awk '{print $2}' )
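For reference, a possible crontab entry, assuming the one-liner is saved as /usr/local/bin/kill-miner.sh (the script name and path are illustrative):
# run every minute
* * * * * /usr/local/bin/kill-miner.sh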

gitlab runner: kill another job with exit status 0

I am using gitlab-runner on CentOS 7. I have created a pipeline which has multiple jobs. One job keeps running, and I need to stop it from another job, so I kill that process (job) from the second job using the command below (in the .gitlab-ci.yml file).
script:
- ps -ef | grep ProcessName | awk '{print $2}' | xargs kill -9
When the second job kills the first, the first job fails with exit status 1. I need it to pass with exit status 0, as that is the required behavior in my scenario. So essentially, I need to kill the first job from another job, but the killed job must be reported as passed, not failed.
Solved it by sending kill -2 (SIGINT) instead of kill -9 (SIGKILL).
Here is the complete command.
ps -ef | grep ProcessName | awk '{print $2}' | xargs kill -2
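A plausible explanation, with a sketch assuming the first job runs a shell script: kill -2 sends SIGINT, which a process can trap and turn into a clean exit, whereas SIGKILL (kill -9) can never be caught:
#!/bin/bash
# First job's script: exit 0 when the second job sends SIGINT
trap 'echo "stopped by the other job"; exit 0' INT
while true; do
  sleep 1   # stand-in for the real long-running work
done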

Shell script returning non zero value after killing process

I am trying to kill a process using a shell script. It looks like the shell itself is getting killed in the process. I am also seeing a non-zero return value from the script in the terminal.
I am running it on Amazon Linux 2 with sudo.
#!/bin/bash
kill -9 $(ps -ef | grep myapp | grep -v grep | awk '{print $2}')
I am executing like:
sudo ./myscript.sh
"echo $?" after executing is returning 137 instead of zero. Can someone please help to understand what is going wrong.
Another observation:
if i directly run kill command in my terminal, i.e below command,
kill -9 $(ps -ef | grep myapp | grep -v grep | awk '{print $2}')
I see echo $? is returning zero.
Update:
Problem solved. The name of the process I am trying to kill overlaps with the name of my script, so grep returns both PIDs and both processes get killed. That is why the exit code is 137: the script itself dies from SIGKILL, and 137 = 128 + 9. I also learnt that a better way of doing this is to use pkill, or pidof to get the PID.
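A minimal sketch of that safer approach, assuming the target binary is literally named myapp:
#!/bin/bash
# -x matches the exact process name, so this script (myscript.sh) is not matched
pkill -9 -x myapp || true   # pkill exits 1 if nothing matched; ignore that
exit 0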
If you want the exit code of your last-run command to be the exit code of the script, your script must end with exit $? as the last line. Any function called before that must also propagate $? so the chain flows to that final line. Otherwise some other exit status takes its place.
If the script is being killed alongside the application or script you are trying to kill, then your ps and grep work is likely including both in the results. Look at the output of the ps and grep while the script is running. You could add a line before your kill line that just prints the output of the ps and greps, so you can see what is actually getting killed.
Finally (and I don't think this is the case), if you are trying to end the script after the kill, manually run an exit (again, likely exit $?, for the reason stated above) where appropriate within the script.
Hope that helps you get where you are going.
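For illustration, a small sketch of that chaining (the function and process names are made up):
#!/bin/bash
do_kill() {
  kill -9 "$1"
  return $?   # propagate kill's status out of the function
}
do_kill "$(pidof myapp)"   # non-zero if myapp wasn't running or kill failed
exit $?                    # make that status the script's exit code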

how to Kill hadoop task started using hadoop jar command?

I'm pretty new to using Hadoop.
I used hadoop jar command like the one below -
hadoop jar $jarpath/BDAnalytics.jar \
bigdat.twitter.crawler.CrawlTwitter \
$crwlInputFile > $logsFldr/crawler_$1.log 2>&1 &
But I need to kill this process, and I am not able to understand how.
There are a lot of links about killing Hadoop jobs, but this is not a job; it is a task/java process.
I would highly appreciate it if you could let me know the command to kill such a process.
Thanks in advance!
-RG
You can use the shell command kill. For example, use ps -ef | grep bigdat.twitter.crawler.CrawlTwitter to find the pid, and use kill -9 pid_of_the_process to kill it. You can write a script containing the following command to do the kill action:
#!/bin/bash
# grep -v grep keeps the grep process itself out of the PID list
kill -9 $(ps -ef | grep bigdat.twitter.crawler.CrawlTwitter | grep -v grep | sed "s/\s\+/\t/g" | cut -f2)
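An equivalent but shorter variant, assuming pgrep is available (-f matches against the full command line, so the class name is found):
#!/bin/bash
# Kill the crawler by matching its main class in the java command line
kill -9 $(pgrep -f bigdat.twitter.crawler.CrawlTwitter)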

Killing processes SHELL

This command ps -ef | grep php returns a list of processes
I want to kill all those processes in one command or with a shell script.
Thanks
The easiest way to kill all commands with a given name is to use killall:
killall php
Note, this only sends a termination signal (SIGTERM). This should be enough if the processes are behaving. If they're not dying from that, you can forcibly kill them using
killall -9 php
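A common pattern, sketched here, is to escalate only if the polite signal did not work:
#!/bin/bash
killall php                           # polite request (SIGTERM)
sleep 5                               # give the processes time to shut down
killall -9 php 2>/dev/null || true    # force-kill any survivors, ignore "no process found"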
The normal way to do this is to use xargs, as in ps -ef | grep php | awk '{print $2}' | xargs kill (the awk step extracts the PID column; feeding the raw ps lines to kill would fail), but there are several ways to do this.
ps -ef lists all processes, and you then use grep to pick the lines that mention "php". This means that commands that merely have "php" somewhere in their command line will also match, and be killed. If you really want to match the command name (and not the arguments as well), it is probably better to use pgrep php.
You can use a shell backtick to provide the output of a command as arguments to another command, as in
kill `pgrep php`
If you just want to kill processes, there is a command pkill that matches a pattern against the command name. It cannot be used if you want to do something else with the processes, though. So if you want to kill all processes whose command contains "php", you can do this with pkill php.
Hope this helps.
You can find its pid (it's on the first column ps prints) and use the kill command to forcibly kill it:
kill -9 <pid you found>
Use xargs:
ps -ef | grep php | grep -v grep | awk '{print $2}' | xargs kill -9
grep -v grep excludes the grep command itself, and awk extracts the list of PIDs, which are then passed to the kill command.
Use pkill php. More on this topic in this similar question: How can I kill a process by name instead of PID?
