gitlab runner: kill another job with exit status 0 - continuous-integration

I am using gitlab-runner on CentOS 7. I have created a pipeline with multiple jobs. One job keeps running indefinitely, and I need to stop it from another job. So I kill that process (job) from the second job using the command below (in the gitlab-ci.yml file):
script:
- ps -ef | grep ProcessName | awk '{print $2}' | xargs kill -9
When the second job kills the first, the first job fails with exit status 1. I need it to pass with exit status 0, as that is the required behavior in my scenario. So essentially what I need is to kill the first job from another job, but the killed job must report its status as passed, not failed.

Solved it by sending kill -2 (SIGINT) instead of kill -9 (SIGKILL).
Here is the complete command:
ps -ef | grep ProcessName | awk '{print $2}' | xargs kill -2
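This works because kill -2 sends SIGINT, which a process can catch and handle, whereas SIGKILL (-9) terminates it immediately with a non-zero status. Presumably the long-running job handles SIGINT and exits cleanly, so GitLab sees exit status 0 and marks it as passed. A minimal sketch of such a handler, assuming the job is a shell script (run_one_step is a hypothetical placeholder for the job's real work):
#!/bin/bash
# Trap SIGINT (what kill -2 sends) and exit 0 so the CI job passes
trap 'echo "SIGINT received, shutting down cleanly"; exit 0' INT

# run_one_step is a placeholder for the job's actual work
while true; do
    run_one_step
    sleep 1
done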

Related

How do I do an npm stop after an npm run?

If I do:
npm run script
can I stop it with:
npm stop script
I tried it and it does not work.
I know that with the combination of "Ctrl + C" I can kill it, but I want to do it with a command.
Try something like this:
ps -ef | grep script | awk '{print $2}' | head -n1 | xargs kill -9
This command finds the first process named script in the list of all Unix processes created by all users and kills it using its PID.
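An alternative to grepping the process table is to record the PID yourself when starting the script. A minimal sketch (the script.pid file name is my own choice, and whether the signal reaches the node child that npm spawns can depend on the npm version, so verify on yours):
# Start the script in the background and save npm's PID
npm run script &
echo $! > script.pid

# Later, stop it by PID instead of searching the process list
kill "$(cat script.pid)" && rm -f script.pid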

Bash function to kill process

I made an alias for this function in order to kill processes in Bash.
In my .bashrc file:
kill_process(){
# $1 being a parameter for the process name
kill $(ps ax | grep "$1" | awk '{print $1}')
}
alias kill_process=kill_process
So, suppose I want to kill the meteor process:
Let's see all meteor processes:
ps aux | grep 'meteor' | awk '{print $2}'
21565
21602
21575
21546
Calling the kill_process function with the alias
kill_process meteor
bash: kill: (21612) - No such process
So, the kill_process function effectively terminates the meteor processes, but its kill command looks for a nonexistent PID. Notice that PID 21612 wasn't listed by ps aux | grep. Any ideas to improve the kill_process function to avoid this?
I think in your case the killall command would do what you want:
killall NAME
The standard way of killing processes by name is using killall, as Swoogan suggests in his answer.
As to your kill_process function: the grep expression that filters ps also matches grep's own process (you can see this by running the pipeline without awk), but by the time kill is invoked, that process is no longer running. That's the message you see.
Each time you run the command, grep runs again with a new PID: that's why you can't find it in the list when you test it.
You could:
Run ps first, pipe it into a file or variable, then grep
Filter grep's PID out of the list
(Simpler) Suppress kill's error output:
kill $(...) 2>/dev/null
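Putting those suggestions together, here is a sketch of the function rebuilt around pgrep (from the procps package), which never matches its own process, so the "No such process" message goes away:
kill_process() {
    # pgrep -f matches the full command line and exits non-zero
    # when nothing matches, so we can report that case cleanly
    local pids
    pids=$(pgrep -f "$1") || { echo "no process matching '$1'" >&2; return 1; }
    # $pids is intentionally unquoted so multiple PIDs word-split
    kill $pids
}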

ps aux auto close app

I'm trying to set up a task to kill certain server processes when the server gets into a weird state, such as when one process fails to boot but another keeps running, so not everything comes up. This is mainly a development task, so you can run jake killall to kill all processes associated with this project.
I'm having trouble figuring out how to get the PID after doing ps aux | grep [p]rocess\ name | {HOW DO I GET THE PID NOW?}, and then, after getting the ID, how do I pass that to kill -9 {PID HERE}?
The PID is the second column, so you can do
ps aux | grep '[p]rocess name' | awk '{print $2}'
All together,
my_pid=$(ps aux | grep '[p]rocess name' | awk '{print $2}')
kill -9 $my_pid
You could also use killall <program> or pkill <program>, or look up the PID with pgrep <program>.
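The [p]rocess name pattern is worth a note: grep's own entry in the process list contains the literal string [p]rocess name, which the regular expression (which matches process name) does not match, so grep never lists itself. Quoting the pattern also keeps the shell from glob-expanding the brackets; a sketch:
# grep's own command line shows "[p]rocess name", which the
# regex does not match, so you never kill a stale grep PID
ps aux | grep '[p]rocess name' | awk '{print $2}'

# Equivalent, where pgrep (procps) is available
pgrep -f 'process name'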

How to kill a Hadoop task started using the hadoop jar command?

I'm pretty new to using Hadoop.
I used a hadoop jar command like the one below:
hadoop jar $jarpath/BDAnalytics.jar \
bigdat.twitter.crawler.CrawlTwitter \
$crwlInputFile > $logsFldr/crawler_$1.log 2>&1 &
But I need to kill this process, and I am not able to figure out how.
There are a lot of links about killing Hadoop jobs, but this is not a job; it's a task/Java process.
I would highly appreciate it if you could let me know the command to kill such a process.
Thanks in advance!
-RG
You can use the shell command kill. For example, use ps -ef | grep bigdat.twitter.crawler.CrawlTwitter to find the PID, then use kill -9 pid_of_the_process to kill it. You can put the following commands in a script to do the kill in one step:
#!/bin/bash
kill -9 $(ps -ef | grep bigdat.twitter.crawler.CrawlTwitter | sed "s/\s\+/\t/g" | cut -f2)
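A shorter alternative, assuming pkill from procps is available: pkill -f matches against the full Java command line, which sidesteps both the grep self-match and the whitespace-to-tab juggling above:
#!/bin/bash
# -f matches the full command line, so the class name alone is enough
pkill -9 -f bigdat.twitter.crawler.CrawlTwitter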

Jenkins job to kill process (Tomcat) over ssh

I am using a Jenkins job to run a few simple shell commands (over ssh, via the Jenkins SSH Plugin); the commands are supposed to shut down a running Tomcat server:
sudo /opt/tomcat/bin/catalina.sh stop
ps xu | awk '/[t]omcat/{print $2}' | xargs -r kill -9
The job executes fine and does terminate the Tomcat, but unfortunately it also fails; the full output is:
[SSH] executing pre build script:
sudo /opt/tomcat/bin/catalina.sh stop
ps xu | awk '/[t]omcat/{print $2}' | xargs kill -9
[SSH] exit-status: -1
Finished: FAILURE
Any idea why the exit code of the command is -1? I have tried several variations without any luck.
Thanks.
You should examine the output of ps xu. Since kill terminates the processes sequentially, it may be that ps xu yields multiple Tomcat processes and the others exit automatically once the first one is killed. kill then attempts to terminate processes that no longer exist.
I suspect that Jenkins doesn't like the "No such process" message that kill prints when a PID is already gone. Try redirecting the output to /dev/null.
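For example, here is a variant of the kill line that should never fail the build; this is a sketch to verify on your own Jenkins setup. The -r flag (GNU xargs) skips kill entirely when awk finds no PIDs, stderr is discarded, and || true swallows a non-zero status from PIDs that exited on their own:
ps xu | awk '/[t]omcat/{print $2}' | xargs -r kill -9 2>/dev/null || true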
The question is a bit old, but as I stumbled upon it, here's another suggestion.
ps xu | awk '/[t]omcat/{print $2}'
returns the running tomcat AND the awk process itself; see here:
<user> 2370 0.0 0.0 26144 1440 pts/7 R+ 10:51 0:00 awk /[t]omcat/{print $2}
The awk process ends immediately, before xargs runs on its PID, so one of the kill invocations exits with a non-zero code.
Try running killall tomcat
Can you just do pkill tomcat?
