I run a Spring Boot app with the following command,
java -jar myapp-1.0.0.jar & echo $! > "myapp.pid"
and kill the process with one of the following commands:
kill `cat "myapp.pid"` or kill -9 `cat "myapp.pid"`
When I check afterwards, the process is indeed killed, but the terminal doesn't return to the prompt; it appears to hang. When I press a key, it returns to normal. What is the problem here?
The terminal isn't actually hung: when a background job is killed, the shell reports its termination only just before printing the next prompt, so the session merely looks stuck until you press a key. If you want to avoid this, run the app in the foreground:
java -jar myapp-1.0.0.jar
and kill the process from a different terminal. Note that run this way the PID file is no longer written, so look up the PID first (e.g. with pgrep -f myapp) and then run:
kill <PID> or kill -9 <PID>
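If you prefer to keep the app in the background from the same terminal, redirecting its output also keeps the session tidy (a sketch; myapp.log is an assumed log file name):
nohup java -jar myapp-1.0.0.jar > myapp.log 2>&1 &
echo $! > "myapp.pid"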
I currently have a bash script which launches my ros2 nodes. This works perfectly fine. I then tried to start the script as a background task and write its output to a file, but when I do so, I am unable to terminate the nodes later. Previously I terminated them by hitting Ctrl+C in the terminal and all nodes stopped; I tried to do the same by saving the PID when starting the script and then killing it afterwards, however the nodes keep running.
Is there any way to stop all nodes started by the script? Stopping all ros2 nodes outright is not possible, because I launch multiple in parallel.
Start
"./${SCRIPTFILE}" > $LOGFILE &
echo $! > $PIDFILE
Stop
kill -TERM $(cat $PIDFILE) 2> /dev/null
When you use kill -TERM $(cat $PIDFILE) 2> /dev/null, it sends SIGTERM, which is the cautious way to request a shutdown without endangering the integrity of open databases or files; however, the process can block or otherwise handle it. If you want to kill a process regardless of its state, use SIGKILL instead:
kill -9 $(cat $PIDFILE) 2> /dev/null
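Note that in this scenario, signaling the script's PID alone does not necessarily stop the nodes it spawned. When the script is started as a background job from an interactive shell, job control puts it in its own process group whose PGID equals the recorded PID, so one option (a sketch) is to signal the entire group by negating the PID:
kill -TERM -- -$(cat $PIDFILE) 2> /dev/null
This reaches every process still in that group; children that move themselves into a new session are not covered.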
So I want to start a docker container, then a Django back-end, and finally an Angular front-end, let them run as long as I need to test/develop, and then kill them when I'm done. To do this I first tried starting them all from one script with each running in the background, and having a second script do kill %n for both processes. This doesn't work because the background jobs belong to the first shell's job table, so the second script cannot reference them.
Then I tried this:
#!/bin/bash
# Exit Angular, Django and kill docker_img
function clean_up()
{
    echo "Exiting..."
    kill %2
    kill %1
    docker stop docker_img
    reset
    exit
}
# Trigger cleanup on CTRL + C
trap clean_up SIGINT
# Start docker database
docker start docker_img
# Start django backend
cd ~/Projects/DjangoBackend
source venv/bin/activate
python src/manage.py runserver &
sleep 3
echo 'Done starting django, starting angular'
sleep 1
# Start angular front end
cd ~/Projects/AngularFront
npm start &
However, after npm start & runs, the script reaches its end and the trap stops working, so it is effectively useless. I'm guessing this is because the trap is no longer active once my script has finished running, but I don't know how to fix that. What can I do?
If you are looking to kill a process on unix/linux, one way of doing it is to record its PID in a file using the ps -ef command,
and then use kill -9 to kill the process.
Example:
$ ps -ef | grep '<process_name>' | grep -v grep | awk '{print $2}' > pid.txt
$ kill -9 `cat pid.txt`
The ps -ef command lists all running processes; filtering with grep and the process name narrows it to the one you want (grep -v grep excludes the grep command itself, which would otherwise match as well).
awk extracts only the PID (the second column) from that output.
kill -9 forcefully kills the process.
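If pgrep/pkill are available (they are on most Linux systems), the lookup and kill can be done in one step, for example:
$ kill -9 $(pgrep -f '<process_name>')
or simply:
$ pkill -9 -f '<process_name>'
As above, prefer a plain kill (SIGTERM) first and keep -9 as a last resort, since SIGKILL gives the process no chance to clean up.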
The answer turned out to be pretty easy: all I had to do was add wait at the end of the script, which makes the script block until its child processes have finished. Since two of the processes are servers, they don't stop unless prompted, so the script simply waits until SIGINT is received, at which point it runs the clean_up function and exits gracefully.
Additionally, one could use the same trap with the EXIT trigger instead of SIGINT, to clean up when the script exits on its own because the processes have closed.
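A minimal sketch of that pattern (server_one and server_two are placeholders for the real commands, e.g. the Django dev server and npm start):
#!/bin/bash
clean_up() {
    echo "Exiting..."
    kill %1 %2 2> /dev/null
    exit
}
# Run clean_up on CTRL + C
trap clean_up SIGINT
server_one &
server_two &
# Block here until the jobs exit or SIGINT fires the trap
wait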
I'm working with ash/dash and am trying to kill a subprocess, which doesn't seem to respond:
sh & opens a subshell, and jobs reports [1]+ Stopped (tty Input) sh.
But trying to kill this job with kill %1 or kill 26672 doesn't work; jobs still reports [1]+ Stopped (tty Input) sh.
After bringing the job to the foreground with fg, the shell accepts input. Neither Ctrl+C nor Ctrl+Z works, but I can kill the process with exit or kill -SIGKILL $$, or stop/suspend it with kill -STOP $$ (there is no suspend command in ash).
On the other hand, doing the same with e.g. sleep 100 works fine until I fg it and stop the process with Ctrl+Z. Then I'm not able to kill the stopped job.
So what am I missing, and what could be the solution for killing a stopped job? Do I have to deal with set -m, and if so, how?
Thanks in advance.
You can try
kill -9 $(jobs -p)
Or just run the exit command to log out; the shell will then automatically kill the stopped jobs.
You can resume the stopped process with kill -SIGCONT %number, and if you then need to kill it, use kill -SIGTERM %number. A stopped process cannot act on SIGTERM; the signal stays pending until the process is continued, which is why the kill appeared to do nothing.
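For example, for job number 1 (a sketch):
kill -SIGCONT %1    # resume the stopped job so pending signals can be delivered
kill -SIGTERM %1    # now ask it to terminate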
If I start the script with ./test.sh &, I am able to kill it using kill -SIGINT PID.
But if I start my shell script using nohup ./test.sh &, I am unable to kill the process using kill -SIGINT PID.
How can I kill the script using kill -SIGINT PID?
The SIGINT signal means "interrupt from keyboard"; that's why it terminates a script running in the foreground, but not one running in the background or under nohup.
To properly terminate your process, use kill -TERM PID instead, which works in all three cases.
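If the script needs to run its own cleanup when terminated, one option (a sketch; the echo stands in for real cleanup) is to have test.sh trap SIGTERM in addition to SIGINT:
#!/bin/sh
# test.sh: run cleanup on INT or TERM, then exit
trap 'echo "cleaning up"; exit 0' INT TERM
while :; do
    sleep 1
done
kill -TERM PID then triggers the handler even when the script was started with nohup ./test.sh &.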
In a bash script I am writing, I am trying to start a process (sleep) in the background and then suspend it; finally, the process should be allowed to finish. For some reason though, when I send the kill command with the stop signal, the process just keeps running as if it had received nothing. I can do this from the command line, but the same thing in a bash script is not working as intended.
sleep 15 &
pid=$!
kill -s STOP "$pid"
jobs
kill -s CONT "$pid"
You can make it work by enabling 'monitor mode' (job control) in your script: set -m
Please see why-cant-i-use-job-control-in-a-bash-script for further information.
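Applied to the script above, that looks like the following (a sketch; the trailing wait is optional and simply lets the sleep finish):
#!/bin/bash
set -m               # enable job control (monitor mode)
sleep 15 &
pid=$!
kill -s STOP "$pid"
jobs                 # should now report the job as Stopped
kill -s CONT "$pid"
wait "$pid"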