Is there some way I can get systemd to behave like this:
1. Send SIGTERM
2. Wait 2 minutes
3. Send SIGTERM again
4. Wait 2 minutes
5. Send SIGKILL
Steps 1, 2 and 5 are easy enough, as they only require adjusting TimeoutStopSec, but I don't see any way to accomplish steps 3 and 4. Is this possible?
Using custom ExecStop= commands, which systemd runs one after the other when the unit is stopped:
ExecStop=kill $MAINPID
ExecStop=sleep 120
ExecStop=kill $MAINPID
ExecStop=sleep 120
ExecStop=kill -KILL $MAINPID
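For context, a sketch of how the whole [Service] section might look. The ExecStart line is an assumption, absolute paths are used because some systemd versions require them in Exec lines, and TimeoutStopSec is raised because the default 90 seconds would cut the two-minute sleeps short:
[Service]
# assumed service binary
ExecStart=/usr/local/bin/myservice
# on "systemctl stop": TERM, wait, TERM, wait, KILL
ExecStop=/bin/kill $MAINPID
ExecStop=/bin/sleep 120
ExecStop=/bin/kill $MAINPID
ExecStop=/bin/sleep 120
ExecStop=/bin/kill -KILL $MAINPID
# raised from the default 90s so the sleep steps are not cut short
TimeoutStopSec=300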
I am using minicom to connect to my modem (Quectel EC25). The goal is to send different AT commands to retrieve certain information about the modem and save it in an output file. I wrote the following bash script:
#!/bin/bash
while true; do
    sudo minicom -D /dev/ttyUSB2 -S script.txt -C AT_modems_responses_1.txt
    sleep 1
done
where script.txt is:
send AT
expect OK
send ATI
expect OK
send AT+COPS?
expect OK
start:
send AT+CCLK?
expect OK
send AT+CREG?
expect OK
send AT+CSQ
expect OK
sleep 1
goto start
The problem is that the AT commands stop working after 2 minutes (AT+CCLK? & AT+CSQ).
Why does it stop? What is the problem? Should I work with the AT commands in a different way?
Thank you in advance
By default, runscript exits after 120 seconds (2 minutes). That is why minicom stopped working after 2 minutes; to run longer, a timeout has to be included in the script. For 5 minutes it should be:
timeout 300
I don't know how it can be configured to run indefinitely.
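A sketch of how the top of script.txt might look with the longer timeout added (the rest of the script stays the same; 300 seconds is just an assumed value):
timeout 300
send AT
expect OK
send ATI
expect OK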
I have written a bash script to carry out some tests on my system. The tests run in the background and in parallel. The tests can take a long time and sometimes I may wish to abort the tests part way through.
If I press Ctrl+C, it aborts the parent script but leaves the various children running. I want to be able to hit Ctrl+C (or something similar) to quit and then kill all child processes running in the background. I have a bit of code that does the job if I'm running the background jobs directly from the terminal, but it doesn't work in my script.
I have a minimal working example.
I have tried using trap in combination with pgrep -P $$.
#!/bin/bash
trap 'kill -n 2 $(pgrep -P $$)' 2
sleep 10 &
wait
I was hoping that hitting Ctrl+C (SIGINT) would kill everything that the script started, but it actually says:
./breakTest.sh: line 1: kill: (3220) - No such process
This number changes, but doesn't seem to apply to any running processes, so I don't know where it is coming from.
I guess that if the contents of the trap command get evaluated where the trap command occurs, that might explain the outcome; the 3220 PID might be for pgrep itself.
I'd appreciate some insight here
Thanks
I have found a solution using pkill. This example also deals with many child processes.
#!/bin/bash
trap 'pkill -P $$' SIGINT SIGTERM
for i in {1..10}; do
sleep 10 &
done
wait
This appears to kill all the child processes elegantly. Though I don't properly understand what the issue was with my original code, apart from sending the correct signal.
In bash, whenever you use & after a command, it places that command as a background job (these background jobs are called job specs), and each new background job gets the next job number until you exit that terminal session. You can use the jobs command to get the list of running background jobs. To work with these jobs you refer to them with % followed by the job ID. The jobs command also accepts other options, such as jobs -p to see the process IDs of all jobs, and jobs -p %JOB_SPEC to see the process ID of that particular job.
#!/usr/bin/env bash
trap 'kill -9 %1' 2
sleep 10 &
wait
or
#!/usr/bin/env bash
trap 'kill -9 $(jobs -p %1)' 2
sleep 10 &
wait
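A quick way to see the difference between those forms, assuming an interactive bash session:
sleep 100 &      # start a background job; bash reports it as job [1]
jobs             # list the job table, e.g. "[1]+  Running  sleep 100 &"
jobs -p          # print only the PIDs of all background jobs
jobs -p %1       # print only the PID of job 1
kill %1          # send SIGTERM to job 1 by job spec rather than PID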
I implemented something like this a few years back; you can take a look at it: async bash.
You can try something like the following:
pkill -TERM -P <your_parent_id_here>
I am using a systemd service which calls a process when it is started (e.g. systemctl start test.service). As per the design, the process stays in a loop forever, and we can see that the process exists using the ps command. We have also seen that the process is killed (as intended) by the systemctl stop command.
However, our requirement is that we want to do some safe shutdown operations from within the process before it gets killed. But I am not sure how to detect a systemd stop operation from within the process.
Does a systemctl stop test.service command send SIGKILL or SIGTERM signal to kill the process? How can I detect a systemctl stop operation from within a process?
By default, a SIGTERM is sent, followed by 90 seconds of waiting followed by a SIGKILL.
Killing processes with systemd is very customizable and well-documented.
I recommend reading all of man systemd.kill as well as reading about ExecStop= in man systemd.service.
To respond to those signals, refer to the signal handling documentation for the language you are using.
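For instance, if the service's main process were itself a shell script, a minimal sketch of reacting to the SIGTERM that systemctl stop sends might look like this (the cleanup step is a placeholder):
#!/bin/bash
safe_shutdown() {
    # placeholder for the real shutdown work
    echo "caught SIGTERM, shutting down cleanly"
    exit 0
}
trap safe_shutdown SIGTERM

# main loop of the service
while true; do
    sleep 1
done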
Does a systemctl stop test.service command send SIGKILL or SIGTERM signal to kill the process? How can I detect a systemctl stop operation from within a process?
systemd sends a SIGTERM signal to the process. In the process you have to register the signals that should be caught.
In the process, a SIGTERM handler, for example, can be registered like this:
#include <signal.h>
#include <stdio.h>

void signal_callback(int signum)
{
    printf("Process is going down\n");
}

signal(SIGTERM, signal_callback);  /* register the handler, e.g. in main() */
When SIGTERM is sent to the process, the signal_callback() function is executed.
Below is the script that I wrote to automate the testing:
while [ 1 ]; do
    val=`expr $val + 1`
    # This line needs keyboard interaction, so I can't run it in the background.
    # It takes too long to complete, so I need to kill it with Ctrl+C.
    ksh ./run.ksh
    echo "pid=$!"
    echo "pid=$$"
    sleep 40
    val1=$val
done
./run.ksh is a script that has some business logic to send data to another machine and wait for the response. Even after the response is received, it waits a reasonable amount of time to complete the processing, because it waits for the connection to be closed and does other cleanup activity.
My problem is that I want to kill that script after a few seconds by sending Ctrl+C. When I googled, I found that $! can be used to get the process ID of a background process, but that cannot be used in this case.
Is it possible to send the ctrl+C in the shell script?
Thanks in advance.
Use the timeout command. For example, this will kill it after 30 seconds:
timeout 30 ksh ./run.ksh
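If run.ksh only does its cleanup on Ctrl+C specifically, timeout can also be told to deliver SIGINT instead of its default SIGTERM; a sketch, keeping the same assumed 30-second limit:
timeout -s INT 30 ksh ./run.ksh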
I am working with programs that use Ctrl+C to stop a task, and what I want to do is run that task for a certain number of minutes and then have it stop as if Ctrl+C had been pressed. The reason I want it to stop as if Ctrl+C had been pressed is that the program auto-saves when you stop it that way, instead of being killed and possibly losing the saved data.
Edit: I don't want to use cron unless, when it stops my script, it will have the program save its data; I am hoping to accomplish this inside the shell script.
The trap statement catches these signals and can be programmed to execute a list of commands upon catching them.
#!/bin/bash
trap "echo Saving Data" SIGINT
while :
do
    sleep 60
done
For Information on Traps : http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_12_02.html
Using the timeout command to send SIGINT after 60 seconds:
timeout --signal=INT 60 /path/to/script.sh params
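Tying this back to the original goal, a sketch that runs an assumed /path/to/program for ten minutes and then delivers SIGINT, just as if Ctrl+C had been pressed, so the program can auto-save:
# 10m is a GNU timeout duration suffix meaning ten minutes
timeout --signal=INT 10m /path/to/program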
If you need to intercept Ctrl+C, you should use the trap builtin, like this:
cleanup() {
    # do something...
    :
}
trap 'cleanup' 2
# rest of the code
The 2 on the trap line is the SIGINT signal sent by Ctrl+C; see man 7 signal.
Try the following:
#!/bin/bash
# enable job control so the background job runs in its own process group
set -m
/path/to/script.sh params &
set +m
# remember the PID of the backgrounded script
bg_pid=$!
sleep 60
# send SIGINT (signal 2, the Ctrl+C signal) to the background process
kill -2 $bg_pid
This should allow you to send SIGINT to a backgrounded process using job control and the set builtin.