The init process is created by process 0 and its PID is 1. I know that it's the ancestor of all other processes except process 0. The init process creates an idle process for each CPU in an SMP system and executes /sbin/init. But why is it a user-space process? Its behavior seems more like a kernel thread's.
There is no process with PID 0.
/sbin/init is a userspace program, and it is the first process launched by the kernel if the kernel command line does not have init= as an argument.
What you are calling process 0 is the idle task; it is not a process, and certainly not a user-space process. The init process does not create the idle task; the kernel itself does.
I am using taskset, per the Linux manual page, to run a very processing-intensive task only on specific cores.
The taskset call is wrapped in a loop: each iteration selects a new target directory and runs the task. Running the process multiple times in parallel may lead to fatal results.
The pseudo code is as follows:
#!/bin/bash
while :
do
    target_dir=$(select_dir)  # select a new directory to process
    sudo taskset -c 4,5,6,7,8,9,10,11 ./processing_intense_task --dir "$target_dir"
done
I have found nothing in the documentation about whether taskset actually waits for the process to finish.
If it does not wait, how do I wait for the task to complete before starting a new instance of processing_intense_task?
I have found nothing in the documentation about whether taskset actually waits for the process to finish.
taskset calls exec, so it becomes the command: https://github.com/util-linux/util-linux/blob/master/schedutils/taskset.c#L246
This is the same as what other similar commands do, like nice and ionice.
If it does not wait,
Well, technically taskset doesn't wait; it becomes the command itself.
how do I wait for the task to complete before starting a new instance of processing_intense_task?
You just wait for the taskset process to finish, as it is the same process as the command. I.e., do nothing special; your loop already runs one task at a time.
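A quick way to convince yourself (a minimal sketch; the timestamps exist only to show that each iteration blocks until the pinned command exits):
#!/bin/bash
# taskset exec's the command, so each invocation runs in the
# foreground and the loop body blocks until it exits.
for i in 1 2 3; do
    date +'%T start'
    taskset -c 0 sleep 2   # taskset becomes `sleep 2`, pinned to CPU 0
    date +'%T end'
done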
I have a script (script.sh) that spawns a whole lot of child processes. If I run the script from the shell via ./script.sh, I can kill the whole process tree via
kill -- -<PID>
where PID is the process ID of the script.sh process (this apparently equals the group ID).
However, if I spawn the script from Ruby via
pid = Process.spawn("./script.sh")
I cannot manage to kill the process tree.
Process.kill(9,pid)
only kills the parent process. And even worse, the following
Process.kill(9,-Process.getpgid(pid)) ### Don't try this line at home
terminates my computer.
Trying to kill the processes via
system("kill -- -#{pid}")
also fails.
How am I supposed to kill this process tree from Ruby?
I think I have found the solution. Spawning the process as
pid = Process.spawn("./script.sh", :pgroup => true)
makes me able to kill the process group via
Process.kill(9,-Process.getpgid(pid))
It looks like bash puts a command launched from the shell into its own process group by default, while Process.spawn leaves the child in the caller's group unless you pass :pgroup => true; that's also why the group kill above took down your whole session.
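You can see the grouping from a shell while the script is running (a sketch; ps column support varies slightly between systems):
# List PID, PGID and parent PID for every process named script.sh.
# With :pgroup => true the script's PGID equals its own PID, so
# kill -- -PGID reaches the script and all of its children.
ps -eo pid,pgid,ppid,comm | grep script.sh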
The following code forks the main process and runs a command in backticks. The kill at the end of the script only kills the forked process but not its child processes (i.e. the sleep command).
pid = fork do
  Thread.new do
    `sleep 20`
  end
end
sleep(1)
Process.kill("HUP",pid)
Is there a way to kill all child processes (generated by backtick commands in threads in the forked process) other than searching through the process tree?
Behind the scenes, both system and backtick operations use fork to fork
the current process, and then they execute the given operation using
exec.
Since exec replaces the current process, it does not return anything if
the operation is a success. If the operation fails, then
SystemCallError is raised.
http://blog.bigbinary.com/2012/10/18/backtick-system-exec-in-ruby.html
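You can watch the fork and exec happen from a shell (a sketch, assuming strace is available; on other platforms the traced syscall names differ):
# Trace process-management syscalls while Ruby runs a backtick command.
# You should see the Ruby process fork (clone) and the child execve
# the command; Ruby may skip the intermediate /bin/sh for a simple
# command like this one.
strace -f -e trace=process ruby -e '`sleep 1`'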
You can use
pid = Process.spawn('sleep 20')
to get the PID of the process immediately. Your code above would change to:
pid = Process.spawn('sleep 20')
sleep(1)
Process.kill('HUP',pid)
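Note that Process.spawn returns immediately rather than waiting; if you also care about the child's exit status, reap it with Process.wait(pid) at some point, otherwise the finished child lingers as a zombie until your Ruby process exits.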
Motivation:
In a Java program, I'm setting a bash script to be executed on -XX:OnOutOfMemoryError. This script is responsible for uploading the heap dump to HDFS. However, quite often only a part of the file gets uploaded.
I suspect the JVM gets killed by the cluster manager before the upload script completes. My guess is that the JVM receives a process-group kill signal and takes the bash script, i.e. its child process, down with it.
The Question:
Is there a way in Unix to run a sub-process in such a way that it does not die when its parent receives a group kill signal?
You can use disown. Start the process in the background and then disown it; kill signals sent to the parent will then no longer be propagated to the child.
Script would look something like:
./handler_script &
disown
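Wired into the JVM flag from the question, that might look like this (a sketch; wrapper.sh and the jar name are hypothetical):
# wrapper.sh contains the two lines above: it backgrounds the real
# upload handler and disowns it, so the upload can outlive the dying
# JVM's process group while the dump is being copied to HDFS.
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:OnOutOfMemoryError='/opt/app/wrapper.sh' \
     -jar app.jar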
I am currently reading up on some more details on Bash scripting and especially process management here. In the section on "PIDs and Parents" I found the following statement:
A process's PID will NEVER be freed up for use after the process dies UNTIL the parent process waits for the PID to see whether it ended and retrieve its exit code.
So if I understand this correctly: if I start a process in a bash script and the process terminates, its PID cannot be used by any other process. Wouldn't this mean that if I have a long-running script which repeatedly starts other sub-processes but never waits on them, I'll eventually have a resource leak, because the used PIDs are never returned to the system?
How about if I actually wait for the other process, but the wait gets cancelled by a trap? Would this wait somehow still free up the PID, or do I have to wait again after the trap has been caught?
Luckily you won't. I can't tell you exactly why, but you can easily test it. Run the following script (stop it with Ctrl+C):
#!/bin/bash
while true; do
    sleep 5 &
    sleep 1
done
You can see you get no zombies (leaked PIDs) even after 6+ seconds. To see some zombies, use the following Python code instead (again, stop with Ctrl+C):
#!/usr/bin/python
import subprocess, time

pl = []
while True:
    pl.append(subprocess.Popen(["sleep", "5"]))
    time.sleep(1)
After 6 seconds you'll see one zombie:
ps xaw | grep 'sleep'
...
26470 pts/2 Z+ 0:00 [sleep] <defunct>
...
My guess is that bash does wait, storing the results and reaping the zombie processes, with or without the builtin wait command. For the Python script, if you remove the pl.append part, garbage collection releases the Popen objects and does its magic, again reaping the zombies. Just for info, a child may never become a zombie (from Wikipedia, Zombie process):
...if the parent explicitly ignores SIGCHLD by setting its handler to SIG_IGN (rather
than simply ignoring the signal by default) or has the SA_NOCLDWAIT flag set, all
child exit status information will be discarded and no zombie processes will be left.
You don't have to explicitly wait on foreground processes because the shell in which your script is running waits on them. The next process won't start until the previous one finishes.
If you start many long-running background processes, you could use up all available PIDs, but that's subject to the limit of ulimit -u (which could be unlimited).
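Two numbers bound how far that can go (a sketch; the /proc path is Linux-specific):
# Largest PID value the kernel hands out before numbers wrap around.
cat /proc/sys/kernel/pid_max
# Per-user cap on simultaneous processes; may print "unlimited".
ulimit -u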