Process priority in Linux kernel [duplicate] - linux-kernel

This question already has an answer here:
what is the difference among three priorities used in Linux kernel?
(1 answer)
Closed 6 years ago.
I am new to the Linux kernel and I am confused. Can anyone please answer my questions:
Q1 -> Does the static priority of a thread change or not? If it changes, how does it change?
Q2 -> What is the default value of the static priority and the dynamic priority for a process or thread in the Linux kernel?
Q3 -> What is the initial value of the static priority and the dynamic priority for a newly created thread or process?
Q4 -> When we talk about the priority of a process or thread (incrementing/decrementing priority, setting priority, etc.), which priority are we referring to: the static priority or the dynamic priority?

For a currently running process, run the command below:
renice <priority_value> -p `pidof <process_name>`
Here, -20 <= priority_value <= 19.
For a new process:
nice -n <priority_value> <process_name>
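For reference, the "priority" that nice and renice manipulate is the nice value, which for a normal (SCHED_OTHER) task plays the role of the static priority; the dynamic priority is recomputed by the scheduler and is not set directly. Below is a minimal C++ sketch (the PID 1234 is just a placeholder) that makes the same adjustment programmatically with getpriority()/setpriority(), which is what renice uses under the hood:

#include <cerrno>
#include <cstdio>
#include <cstring>
#include <sys/resource.h>   // getpriority(), setpriority()
#include <sys/types.h>

int main() {
    const pid_t pid = 1234;  // placeholder PID; substitute a real one

    errno = 0;               // getpriority() may legitimately return -1, so check errno
    int nice_value = getpriority(PRIO_PROCESS, pid);
    if (errno != 0) {
        std::fprintf(stderr, "getpriority: %s\n", std::strerror(errno));
        return 1;
    }
    std::printf("current nice value: %d\n", nice_value);

    // Raise the nice value to 10 (i.e. lower the priority). Setting a
    // negative nice value normally requires CAP_SYS_NICE / root.
    if (setpriority(PRIO_PROCESS, pid, 10) != 0) {
        std::fprintf(stderr, "setpriority: %s\n", std::strerror(errno));
        return 1;
    }
    return 0;
}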

Related

Boost SML: is there an equivalent for exiting one SM into another by using a pseudo exit state?

I'm trying to convert my boost MSM state machine to SML.
My MSM state machine can be seen here:
Boost MSM process_event doesn't transit between SM states
After looking at the SML documentation, it seems that it doesn't support pseudo states (exit/entry pseudo states).
From my understanding, in order to transition the OperationalSm state machine to the GapWaiting state after receiving the EvGetGrantsRsp event inside the CyclingSm state machine (the inner SM of OperationalSm), I must somehow explicitly tell OperationalSm to do this transition, which was actually done automatically for me here:
msmf::Row<Cycling::exit_pt<Cycling::PseudoCycleExit>, msmf::none, GapWaiting, msmf::none, msmf::none>,
My question is: is there some way I can create such behavior by using SML?
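One possible direction (a sketch only, not a verified port of the MSM machine): Boost SML offers an event that the inner state machine does not handle to the enclosing state machine, so a transition on EvGetGrantsRsp placed in OperationalSm's own table can stand in for the exit pseudo state. The state and event names are taken from the question; everything else here is illustrative.

#include <boost/sml.hpp>

namespace sml = boost::sml;

// Events (names come from the question; definitions are placeholders)
struct EvStartCycle {};
struct EvGetGrantsRsp {};

// Inner state machine: it deliberately has no transition for EvGetGrantsRsp,
// so that event is handled by the enclosing OperationalSm instead.
struct CyclingSm {
    auto operator()() const {
        using namespace sml;
        return make_transition_table(
            *"cycling_idle"_s + event<EvStartCycle> = "cycling"_s
        );
    }
};

// Outer state machine: the transition on EvGetGrantsRsp out of the composite
// CyclingSm state plays the role of the MSM exit pseudo state.
struct OperationalSm {
    auto operator()() const {
        using namespace sml;
        return make_transition_table(
            *state<CyclingSm> + event<EvGetGrantsRsp> = "GapWaiting"_s
        );
    }
};

int main() {
    sml::sm<OperationalSm> sm;
    sm.process_event(EvStartCycle{});    // handled inside CyclingSm
    sm.process_event(EvGetGrantsRsp{});  // handled by OperationalSm -> GapWaiting
}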

Is there any way to know which processes are running on which core in QNX

My system is running QNX 6.5 and it has 4 CPU cores, but I don't know which processes are running on each core. Is there any way to find this out in detail?
Thanks in advance
Processes typically run multiple threads (at least one, the main thread), so the thread is the actual running unit, not the process (and core affinity is settable per thread). Thus you'd need to know on which core(s) the threads are running.
There is a "%l" format option that tells you which CPU a particular thread is executing on:
# pidin -F "%b %50h %i %l" -p random
tid thread name cpu
1 1 0
Runmask : 0x0000007f
Inherit Mask: 0x0000007f
2 Timer Thread 1
Runmask : 0x0000007f
Inherit Mask: 0x0000007f
3 3 6
Runmask : 0x0000007f
Inherit Mask: 0x0000007f
Above we print the thread ID, thread name, and run/inherit CPU masks; the rightmost column is the CPU each thread is currently running on, for the process called "random".
The best tooling for analyzing the details of process scheduling in QNX is the "System Analysis Toolkit", which uses the instrumentation features of the QNX kernel to provide a log of every scheduling event and message pass.
For QNX 6.5, the documentation can be found here: http://www.qnx.com/developers/docs/6.5.0SP1.update/index.html#./com.qnx.doc.instr_en_instr/about.html
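As a side note on "core affinity is settable per thread": on QNX a thread can restrict its own runmask with ThreadCtl(). A minimal C++ sketch (the mask 0x1, i.e. CPU 0 only, is just an example value):

#include <cstdint>
#include <cstdio>
#include <sys/neutrino.h>   // ThreadCtl(), _NTO_TCTL_RUNMASK

int main() {
    // Restrict the calling thread to CPU 0 (bit 0 of the runmask).
    // The runmask is passed by value, cast to the void* argument.
    std::uintptr_t runmask = 0x1;   // example: CPU 0 only
    if (ThreadCtl(_NTO_TCTL_RUNMASK, reinterpret_cast<void *>(runmask)) == -1) {
        std::perror("ThreadCtl(_NTO_TCTL_RUNMASK)");
        return 1;
    }
    std::puts("calling thread pinned to CPU 0");
    return 0;
}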
I got the details by using the command below:
pidin rmasks
which gives the "pid, tid, and name" of every thread.
From the runmask value we can identify which core it is running on.
For me, the thread-level details are fine as well.

I cannot take a critical section [duplicate]

This question already has answers here:
Critical section negative lock count
(2 answers)
Closed 7 years ago.
I have a thread that is stuck taking a critical section. The critical section does not have any owning thread; the only strange thing is that LockCount is -3.
LockCount -3
RecursionCount 0
OwningThread 0
LockSemaphore dfc
SpinCount 10000
In the debug info, the ContentionCount is 1.
How can I get a LockCount of -3? Any ideas?
Thank you.
Yes. You released it more times than you acquired it.
It's a good idea to use a scoped guard to auto-release it. Then you don't have to worry about releasing it too many times.
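The fields in the question (LockCount, RecursionCount, OwningThread, ...) are those of a Win32 CRITICAL_SECTION, so the scoped guard mentioned above is just a small RAII wrapper around EnterCriticalSection/LeaveCriticalSection. A minimal sketch (the class name is ours, not a Windows API):

#include <windows.h>

// Minimal RAII guard for a Win32 CRITICAL_SECTION: the destructor releases
// the lock exactly once, so acquire/release counts cannot get unbalanced.
class CriticalSectionGuard {
public:
    explicit CriticalSectionGuard(CRITICAL_SECTION& cs) : cs_(cs) {
        EnterCriticalSection(&cs_);
    }
    ~CriticalSectionGuard() {
        LeaveCriticalSection(&cs_);
    }
    // Non-copyable: copying would lead to a double release.
    CriticalSectionGuard(const CriticalSectionGuard&) = delete;
    CriticalSectionGuard& operator=(const CriticalSectionGuard&) = delete;
private:
    CRITICAL_SECTION& cs_;
};

// Usage:
//   CRITICAL_SECTION cs;  InitializeCriticalSection(&cs);
//   { CriticalSectionGuard guard(cs);  /* ...protected work... */ }  // released here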

Why is a thread's status running but it doesn't use any CPU?

Today I found a very strange problem.
I was running Red Hat Enterprise Linux 6, and the CPU was an Intel E31275 (4 cores, 8 threads). I found that one kernel thread (I'll call it my_thread) didn't work correctly.
With the "ps" command, I found that the status of my_thread was always running:
ps ax
5545 ? R 3:14 [my_thread]
15774 ttyS0 Ss 0:00 -bash
...
But its running time was always 3:14. Since it was running, why didn't the total time increase?
From the proc file /proc/5545/sched, I found that all the statistics for this thread, including the wakeup count (se.nr_wakeups), were always the same, too.
From /proc/5545/stack, I found this thread called this function and never returned:
interruptible_sleep_on_timeout(&q, 3*HZ);
In theory this function should return every 3 seconds if no other thread wakes up the thread earlier. Each time the function returns, se.nr_wakeups in /proc/5545/sched should be increased by 1. But this never happened after I found the thread had this problem.
Does anyone have any ideas? Is it possible that interruptible_sleep_on_timeout() never returns?
Update:
I found that the problem doesn't occur if I set CPU affinity for this thread. If I pin it to a dedicated core, then everything is OK. Are there any problems with SMP scheduling?
Update again:
After I disabled hyper-threading in the BIOS, I have not seen the problem again.
First off, R indicates the thread is not in the running state but runnable. That is, it does not mean it is running; it means it is in a state where the scheduler is allowed to pick it to run. There is a big difference between the two.
In a similar sense, interruptible_sleep_on_timeout(&q, 3*HZ); will not run the thread after the 3*HZ-jiffy (3 second) timeout, but rather make it available for running once the timeout expires; and indeed you see it in "ps" as available for running, so possibly the timeout has indeed occurred.
Since you did not say anything about the kernel thread in question, I don't even know whether it is your own code or standard kernel code, so I cannot really answer in detail.
One possible reason for the situation you describe is that some other thread (user or kernel) has a higher priority than your thread, so the scheduler never picks it to run. If so, it is probably not a thread running at real-time priority (SCHED_FIFO or SCHED_RR).
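Regarding the asker's update that pinning the thread to one core made the problem go away: the affinity of an existing task can be set from user space with sched_setaffinity() (equivalent to running taskset -pc 0 5545 as root). A minimal C++ sketch, reusing the PID 5545 from the ps output above and picking CPU 0 arbitrarily:

#ifndef _GNU_SOURCE
#define _GNU_SOURCE 1
#endif
#include <cerrno>
#include <cstdio>
#include <cstring>
#include <sched.h>       // sched_setaffinity(), cpu_set_t, CPU_SET
#include <sys/types.h>

int main() {
    const pid_t pid = 5545;   // the kernel thread from the ps output above

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set);         // allow CPU 0 only

    // Requires root, or ownership of the target task.
    if (sched_setaffinity(pid, sizeof(set), &set) != 0) {
        std::fprintf(stderr, "sched_setaffinity: %s\n", std::strerror(errno));
        return 1;
    }
    std::printf("pid %d pinned to CPU 0\n", static_cast<int>(pid));
    return 0;
}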

How to kill slave kernel securely?

LinkClose[link] "does not necessarily terminate the program at the other end of the connection", as stated in the documentation. Is there a way to kill the process of the slave kernel securely?
EDIT:
In reality, I need a function in Mathematica that returns only when the slave kernel's process has already been killed and its memory has already been released. Neither LinkInterrupt[link, 1] nor LinkClose[link] waits for the slave kernel to exit. At the moment, the only such function seems to be the killProc[procID] function I showed in one of the answers on this page. But is there a built-in analog?
At the moment I know only one method to kill the MathKernel process securely. This method uses NETLink, seems to work only under Windows, and requires Microsoft .NET 2 or later to be installed.
killProc[processID_] := If[$OperatingSystem === "Windows",
Needs["NETLink`"];
Symbol["LoadNETType"]["System.Diagnostics.Process"];
With[{procID = processID},
killProc[procID_] := (
proc = Process`GetProcessById[procID];
proc@Kill[]
);
];
killProc[processID]
];
(*Killing the current MathKernel process*)
killProc[$ProcessID]
Any suggestions or improvements will be appreciated.
Edit:
A more correct method:
Needs["NETLink`"];
LoadNETType["System.Diagnostics.Process"];
$kern = LinkLaunch[First[$CommandLine] <> " -mathlink -noinit"];
LinkRead[$kern];
LinkWrite[$kern, Unevaluated[$ProcessID]];
$kernProcessID = First@LinkRead[$kern];
$kernProcess = Process`GetProcessById[$kernProcessID];
AbortProtect[If[! ($kernProcess@Refresh[]; $kernProcess@HasExited),
$kernProcess@Kill[]; $kernProcess@WaitForExit[];
$kernProcess@Close[]];
LinkClose[$kern]]
Edit 2:
An even more correct method:
Needs["NETLink`"];
LoadNETType["System.Diagnostics.Process"];
$kern = LinkLaunch[First[$CommandLine] <> " -mathlink -noinit"];
LinkRead[$kern];
LinkWrite[$kern, Unevaluated[$ProcessID]];
$kernProcessID = First@LinkRead[$kern];
$kernProcess = Process`GetProcessById[$kernProcessID];
krnKill := AbortProtect[
If[TrueQ[MemberQ[Links[], $kern]], LinkClose[$kern]];
If[TrueQ[MemberQ[LoadedNETObjects[], $kernProcess]],
If[! TrueQ[$kernProcess@WaitForExit[100]],
Quiet@$kernProcess@Kill[]; $kernProcess@WaitForExit[]];
$kernProcess@Close[]; ReleaseNETObject[$kernProcess];
]
];
Todd Gayley has answered my question in the newsgroup. The solution is to send the slave kernel an MLTerminateMessage.
From top-level code:
LinkInterrupt[link, 1] (* An undocumented form that lets you pick the message type *)
In C:
MLPutMessage(link, MLTerminateMessage);
In Java using J/Link:
link.terminateKernel();
In .NET using .NET/Link:
link.TerminateKernel();
EDIT:
I have discovered that in standard cases, when using LinkInterrupt[link, 1], my operating system (Windows 2000 at the moment) releases physical memory only within 0.05-0.1 seconds of the moment LinkInterrupt[link, 1] is executed, while with LinkClose[link] it releases physical memory within 0.01-0.03 seconds (both values include the time spent executing the command itself). The time intervals were measured using SessionTime[] under equal conditions and are steadily reproducible.
Actually, I need a function in Mathematica that returns only when the slave kernel's process has already been killed and its memory has already been released. Neither LinkInterrupt[link, 1] nor LinkClose[link] waits for the slave kernel to exit. At the moment, the only such function seems to be the killProc[procID] function I showed in another answer on this page.
