What is the priority of background processes in a Linux environment - shell

I would like to know how the OS prioritises the execution of background processes in Linux.
Suppose I run the command below: would it be executed right away, or would the OS decide its place in the execution order?
nohup /bin/bash /tmp/kill_loop.sh &
Thanks

All processes running at the same nice value will get an equal cpu-timeslice.
Here is a simple test that launches 2 processes, both performing the exact same operations. One is launched in the background and the other in the foreground.
dd if=/dev/zero of=/dev/null bs=1 &
dd if=/dev/zero of=/dev/null bs=1
The relevant extract from subsequently running the top command:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1366 root 20 0 1576 532 436 R 100 0.0 0:30.79 dd
1365 root 20 0 1576 532 436 R 100 0.0 0:30.79 dd
Next, if both the processes are restricted to the same CPU,
taskset -c 0 dd if=/dev/zero of=/dev/null bs=1 &
taskset -c 0 dd if=/dev/zero of=/dev/null bs=1
Again, the relevant extract from subsequently running the top command shows:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1357 root 20 0 1576 532 436 R 50 0.0 0:38.74 dd
1358 root 20 0 1576 532 436 R 50 0.0 0:38.74 dd
both processes compete for the same CPU's timeslices and are prioritised equally.
Finally,
kill -SIGINT 1357 &
kill -SIGINT 1358 &
kill -SIGINT 1365 &
kill -SIGINT 1366 &
results in similar amounts of data copied and throughput.
25129255+0 records in
25129255+0 records out
25129255 bytes (25 MB) copied, 34.883 s, 720 kB/s
Slight discrepancies in throughput may occur because the individual processes respond to the interrupt signal and stop at slightly different moments.
However, also note that sched_autogroup_enabled exists.
When enabled, sched_autogroup_enabled changes the fairness policy so that CPU timeslices are distributed between individual shell sessions rather than between individual tasks, i.e. the CPU is shared equally amongst the active shells.
Thus if a shell launches 1 process A,
and another shell launches 2 processes B and C,
then the CPU execution timeslice will typically be distributed as
A <-- 50% <---- shell1 50%
B <-- 25% <-.
C <-- 25% <--`- shell2 50%
(though all 3 processes A, B & C are running at the same nice level.)
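If you want to check or toggle this behaviour on your own system, the setting is exposed under /proc. A minimal sketch, assuming the kernel was built with CONFIG_SCHED_AUTOGROUP (not part of the original answer):
cat /proc/sys/kernel/sched_autogroup_enabled          # prints 1 if autogrouping is enabled, 0 if not
echo 0 > /proc/sys/kernel/sched_autogroup_enabled     # as root: disable it, so fairness is per task again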

Process priorities in the Linux kernel are given by nice values.
Refer to the link
http://en.wikipedia.org/wiki/Nice_(Unix)
The nice values (ranging from -20 to +19) define the process priorities, with -20 being the highest priority. User-space processes are usually given a default nice value of 0. You can check the nice values of the processes running on your shell using the command below.
ps -al
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
0 S 1039 1268 16889 0 80 0 - 11656 poll_s pts/8 00:00:08 vim
0 S 1047 1566 17683 0 80 0 - 2027 wait pts/18 00:00:00 arm-linux-andro
0 R 1047 1567 1566 21 80 0 - 9143 ? pts/18 00:00:00 cc1
0 R 1031 1570 15865 0 80 0 - 2176 - pts/24 00:00:00 ps
0 R 1031 17357 15865 99 80 0 - 2597 - pts/24 00:03:29 top
In the output above, the 'NI' column shows the nice values. When I ran a process in the background, it too got a nice value of 0 (top is that process, with PID 17357). That means it is queued up just like a foreground process and is scheduled in the same way.
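As a side note (not from the original answer), you can give a background job a different priority yourself with nice and renice; a small sketch, reusing PID 17357 from the output above purely as an illustration:
nice -n 10 nohup /bin/bash /tmp/kill_loop.sh &    # start the job at a lower priority (nice value 10)
renice -n 5 -p 17357                              # lower the priority of an already-running process
Both jobs still compete for CPU as described above; the nice value only shifts their relative weight.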

Related

Strange output from Docker image/container?

I am fairly new to Docker. When I run an image I would usually end up "inside" it, if that makes sense, where I can access the different directories that I have created inside it.
However, when I have done it recently I have gotten the following output:
top - 15:49:10 up 2:36, 0 users, load average: 0.65, 0.70, 0.71
Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
%Cpu(s): 5.9 us, 2.8 sy, 0.2 ni, 89.2 id, 1.8 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 3930660 total, 370676 free, 1749516 used, 1810468 buff/cache
KiB Swap: 4076540 total, 4076540 free, 0 used. 1550316 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 36536 2968 2604 R 0.0 0.1 0:00.05 top -b -c
top - 15:49:13 up 2:36, 0 users, load average: 0.65, 0.70, 0.71
Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.0 us, 2.6 sy, 0.0 ni, 94.2 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 3930660 total, 366860 free, 1753244 used, 1810556 buff/cache
KiB Swap: 4076540 total, 4076540 free, 0 used. 1546536 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 36536 2968 2604 R 0.0 0.1 0:00.05 top -b -c
from the following docker command:
sudo docker run -i -t ubuntu-latest
I am running Docker 17.12 on Ubuntu 16.04. For now I would prefer a solution that does not require posting the Dockerfile, due to certain information being present in the file.
Any feedback would be greatly appreciated
When a container launches, it executes a binary that can be defined within the image or overridden from the CLI using the entrypoint and command arguments.
See https://docs.docker.com/engine/reference/builder/#cmd vs https://docs.docker.com/engine/reference/builder/#entrypoint
In this case it looks like your container is set up to run 'top' automatically, which is why it launches and executes top as PID 1 instead of an interactive bash session. If you could paste just your Dockerfile's ENTRYPOINT and CMD arguments it would be possible to say exactly what's happening, but you should be able to override them via the CLI with:
sudo docker run --entrypoint /bin/bash -i -t ubuntu-latest
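If you would rather not share the Dockerfile, you can still confirm what the image runs by default with docker inspect; a quick check, using the image name from your question:
sudo docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' ubuntu-latest
Whatever is printed there is what ends up as PID 1 when you docker run the image without overrides.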

What is LOCAL=NO in Oracle processes

I was trying to find the processes that are consuming the most memory on my Unix box using the top command:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
23421 test 18 0 6408m 2.8g 2.8g D 0.0 23.7 1:03.63 xyz
11874 test 15 0 6378m 1.9g 1.9g S 0.0 16.1 0:05.47 xyz
31217 test 15 0 6379m 1.9g 1.9g R 0.0 16.0 0:44.21 xyz
As the above processes are each consuming more than 15% of memory, I tried to investigate further:
-bash-3.2$ ps 23421 11874 31217
PID TTY STAT TIME COMMAND
23421 ? Ds 1:03 ora_dbw0_xyz
11874 ? Ss 0:05 oraclexyz (LOCAL=NO)
31217 ? Ds 0:46 oraclexyz (LOCAL=NO)
This output indicates that the Oracle database is consuming the memory.
Searching the internet, I found that ora_dbw0 is a database writer process, but I am not able to understand what a (LOCAL=NO) process is and how it is associated with the Oracle database. Please help me understand what these processes are.
(LOCAL=NO) processes are the server processes for connections made over SQL*Net (from localhost or remote machines) that are not using MTS (Multi-Threaded Server).
Local processes, i.e. connections made on the database server itself using ORACLE_SID, use the Bequeath protocol. In the process list these are shown as:
oracledxxx (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
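As an aside (not part of the original answer), you can count how many of these remote sessions are currently connected with a simple pipeline; the instance name xyz is taken from your ps output:
ps -ef | grep '(LOCAL=NO)' | grep -v grep | wc -l    # number of SQL*Net server processes for remote connections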

User processes in D-state lead to a watchdog reset on Linux 2.6.24 and an ARM processor

Most of the user-space processes end up in D-state after the unit has been running for around 3-4 days; the unit runs on an ARM processor. From the top output we can see that the processes in D-state are waiting in "page_fault" and "squashfs_readpage". Ultimately this leads to a watchdog reset. The processes that go into D-state take an unusually long time to recover.
Following is the top output when the system ends up in trouble:
top - 12:00:11 up 3 days, 2:40, 3 users, load average: 2.77, 1.90, 1.72
Tasks: 250 total, 3 running, 238 sleeping, 0 stopped, 9 zombie
Cpu(s): 10.0% us, 75.5% sy, 0.0% ni, 0.0% id, 10.3% wa, 0.0% hi, 4.2% si
Mem: 191324k total, 188896k used, 2428k free, 2548k buffers
Swap: 0k total, 0k used, 0k free, 87920k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1003 root 20 0 225m 31m 4044 S 15.2 16.7 0:21.91 user_process_1
3745 root 20 0 80776 9476 3196 **D** 9.0 5.0 1:31.79 user_process_2
129 root 15 -5 0 0 0 S 7.4 0.0 0:27.65 **mtdblockd**
4624 root 20 0 3640 256 160 **D** 6.5 0.1 0:00.20 GetCounters_cus
3 root 15 -5 0 0 0 S 3.2 0.0 43:38.73 ksoftirqd/0
31363 root 20 0 2356 1176 792 R 2.6 0.6 40:09.58 top
347 root 30 10 0 0 0 S 1.9 0.0 28:56.04 **jffs2_gcd_mtd3**
1169 root 20 0 225m 31m 4044 S 1.9 16.7 39:31.36 user_process_1
604 root 20 0 0 0 0 S 1.6 0.0 27:22.76 user_process_3
1069 root -23 0 225m 31m 4044 S 1.3 16.7 20:45.39 user_process_1
4545 root 20 0 3640 564 468 S 1.0 0.3 0:00.08 GetCounters_cus
64 root 15 -5 0 0 0 **D** 0.3 0.0 0:00.83 **kswapd0**
969 root 20 0 20780 1856 1376 S 0.3 1.0 14:18.89 user_process_4
973 root 20 0 225m 31m 4044 S 0.3 16.7 3:35.74 user_process_1
1070 root -23 0 225m 31m 4044 S 0.3 16.7 16:41.04 user_process_1
1151 root -81 0 225m 31m 4044 S 0.3 16.7 23:13.05 user_process_1
1152 root -99 0 225m 31m 4044 S 0.3 16.7 8:48.47 user_process_1
One more interesting observation: when the system lands in this state, we can consistently see the "mtdblockd" process running in the top output. Swap is disabled on this unit, and there is no apparent memory leak.
Any idea what the possible reasons could be for the processes getting stuck in D-state?
D-state means the processes are stuck in the kernel in a TASK_UNINTERRUPTIBLE sleep. This is unlikely to be a bug in the Squashfs error-handling code, because if a process exited Squashfs holding a mutex, the system would quickly grind to a halt as other processes entered Squashfs and slept forever waiting for that mutex. You would also see a low load average and low system time, as most processes would be sleeping. Furthermore, there is no evidence that Squashfs has hit any I/O errors.
The load average (2.77) and system time (75.5%) are extremely high; coupled with the fact that a lot of processes are in squashfs_readpage (which is completing, but slowly), this indicates the system is thrashing. There is too little memory, and the system is spending all its time constantly (re-)demand-paging pages from disk. This accounts for the large number of processes in squashfs_readpage; system time is extremely high because the system spends most of its time in Squashfs doing the CPU-intensive work of decompression. The other processes are stuck in Squashfs waiting on the decompressor mutex (only one process can decompress at a time because the decompressor state is shared).
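One way to confirm the thrashing on the unit (a hedged suggestion, not part of the original answer, and assuming vmstat is available on the target) is to watch blocked processes and memory pressure over time:
vmstat 5 10    # 'b' = tasks in uninterruptible (D-state) sleep, 'free'/'cache' = memory headroom, 'bi' = blocks re-read from disk
A steadily high 'b' count together with a constant 'bi' stream and very little free memory is the classic signature of demand paging from the read-only filesystem.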

Using the top program in bash to extract cpu time into a variable

I have a bash scripting assignment: measure the CPU time used by a process that is passed into the script by name. I can find the process ID and pass it to the top program from bash. However, I haven't figured out how to extract the CPU time from top's output. For example:
top is printing out:
top - 00:57:07 up 6:06, 2 users, load average: 0.46, 0.31, 0.55
Tasks: 1 total, 1 running, 0 sleeping, 0 stopped, 0 zombie
%Cpu(s): 3.7 us, 0.8 sy, 0.0 ni, 94.6 id, 0.9 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem: 1928720 total, 1738072 used, 190648 free, 57184 buffers
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3337 amarkovi 20 0 372m 31m 10m R 0.7 1.7 13:28.74 chromium-browse
All I want from this is for the TIME+ field to be assigned to a variable so I can add up the time and print it out by itself.
I am a noob at bash scripting, so please be patient.
Thanks,
Do you have to use top? It should be much simpler (once you work out the right options) to use ps to give you just the fields you want, then use grep to select just the processes you want.
Since it's an assignment I don't want to spoil all the fun :D. I'll just point you to some commands that can help you in your endeavour: sed, awk and cut. With these three you can solve it in many ways, enjoy!
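For what it's worth, here is one hedged sketch that follows the ps suggestion above rather than parsing top's screen output; the variable names are purely illustrative:
#!/bin/bash
# Grab the cumulative CPU time (the value top shows as TIME+) for a process given by name.
pid=$(pgrep -o "$1")                # oldest PID whose name matches the first script argument
cputime=$(ps -o time= -p "$pid")    # cumulative CPU time in [DD-]HH:MM:SS, same value as top's TIME+
echo "CPU time used by $1 (PID $pid): $cputime"
Converting that [DD-]HH:MM:SS string into seconds so the values can be added up is left as part of the exercise.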

How to obtain the virtual private memory of a process from the command line under OSX?

I would like to obtain the virtual private memory consumed by a process under OSX from the command line. This is the value that Activity Monitor reports in the "Virtual Mem" column. ps -o vsz reports the total address space available to the process and is therefore not useful.
You can obtain the virtual private memory use of a single process by running
top -l 1 -s 0 -i 1 -stats vprvt -pid PID
where PID is the process ID of the process you are interested in. This results in about a dozen lines of output ending with
VPRVT
55M+
So by parsing the last line of output, one can at least obtain the memory footprint in MB. I tested this on OSX 10.6.8.
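If you only need the number inside a shell script, the same output can be trimmed down directly; a small sketch based on the command above, with PID standing in for the process ID you care about:
vprvt=$(top -l 1 -s 0 -i 1 -stats vprvt -pid "$PID" | tail -1)    # e.g. 55M+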
update
I realized (after I got downvoted) that #user1389686 gave an answer in the comment section of the OP that was better than my paltry first attempt. What follows is based on user1389686's own answer. I cannot take credit for it -- I've just cleaned it up a bit.
original, edited with -stats vprvt
As Mahmoud Al-Qudsi mentioned, top does what you want. If PID 8631 is the process you want to examine:
$ top -l 1 -s 0 -stats vprvt -pid 8631
Processes: 84 total, 2 running, 82 sleeping, 378 threads
2012/07/14 02:42:05
Load Avg: 0.34, 0.15, 0.04
CPU usage: 15.38% user, 30.76% sys, 53.84% idle
SharedLibs: 4668K resident, 4220K data, 0B linkedit.
MemRegions: 15160 total, 961M resident, 25M private, 520M shared.
PhysMem: 917M wired, 1207M active, 276M inactive, 2400M used, 5790M free.
VM: 171G vsize, 1039M framework vsize, 1523860(0) pageins, 811163(0) pageouts.
Networks: packets: 431147/140M in, 261381/59M out.
Disks: 487900/8547M read, 2784975/40G written.
VPRVT
8631
Here's how I get at this value using a bit of Ruby code:
# Return the virtual private memory size of the current process
def virtual_private_memory
  s = `top -l 1 -s 0 -stats vprvt -pid #{Process.pid}`.split($/).last
  return nil unless s =~ /\A(\d*)([KMG])/
  $1.to_i * case $2
            when "K"
              1000
            when "M"
              1000000
            when "G"
              1000000000
            else
              raise ArgumentError.new("unrecognized multiplier in #{s}")
            end
end
Updated answer that works under Yosemite, from user1389686:
top -l 1 -s 0 -stats mem -pid PID
