How to know the number of active threads in Puma - shell

I am trying to see the number of active puma threads on my server.
I cannot see it through ps:
$ ps aux | grep puma
healthd 2623 0.0 1.8 683168 37700 ? Ssl May02 5:38 puma 2.11.1 (tcp://127.0.0.1:22221) [healthd]
root 8029 0.0 0.1 110460 2184 pts/0 S+ 06:34 0:00 grep --color=auto puma
root 18084 0.0 0.1 56836 2664 ? Ss May05 0:00 su -s /bin/bash -c puma -C /opt/elasticbeanstalk/support/conf/pumaconf.rb webapp
webapp 18113 0.0 0.8 83280 17324 ? Ssl May05 0:04 puma 2.16.0 (unix:///var/run/puma/my_app.sock) [/]
webapp 18116 3.5 6.2 784992 128924 ? Sl May05 182:35 puma: cluster worker 0: 18113 [/]
As in the configuration I have:
threads 8, 32
Shouldn't I be seeing at least 8 puma threads?

To quickly answer the question: the number of threads used by a process with a given PID can be obtained with:
% ps -h -o nlwp <pid>
This just returns the total number of threads used by the process. The option -h removes the header and the option -o nlwp formats the output of ps so that it only prints the Number of Light Weight Processes (NLWP), i.e. the thread count. For example, when only a single puma process is running and its PID is obtained with pgrep, you get:
% ps -h -o nlwp $(pgrep puma)
4
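If pgrep matches more than one puma process (as in the question, where a master and a cluster worker are running), you can sum the per-process thread counts. A small sketch, assuming procps ps and pgrep:
% ps -h -o nlwp -p $(pgrep -d, puma) | awk '{sum+=$1} END{print sum}'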
What is the difference between process, thread and light-weight process?
This question has already been answered in various places [see here, here and the excellent geekstuff article]. The quick, short and ugly version is:
a process is essentially any running instance of a program.
a thread is a flow of execution within a process. A process containing multiple execution flows is known as a multi-threaded process and shares its resources (memory, open files, I/O, ...) amongst its threads. The Linux kernel has no separate notion of threads and only knows about processes; in the past, multi-threading was handled at the user level rather than the kernel level, which made it hard for the kernel to do proper process management.
Enter lightweight processes (LWP). This is essentially the answer to the issue with threads: each thread is considered to be an LWP at the kernel level. The main difference between a process and an LWP is that the LWP shares resources. In other words, a Light Weight Process is kernel-speak for what users call a thread.
Can ps show information about threads or LWPs?
The ps (process status) command provides information about the currently running processes, including their corresponding LWPs or threads. To do this, it makes use of the /proc directory, a virtual filesystem regarded as the control and information centre of the kernel. [See here and here.]
By default ps will not give you any information about the LWPs; however, adding the options -L and -m to the command generally does the trick.
man ps :: THREAD DISPLAY
H Show threads as if they were processes.
-L Show threads, possibly with LWP and NLWP columns.
m Show threads after processes.
-m Show threads after processes.
-T Show threads, possibly with SPID column.
For a single puma process with PID given by pgrep puma:
% ps -fL $(pgrep puma)
UID PID PPID LWP C NLWP STIME TTY STAT TIME CMD
kvantour 2160 2876 2160 0 4 15:22 pts/39 Sl+ 0:00 ./puma
kvantour 2160 2876 2161 99 4 15:22 pts/39 Rl+ 0:14 ./puma
kvantour 2160 2876 2162 99 4 15:22 pts/39 Rl+ 0:14 ./puma
kvantour 2160 2876 2163 99 4 15:22 pts/39 Rl+ 0:14 ./puma
However, adding the -m option clearly gives a nicer overview. This is especially handy when multiple processes with the same name are running.
% ps -fmL $(pgrep puma)
UID PID PPID LWP C NLWP STIME TTY STAT TIME CMD
kvantour 2160 2876 - 0 4 15:22 pts/39 - 0:44 ./puma
kvantour - - 2160 0 - 15:22 - Sl+ 0:00 -
kvantour - - 2161 99 - 15:22 - Rl+ 0:14 -
kvantour - - 2162 99 - 15:22 - Rl+ 0:14 -
kvantour - - 2163 99 - 15:22 - Rl+ 0:14 -
In this example, you see that the puma process with PID 2160 runs with 4 threads (NLWP) having the IDs 2160 to 2163. Under STAT you see two different values, Sl+ and Rl+. Here the l is an indicator for multi-threaded; S and R stand for interruptible sleep (waiting for an event to complete) and running, respectively. So we see that 3 of the 4 threads are running at 99% CPU and one thread is sleeping.
You also see the total accumulated CPU time (44s), while a single thread has only run for 14s.
Another way to obtain information is by directly using the format
specifiers with -o or -O.
man ps :: STANDARD FORMAT SPECIFIERS
lwp lightweight process (thread) ID of the dispatchable
entity (alias spid, tid). See tid for additional
information. Show threads as if they were processes.
nlwp number of lwps (threads) in the process. (alias thcount).
So you can use any of lwp, spid or tid, and nlwp or thcount.
If you only want to get the number of threads of a process called
puma, you can use:
% ps -o nlwp $(pgrep puma)
NLWP
4
or, if you don't like the header:
% ps -h -o nlwp $(pgrep puma)
4
You can get a bit more information with:
% ps -O nlwp $(pgrep puma)
PID NLWP S TTY TIME COMMAND
19304 4 T pts/39 00:00:00 ./puma
Finally, you can combine the flags with ps aux to list the threads.
% ps aux -L
USER PID LWP %CPU NLWP %MEM VSZ RSS TTY STAT START TIME COMMAND
...
kvantour 1618 1618 0.0 4 0.0 33260 1436 pts/39 Sl+ 15:17 0:00 ./puma
kvantour 1618 1619 99.8 4 0.0 33260 1436 pts/39 Rl+ 15:17 0:14 ./puma
kvantour 1618 1620 99.8 4 0.0 33260 1436 pts/39 Rl+ 15:17 0:14 ./puma
kvantour 1618 1621 99.8 4 0.0 33260 1436 pts/39 Rl+ 15:17 0:14 ./puma
...
Can top show information about threads or LWPs?
top has the option to show threads by hitting H in interactive mode or by launching top with top -H. The problem is that it lists the threads as if they were processes (similar to ps -fH).
% top
top - 09:42:10 up 17 days, 3 min, 1 user, load average: 3.35, 3.33, 2.75
Tasks: 353 total, 3 running, 347 sleeping, 3 stopped, 0 zombie
%Cpu(s): 75.5 us, 0.6 sy, 0.5 ni, 22.6 id, 0.0 wa, 0.0 hi, 0.8 si, 0.0 st
KiB Mem : 16310772 total, 8082152 free, 3662436 used, 4566184 buff/cache
KiB Swap: 4194300 total, 4194300 free, 0 used. 11363832 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
868 kvantour 20 0 33268 1436 1308 S 299.7 0.0 46:16.22 puma
1163 root 20 0 920488 282524 258436 S 2.0 1.7 124:48.32 Xorg
...
Here you see that puma runs at about 300% CPU for an accumulated time of 46:16.22. There is, however, no indicator that this is a threaded process; the only hint is the CPU usage, and that could easily stay below 100% if 3 of the threads were sleeping. Furthermore, the status flag shows S, which indicates that the first thread is asleep. Hitting H then gives you:
% top -H
top - 09:48:30 up 17 days, 10 min, 1 user, load average: 3.18, 3.44, 3.02
Threads: 918 total, 5 running, 910 sleeping, 3 stopped, 0 zombie
%Cpu(s): 75.6 us, 0.2 sy, 0.1 ni, 23.9 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 16310772 total, 8062296 free, 3696164 used, 4552312 buff/cache
KiB Swap: 4194300 total, 4194300 free, 0 used. 11345440 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
870 kvantour 20 0 33268 1436 1308 R 99.9 0.0 21:45.35 puma
869 kvantour 20 0 33268 1436 1308 R 99.7 0.0 21:45.43 puma
872 kvantour 20 0 33268 1436 1308 R 99.7 0.0 21:45.31 puma
1163 root 20 0 920552 282288 258200 R 2.0 1.7 124:52.05 Xorg
...
Now we see only 3 of the threads. As one of the threads is sleeping, it has dropped far down the list, because top sorts by CPU usage.
In order to see all threads, it is best to ask top to display a specific pid (for a single process):
% top -H -p $(pgrep puma)
top - 09:52:48 up 17 days, 14 min, 1 user, load average: 3.31, 3.38, 3.10
Threads: 4 total, 3 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 75.5 us, 0.1 sy, 0.2 ni, 23.6 id, 0.0 wa, 0.0 hi, 0.7 si, 0.0 st
KiB Mem : 16310772 total, 8041048 free, 3706460 used, 4563264 buff/cache
KiB Swap: 4194300 total, 4194300 free, 0 used. 11325008 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
869 kvantour 20 0 33268 1436 1308 R 99.9 0.0 26:03.37 puma
870 kvantour 20 0 33268 1436 1308 R 99.9 0.0 26:03.30 puma
872 kvantour 20 0 33268 1436 1308 R 99.9 0.0 26:03.22 puma
868 kvantour 20 0 33268 1436 1308 S 0.0 0.0 0:00.00 puma
When you have multiple processes running, you might be interested in hitting f and toggling PGRP on. This shows the process group ID (what ps reports as PID, while PID in top corresponds to LWP in ps).
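For a scripted, non-interactive snapshot of the same view, a small sketch (assuming procps top and pgrep):
% top -H -b -n 1 -p $(pgrep -d, puma)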
How do I get the thread count without using ps or top?
The file /proc/$PID/status contains a line stating how many threads
the process with PID $PID is using.
% grep Threads /proc/19304/status
Threads: 4
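To do this for every running puma process at once, a small sketch assuming pgrep is available:
% for pid in $(pgrep puma); do printf '%s: ' "$pid"; awk '/^Threads:/{print $2}' "/proc/$pid/status"; done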
General comments
It is possible that you cannot see the processes of another user and therefore cannot get the number of threads those processes are using. This can be due to the mount options of /proc (hidepid=2).
Used example program:
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    /* volatile so the spinning threads re-read c once thread 0 updates it */
    volatile char c = 0;
    #pragma omp parallel shared(c)
    {
        int i = 0;
        if (omp_get_thread_num() == 0) {
            /* thread 0 blocks on stdin while the other threads keep spinning */
            printf("Read character from input : ");
            c = getchar();
        } else {
            while (c == 0) i++;
            printf("Total sum is on thread %d : %d\n", omp_get_thread_num(), i);
        }
    }
    return 0;
}
compiled with gcc -fopenmp -o puma puma.c (the source file is assumed here to be saved as puma.c)
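A quick way to try it out, as a rough sketch ($! expands to the PID of the job just started in the background):
% ./puma &
% ps -h -o nlwp $!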

If you are just looking for the number of threads spawned by a process, you can look at the number of task folders created under /proc/[pid-of-process]/task, because each thread gets its own folder under this path. So counting the number of folders is sufficient.
In fact, the ps utility itself reads its information from this path (for example from /proc/[PID]/cmdline) and presents it in a more readable way.
From Linux Filesystem Hierarchy
/proc is very special in that it is also a virtual file-system. It's sometimes referred to as a process information pseudo-file system. It doesn't contain 'real' files but run-time system information (e.g. system memory, devices mounted, hardware configuration, etc). For this reason it can be regarded as a control and information center for the kernel. In fact, quite a lot of system utilities are simply calls to files in this directory.
All you need is the PID of the puma process; use ps or any utility of your choice:
ps aux | awk '/[p]uma/{print $2}'
or, more directly, use pidof(8), which gives you the PID directly given the process name as input:
pidof -s puma
Now that you have the PID, count the number of task/ folders your process has created using the find command (-mindepth 1 keeps the task directory itself out of the count):
find /proc/<PID>/task -mindepth 1 -maxdepth 1 -type d -print | wc -l
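To cover all running puma processes at once (for example a master plus its cluster workers), a small sketch assuming pidof matches them all:
for pid in $(pidof puma); do echo "$pid: $(find /proc/$pid/task -mindepth 1 -maxdepth 1 -type d | wc -l)"; done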

ps aux | grep puma will give you the list of processes for puma only. You need to find out how many threads are running under a particular process. Maybe this will help you:
ps -T -p 2623
You need to provide the process ID for which you want to find out the number of threads. Make sure you are providing the correct process ID.

Using ps and wc to count puma threads:
ps --no-headers -T -C puma | wc -l
The string "puma" can be replaced as desired. Example, count bash threads:
ps --no-headers -T -C bash | wc -l
On my system that outputs:
9
The code in the question, ps aux | grep puma, has a few grep-related problems:
It returns grep --color=auto puma, which isn't a puma thread at all.
Similarly, any util or command containing the string "puma", e.g. a util called notpuma, would be matched by grep.
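If you want a per-process breakdown rather than a single total, a small sketch (assuming procps ps, where -o pid prints the owning process ID on every thread line):
ps --no-headers -T -C puma -o pid | sort | uniq -c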

I found "htop" to be an excellent solution. Just toggle "tree view" and you can view each puma-worker and the threads under that worker.

Number of puma threads for each worker:
ps aux | awk '/[p]uma/{print $2}' | xargs ps -h -o nlwp
Sample output:
7
59
59
61
59
60
59
59
59
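To label each count with the PID of the worker it belongs to, a variant sketch using pgrep and the pid,nlwp format specifiers:
ps -h -o pid,nlwp -p $(pgrep -d, puma)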

Related

How is the kernel process kswapd started step by step?

I understand the impacts and functions of the kernel process kswapd.
From the output of ps -elf | grep swapd, I found that kswapd is started by kthreadd. But how is it started, step by step? Where is the exact related source code?
Here is the output of ps -elf | grep swapd:
$ ps -elf | head -n 1; sudo ps -elf | grep -i kswapd
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
1 S root 46 2 0 80 0 - 0 kswapd 11:42 ? 00:00:00 [kswapd0]
You see, the PID of the kernel process kthreadd is 2:
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
1 S root 2 0 0 80 0 - 0 kthrea 6/2 00:00:00 [kthreadd]
In addition, I can't find a binary program with the same name throughout the rootfs. For details, see below:
$ cat /proc/46/cmdline
#outputs nothing
sudo find / -iname kswapd 2>/dev/null
#outputs nothing
I think mm/vmscan.c has all or most of the answers you're looking for.
If you're asking how kswapd is initialized, the file contains kswapd_init().
If you're asking how kswapd is woken up by a process that needs more memory, the file contains wakeup_kswapd().
You can use a combination of grep, printk, and dump_stack() calls to step through the instructions executed before and after.
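To locate those entry points quickly, a small sketch, assuming you are in a checkout of the kernel source tree:
$ grep -n 'kswapd_init\|wakeup_kswapd' mm/vmscan.c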

Memory usage per process

How can one see the memory usage per process on Windows using bash (Git Bash), without installing any additional tools?
I read about the top command, but there is no such thing in the default version of bash. I have also read about ps, but it does not show the memory usage at all, unlike in some examples I have seen (maybe the version has changed).
Since Linux processes in WSL run in a container (conceptually similar to Docker), they can only see processes in the same container, nothing else.
You can see the virtual and resident size of processes in WSL by issuing the following command:
ps -eHww -o uid,pid,ppid,psr,vsz,rss,stime,time,cmd
Outputs:
max@supernova:~$ uname -a
Linux supernova 4.4.0-17763-Microsoft #379-Microsoft Wed Mar 06 19:16:00 PST 2019 x86_64 x86_64 x86_64 GNU/Linux
max@supernova:~$ ps -eHww -o uid,pid,ppid,psr,vsz,rss,stime,time,cmd
UID PID PPID PSR VSZ RSS STIME TIME CMD
0 1 0 0 8324 156 23:36 00:00:00 /init
0 3 1 0 8328 156 23:36 00:00:00 /init
1000 4 3 0 16796 3424 23:36 00:00:00 -bash
1000 35 4 0 17084 1716 23:57 00:00:00 ps -eHww -o uid,pid,ppid,psr,vsz,rss,stime,time,cmd
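If you want the list ordered by memory usage, a small sketch using the procps sort option (RSS is in kilobytes):
ps -e -o pid,rss,vsz,comm --sort=-rss | head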

Docker kill process inside container

I exec into a Docker container with docker exec -it container-name bash
Inside the container I run ps aux | grep processName
I receive a PID and after that I run:
kill processId but receive:
-bash: kill: (21456) - No such process
Am I missing something? I know that Docker shows different process IDs in top on the host than in ps aux inside the container (How to kill process inside container? Docker top command), but I am running this from inside the container.
You get that response because the process you are trying to kill no longer exists at the moment you try to kill it. For example, if you run ps aux you can get output like this inside a container (it depends on the container, of course):
root@69fbbc0ff80d:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18400 3424 pts/0 Ss 13:55 0:00 bash
root 15 0.0 0.0 36840 2904 pts/0 R+ 13:57 0:00 ps aux
Then if you try to kill the process with PID 15 you'll get an error, because PID 15 has already finished by the time you try to kill it; the ps command terminates after showing you the process info. So:
root@69fbbc0ff80d:/# kill 15
bash: kill: (15) - No such process
In a Docker container you can kill processes in the same way as usual, except for the root process (PID 1). You can't kill it:
root@69fbbc0ff80d:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18400 3424 pts/0 Ss 13:55 0:00 bash
root 16 0.0 0.0 36840 2952 pts/0 R+ 13:59 0:00 ps aux
root@69fbbc0ff80d:/# kill 1
root@69fbbc0ff80d:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18400 3424 pts/0 Ss 13:55 0:00 bash
root 17 0.0 0.0 36840 2916 pts/0 R+ 13:59 0:00 ps aux
As you can see, you can't kill it. Anyway, if you want proof that you can kill processes, you can do:
root@69fbbc0ff80d:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18400 3424 pts/0 Ss 13:55 0:00 bash
root 18 0.0 0.0 36840 3064 pts/0 R+ 14:01 0:00 ps aux
root@69fbbc0ff80d:/# sleep 1000 &
[1] 19
root@69fbbc0ff80d:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18400 3424 pts/0 Ss 13:55 0:00 bash
root 19 0.0 0.0 4372 724 pts/0 S 14:01 0:00 sleep 1000
root 20 0.0 0.0 36840 3016 pts/0 R+ 14:01 0:00 ps aux
root@69fbbc0ff80d:/# kill 19
root@69fbbc0ff80d:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18400 3424 pts/0 Ss 13:55 0:00 bash
root 21 0.0 0.0 36840 2824 pts/0 R+ 14:01 0:00 ps aux
[1]+ Terminated sleep 1000
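If the goal is to kill a long-running process by name rather than by a possibly stale PID, a sketch (assuming pkill from procps is installed in the image, which minimal images often lack):
root@69fbbc0ff80d:/# pkill -f processName
or, from the host:
docker exec container-name pkill -f processName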
Hope it helps.

Inaccessible tty still has some bash processes

This is a very strange situation. I'm on OS X 10.11.6
I have an old tty still hanging around (ttys001), but I don't know how to access it or why it's still there. It simply does not have any window on the OS X desktop. I'm on ttys000.
$ tty
/dev/ttys000
This means I'm currently on ttys000
$ w
22:01 up 15 days, 7:47, 3 users, load averages: 1.65 1.43 1.45
USER TTY FROM LOGIN# IDLE WHAT
Sidharth console - 30Jul16 15days -
Sidharth s000 - 13:48 - w
Sidharth s001 - Thu13 9:12 -bash
I can understand the login from console (it happens automatically), but where is this s001 (i.e. ttys001) coming from -- I can't switch to it -- I don't see any OS X Terminal windows corresponding to ttys001.
USER PID PPID PGID SESS JOBC STAT TT TIME COMMAND
root 30994 30725 30994 0 0 Ss s000 0:00.04 login -pf Sidharth
Sidharth 30995 30994 30995 0 1 S s000 0:00.33 -bash
root 32409 30995 32409 0 1 R+ s000 0:00.01 ps aj
root 26065 1 26065 0 0 Ss+ s001 0:00.04 login -pfl Sidharth /bin/bash -c exec -la bash /bin/bash
Sidharth 26066 26065 26065 0 0 S+ s001 0:00.28 -bash
Sidharth 29465 26066 26065 0 0 S+ s001 0:00.00 -bash
These are the various processes with associated ttys. Again, I can't understand for the life of me what 26065, 26066 and 29465 (all associated with 26065) are doing or why they are there.
Some observations: the parent of 30994 is 30725, which is the Mac Terminal application (this makes sense). But equally interesting is that the parent of 26065 (corresponding to login -pfl Sidharth /bin/bash -c exec -la bash /bin/bash) is launchd, i.e. PID 1.
I've noticed stuff like this before: there is usually an old tty, but it's not visible.
No, I don't have any additional tabs open in my OS X Terminal program that could cause this.
My question is this: Why is my ttys001 inaccessible? How do I "get to" ttys001

How can I tell how many unicorn workers are running on a Heroku dyno right now?

I know that you can look in config/unicorn.rb (or the equivalent) and see what those settings are, but I'm wondering specifically how I can tell, right now, how many unicorn workers are running on a given dyno.
I tried ps aux after running heroku run bash, but that didn't give me the actual processes the dyno was running.
If you run:
$ heroku run bash
$ unicorn -c config/unicorn.rb &
$ ps euf
you should get something similar to this:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
u16236 2 0.0 0.0 19444 2024 ? S 20:55 0:00 bash GOOGLE_ANALYTICS_ID=XXX HEROKU_POSTGRESQL_COPPER_URL=postgres://XXX:
u16236 3 19.4 0.3 288716 131568 ? Sl 20:55 0:04 \_ unicorn master -c config/unicorn.rb -l0.0.0.0:8080 GOOGLE_ANALYTICS_ID=XXX
u16236 5 31.0 0.3 305844 129636 ? Sl 20:55 0:04 | \_ sidekiq 3.2.5 app [0 of 2 busy] GOOGLE_ANALYTICS_
u16236 7 0.0 0.3 288716 124724 ? Sl 20:55 0:00 | \_ unicorn worker[0] -c config/unicorn.rb -l0.0.0.0:8080 GOOGLE_ANALYTICS_ID=XXX
u16236 10 0.0 0.3 288716 124728 ? Sl 20:55 0:00 | \_ unicorn worker[1] -c config/unicorn.rb -l0.0.0.0:8080 GOOGLE_ANALYTICS_ID=XXX
u16236 13 0.0 0.3 288716 124728 ? Sl 20:55 0:00 | \_ unicorn worker[2] -c config/unicorn.rb -l0.0.0.0:8080 GOOGLE_ANALYTICS_ID=XXX
u16236 30 0.0 0.0 15328 1104 ? R+ 20:55 0:00 \_ ps euf GOOGLE_ANALYTICS_ID=XXX DEVISE_PEPPER=XXX
You can see that processes 7, 10 and 13 are my 3 Unicorn workers, each showing about 0.3% of total memory in the %MEM column.
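If you just want the worker count as a single number, a small sketch (assuming pgrep is available on the dyno):
$ pgrep -fc 'unicorn worker'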
