How can I list all running Windows processes using Ruby without any additional library? - ruby

I want to list all processes running on my Windows system using Ruby without installing any additional dependency or library that is not already part of Ruby. I have not found any way to do this online. Is there any clean way to do this from Ruby?

You can use the Kernel#system method to execute a shell command. For example:
system("tasklist")
Image Name                     PID Session Name        Session#    Mem Usage
========================= ======== ================ =========== ============
System Idle Process              0 Services                   0         24 K
...
ruby.exe                      1336 Console                    1      9,100 K
tasklist.exe                   944 Console                    1      5,332 K
Alternatively, as Pavling points out, you can use backticks (Kernel#`), which also capture the command's output as a string, but some find them less readable. YMMV.
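If you want the processes in a Ruby data structure rather than just printed, here is a minimal sketch using backticks plus the standard-library CSV parser (the /FO CSV /NH flags ask tasklist for headerless CSV output):
require 'csv'  # part of Ruby's standard library, no extra gem needed

output = `tasklist /FO CSV /NH`            # capture tasklist's output as a string
processes = CSV.parse(output).map do |name, pid, session, _session_num, mem|
  { name: name, pid: pid.to_i, session: session, mem: mem }
end
processes.each { |p| puts "#{p[:pid]}\t#{p[:name]}" }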

Related

How to generate flamegraphs from macOS process samples?

Anyone have a clean process for converting samples on macOS to FlameGraphs?
After a bit of fiddling I thought I could perhaps use a tool such as flamegraph-sample, but it gives me some trouble, so I suspect there may be other, more up-to-date options that I'm missing, since this tool produces an error:
$ sudo sample PID -file ~/tmp/sample.txt -fullPaths 1
Sampling process 198 for 1 second with 1 millisecond of run time between samples
Sampling completed, processing symbols...
Sample analysis of process 35264 written to file ~/tmp/sample.txt
$ python stackcollapse-sample.py ~/tmp/sample.txt > ~/tmp/sample_collapsed.txt
$ flamegraph.pl ~/tmp/sample_collapsed.txt > ~/tmp/sample_collapsed_flamegraph.svg
Ignored 2335 lines with invalid format
ERROR: No stack counts found

How to make cpuset.cpu_exclusive function of cpuset work correctly

I'm trying to use the kernel's cpuset to isolate my process. To do this, I followed the instructions (2.1 Basic Usage) from the kernel doc cpusets; however, it didn't work in my environment.
I have tried this on both my CentOS 7 server and my Ubuntu 16.04 work PC, but neither worked.
centos kernel version:
[root@node ~]# uname -r
3.10.0-327.el7.x86_64
ubuntu kernel version:
4.15.0-46-generic
What I have tried is as follows.
root@Latitude:/sys/fs/cgroup/cpuset# pwd
/sys/fs/cgroup/cpuset
root@Latitude:/sys/fs/cgroup/cpuset# cat cpuset.cpus
0-3
root@Latitude:/sys/fs/cgroup/cpuset# cat cpuset.mems
0
root@Latitude:/sys/fs/cgroup/cpuset# cat cpuset.cpu_exclusive
1
root@Latitude:/sys/fs/cgroup/cpuset# cat cpuset.mem_exclusive
1
root@Latitude:/sys/fs/cgroup/cpuset# find . -name cpuset.cpu_exclusive | xargs cat
0
0
0
0
0
1
root@Latitude:/sys/fs/cgroup/cpuset# mkdir my_cpuset
root@Latitude:/sys/fs/cgroup/cpuset# echo 1 > my_cpuset/cpuset.cpus
root@Latitude:/sys/fs/cgroup/cpuset# echo 0 > my_cpuset/cpuset.mems
root@Latitude:/sys/fs/cgroup/cpuset# echo 1 > my_cpuset/cpuset.cpu_exclusive
bash: echo: write error: Invalid argument
root@Latitude:/sys/fs/cgroup/cpuset#
It just printed the error bash: echo: write error: Invalid argument.
I googled it, but couldn't find the correct answer.
As I pasted above, before my operation I confirmed that the cpuset root path has cpu_exclusive enabled and that none of the CPUs are exclusively claimed by another sub-cpuset.
By using ps -o pid,psr,comm -p $PID, I can confirm that the CPUs can be assigned to some process if I don't care about cpu_exclusive. But I have also verified that if cpu_exclusive is not set, the same CPUs can also be assigned to other processes.
I don't know whether some prerequisite setting is missing.
What I expected was "using cpuset to obtain exclusive use of cpus". Can anybody give any clues?
Thanks very much.
I believe it is a misunderstanding of the cpu_exclusive flag, as it was for me. Here is the doc https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt, quoting:
If a cpuset is cpu or mem exclusive, no other cpuset, other than
a direct ancestor or descendant, may share any of the same CPUs or
Memory Nodes.
So one possible reason you get bash: echo: write error: Invalid argument is that some other cgroup's cpuset is enabled and its CPUs conflict with your echo 1 > my_cpuset/cpuset.cpu_exclusive operation.
Please run find . -name cpuset.cpus | xargs cat to list every cgroup's target CPUs.
Assume you have 12 CPUs. If you want to set cpu_exclusive on my_cpuset, you need to carefully modify all the other cgroups to use, e.g., CPUs 0-7, and then set the cpus of my_cpuset to 8-11. Only after all these CPU configurations can you set cpu_exclusive to 1, as sketched below.
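A minimal sketch of that sequence, assuming a single sibling cgroup named other_group currently holds CPUs 0-11 (the name and CPU numbers are illustrative only):
cd /sys/fs/cgroup/cpuset
echo 0-7 > other_group/cpuset.cpus       # shrink the sibling so CPUs 8-11 are unshared
echo 8-11 > my_cpuset/cpuset.cpus        # give my_cpuset the now-unshared CPUs
echo 0 > my_cpuset/cpuset.mems
echo 1 > my_cpuset/cpuset.cpu_exclusive  # succeeds once no sibling overlaps CPUs 8-11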
But still, other processes can use CPUs 8-11; only tasks that belong to the other cgroups will not use CPUs 8-11.
In my case, I had some Docker containers running, which prevented me from setting cpu_exclusive on my cpuset.
Going by the kernel doc, I do not think it is possible to get exclusive use of CPUs through cgroups alone. One approach (which I know runs in production) is to isolate the CPUs at boot and manage CPU affinity/cpuset ourselves; see the sketch below.
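A rough sketch of that approach, assuming a 12-CPU machine and a GRUB-based boot (the CPU list and workload name are illustrative):
# /etc/default/grub: keep CPUs 8-11 away from the general scheduler
GRUB_CMDLINE_LINUX="... isolcpus=8-11"
# regenerate the grub config and reboot, then pin the workload yourself:
taskset -c 8-11 ./my_workload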

How to suppress the general information for top command

I wish to suppress the general information for the top command using a top parameter.
By general information I mean the following:
top - 09:35:05 up 3:26, 2 users, load average: 0.29, 0.22, 0.21
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.3%us, 0.7%sy, 0.0%ni, 96.3%id, 0.8%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 3840932k total, 2687880k used, 1153052k free, 88380k buffers
Swap: 3998716k total, 0k used, 3998716k free, 987076k cached
What I do not wish to do is:
top -u user | grep process_name
or
top -bp $(pgrep process_name) | do_something
How can I achieve this?
Note: I am on Ubuntu 12.04 and top version is 3.2.8.
I came across this question today. I have a potential solution: create a top configuration file from inside top's interactive mode while the summary area is disabled. Since this file is also read when top starts in batch mode, the summary area will be disabled in batch mode too.
Follow these steps to set it up:
Launch top in interactive mode.
Once inside interactive mode, disable the summary area by successively pressing 'l', 'm' and 't'.
Press 'W' (upper case) to write your top configuration file (normally, ~/.toprc)
Exit interactive mode.
Now when you run top in batch mode the summary area will not appear (!)
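For example (a hypothetical batch run, output abbreviated):
% top -bn1 | head -2
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
  576 syslog    20   0  264496   8144   1352 S   0.0  0.1   0:03.66 rsyslogd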
Taking it one step further...
If you only want this for certain situations and still want the summary area most of the time, you could use an alternate top configuration file. However, AFAIK, the way to get top to use an alternate config file is a bit funky. There are a couple of ways to do this. The approach I use is as follows:
Create a soft-link to the top executable. This does not have to be done as root, as long as you have write access to the link's location...
ln -s /usr/bin/top /home/myusername/bin/omgwtf
Launch top by typing the name of the link ('omgwtf') rather than 'top'. You will be in normal top interactive mode, but when you save the configuration file it will write to ~/.omgwtfrc, leaving ~/.toprc alone.
Disable the summary area and write the configuration file same as before (press 'l', 'm', 't' and 'W')
In the future, when you're ready to run top without summary info in batch mode, you'll have to invoke top via the link name you created. For example,
% omgwtf -usyslog -bn1
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
576 syslog 20 0 264496 8144 1352 S 0.0 0.1 0:03.66 rsyslogd
%
If you're running top in batch mode (-b -n1), just delete the header lines with sed:
top -b -n1 | sed 1,7d
That will remove the first 7 header lines that top outputs and return only the processes.
It's known as the "Summary Area", and I don't think there is a way to disable it at top initialization.
But while top is running, you can disable it by pressing l, t, and m.
From man top:
Summary-Area-defaults
'l' - Load Avg/Uptime On (thus program name)
't' - Task/Cpu states On (1+1 lines, see '1')
'm' - Mem/Swap usage On (2 lines worth)
'1' - Single Cpu On (thus 1 line if smp)
This will dump the output, and it can be redirected to any file if needed (note that the parentheses must be escaped for grep -E to match them literally):
top -n1 | grep -Ev "Tasks:|Cpu\(s\):|Swap:|Mem:"
To monitor a particular process, the following command works for me:
top -sbn1 -p $(pidof <process_name>) | grep $(pidof <process_name>)
And to get information on all processes you can use the following:
top -sbn1|sed -n '/PID/,/^$/p'
egrep may be good enough in this case, but I would add that perl -lane could do this kind of thing with lightning speed:
top -b -n 1 | perl -lane '/PID/ and $x=1; $x and print' | head -n10
This way you may forget the precise arguments for grep, sed, awk, etc. for good because perl is typically much faster than those tools.
On a Mac you cannot use -b, which is used in many of the other answers.
In that case the command would be top -n1 -l1 | sed 1,10d
The -n1 grabs only the first process line (and its header), -l1 logs once instead of running interactively, and the sed call then suppresses top's general information, which occupies the first 10 lines.

Oracle Database CPU Usage on AIX

I want to find the CPU process usage for all Oracle processes on an AIX box.
On Solaris I can do the following:
prstat -n 400 -c -s cpu -p 9013 1 1
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
9013 oracle 3463M 2928M sleep 53 0 0:00:35 0.9% oracle/2
Total: 1 processes, 2 lwps, load averages: 2.25, 2.32, 2.40
This basically reports the CPU usage for a given process ID (in this case 9013). Given a list of all Oracle PIDs, I can use this command to get the CPU usage for each one, sum them up, and hey presto I have my Oracle database CPU usage.
How can I get the same with AIX?
Thanks
You can try nmon or topas, which will show the current %CPU. You might also want to look into using WLM to create a class for all the Oracle processes, then use wlmstat to see the CPU usage for that class. That would save you the trouble of adding them up manually.
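If you just want to sum things up from the command line, here is a minimal sketch of the manual approach described in the question, assuming the AIX ps supports the POSIX-style -o pcpu output field (the "oracle" match pattern is illustrative):
ps -eo pcpu,comm | awk '/oracle/ { total += $1 } END { print total "% CPU across oracle processes" }'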

Linpack sometimes starting, sometimes not, but nothing changed

I installed Linpack on a 2-Node cluster with Xeon processors. Sometimes if I start Linpack with this command:
mpiexec -np 28 -print-rank-map -f /root/machines.HOSTS ./xhpl_intel64
Linpack starts and prints the output; sometimes I only see the MPI mappings printed and then nothing follows. To me this seems like random behaviour, because I don't change anything between the calls and, as already mentioned, Linpack sometimes starts and sometimes doesn't.
In top I can see that xhpl_intel64 processes have been created and are heavily using the CPU, but when watching the traffic between the nodes, iftop tells me that nothing is sent.
I am using MPICH2 as MPI implementation. This is my HPL.dat:
# cat HPL.dat
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out output file name (if any)
6 device out (6=stdout,7=stderr,file)
1 # of problems sizes (N)
10000 Ns
1 # of NBs
250 NBs
0 PMAP process mapping (0=Row-,1=Column-major)
1 # of process grids (P x Q)
2 Ps
14 Qs
16.0 threshold
1 # of panel fact
2 PFACTs (0=left, 1=Crout, 2=Right)
1 # of recursive stopping criterium
4 NBMINs (>= 1)
1 # of panels in recursion
2 NDIVs
1 # of recursive panel fact.
1 RFACTs (0=left, 1=Crout, 2=Right)
1 # of broadcast
1 BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1 # of lookahead depth
1 DEPTHs (>=0)
2 SWAP (0=bin-exch,1=long,2=mix)
64 swapping threshold
0 L1 in (0=transposed,1=no-transposed) form
0 U in (0=transposed,1=no-transposed) form
1 Equilibration (0=no,1=yes)
8 memory alignment in double (> 0)
Edit 2:
I just let the program run for a while, and after 30 minutes it tells me:
# mpiexec -np 32 -print-rank-map -f /root/machines.HOSTS ./xhpl_intel64
(node-0:0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
(node-1:16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31)
Assertion failed in file ../../socksm.c at line 2577: (it_plfd->revents & 0x008) == 0
internal ABORT - process 0
APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)
Is this an MPI problem?
Do you know what kind of problem this could be?
I figured out what the problem was: MPICH2 uses different random ports each time it starts, and if these are blocked your application won't start up correctly.
The solution for MPICH2 is to set the environment variable MPICH_PORT_RANGE to START:END, like this:
export MPICH_PORT_RANGE=50000:51000
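If the nodes run a firewall, you also need to allow that range between them; for example, assuming firewalld is in use (adjust to your distribution):
firewall-cmd --add-port=50000-51000/tcp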
Best,
heinrich
