Is there a way to see how many context switches each thread generates (both in and out, if possible)? Either as a rate (switches per second), or by letting it run and giving aggregated data after some time.
(on either Linux or Windows)
I have only found tools that give an aggregated context-switch count for the whole OS or per process.
My program makes many context switches (50k/s), probably many of them unnecessary, but I am not sure where to start optimizing or where most of them happen.
On recent GNU/Linux systems you can use SystemTap to collect the data you want on every call to sched_switch(). The schedtimes.stp example is probably a good start: http://sourceware.org/systemtap/examples/keyword-index.html#SCHEDULER
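A sketch of how it might be invoked (myprogram is a placeholder name; the exact options the example script expects may differ by version, but stap's -c and -x flags are the usual ways to pick the target process):
stap schedtimes.stp -c ./myprogram          # trace a program started under stap
stap schedtimes.stp -x $(pgrep myprogram)   # or attach to an already running process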
Linux
I wrote a small script to see the details of a specific thread of a process. By executing this script you can see the context-switch counters as well.
#!/bin/bash
if [ "$#" -ne 2 ]; then
    echo "INVALID ARGUMENT ERROR: Please use ./see_thread.sh processName threadNumber"
    exit 1
fi
pid=$(pgrep "$1")
# pick the N-th thread id listed under /proc/<pid>/task
tid=$(ls "/proc/$pid/task" | sed -n "${2}p")
# the sched file includes nr_switches, nr_voluntary_switches and nr_involuntary_switches
cat "/proc/$pid/task/$tid/sched"
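For example, to look at the third thread of a process called myserver (a made-up name):
./see_thread.sh myserver 3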
Hope this will help.
I have a bash script that calculates the voluntary and non-voluntary context switches made by each thread during a specific time frame. I'm not sure whether this will serve your purpose, but I'll post it anyway.
The script loops over all threads of a process and records "voluntary_ctxt_switches" and "nonvoluntary_ctxt_switches" from /proc/<process-id>/task/<thread-id>/status. What I generally do is record these counters at the start of a performance run, record them again at the end, and then take the difference as the total voluntary and non-voluntary context switches during the run (a sketch of that diff step follows after the script).
pid=`ps -ef | grep <process name> | grep $USER | grep -v grep | awk '{print $2}'`
echo "ThreadId;Vol_Ctx_Switch;Invol_Ctx_Switch"
for tid in `ps -L --pid ${pid} | awk '{print $2}'`
do
    if [ -f /proc/$pid/task/$tid/status ]
    then
        vol=`cat /proc/$pid/task/$tid/status | grep voluntary_ctxt_switches | grep -v nonvoluntary_ctxt_switches | awk '{print $NF}'`
        non_vol=`cat /proc/$pid/task/$tid/status | grep nonvoluntary_ctxt_switches | awk '{print $NF}'`
        echo "$tid;$vol;$non_vol"
    fi
done
The script is a bit heavy; in my case the process has around 2500 threads, and the total time to collect the context switches is around 10 seconds.
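For the before/after comparison mentioned above, a minimal sketch (assuming the script is saved as ctx_switches.sh, a name chosen here for illustration):
./ctx_switches.sh > before.csv        # snapshot at the start of the run
# ... run the performance test ...
./ctx_switches.sh > after.csv         # snapshot at the end of the run
# print per-thread deltas: tid;vol_delta;nonvol_delta
awk -F';' 'FNR==1 {next}
           NR==FNR {vol[$1]=$2; non[$1]=$3; next}
           $1 in vol {print $1 ";" $2-vol[$1] ";" $3-non[$1]}' before.csv after.csv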
Related
I have a Spring Boot app with the bash launcher script at the beginning of the jar. When I unzip it, that leading script gets lost, but I need it for re-assembly. So the idea was to split it off with head -c, but I have no idea how to determine the byte offset efficiently. less tells me the number of bytes of the script when I open the jar with it, but I'd like to automate it. Is there a way to determine it with (un)zip, or is there another easy way?
I thought of determining the end location of exit 0. In my current app, this is at 8720. With
echo 'ibase=16;obase=A;'$(xxd nevisadmin-app.jar | grep -m 1 "exit 0" | awk -F: '{print $1}') | bc
I get 8704 (because it's at the end of the line), but this is super fragile, because it will fail if the xxd output does not keep "exit 0" on a single line, e.g.
000021f0 ... bla bla ex
00002200 ... it 0 binarystartshere
Thanks
It seems that searching for exit 0 is a reliable way to determine the end of the Spring Boot start script.
Extracting the script can be done like this:
head -n $(grep -a -n "exit 0" springboot-app.jar | awk -F: '{print $1}') springboot-app.jar > startscript.sh
So it doesn't determine the byte length, but that becomes irrelevant for the original goal.
If someone looks up this question and wants the byte length itself: instead of redirecting the output to a file, you can pipe it to wc -c.
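For example, a sketch along the same lines (springboot-app.jar stands in for your jar name):
head -n "$(grep -a -n -m1 'exit 0' springboot-app.jar | cut -d: -f1)" springboot-app.jar | wc -c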
I've seen the same question asked for Linux and Windows, but not for Mac (terminal). Can anyone tell me how to get the current processor utilization in %? An example output would be 40%. Thanks
This works on a Mac (includes the %):
ps -A -o %cpu | awk '{s+=$1} END {print s "%"}'
To break this down a bit:
ps is the process status tool. Most *nix-like operating systems support it. There are a few flags we want to pass to it:
-A means all processes, not just the ones running as you.
-o lets us specify the output we want. In this case, all we want is the %cpu column of ps's output.
This will get us a list of every process's CPU usage, like
0.0
1.3
27.0
0.0
We now need to add up this list to get a final number, so we pipe ps's output to awk. awk is a pretty powerful tool for parsing and operating on text. We simply add up the numbers, print out the result, and append a "%" on the end.
Adding up all those CPU percentages can give a number > 100% (on multi-core machines, each core can contribute up to 100%).
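If you want a whole-machine figure instead, a sketch that divides the sum by the core count (sysctl -n hw.ncpu reports the number of logical cores on macOS):
ps -A -o %cpu | awk -v cores="$(sysctl -n hw.ncpu)" '{s+=$1} END {printf "%.1f%%\n", s/cores}'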
Here's a simpler method, although it comes with some problems:
top -l 2 | grep -E "^CPU"
This gives 2 samples; the first is nonsense, because top calculates CPU load between samples and the first one has no baseline.
Also, you need to use RegEx like (\d+\.\d*)% or some string functions to extract values, and add "user" and "sys" values to get the total.
(From How to get CPU utilisation, RAM utilisation in MAC from commandline)
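One way to do that extraction (a sketch: it takes the second sample's CPU line, grabs the first two percentages, i.e. user and sys, and adds them with bc):
top -l 2 | grep -E "^CPU" | tail -1 | grep -Eo '[0-9]+\.[0-9]*' | head -2 | paste -sd+ - | bc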
Building on previous answers from #Jon R. and #Rounak D, the following line prints the sum of the user and system values, with a percent sign appended. I have tested this value, and it roughly tracks the percentages shown in the macOS Activity Monitor.
top -l 2 | grep -E "^CPU" | tail -1 | awk '{ print $3 + $5"%" }'
You can then capture that value in a variable in a script like this:
cpu_percent=$(top -l 2 | grep -E "^CPU" | tail -1 | awk '{ print $3 + $5"%" }')
PS: You might also be interested in the output of uptime, which shows system load.
Building upon #Jon R's answer, we can pick up the user CPU utilization through some simple pattern matching:
top -l 1 | grep -E "^CPU" | grep -Eo '[^[:space:]]+%' | head -1
And if you want to get rid of the last % symbol as well,
top -l 1 | grep -E "^CPU" | grep -Eo '[^[:space:]]+%' | head -1 | sed 's/%//'
top -F -R -o cpu
-F Do not calculate statistics on shared libraries, also known as frameworks.
-R Do not traverse and report the memory object map for each process.
-o cpu Order by CPU usage
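If you want a single non-interactive sample with the same options, a sketch (-l 1 asks macOS top for one batch-mode sample):
top -F -R -o cpu -l 1 | grep -E "^CPU"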
You can do this:
printf "$(ps axo %cpu | awk '{ sum+=$1 } END { printf "%.1f\n", sum }' | tail -n 1),"
My first question here. Hope it's a good one.
So I'm hoping to create a script that kills another script running arecord when my disk gets to a certain usage. (I should point out, I'm not exactly sure how I got to that df filter... just kinda searched around...) My plan is to run both scripts (the one recording, and the one monitoring disk usage) in separate screens.
I'm doing this all on a Raspberry Pi, btw.
So this is my code so far:
#!/bin/bash
DISK=$(df / | grep / | awk '{ print $5}' | sed 's/%//g')
until [ $DISK -ge 50 ]
do
sleep 1
done
killall arecord
This code works when I play with the starting value ("50" changed to "30" or so). But it doesn't seem to "monitor" my disk the way I want it to. I have a bit of an idea what's going on: the variable DISK is only assigned once, not checked or redefined periodically.
In other words, I probably want something in my until loop that "gets" the disk usage from df, right? What are some good ways of going about it?
PS I'd be super interested in hearing how I might incorporate this whole script's purpose into the script running arecord itself, but that's beyond me right now... and another question...
You are only setting DISK once since it's done before the loop starts and not done as part of the looping process.
A simple fix is to incorporate the evaluation of the disk space into the loop condition itself, something like:
#!/bin/bash
until [ $(df / | awk 'NR==2 {print $5}' | tr -d '%') -ge 50 ] ; do
sleep 1
done
killall arecord
You'll notice I've made some minor mods to the command as well, specifically:
You can use awk itself to get the relevant line from the df output; there is no need for grep as a separate pipeline stage.
I prefer tr for deleting single characters; sed can do it, but it's a bit of overkill.
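As for the PS about building this into the recording script itself, a rough sketch (the arecord options are placeholders, and the threshold mirrors the 50% used above; adjust both to your setup):
#!/bin/bash
arecord -f cd recording.wav &                 # start recording in the background
rec_pid=$!
# wait until the root filesystem reaches 50% usage
until [ "$(df / | awk 'NR==2 {print $5}' | tr -d '%')" -ge 50 ]; do
    sleep 1
done
kill "$rec_pid"                               # stop the recording
wait "$rec_pid" 2>/dev/null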
Can somebody please help me write a KSH script to get the CPU usage of an AIX server?
I want the script to report the current CPU usage at the time it is executed.
There are a number of tools on AIX (and elsewhere) to get the current CPU usage.
nmon
On AIX (and Linux) you have nmon. This gives very detailed info on memory, CPU usage, disk usage, etc. It is normally used as an interactive tool.
sar
Call sar -u 1 1 to get the current CPU usage. See the sar manual page for a whole lot of options. Depending on your installation, you may need to be root or to add your user to the group "adm".
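A small ksh sketch that turns that into a single number (it assumes the usual %usr/%sys/%wio/%idle column layout of sar -u, which can differ between AIX levels):
#!/bin/ksh
busy=$(sar -u 1 1 | tail -1 | awk '{print $2 + $3}')   # %usr + %sys from the Average line
echo "CPU: ${busy}%"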
Just call w -u. It outputs a little bit more than you ask for. If you don't need that you can use awk/sed/cut to cut it away.
I use the following script in bash, but I just tried it in ksh and it works all the same:
top -bn2 | grep 'Cpu(s)' | sed -n '2s/.*, *\([0-9.]*\)%* id.*/\1/p' | awk '{print "CPU: " 100 - $1" %"}'
You can also use
top -bn1 | grep 'Cpu(s)' | sed -n 's/.*, *\([0-9.]*\)%* id.*/\1/p' | awk '{print "CPU: " 100 - $1" %"}'
for faster response, but the result will be less accurate.