Calculating CPU usage from /proc/stat - bash

When reading /proc/stat, I get these return values:
cpu 20582190 643 1606363 658948861 509691 24 112555 0 0 0
cpu0 3408982 106 264219 81480207 19354 0 35 0 0 0
cpu1 3395441 116 265930 81509149 11129 0 30 0 0 0
cpu2 3411003 197 214515 81133228 418090 0 1911 0 0 0
cpu3 3478358 168 257604 81417703 30421 0 29 0 0 0
cpu4 1840706 20 155376 83328751 1564 0 7 0 0 0
cpu5 1416488 15 171101 83410586 1645 13 108729 0 0 0
cpu6 1773002 7 133686 83346305 25666 10 1803 0 0 0
cpu7 1858207 10 143928 83322929 1819 0 8 0 0 0
Some sources say to read only the first four values to calculate CPU usage, while others say to read all of the values.
Do I read only the first four values (user, nice, system, and idle) to calculate CPU utilization? Or do I need all the values? Or more than four, but not all? Would I need iowait, irq, or softirq?
cpu 20582190 643 1606363 658948861
Versus the entire line.
cpu 20582190 643 1606363 658948861 509691 24 112555 0 0 0
Edit: Some sources also state that iowait is added into idle.
When calculating a specific process' CPU usage, does the method differ?

The man page states that it varies with architecture, and also gives a couple of examples describing how they are different:
In Linux 2.6 this line includes three additional columns: ...
Since Linux 2.6.11, there is an eighth column, ...
Since Linux 2.6.24, there is a ninth column, ...
When "some people said to only use..." they were probably not taking these into account.
Regarding whether the calculation differs across CPUs: You will find lines related to "cpu", "cpu0", "cpu1", ... in /proc/stat. The "cpu" fields are all aggregates (not averages) of corresponding fields for the individual CPUs. You can check that for yourself with a simple awk one-liner.
cpu 84282 747 20805 1615949 44349 0 308 0 0 0
cpu0 26754 343 9611 375347 27092 0 301 0 0 0
cpu1 12707 56 2581 422198 5036 0 1 0 0 0
cpu2 33356 173 6160 394561 7508 0 4 0 0 0
cpu3 11464 174 2452 423841 4712 0 1 0 0 0
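To turn these counters into a utilization percentage, the usual approach is to sample the aggregate cpu line twice and work with the deltas, counting idle + iowait as not-busy time and everything else as busy. A minimal shell sketch (field positions assume a Linux 2.6.24-or-later /proc/stat; the one-second interval is arbitrary):

```shell
#!/bin/sh
# Sample the aggregate "cpu" line, printing "total idle" in jiffies.
# idle is taken as idle + iowait; every other column counts as busy.
cpu_totals() {
  awk '/^cpu /{
    idle = $5 + $6                       # idle + iowait
    total = 0
    for (i = 2; i <= NF; i++) total += $i
    print total, idle
  }' /proc/stat
}

read t1 i1 <<EOF
$(cpu_totals)
EOF
sleep 1
read t2 i2 <<EOF
$(cpu_totals)
EOF

# utilization% = 100 * (delta_total - delta_idle) / delta_total
awk -v dt=$((t2 - t1)) -v di=$((i2 - i1)) \
    'BEGIN { printf "CPU %.1f%%\n", 100 * (dt - di) / dt }'
```

For a single process the same delta idea applies, but the busy time comes from the utime and stime fields of /proc/<pid>/stat instead of the system-wide line.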

AWK Formatting Using First Row as a Header and Iterating by column

I'm struggling to format a collectd-plotted file so I can later import it into an InfluxDB instance.
This is what the file looks like:
#Date Time [CPU]User% [CPU]Nice% [CPU]Sys% [CPU]Wait% [CPU]Irq% [CPU]Soft% [CPU]Steal% [CPU]Idle% [CPU]Totl% [CPU]Intrpt/sec [CPU]Ctx/sec [CPU]Proc/sec [CPU]ProcQue [CPU]ProcRun [CPU]L-Avg1 [CPU]L-Avg5 [CPU]L-Avg15 [CPU]RunTot [CPU]BlkTot [MEM]Tot [MEM]Used [MEM]Free [MEM]Shared [MEM]Buf [MEM]Cached [MEM]Slab [MEM]Map [MEM]Anon [MEM]Commit [MEM]Locked [MEM]SwapTot [MEM]SwapUsed [MEM]SwapFree [MEM]SwapIn [MEM]SwapOut [MEM]Dirty [MEM]Clean [MEM]Laundry [MEM]Inactive [MEM]PageIn [MEM]PageOut [MEM]PageFaults [MEM]PageMajFaults [MEM]HugeTotal [MEM]HugeFree [MEM]HugeRsvd [MEM]SUnreclaim [SOCK]Used [SOCK]Tcp [SOCK]Orph [SOCK]Tw [SOCK]Alloc [SOCK]Mem [SOCK]Udp [SOCK]Raw [SOCK]Frag [SOCK]FragMem [NET]RxPktTot [NET]TxPktTot [NET]RxKBTot [NET]TxKBTot [NET]RxCmpTot [NET]RxMltTot [NET]TxCmpTot [NET]RxErrsTot [NET]TxErrsTot [DSK]ReadTot [DSK]WriteTot [DSK]OpsTot [DSK]ReadKBTot [DSK]WriteKBTot [DSK]KbTot [DSK]ReadMrgTot [DSK]WriteMrgTot [DSK]MrgTot [INODE]NumDentry [INODE]openFiles [INODE]MaxFile% [INODE]used [NFS]ReadsS [NFS]WritesS [NFS]MetaS [NFS]CommitS [NFS]Udp [NFS]Tcp [NFS]TcpConn [NFS]BadAuth [NFS]BadClient [NFS]ReadsC [NFS]WritesC [NFS]MetaC [NFS]CommitC [NFS]Retrans [NFS]AuthRef [TCP]IpErr [TCP]TcpErr [TCP]UdpErr [TCP]IcmpErr [TCP]Loss [TCP]FTrans [BUD]1Page [BUD]2Pages [BUD]4Pages [BUD]8Pages [BUD]16Pages [BUD]32Pages [BUD]64Pages [BUD]128Pages [BUD]256Pages [BUD]512Pages [BUD]1024Pages
20190228 00:01:00 12 0 3 0 0 1 0 84 16 26957 20219 14 2991 3 0.05 0.18 0.13 1 0 198339428 197144012 1195416 0 817844 34053472 1960600 76668 158641184 201414800 0 17825788 0 17825788 0 0 224 0 0 19111168 3 110 4088 0 0 0 0 94716 2885 44 0 5 1982 1808 0 0 0 0 9739 9767 30385 17320 0 0 0 0 0 0 12 13 3 110 113 0 16 16 635592 7488 0 476716 0 0 0 0 0 0 0 0 0 0 0 8 0 0 22 0 1 0 0 0 0 48963 10707 10980 1226 496 282 142 43 19 6 132
20190228 00:02:00 11 0 3 0 0 1 0 85 15 26062 18226 5 2988 3 0.02 0.14 0.12 2 0 198339428 197138128 1201300 0 817856 34054692 1960244 75468 158636064 201398036 0 17825788 0 17825788 0 0 220 0 0 19111524 0 81 960 0 0 0 0 94420 2867 42 0 7 1973 1842 0 0 0 0 9391 9405 28934 16605 0 0 0 0 0 0 9 9 0 81 81 0 11 11 635446 7232 0 476576 0 0 0 0 0 0 0 0 0 0 0 3 0 0 8 0 1 0 0 0 0 49798 10849 10995 1241 499 282 142 43 19 6 132
20190228 00:03:00 11 0 3 0 0 1 0 85 15 25750 17963 4 2980 0 0.00 0.11 0.10 2 0 198339428 197137468 1201960 0 817856 34056400 1960312 75468 158633880 201397832 0 17825788 0 17825788 0 0 320 0 0 19111712 0 75 668 0 0 0 0 94488 2869 42 0 5 1975 1916 0 0 0 0 9230 9242 28411 16243 0 0 0 0 0 0 9 9 0 75 75 0 10 10 635434 7232 0 476564 0 0 0 0 0 0 0 0 0 0 0 2 0 0 6 0 1 0 0 0 0 50029 10817 10998 1243 501 282 142 43 19 6 132
20190228 00:04:00 11 0 3 0 0 1 0 84 16 25755 17871 10 2981 5 0.08 0.11 0.10 3 0 198339428 197140864 1198564 0 817856 34058072 1960320 75468 158634508 201398088 0 17825788 0 17825788 0 0 232 0 0 19111980 0 79 2740 0 0 0 0 94488 2867 4 0 2 1973 1899 0 0 0 0 9191 9197 28247 16183 0 0 0 0 0 0 9 9 0 79 79 0 10 10 635433 7264 0 476563 0 0 0 0 0 0 0 0 0 0 0 5 0 0 12 0 1 0 0 0 0 49243 10842 10985 1245 501 282 142 43 19 6 132
20190228 00:05:00 12 0 4 0 0 1 0 83 17 26243 18319 76 2985 3 0.06 0.10 0.09 2 0 198339428 197148040 1191388 0 817856 34059808 1961420 75492 158637636 201405208 0 17825788 0 17825788 0 0 252 0 0 19112012 0 85 18686 0 0 0 0 95556 2884 43 0 6 1984 1945 0 0 0 0 9176 9173 28153 16029 0 0 0 0 0 0 10 10 0 85 85 0 12 12 635473 7328 0 476603 0 0 0 0 0 0 0 0 0 0 0 3 0 0 7 0 1 0 0 0 0 47625 10801 10979 1253 505 282 142 43 19 6 132
What I'm trying to do is get it into a format that looks like this:
cpu_value,host=mxspacr1,instance=5,type=cpu,type_instance=softirq value=180599 1551128614916131663
cpu_value,host=mxspacr1,instance=2,type=cpu,type_instance=interrupt value=752 1551128614916112943
cpu_value,host=mxspacr1,instance=4,type=cpu,type_instance=softirq value=205697 1551128614916128446
cpu_value,host=mxspacr1,instance=7,type=cpu,type_instance=nice value=19250943 1551128614916111618
cpu_value,host=mxspacr1,instance=2,type=cpu,type_instance=softirq value=160513 1551128614916127690
cpu_value,host=mxspacr1,instance=1,type=cpu,type_instance=softirq value=178677 1551128614916127265
cpu_value,host=mxspacr1,instance=0,type=cpu,type_instance=softirq value=212274 1551128614916126586
cpu_value,host=mxspacr1,instance=6,type=cpu,type_instance=interrupt value=673 1551128614916116661
cpu_value,host=mxspacr1,instance=4,type=cpu,type_instance=interrupt value=701 1551128614916115893
cpu_value,host=mxspacr1,instance=3,type=cpu,type_instance=interrupt value=723 1551128614916115492
cpu_value,host=mxspacr1,instance=1,type=cpu,type_instance=interrupt value=756 1551128614916112550
cpu_value,host=mxspacr1,instance=6,type=cpu,type_instance=nice value=21661921 1551128614916111032
cpu_value,host=mxspacr1,instance=3,type=cpu,type_instance=nice value=18494760 1551128614916098304
cpu_value,host=mxspacr1,instance=0,type=cpu,type_instance=interrupt value=552 1551
What I have managed to do so far is just to convert the date string into EPOCH format.
I was thinking of somehow using the first value "[CPU]" as the measurement and "User%" as the type; the host I can take from the system where the script will run.
I would really appreciate your help, because I have only basic knowledge of text editing.
Thanks.
EDIT: this is what I would expect to get from the second line, using the first row as a header:
cpu_value,host=mxspacr1,type=cpu,type_instance=user% value=0 1551128614916131663
EDIT: This is what I have so far, and I'm stuck here.
awk -v HOSTNAME="$HOSTNAME" 'BEGIN { FS="[][]"; getline; NR==1; f1=$2; f2=$3 } { RS=" "; printf f1"_measurement,host="HOSTNAME",type="f2"value="$3" ", system("date +%s -d \""$1" "$2"\"") }' mxmcaim01-20190228.tab
And this is what I get, but only for one column; now I don't know how to process the remaining columns, such as Nice, Sys, Wait, and so on.
CPU_measurement,host=mxmcamon05,type=User% value= 1552014000
CPU_measurement,host=mxmcamon05,type=User% value= 1551960000
CPU_measurement,host=mxmcamon05,type=User% value= 1551343500
CPU_measurement,host=mxmcamon05,type=User% value= 1551997620
CPU_measurement,host=mxmcamon05,type=User% value= 1551985200
CPU_measurement,host=mxmcamon05,type=User% value= 1551938400
CPU_measurement,host=mxmcamon05,type=User% value= 1551949200
CPU_measurement,host=mxmcamon05,type=User% value= 1551938400
CPU_measurement,host=mxmcamon05,type=User% value= 1551938400
CPU_measurement,host=mxmcamon05,type=User% value= 1551945600
CPU_measurement,host=mxmcamon05,type=User% value= 1551938400
Please help.
EDIT: First of all, thanks for your help.
Taking advantage of your knowledge of text editing, I was expecting to use this for 3 separate files, but unfortunately, and I don't know why, the format is different, like this:
#Date Time SlabName ObjInUse ObjInUseB ObjAll ObjAllB SlabInUse SlabInUseB SlabAll SlabAllB SlabChg SlabPct
20190228 00:01:00 nfsd_drc 0 0 0 0 0 0 0 0 0 0
20190228 00:01:00 nfsd4_delegations 0 0 0 0 0 0 0 0 0 0
20190228 00:01:00 nfsd4_stateids 0 0 0 0 0 0 0 0 0 0
20190228 00:01:00 nfsd4_files 0 0 0 0 0 0 0 0 0 0
20190228 00:01:00 nfsd4_stateowners 0 0 0 0 0 0 0 0 0 0
20190228 00:01:00 nfs_direct_cache 0 0 0 0 0 0 0 0 0 0
So I don't know how to handle the arrays in a way that I can use nfsd_drc as the type, then iterate through ObjInUse ObjInUseB ObjAll ObjAllB SlabInUse SlabInUseB SlabAll SlabAllB SlabChg SlabPct and use them as the type_instance, and finally the value: in this case ObjInUse will be 0, ObjInUseB = 0, ObjAll = 0, and so on, making something like this:
slab_value,host=mxspacr1,type=nfsd_drc,type_instance=ObjectInUse value=0 1551128614916131663
slab_value,host=mxspacr1,type=nfsd_drc,type_instance=ObjInuseB value=0 1551128614916131663
slab_value,host=mxspacr1,type=nfsd_drc,type_instance=ObjAll value=0 1551128614916112943
slab_value,host=mxspacr1,type=nfsd_drc,type_instance=ObjAllB value=0 1551128614916128446
slab_value,host=mxspacr1,type=nfsd_drc,type_instance=SlabInUse value=0 1551128614916111618
slab_value,host=mxspacr1,type=nfsd_drc,type_instance=SlabInUseB value=0 1551128614916127690
slab_value,host=mxspacr1,type=nfsd_drc,type_instance=SlabAll value=0 1551128614916127265
slab_value,host=mxspacr1,type=nfsd_drc,type_instance=SlabAllB value=0 1551128614916126586
slab_value,host=mxspacr1,type=nfsd_drc,type_instance=SlabChg value=0 1551128614916116661
slab_value,host=mxspacr1,type=nfsd_drc,type_instance=SlabPct value=0 1551128614916115893
slab_value is a hard-coded value.
Thanks.
It is not clear where instance and type_instance=interrupt come from in your desired final format. Otherwise, the awk code below should work.
Note: it doesn't strip % from tag values, and it prints the timestamp at the end of the line in seconds (append extra zeros if you want nanoseconds).
gawk -v HOSTNAME="$HOSTNAME" '
NR==1 {
    split($0, h, /[ \t\[\]]+/, s)
    for (i = 0; i < length(h); i++) h[i] = tolower(h[i])
}
NR>1 {
    for (j = 2; j < NF; j++) {
        k = 2*j
        ts = mktime(substr($1,1,4)" "substr($1,5,2)" "substr($1,7,2)" "substr($2,1,2)" "substr($2,4,2)" "substr($2,7,2))
        printf("%s_value,host=%s,type=%s,type_instance=%s value=%s %s\n", h[k], HOSTNAME, h[k], h[k+1], $(j+1), ts)
    }
}' mxmcaim01-20190228.tab

How are floppy disk sectors numbered

I was wondering how floppy disk sectors are ordered. I am currently writing a program to access the root directory of a floppy disk (FAT12-formatted, high density). I can load it with debug at sector 13h, but in assembly it is at head 1, track 0, sector 2. Why is sector 13h there, and not at head 0, track 1, sector 1?
That's because the sectors on the other side of the disk come before the sectors on the second track of the first side.
Sectors 0 through 17 (11h) are found at head 0 track 0. Sectors 18 (12h) through 35 (23h) are found at head 1 track 0.
Logical sectors are numbered from zero up, but the sectors in a track are numbered from 1 to 18 (12h).
sector#      head  track  sector    usage
-------      ----  -----  ------    -----
 0   0h        0     0     1   1h   boot
 1   1h        0     0     2   2h   FAT 1
 2   2h        0     0     3   3h     |
 3   3h        0     0     4   4h     v
 4   4h        0     0     5   5h
 5   5h        0     0     6   6h
 6   6h        0     0     7   7h
 7   7h        0     0     8   8h
 8   8h        0     0     9   9h
 9   9h        0     0    10   ah
10   ah        0     0    11   bh   FAT 2
11   bh        0     0    12   ch     |
12   ch        0     0    13   dh     v
13   dh        0     0    14   eh
14   eh        0     0    15   fh
15   fh        0     0    16  10h
16  10h        0     0    17  11h
17  11h        0     0    18  12h
18  12h        1     0     1   1h
19  13h        1     0     2   2h   root
20  14h        1     0     3   3h     |
21  15h        1     0     4   4h     v
...
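The mapping from a logical sector number to head/track/sector can be computed directly for a 1.44 MB floppy (2 heads, 18 sectors per track, physical sectors numbered from 1). A small awk sketch of that arithmetic (the lba_to_chs helper name is illustrative):

```shell
#!/bin/sh
# Convert a logical (LBA) sector number to head/track/sector for a
# 1.44 MB floppy: 2 heads, 18 sectors per track, sectors 1-based.
lba_to_chs() {
  awk -v lba="$1" 'BEGIN {
    spt = 18; heads = 2
    sector = lba % spt + 1              # position within the track, 1-based
    head   = int(lba / spt) % heads     # side alternates every 18 sectors
    track  = int(lba / (spt * heads))   # cylinder advances after both sides
    printf "head %d track %d sector %d\n", head, track, sector
  }'
}

lba_to_chs 19   # logical sector 13h -> head 1 track 0 sector 2
```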

Crash while running Pushgp using SBCL 1.1.6.0-3c5581a on Mac OS X Yosemite 10.10.3

The following error message occurred while running Pushgp using SBCL 1.1.6.0-3c5581a on Mac OS X Yosemite 10.10.3. What do I do next to correct it and avoid future occurrences? Thanks.
Producing next generation...
Heap exhausted during garbage collection: 48 bytes available, 80 requested.
Gen StaPg UbSta LaSta LUbSt Boxed Unboxed LB LUB !move Alloc Waste Trig WP GCs Mem-age
0: 0 0 0 0 0 0 0 0 0 0 0 5368709 0 0 0.0000
1: 0 0 0 0 0 0 0 0 0 0 0 5368709 0 0 0.0000
2: 0 0 0 0 0 0 0 0 0 0 0 5368709 0 0 0.0000
3: 0 0 0 0 0 0 0 0 0 0 0 5368709 0 0 0.0000
4: 90540 90458 0 0 16653 42019 3720 930 0 257732888 1634024 2000000 0 0 1.0370
5: 0 0 0 0 0 0 0 0 0 0 0 2000000 0 0 0.0000
6: 0 0 0 0 5615 1269 0 0 0 28196864 0 2000000 5549 0 0.0000
Total bytes allocated = 533559232
Dynamic-space-size bytes = 536870912
GC control variables:
*GC-INHIBIT* = true
*GC-PENDING* = in progress
fatal error encountered in SBCL pid 776:
Heap exhausted, game over.
Welcome to LDB, a low-level debugger for the Lisp runtime environment.
ldb>
Assuming your running out of memory is not caused by a larger problem, you might just be able to run sbcl with a larger dynamic space size, e.g.
sbcl --dynamic-space-size 2048

How to join general string to every first column of every sub row

I want to join each leading string (in the case below, "ADMIN" and "DB") to the data rows it heads, so that it appears each time in the first column.
Example:
ADMIN
ADMIN_DB Running 1 0 1 0 0 0 80
ADMIN_CATALOG Running 0 0 1 0 0 0 452
ADMIN_CAT Running 0 0 1 0 0 0 58
DB
SLAVE_DB Running 2 0 3 0 0 0 94
DB_BAK Running 1 0 1 0 0 0 54
HISTORY_DB Running 0 0 1 0 0 0 40
HISTORY_DB_BAK Running 0 0 1 0 0 0 59
Expectation:
ADMIN ADMIN_DB Running 1 0 1 0 0 0 80
ADMIN ADMIN_CATALOG Running 0 0 1 0 0 0 452
ADMIN ADMIN_CAT Running 0 0 1 0 0 0 58
DB SLAVE_DB Running 2 0 3 0 0 0 94
DB DB_BAK Running 1 0 1 0 0 0 54
DB HISTORY_DB Running 0 0 1 0 0 0 40
DB HISTORY_DB_BAK Running 0 0 1 0 0 0 59
In the past I had one example as a starting point that could do this kind of thing, but I'm not very familiar with that kind of scripting: perl -ne 'chomp; if($. % 2){print "$_,";next;}
How about
awk 'NF==1{ val=$0; next} {print val" "$0}' input
You can format the output using the column utility, as in
$ awk 'NF==1{ val=$0; next} { print val" "$0}' input | column -t
ADMIN ADMIN_DB Running 1 0 1 0 0 0 80
ADMIN ADMIN_CATALOG Running 0 0 1 0 0 0 452
ADMIN ADMIN_CAT Running 0 0 1 0 0 0 58
DB SLAVE_DB Running 2 0 3 0 0 0 94
DB DB_BAK Running 1 0 1 0 0 0 54
DB HISTORY_DB Running 0 0 1 0 0 0 40
DB HISTORY_DB_BAK Running 0 0 1 0 0 0 59

Go routine performance maximizing

I'm writing a data mover in Go, taking data located in one data center and moving it to another data center. I figured Go would be perfect for this, given goroutines.
I notice that if I have one program running 1800 goroutines, the amount of data being transmitted is really low.
Here's the dstat printout, averaged over 30 seconds:
---load-avg--- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
1m 5m 15m |usr sys idl wai hiq siq| read writ| recv send| in out | int csw
0.70 3.58 4.42| 10 1 89 0 0 0| 0 156k|7306k 6667k| 0 0 | 11k 6287
0.61 3.28 4.29| 12 2 85 0 0 1| 0 6963B|8822k 8523k| 0 0 | 14k 7531
0.65 3.03 4.18| 12 2 86 0 0 1| 0 1775B|8660k 8514k| 0 0 | 13k 7464
0.67 2.81 4.07| 12 2 86 0 0 1| 0 1638B|8908k 8735k| 0 0 | 13k 7435
0.67 2.60 3.96| 12 2 86 0 0 1| 0 819B|8752k 8385k| 0 0 | 13k 7445
0.47 2.37 3.84| 11 2 86 0 0 1| 0 2185B|8740k 8491k| 0 0 | 13k 7548
0.61 2.22 3.74| 10 2 88 0 0 0| 0 1229B|7122k 6765k| 0 0 | 11k 6228
0.52 2.04 3.63| 3 1 97 0 0 0| 0 546B|1999k 1365k| 0 0 |3117 2033
If I run 9 instances of the program with 200 goroutines each, I see much better performance:
---load-avg--- ----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
1m 5m 15m |usr sys idl wai hiq siq| read writ| recv send| in out | int csw
8.34 9.56 8.78| 53 8 36 0 0 3| 0 410B| 38M 32M| 0 0 | 41k 26k
8.01 9.37 8.74| 74 10 12 0 0 4| 0 137B| 51M 51M| 0 0 | 59k 39k
8.36 9.31 8.74| 75 9 12 0 0 4| 0 1092B| 51M 51M| 0 0 | 59k 39k
6.93 8.89 8.62| 74 10 12 0 0 4| 0 5188B| 50M 49M| 0 0 | 59k 38k
7.09 8.73 8.58| 75 9 12 0 0 4| 0 410B| 51M 50M| 0 0 | 60k 39k
7.40 8.62 8.54| 75 9 12 0 0 4| 0 137B| 52M 49M| 0 0 | 61k 40k
7.96 8.63 8.55| 75 9 12 0 0 4| 0 956B| 51M 51M| 0 0 | 59k 39k
7.46 8.44 8.49| 75 9 12 0 0 4| 0 273B| 51M 50M| 0 0 | 58k 38k
8.08 8.51 8.51| 75 9 12 0 0 4| 0 410B| 51M 51M| 0 0 | 59k 39k
Load average is a little high, but I'll worry about that later. The network traffic, though, is almost hitting the network's potential.
I'm on Ubuntu 12.04,
8 Gigs Ram,
2.3 GHz processors (says EC2 :P)
Also, I've increased my file descriptors from 1024 to 10240
I thought Go was designed for this kind of thing. Or am I expecting too much of Go for this application?
Is there something trivial that I'm missing? Do I need to configure my system to maximize Go's potential?
EDIT
I guess my question wasn't clear enough. Sorry. I'm not asking for magic from Go; I know computers have limits to what they can handle.
So I'll rephrase: why is 1 instance with 1800 goroutines != 9 instances with 200 goroutines each? The same total number of goroutines, but significantly less performance for 1 instance compared to 9 instances.
Please note that goroutines are also limited to your local machine and that channels are not natively network-enabled, i.e. your particular case is probably not playing to Go's strengths.
Also: what did you expect from throwing (supposedly) every transfer into its own goroutine? I/O operations tend to have their bottleneck where the bits hit the metal, i.e. the physical transfer of the data to the medium. Think of it like this: no matter how many threads (or goroutines, in this case) try to write to the network card, you still only have one network card. Most likely, hitting it with too many concurrent write calls will only slow things down, since the overhead involved increases.
If you think this is not the problem, or you want to audit your code for optimized performance, Go has neat built-in features to do so: see "Profiling Go Programs" on the official Go blog.
Still, the actual bottleneck might well be outside your Go program and/or in the way it interacts with the OS.
Addressing your actual problem without code is pointless guessing. Post some, and everyone will try their best to help you.
You will probably have to post your source code to get any real input, but just to be sure: have you increased the number of CPUs to use?
import "runtime"
func main() {
runtime.GOMAXPROCS(runtime.NumCPU())
}
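Beyond GOMAXPROCS, one common way to attack the 1800-goroutine case is to bound concurrency with a fixed worker pool, so only a limited number of goroutines contend for the network card at once. A sketch (the pool size of 64 and the process helper are illustrative, not taken from the question's code):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// process fans n jobs out to a fixed pool of workers and returns how
// many were handled. Bounding the pool keeps the number of concurrent
// writers small instead of one goroutine per transfer.
func process(n, workers int) int {
	jobs := make(chan int)
	var done int64
	var wg sync.WaitGroup

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs {
				// Replace this with the actual transfer work.
				atomic.AddInt64(&done, 1)
			}
		}()
	}

	for j := 0; j < n; j++ {
		jobs <- j
	}
	close(jobs)
	wg.Wait()
	return int(done)
}

func main() {
	fmt.Println(process(1800, 64)) // all 1800 jobs handled by 64 workers
}
```

Whether 64 is the right bound depends on the NIC and the far end; the point is that it is tuned to the hardware rather than to the number of transfers.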
