bash process substitution with tee dropping data

I expect to see 1 MiB in wc's character count (which works at the bottom, when tee is writing to a file rather than a process substitution).
Is there something I'm missing about using process substitution with tee?
[192.168.20.40 (02b4b472) ~ 18:52:39]# dd if=/dev/zero bs=1M count=1 | tee >(dd bs=1M count=1 | wc | sed 's/^/ /' > /dev/stderr) | wc
0+1 records in
0+1 records out
8192 bytes (8.2 kB, 8.0 KiB) copied, 0.00022373 s, 36.6 MB/s
0 0 8192
0 0 81920
[192.168.20.40 (02b4b472) ~ 18:52:56]# dd if=/dev/zero bs=1M count=1 | tee /tmp/foo | wc
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0416881 s, 25.2 MB/s
0 0 1048576
[192.168.20.40 (02b4b472) ~ 18:53:41]#
Using tee -p handles the stdout side, but not the process substitution, which still comes up short.
[192.168.20.40 (02b4b472) ~ 19:17:05]# dd if=/dev/zero bs=1M count=1 | tee -p warn >(dd bs=1M count=1 | wc | sed 's/^/ /' > /dev/stderr) | wc
0+1 records in
0+1 records out
8192 bytes (8.2 kB, 8.0 KiB) copied, 0.00084348 s, 9.7 MB/s
0 0 8192
1+0 records in
1+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0417175 s, 25.1 MB/s
0 0 1048576
[192.168.20.40 (02b4b472) ~ 19:18:25]#
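A likely explanation, for what it's worth (an assumption, not confirmed in this thread): dd's count= counts read() calls rather than bytes, and a read from a pipe can return short - here one 8192-byte pipe chunk - so the inner dd stops after a single partial block. GNU dd's iflag=fullblock makes it keep reading until each block is full:

```shell
# Without iflag=fullblock, the inner dd may count one short 8 KiB read as
# its only record; with it, dd accumulates a full 1 MiB before counting.
dd if=/dev/zero bs=1M count=1 2>/dev/null |
  tee >(dd bs=1M count=1 iflag=fullblock 2>/dev/null | wc -c >&2) |
  wc -c
```

With the inner dd consuming the full stream, it no longer exits early (which also avoids tee dying from SIGPIPE), so both counts should report 1048576 bytes. Note iflag=fullblock is GNU-specific.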

Multiple arithmetic operations in awk

I have this awk function to do speed conversion.
function hum(x) {
split( "B/s KB/s MB/s GB/s TB/s", v); s=1
while( x>1000 ){ x/=1000; s++ }
printf( "%0.2f %s" , x , v[s] )
}
hum($1)
It works great if used standalone.
$ awk -f /tmp/test.awk <<< 1000000
1000.00 KB/s
$ awk -f /tmp/test.awk <<< 100000000
100.00 MB/s
Now, I want to convert the byte count to bits; to do that I need to multiply the input by 8 first. I tried to modify the function a bit.
function hum(x) {
split( "B/s KB/s MB/s GB/s TB/s", v); s=1
while( x>1000 ){ res = x * 8 / 1000; s++ }
printf( "%0.2f %s\n" , res , v[s] )
}
hum($1)
But it hangs when I try to execute it, and I have to Ctrl-C to cancel the operation. Any idea what's wrong?
Now, I want to convert the byte count to bits; to do that I need to multiply the input by 8 first.
Then multiply the input value by 8 just once:
function hum(x) {
split("b/s Kb/s Mb/s Gb/s Tb/s", v); s=1
x *= 8
while(x > 1000) {x /= 1000; s++}
printf("%0.2f %s\n", x, v[s])
}
hum($1)
Instead of a loop, let's use log():
$ cat program.awk
function hum2(x, v,p) {
split( "b/s kb/s Mb/s Gb/s Tb/s", v) # remember: B -> b, since the output is bits
x*=8 # to bits conversion
p=int(log(x)/log(1000)) # figure v slot
return sprintf("%0.2f %s" , x/1000^p , v[p+1] ) # p+1 due to 1 basedness
}
{
print $1,hum2($1) # input bytes, output bits
}
Take it for a spin:
$ for (( i=1 ; i<=10**14 ; i=i*10 )) ; do echo $i ; done | awk -f program.awk
Output:
1 8.00 b/s
10 80.00 b/s
100 800.00 b/s
1000 8.00 kb/s
10000 80.00 kb/s
100000 800.00 kb/s
1000000 8.00 Mb/s
10000000 80.00 Mb/s
100000000 800.00 Mb/s
1000000000 8.00 Gb/s
10000000000 80.00 Gb/s
100000000000 800.00 Gb/s
1000000000000 8.00 Tb/s
10000000000000 80.00 Tb/s
100000000000000 800.00 Tb/s
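One edge case worth noting about the log() version (my observation, not from the thread): for an input of 0, log(0) is -inf, so p becomes garbage and v[p+1] is empty. A guarded sketch:

```shell
# Same idea as hum2, but clamp p to 0 when x is 0, since log(0) is -inf.
echo 0 | awk '
  function hum3(x, v, p) {
    split("b/s kb/s Mb/s Gb/s Tb/s", v)
    x *= 8                                     # bytes -> bits
    p = (x > 0) ? int(log(x)/log(1000)) : 0    # guard the logarithm
    return sprintf("%0.2f %s", x / 1000^p, v[p+1])
  }
  { print hum3($1) }'
# prints: 0.00 b/s
```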
I wrote this one a while back; it might not be exactly what you need, but it should be easily adaptable -
a function that takes the number of bytes as its mandatory 1st parameter,
with an optional 2nd parameter for calculating powers of 1000 instead of 1024
(enter M, m, or 10 for 1000;
anything else, including blank, defaults to 1024),
and it auto-formats the value in the most logical range for human readability,
spanning all the way from kilo(kibi)-bit/s
to YOTTA(yobi)-bit/s.
*** extra credit for anyone who can figure out what's common among that list of numbers in my sample below.
mawk/gawk '
function bytesformat(_, _______, __, ___, ____, _____, ______)
{
_____=__=(____^=___*=((____=___+=___^="")/___)+___+___)
___/=___
sub("^0+","",_)
____=_____-=substr(_____,index(_____,index(_____,!__)))*\
(_______ ~ "^(10|[Mm])$")
_______=length((____)____)
if ((____*__)<(_______*_)) {
do {
____ *= _____
++___
} while ((____*__)<(_______*_))
}
__=_
sub("(...)+$", ",&", __)
gsub("[^#-.][^#-.][^#-.]", "&,", __)
gsub("[,]*$|^[,]+", "", __)
sub("^[.]", "0&", __)
return \
sprintf(" %10.4f %sb/s (%42s byte%s) ",
_==""?+_:_/(_____^___)*_______,
substr("KMGTPEZY",___,___~___),
__==""?+__:__, (_~_)<_?"s":" ")
} { printf("%35s bytes ::: %s\n",
$1,
bytesformat($1, 10)) }'
6841 bytes ::: 54.7280 Kb/s ( 6,841 bytes)
15053 bytes ::: 120.4240 Kb/s ( 15,053 bytes)
23677 bytes ::: 189.4160 Kb/s ( 23,677 bytes)
32839 bytes ::: 262.7120 Kb/s ( 32,839 bytes)
42293 bytes ::: 338.3440 Kb/s ( 42,293 bytes)
52183 bytes ::: 417.4640 Kb/s ( 52,183 bytes)
62233 bytes ::: 497.8640 Kb/s ( 62,233 bytes)
72733 bytes ::: 581.8640 Kb/s ( 72,733 bytes)
83269 bytes ::: 666.1520 Kb/s ( 83,269 bytes)
138641 bytes ::: 1.1091 Mb/s ( 138,641 bytes)
149767 bytes ::: 1.1981 Mb/s ( 149,767 bytes)
162011 bytes ::: 1.2961 Mb/s ( 162,011 bytes)
174221 bytes ::: 1.3938 Mb/s ( 174,221 bytes)
186343 bytes ::: 1.4907 Mb/s ( 186,343 bytes)
199181 bytes ::: 1.5934 Mb/s ( 199,181 bytes)
211559 bytes ::: 1.6925 Mb/s ( 211,559 bytes)
224449 bytes ::: 1.7956 Mb/s ( 224,449 bytes)
237733 bytes ::: 1.9019 Mb/s ( 237,733 bytes)
128260807 bytes ::: 1.0261 Gb/s ( 128,260,807 bytes)
128565049 bytes ::: 1.0285 Gb/s ( 128,565,049 bytes)
128932561 bytes ::: 1.0315 Gb/s ( 128,932,561 bytes)
129304523 bytes ::: 1.0344 Gb/s ( 129,304,523 bytes)
129765859 bytes ::: 1.0381 Gb/s ( 129,765,859 bytes)
130111459 bytes ::: 1.0409 Gb/s ( 130,111,459 bytes)
130533133 bytes ::: 1.0443 Gb/s ( 130,533,133 bytes)
131012801 bytes ::: 1.0481 Gb/s ( 131,012,801 bytes)
131305043 bytes ::: 1.0504 Gb/s ( 131,305,043 bytes)
128004093619 bytes ::: 1.0240 Tb/s ( 128,004,093,619 bytes)
128026268633 bytes ::: 1.0242 Tb/s ( 128,026,268,633 bytes)
128056111093 bytes ::: 1.0244 Tb/s ( 128,056,111,093 bytes)
128071706179 bytes ::: 1.0246 Tb/s ( 128,071,706,179 bytes)
128082430067 bytes ::: 1.0247 Tb/s ( 128,082,430,067 bytes)
128102475287 bytes ::: 1.0248 Tb/s ( 128,102,475,287 bytes)
128115312811 bytes ::: 1.0249 Tb/s ( 128,115,312,811 bytes)
128157555781 bytes ::: 1.0253 Tb/s ( 128,157,555,781 bytes)
128175556181 bytes ::: 1.0254 Tb/s ( 128,175,556,181 bytes)
128004004377827 bytes ::: 1.0240 Pb/s ( 128,004,004,377,827 bytes)
128040044659991 bytes ::: 1.0243 Pb/s ( 128,040,044,659,991 bytes)
128074066014953 bytes ::: 1.0246 Pb/s ( 128,074,066,014,953 bytes)
128127783733093 bytes ::: 1.0250 Pb/s ( 128,127,783,733,093 bytes)
128177777757611 bytes ::: 1.0254 Pb/s ( 128,177,777,757,611 bytes)
128200131001829 bytes ::: 1.0256 Pb/s ( 128,200,131,001,829 bytes)
128221782218423 bytes ::: 1.0258 Pb/s ( 128,221,782,218,423 bytes)
128237784424429 bytes ::: 1.0259 Pb/s ( 128,237,784,424,429 bytes)
128262808216561 bytes ::: 1.0261 Pb/s ( 128,262,808,216,561 bytes)
128055360778053559 bytes ::: 1.0244 Eb/s ( 128,055,360,778,053,559 bytes)
128082834342828077 bytes ::: 1.0247 Eb/s ( 128,082,834,342,828,077 bytes)
128112814740831073 bytes ::: 1.0249 Eb/s ( 128,112,814,740,831,073 bytes)
128172605482718161 bytes ::: 1.0254 Eb/s ( 128,172,605,482,718,161 bytes)
128203333333333399 bytes ::: 1.0256 Eb/s ( 128,203,333,333,333,399 bytes)
128240343634404269 bytes ::: 1.0259 Eb/s ( 128,240,343,634,404,269 bytes)
128272818280928081 bytes ::: 1.0262 Eb/s ( 128,272,818,280,928,081 bytes)
128282816070718271 bytes ::: 1.0263 Eb/s ( 128,282,816,070,718,271 bytes)
128289494449498271 bytes ::: 1.0263 Eb/s ( 128,289,494,449,498,271 bytes)
128030578058078030329 bytes ::: 1.0242 Zb/s ( 128,030,578,058,078,030,329 bytes)
128172161171772727271 bytes ::: 1.0254 Zb/s ( 128,172,161,171,772,727,271 bytes)
128234814212823481421 bytes ::: 1.0259 Zb/s ( 128,234,814,212,823,481,421 bytes)
128282727262616060507 bytes ::: 1.0263 Zb/s ( 128,282,727,262,616,060,507 bytes)
128286164949865319531 bytes ::: 1.0263 Zb/s ( 128,286,164,949,865,319,531 bytes)
128372737272827373721 bytes ::: 1.0270 Zb/s ( 128,372,737,272,827,373,721 bytes)
128393838393839382839 bytes ::: 1.0272 Zb/s ( 128,393,838,393,839,382,839 bytes)
128505500051850550037 bytes ::: 1.0280 Zb/s ( 128,505,500,051,850,550,037 bytes)
128669659758768758857 bytes ::: 1.0294 Zb/s ( 128,669,659,758,768,758,857 bytes)
130000000000093999992023 bytes ::: 1.0400 Yb/s ( 130,000,000,000,093,999,992,023 bytes)
131111111311113111311131 bytes ::: 1.0489 Yb/s ( 131,111,111,311,113,111,311,131 bytes)
131111153353153531553111 bytes ::: 1.0489 Yb/s ( 131,111,153,353,153,531,553,111 bytes)
131111531315333335313531 bytes ::: 1.0489 Yb/s ( 131,111,531,315,333,335,313,531 bytes)
131113133333311333331111 bytes ::: 1.0489 Yb/s ( 131,113,133,333,311,333,331,111 bytes)
131113551355135511111111 bytes ::: 1.0489 Yb/s ( 131,113,551,355,135,511,111,111 bytes)
131131113131113131131111 bytes ::: 1.0490 Yb/s ( 131,131,113,131,113,131,131,111 bytes)
131131331133111331313111 bytes ::: 1.0491 Yb/s ( 131,131,331,133,111,331,313,111 bytes)
131133133131333313331311 bytes ::: 1.0491 Yb/s ( 131,133,133,131,333,313,331,311 bytes)
Any idea what's wrong?
You have created an infinite loop, which causes the hang. The first version has
while( x>1000 ){ x/=1000; s++ }
where each iteration divides x by 1000, so it eventually reaches 1000 or below. The second version has
while( x>1000 ){ res = x * 8 / 1000; s++ }
where each iteration uses x to compute res, but the value of x itself never changes, so you get either zero executions of the while body or infinitely many.
Note that the behavior behind "But it hangs when I tried to execute it" can also be observed in languages other than GNU AWK that support a while construct, whenever the variable on which the truthiness of the condition depends is never changed inside the loop.

text formatting to a specific width

I wrote a script to show the down- and up-speed of my notebook with polybar. The problem I run into is lining up the output of echo.
At the moment my output looks like this (bash script looping in a terminal) ...
WLAN0: ⬇️ 14 MiB/s ⬆️ 16 KiB/s
WLAN0: ⬇️ 60 B/s ⬆️ 0 B/s
WLAN0: ⬇️ 120 B/s ⬆️ 120 B/s
But I want it lined up, like this ...
WLAN0: ⬇️ 14 MiB/s ⬆️ 16 KiB/s
WLAN0: ⬇️ 60 B/s ⬆️ 0 B/s
WLAN0: ⬇️ 120 B/s ⬆️ 120 B/s
The essence of my code is the following simplified line ...
echo "yada: ⬇️ $string1 ⬆️ $string2"
The variables each contain a number and a unit (up to 10 chars max), depending on the transfer speed.
So there should be a fixed-width field of at least 12 characters between the two emoji.
But I have no clue how, and I am hoping you can explain how to format to some kind of fixed width - with printf, I assume.
Align left with printf:
string1="14 MiB/s"; string2="16 KiB/s"
printf "yada: ⬇️ %-12s ⬆️ %-12s\n" "$string1" "$string2"
Output:
yada: ⬇️ 14 MiB/s ⬆️ 16 KiB/s
Here is how you can align the columns:
#!/usr/bin/env bash
dnIcon=$'\342\254\207\357\270\217'
upIcon=$'\342\254\206\357\270\217'
nMiBs=14
sMiBs="$nMiBs MiB/s"
nKiBs=16
sKiBs="$nKiBs KiB/s"
printf 'WLAN0: %s %-10s %s %-10s\n' "$dnIcon" "$sMiBs" "$upIcon" "$sKiBs"
Sample output:
WLAN0: ⬇️ 14 MiB/s ⬆️ 16 KiB/s
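One caveat worth adding to both answers (an assumption about your data, since the speed strings here happen to be ASCII): printf's field width counts bytes, not display columns, so padding comes up short when a padded field contains multibyte UTF-8 text. Keeping the emoji outside the padded fields, as both answers do, sidesteps this:

```shell
# %-6s pads the 3-byte ASCII string with 3 spaces, as expected.
printf '[%-6s]\n' 'abc'
# prints: [abc   ]
# A multibyte string, however, is padded by its byte length rather than its
# on-screen width, so non-ASCII content inside a padded field can misalign.
```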

How much RAM is actually available for applications in Linux?

I’m working on embedded Linux targets (32-bit ARM) and need to determine how much RAM is available for applications once the kernel and core software are launched. Available memory reported by free and /proc/meminfo don’t seem to align with what testing shows is actually usable by applications. Is there a way to correctly calculate how much RAM is truly available without running e.g., stress on each system?
The target system used in my tests below has 256 MB of RAM and does not use swap (CONFIG_SWAP is not set). I used the 3.14.79-rt85 kernel for these tests but have also tried 4.9.39 and see similar results. During boot, the following is reported:
Memory: 183172K/262144K available (5901K kernel code, 377K rwdata, 1876K rodata, 909K init, 453K bss, 78972K reserved)
Once system initialization is complete and the base software is running (e.g., dhcp client, ssh server, etc.), I get the following reported values:
[root@host ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 0 210016 320 7880 0 0 0 0 186 568 0 2 97 0 0
[root@host ~]# free -k
total used free shared buff/cache available
Mem: 249616 31484 209828 68 8304 172996
Swap: 0 0 0
[root@host ~]# cat /proc/meminfo
MemTotal: 249616 kB
MemFree: 209020 kB
MemAvailable: 172568 kB
Buffers: 712 kB
Cached: 4112 kB
SwapCached: 0 kB
Active: 4684 kB
Inactive: 2252 kB
Active(anon): 2120 kB
Inactive(anon): 68 kB
Active(file): 2564 kB
Inactive(file): 2184 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 2120 kB
Mapped: 3256 kB
Shmem: 68 kB
Slab: 13236 kB
SReclaimable: 4260 kB
SUnreclaim: 8976 kB
KernelStack: 864 kB
PageTables: 296 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 124808 kB
Committed_AS: 47944 kB
VmallocTotal: 1810432 kB
VmallocUsed: 3668 kB
VmallocChunk: 1803712 kB
[root@host ~]# sysctl -a | grep '^vm'
vm.admin_reserve_kbytes = 7119
vm.block_dump = 0
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.drop_caches = 3
vm.extfrag_threshold = 500
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 32
vm.max_map_count = 65530
vm.min_free_kbytes = 32768
vm.mmap_min_addr = 4096
vm.nr_pdflush_threads = 0
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.panic_on_oom = 0
vm.percpu_pagelist_fraction = 0
vm.scan_unevictable_pages = 0
vm.stat_interval = 1
vm.swappiness = 60
vm.user_reserve_kbytes = 7119
vm.vfs_cache_pressure = 100
Based on the numbers above, I expected to have ~160 MiB available for future applications. By tweaking sysctl vm.min_free_kbytes I can boost this to nearly 200 MiB since /proc/meminfo appears to take this reserve into account, but for testing I left it set as it is above.
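As a cross-check of that expectation, the optimistic sum of free plus reclaimable memory can be computed from /proc/meminfo. This is a deliberately simplified sketch of the kernel's MemAvailable heuristic (the real calculation also subtracts per-zone low watermarks, driven by vm.min_free_kbytes, and halves the cache/reclaimable estimates), which is roughly why the sum comes out above the reported MemAvailable:

```shell
# Optimistic estimate: free + reclaimable file cache + reclaimable slab (kB).
# Reads meminfo-formatted text on stdin, e.g.:  estimate_kb < /proc/meminfo
estimate_kb() {
  awk '
    /^MemFree:/          { free = $2 }
    /^Active\(file\):/   { af   = $2 }
    /^Inactive\(file\):/ { inf  = $2 }
    /^SReclaimable:/     { sr   = $2 }
    END { print free + af + inf + sr }'
}
```

Fed the idle numbers above, this gives 209020 + 2564 + 2184 + 4260 = 218028 kB, versus the 172568 kB MemAvailable the kernel reports; most of the gap is watermark headroom.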
To test how much RAM was actually available, I used the stress tool as follows:
stress --vm 11 --vm-bytes 10M --vm-keep --timeout 5s
At 110 MiB, the system remains responsive and both free and vmstat reflect the increased RAM usage. The lowest reported free/available values are below:
[root@host ~]# free -k
total used free shared buff/cache available
Mem: 249616 146580 93196 68 9840 57124
Swap: 0 0 0
[root@host ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
11 0 0 93204 1792 8048 0 0 0 0 240 679 50 0 50 0 0
Here is where things start to break down. After increasing stress’ memory usage to 120 MiB - still well shy of the 168 MiB reported as available - the system freezes for the 5 seconds while stress is running. Continuously running vmstat during the test (or as continuously as possible due to the freeze) shows:
[root@host ~]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 209664 724 6336 0 0 0 0 237 666 0 1 99 0 0
3 0 0 121916 1024 6724 0 0 289 0 1088 22437 0 45 54 0 0
1 0 0 208120 1328 7128 0 0 1652 0 4431 43519 28 22 50 0 0
Due to the significant increase in interrupts and IO, I’m guessing the kernel is evicting pages containing executable code and then promptly needing to read them back in from flash. My questions are a) is this a correct assessment? and b) why would the kernel be doing this with RAM still available?
Note that if I try to use a single worker with stress and claim 160 MiB of memory, the OOM killer gets activated and kills the test. The OOM killer does not trigger in the scenarios described above.

Is it "growpart" or "resize2fs" for these new c5 instances?

$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 4G 0 loop /var/tmp
nvme0n1 259:0 0 500G 0 disk
├─nvme0n1p1 259:1 0 1M 0 part
└─nvme0n1p2 259:2 0 300G 0 part /
$ sudo fdisk -l /dev/nvme0n1p2
Disk /dev/nvme0n1p2: 322.1 GB, 322120433152 bytes, 629141471 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Thanks!
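A sketch of the usual sequence on these NVMe-named instances, assuming the layout in the lsblk output above (ext4 on nvme0n1p2): it takes both tools, since growpart only resizes the partition table entry and resize2fs only resizes the filesystem inside it.

```shell
# Grow partition 2 of the NVMe disk to fill the device (growpart is from
# cloud-utils), then grow the ext4 filesystem to fill the partition.
sudo growpart /dev/nvme0n1 2
sudo resize2fs /dev/nvme0n1p2
```

These commands are device-specific and need root, so treat them as a template against your own lsblk output rather than something to paste verbatim.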

Cut -b does not stop at expected point

I'm trying to extract a range of bytes from a file. The file contains continuous 16-bit sample data. I would think cut -b should work, but I am getting errors in the data.
Extracting 20 bytes (10 samples):
cut -b188231561-188231580 file.dat > out.dat
I expect it to create a 20-byte file with 10 samples (the last sample should be the -79). However, it creates a 5749-byte file with the following contents (displayed using od -s):
0000000 -69 -87 -75 -68 -83 -94 -68 -67
0000020 -82 -79 2570 2570 2570 2570 2570 2570
0000040 2570 2570 2570 2570 2570 2570 2570 2570
*
0013140 -65 -67 -69 -69 -71 -66 -72 -68
0013160 -69 -80 10
0013165
As you can see, there is a whole bunch of repeated 2570 values where cut was supposed to stop.
What am I doing wrong here? Also, the Wikipedia article on cut says -b is limited to 1023-byte lines, although the man page for cut doesn't mention this limitation.
Is there a better bash command to extract bytes N-M from a binary file? I already wrote a Perl script to do it; I'm just curious.
cut -b selects bytes from each line; it can't be used to get bytes from the file as a whole. cut reads the input line by line, extracts the requested byte range from each line (most lines here are far shorter than byte 188231561, so nothing is extracted), and appends a newline to every output line. The repeated 2570 values are pairs of those newlines: 0x0A 0x0A read as a little-endian 16-bit sample is 2570.
You can use head/tail instead:
N=120
M=143
tail -c +$N file | head -c $((M-N+1))
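Another sketch uses dd, which on a regular file can seek past the prefix instead of streaming it (bs=1 is simple but slow for long ranges; GNU dd also accepts a larger bs with iflag=skip_bytes,count_bytes):

```shell
# Extract bytes N..M (1-indexed, inclusive): skip N-1 bytes, copy M-N+1.
printf 'abcdefghij' > demo.dat     # hypothetical sample file
N=3; M=5
dd if=demo.dat bs=1 skip=$((N-1)) count=$((M-N+1)) 2>/dev/null
# prints: cde
```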
