I am looking for the simplest way to parse the disks in a zpool into a space-separated list.
For example, the output below shows the zpool information. Is there any command to get the list of physical disks only?
# zpool status pool
  pool: pool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool        ONLINE       0     0     0
          c2d44s2   ONLINE       0     0     0
          c2d45s2   ONLINE       0     0     0
          c2d46s2   ONLINE       0     0     0

errors: No known data errors
This should work, although it might need some fixes for complex zpool status output:
# cat parsezs
awk '
NF != 5 {next}                        # only the config table rows have 5 fields
$1 == "NAME" {getline; show=1; next}  # skip the header and the pool line, then start printing
$1 ~ "mirror" {next}                  # skip vdev grouping lines
$1 ~ "raidz" {next}
$1 ~ "replacing" {next}
$1 ~ "error" {next}
show == 1 {printf("%s ",$1)}          # print each remaining device name
END {printf("\n")}'
# zpool status pool | parsezs
c2d44s2 c2d45s2 c2d46s2
This will also work and shows you the pool name too. Note that you need nawk, which is a more modern version of Awk on Solaris:
zpool status | nawk 'BEGIN{Disp=0}{if($1=="pool:") {if(Disp!=0) print ""; else Disp = 1; printf("%s ",$2)} ; if($1~"^c[0-9]") printf("%s ",$1)}END{print ""}'
The Disp variable is just to tidy up the output. This is a typical result:
js_data_san c0t6006016049B13A00B337B4F7F1DDE411d0 c0t6006016093003B0022E5A8A8C833E711d0
rpool c0t5000CCA07D07C764d0 c0t5000CCA07D07C514d0
s10patchchk-ospool c0t6006016093003B00B488CFCF10D8E611d0
vmware_ds_nfs01 c0t6006016049B13A005AE1A9648112E511d0
So in that example, rpool and js_data_san each have two devices. It doesn't indicate whether they are mirrored or concatenated, but that would be easy to change in the script.
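For readability, here is the same nawk one-liner laid out over multiple lines with comments (functionally identical):

zpool status | nawk '
  BEGIN { Disp = 0 }
  # new pool stanza: print a newline before every pool except the first,
  # then start the output line with the pool name
  $1 == "pool:" {
      if (Disp != 0) print ""; else Disp = 1
      printf("%s ", $2)
  }
  # device lines start with a controller-style name such as c2d44s2 or c0t...d0
  $1 ~ "^c[0-9]" { printf("%s ", $1) }
  END { print "" }'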
I have a .txt file, referenced by $outlier_file, with the numeric indices of certain 'outlier' data points, each on its own line:
1
7
30
43
48
49
56
57
65
Using the following while + read loop, I can successfully remove certain files (volumes of neuroimaging data in this case):
while read outlier; do
# Remove current outlier vol from eddy unwarped DWI data
rm $DWI_path/$1/vol000*"$outlier".nii.gz;
done < $outlier_file
However, I also need to remove the numbers located at these 'outlier' indices from another text file stored in $bvec_file, which has 69 columns and 3 rows. Within each row, the numbers are space-delimited. So for this example, I need to remove columns 1, 7, 30, etc. from all 3 rows and then save this version, with the outliers removed, to a new *.txt file.
0 0.9988864166 -0.0415925034 -0.06652866169 -0.6187155495 0.2291534462 0.8892356214 0.7797364286 0.1957395685 0.9236669465 -0.5400265342 -0.3845263463 -0.4903989539 0.4863306385 -0.6496130843 0.5571164636 0.8110081715 0.9032142094 -0.3234596075 -0.1551409525 -0.806059879 0.4811597826 -0.7820757748 -0.9528881463 0.1916556621 -0.007136403284 -0.2459431735 -0.7915263574 -0.1938049261 -0.1578786349 0.8688043633 -0.5546072294 -0.4019951732 0.2806154851 0.3478762022 0.9548067252 -0.9696777541 -0.4816255837 -0.7962240023 0.6818610905 0.7097978218 0.6739686799 0.1317547111 -0.7648252249 -0.1456021218 -0.5948047487 0.0934205064 0.5268769564 -0.8618324858 -0.3721029232 -0.1827616535 0.691353613 0.4159071597 0.4605505287 0.1312199424 0.426674893 -0.4068291509 0.7167859082 0.2330824665 0.01909161256 -0.06375254731 -0.5981122948 -0.2672253674 0.6875472994 0.2302943724 0 0 0 0
0 0.04258194557 0.9988207007 0.6287131425 0.7469024143 0.5528476637 0.3024964957 0.1446931241 0.9305823612 0.1675139932 0.8208211337 0.8238722992 0.5983722761 0.4238174961 0.639429196 0.1072148887 0.5551578885 0.003337599176 0.511740508 0.9516619405 0.3851404227 0.8526321065 0.1390947346 0.2030449535 0.7759459569 0.165587903 0.9523372297 0.5801228933 0.3277276562 0.7413928896 0.442482978 0.2320585706 0.1079269171 0.1868672655 0.1606136006 0.2968573235 0.1682337977 0.8745679247 0.5989061899 0.4172933119 0.01746934331 0.5641480832 0.7455469091 0.3471016571 0.8035001467 0.5870623128 0.361107261 0.8192579877 0.4160218909 0.5651330299 0.4070513153 0.7221181184 0.714223583 0.6971767133 0.4937978446 0.4232911691 0.8011701162 0.2870385494 0.9016941521 0.09688949547 0.9086826131 0.2631932421 0.152678096 0.6295753848 0.9712458578 0 0 0 0
0 -0.02031513434 -0.02504539005 -0.7747862425 0.2435730944 0.8011542666 0.343155766 -0.6091592581 -0.3093581909 -0.3446424728 -0.1860752773 -0.4163819443 -0.6336083058 0.7641081337 -0.4112580017 -0.8234841915 0.1845683194 0.4291770641 -0.7959243273 -0.2650864686 0.449371034 -0.203724703 0.6074620459 0.2253373638 -0.6009791836 -0.9861692137 0.1804598471 0.1922068008 -0.9246806119 0.6522353256 -0.2222336438 0.7990992685 -0.9092588527 -0.9414539684 0.9236803664 0.0148272357 -0.1772637652 0.05628269894 -0.08566629406 -0.6007759525 0.7041888058 0.4769729119 0.6532997034 -0.5427364139 -0.5772239915 0.5491494803 0.9278330427 0.2263117816 -0.290121617 0.7363179158 0.8949343019 -0.02399176716 0.5629439653 -0.5493977074 -0.8596191107 -0.7992328333 0.4388809483 0.6354737076 0.3641705918 0.9951120218 0.412591228 -0.75696169 0.9514620339 -0.3618197699 0.06038199928 0 0 0 0
The furthest I've gotten with one approach is using awk to index the right columns (just printing them for now), but I can only get this to work if I hard-code $1 (i.e., the numeric index of the first outlier column):
awk -F ' ' '{print $1}' $bvec_file
If I try to refer to the value in $outlier, it doesn't work; instead, this prints the entire contents of $bvec_file:
while read outlier; do
# Remove current outlier vol from eddy unwarped DWI data
rm $DWI_path/$1/vol000*"$outlier".nii.gz;
# Remove outlier #'s from bvec file
awk -F ' ' '{print $1}' $bvec_file
done < $outlier_file
I am completely stuck on how to get this done. Any advice would be greatly appreciated.
To delete the outliers from bvec_file after the loop and only delete the ones where the associated file was successfully removed:
#!/usr/bin/env bash

tmp=$(mktemp) || exit 1
while IFS= read -r outlier; do
    # Remove current outlier vol from eddy unwarped DWI data
    rm "$DWI_path/$1"/vol000*"$outlier".nii.gz &&
        echo "$outlier"
done < "$outlier_file" |
awk '
    NR==FNR { os[$0]; next }
    {
        for (o in os) {
            $o=""
        }
        $0=$0; $1=$1
    }
1' - "$bvec_file" > "$tmp" &&
mv "$tmp" "$bvec_file"
Or to delete the outliers one at a time as the files are removed:
#!/usr/bin/env bash

tmp=$(mktemp) || exit 1
while IFS= read -r outlier; do
    # Remove current outlier vol from eddy unwarped DWI data
    rm "$DWI_path/$1"/vol000*"$outlier".nii.gz &&
        # Remove outlier #'s from bvec file
        awk -v o="$outlier" '{$o=""; $0=$0; $1=$1} 1' "$bvec_file" > "$tmp" &&
        mv "$tmp" "$bvec_file"
done < <(sort -rnu "$outlier_file")
Always quote your shell variables (see https://mywiki.wooledge.org/Quotes). The && at the end of each line ensures the next command only runs if the previous command succeeded. In the second version, sort -rnu feeds the outliers to the loop in descending order (and deduplicated), so deleting one column never shifts the position of a column that still has to be deleted.
The magical incantation in the awk script does the following. Let's say your input is a b c and the outlier field is field number 2, b:
$ echo 'a b c'
a b c
$
$ echo 'a b c' | awk -v o=2 '{$o=""; print NF ":", $0}'
3: a  c
$
$ echo 'a b c' | awk -v o=2 '{$o=""; $0=$0; print NF ":", $0}'
2: a  c
$
$ echo 'a b c' | awk -v o=2 '{$o=""; $0=$0; $1=$1; print NF ":", $0}'
2: a c
The $o="" sets the field's value to null. The $0=$0 forces awk to resplit $0 into fields, so it effectively deletes field 2 (as opposed to the previous step, which set it to null but left it in place). The $1=$1 then recombines $0 from its fields, replacing every FS (any contiguous run of whitespace, including the 2 blanks now between a and c) with OFS (a single blank char).
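To see the whole NR==FNR mechanism in one place, here is a toy run (cols.txt and data.txt are hypothetical files; columns 2 and 4 are deleted from every row):

$ printf '2\n4\n' > cols.txt
$ printf 'a b c d e\nf g h i j\n' > data.txt
$ awk 'NR==FNR { os[$0]; next } { for (o in os) $o=""; $0=$0; $1=$1 } 1' cols.txt data.txt
a c e
f h j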
Team, I have the command below that is working fine, but I am enhancing it to get a result like the following.
My goal is to report the count and display a statement with it.
I have three conditions to be met:
1 - if the result is 0: mounts not found
2 - if the result is 1-64: mounts found, 64 or under
3 - if the result is over 64: mounts found above 64
if count is 0 I want to output:
0 mounts found on this hostname
if 1-64 mounts are found, then I want to say whatever number is found:
x mounts found on hostname.
if more than 64 mounts are found, then I want to say
x mounts found on hostname that are more than 64
mount | grep csi | grep -e /dev/sd | wc -l && echo "mounts found on $HOSTNAME"
I am trying to learn how to compare the returned count to 64 and display the statement accordingly. I need a single-line shell command for all of this, not multiple lines, because I need to fit it in an Ansible shell module.
sample output:
mount | grep csi
tmpfs on /var/lib/kubelet/pods/abaa868f-2109-11ea-a1f8-ac1f6b5995dc/volumes/kubernetes.io~secret/csi-nodeplugin-token-type tmpfs (rw,relatime)
/host/dev/sdc on /var/lib/kubelet/pods/11ea-a1f8-ac1f6b5995dc/volumes/kubernetes.io~csi/ea6728b2-08d0-5fb7-b93a-5f63e49f770c/mount type iso9660 (ro,relatime,nojoliet,check=s,map=n,blocksize=2048,fsc,readahead=4096)
mount | grep csi | grep /dev/sd
/host/dev/sdc on /var/lib/kubelet/pods/11ea-a1f8-ac1f6b5995dc/volumes/kubernetes.io~csi/b93a-5f63e49f770c/mount type iso9660 (ro,relatime,nojoliet,check=s,map=n,blocksize=2048,fsc,readahead=4096)
Any hint why the attempt below is not working?
Tried solution: with awk and comparison operators
mount | grep -Ec '/dev/sd.*\<csi' | awk '$0 = 0 { printf "No mounts found", $0,"TRUE" ; } ($0 > 0 && $0 <= 64) { print "Mounts are less than 64", $0 ;} $0 > 64 { print "Mounts are more than 64", $0 ;}'
output:
node1
expected:
node1 No mounts found
With an extended and optimized pipeline (note that $0 = 0 in your attempt is an assignment, not a comparison; the equality operator is ==):
mount | grep -Ec '/dev/sd.*\<csi' \
| awk '{ print $0,"mounts found on hostname"($0>64? " that are more than 64." : ".") }'
grep's -c option: suppress normal output; instead print a count of matching lines.
The symbols \< and \> respectively match the empty string at the beginning and end of a word.
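As a quick illustration of that word boundary, on a shortened line shaped like your sample (the path components x and y are placeholders): ~ is not a word character, so \<csi still matches inside kubernetes.io~csi:

$ echo '/host/dev/sdc on /var/lib/kubelet/pods/x/volumes/kubernetes.io~csi/y/mount type iso9660' | grep -Ec '/dev/sd.*\<csi'
1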
Need help with "printf" and "for" loop.
I have individual files, each named after a user (e.g. john.txt, david.txt), that contain the various commands each user ran. Examples of commands are SUCCESS, TERMINATED, FAIL, etc. The files have multiple lines with various text, but each line contains one of the commands (1 command per line).
Sample:
command: sendevent "-F" "SUCCESS" "-J" "xxx-ddddddddddddd"
command: sendevent "-F" "TERMINATED" "-J" "xxxxxxxxxxx-dddddddddddddd"
I need to go through each file, count the number of each command and put it in another output file in this format:
==== John ====
SUCCESS - 3
TERMINATED - 2
FAIL - 4
TOTAL 9
==== David ====
SUCCESS - 1
TERMINATED - 1
FAIL - 2
TOTAL 4
P.S. This code can be made more compact, e.g. there is no need to use so many echo calls, but the following structure is being used to make it clear what's happening:
ls | grep .txt | sed 's/.txt//' > names
for s in $(cat names)
do
    suc=$(grep "SUCCESS" "$s.txt" | wc -l)
    termi=$(grep "TERMINATED" "$s.txt" | wc -l)
    fail=$(grep "FAIL" "$s.txt" | wc -l)
    echo "=== $s ===" >> docs
    echo "SUCCESS - $suc" >> docs
    echo "TERMINATED - $termi" >> docs
    echo "FAIL - $fail" >> docs
    echo "TOTAL $(($termi+$fail+$suc))" >> docs
done
Output from my test files was like :
===new===
SUCCESS - 0
TERMINATED - 0
FAIL - 0
TOTAL 0
===vv===
SUCCESS - 0
TERMINATED - 0
FAIL - 0
TOTAL 0
Based on karafka's suggestion, instead of using the lines above to drive the for loop, you can directly use the following:
for f in *.txt
do
    # ... count the commands in "$f" here ...
    # to print the required name without the .txt extension:
    printf "%s\n" "${f::(-4)}"
done
awk to the rescue!
$ awk -vOFS=" - " 'function pr() {s=0;
for(k in a) {s+=a[k]; print k,a[k]};
print "\nTOTAL "s"\n\n\n"}
NR!=1 && FNR==1 {pr(); delete a}
FNR==1 {print "==== " FILENAME " ===="}
{a[$4]++}
END {pr()}' file1 file2 ...
If your input file is not structured (i.e. the key is not always in the fourth field), you can do the same with a pattern match.
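For example, a sketch of that pattern-match variant (assuming every relevant line contains one of SUCCESS, TERMINATED or FAIL somewhere on the line):

$ awk -vOFS=" - " 'function pr() {s=0;
    for(k in a) {s+=a[k]; print k,a[k]};
    print "\nTOTAL "s"\n\n\n"}
  NR!=1 && FNR==1 {pr(); delete a}
  FNR==1 {print "==== " FILENAME " ===="}
  match($0, /SUCCESS|TERMINATED|FAIL/) {a[substr($0, RSTART, RLENGTH)]++}
  END {pr()}' file1 file2 ...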
I have this command which will output 0, 1 or 2.
This line of code is part of a config file (Zabbix), which is the only reason it is a one-liner.
mysql -u root -e "show slave status\G" | \
grep -E 'Slave_IO_Running:|Slave_SQL_Running:' | cut -f2 -d':' | \
sed "s/No/0/;s/Yes/1/" | awk '{ SUM += $1} END { print SUM }'
But I want the output values to be like this, so I can set up an alert with the correct status:
If only Slave_IO_Running is No then output 1.
If only Slave_SQL_Running is No then output 2.
If both are Yes then output 3.
If both are No then output 0.
If no lines/output from show slave status command then output 4.
So I need something like replacing the first No with a unique value using sed or awk, the second with another unique value, and so on.
Output of show slave status\G
mysql> show slave status\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.10.10.10
Master_User: replicationslave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.009081
Read_Master_Log_Pos: 856648307
Relay_Log_File: mysqld-relay-bin.002513
Relay_Log_Pos: 1431694
Relay_Master_Log_File: mysql-bin.009081
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
You can do all the string processing here in awk:
mysql -u root -e "show slave status\G" | awk 'BEGIN {output=0} /Slave_IO_Running.*No/ {output+=1} /Slave_SQL_Running.*No/ {output+=2} END {if(output==3){print 0} else if(output==0){print 3} else {print output}}'
This starts the output counter at 0. If we match Slave_IO_Running with No, we add 1; if we match Slave_SQL_Running with No, we add 2. At the end we print the total, which will be 0 if neither matched, 1 if only IO is No, 2 if only SQL is No, and 3 if both are No. Since you want to print 0 when both are Yes, we reverse the count at the end: if we got 3 then both were No, so print 0; if we got 0, print 3; otherwise print the value itself.
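As a quick sanity check, feeding a minimal both-Yes sample through it (printf standing in for the mysql output) prints 3 as expected:

printf 'Slave_IO_Running: Yes\nSlave_SQL_Running: Yes\n' | awk 'BEGIN {output=0} /Slave_IO_Running.*No/ {output+=1} /Slave_SQL_Running.*No/ {output+=2} END {if(output==3){print 0} else if(output==0){print 3} else {print output}}'
3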
The following awk code could be compacted into a single line if you feel the urge to do that:
awk -F: -v ret=0 '
/Slave_IO_Running:.*No/ { ret=1 }
/Slave_IO_Running:.*Yes/ { yes++ }
/Slave_SQL_Running:.*No/ { ret=(ret==1) ? 0 : 2 }
/Slave_SQL_Running:.*Yes/ { yes++ }
END { print (yes==2) ? 3 : ret }
'
No grep, cut, or sed is required; this takes the output of your mysql command directly. It also assumes that Slave_IO_Running always appears before Slave_SQL_Running in the output of your command.
The ternary notation in the third line and the last line functions as an inline "if" statement: if the value of ret equals 1, set ret to 0; otherwise set ret to 2.
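Compacted onto a single line for the Zabbix config, the same script would look like this (functionally identical to the block above):

mysql -u root -e "show slave status\G" | awk -F: -v ret=0 '/Slave_IO_Running:.*No/{ret=1} /Slave_IO_Running:.*Yes/{yes++} /Slave_SQL_Running:.*No/{ret=(ret==1)?0:2} /Slave_SQL_Running:.*Yes/{yes++} END{print (yes==2)?3:ret}'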
Whenever you have name-to-value pairs in your data, it's usually clearest, simplest, and easiest to enhance later if you first create an array mapping the names to the values and then access the values by their names, e.g.:
awk '
{ f[$1]=$2 }
END {
    if (f["Slave_IO_Running:"] == "Yes")
        rslt = (f["Slave_SQL_Running:"] == "Yes" ? 3 : 2)
    else
        rslt = (f["Slave_SQL_Running:"] == "Yes" ? 1 : 0)
    print rslt
}
' file
1
I have a text file with the following information:
Filesystem Use%
/dev/sda1 44%
/dev/sda7 35%
/dev/sda3 2%
/dev/sda2 5%
/dev/sda5 47%
tmpfs 0%
Now, I want to make a script that reads this text file, stores the numbers from lines 2, 3, 4, 5 and 6 into some variables, and then compares these numbers with a specific value set by me. The comparison would be something like this:
variable = 44
if variable > 90
then it presents a console message with the whole line for the stored variable.
variabletwo = 35
if variabletwo > 90
then it presents a console message with the whole line for the stored variable.
and so on...
Can someone help me please?
awk will ignore the trailing % when it converts the field to a number, so if you force that conversion (for example with $2+0) you can just do:
awk 'NR > 1 && NR < 7 && $2+0 > 90' input-file
to print each line (restricted to lines 2 through 6) in which the second field is greater than 90. You probably want a better way to restrict the lines, though. Possibly:
awk '$1 ~ /^\/dev/ && $2+0 > 90' input-file
If you want to include more text, do something like:
awk '$1 ~ /^\/dev/ && $2+0 > 90 { print $1 " is over the limit: " $2 }' input-file
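For example, with the limit dropped to 40 (purely so the sample data above produces some output), that would print:

$ awk '$1 ~ /^\/dev/ && $2+0 > 40 { print $1 " is over the limit: " $2 }' input-file
/dev/sda1 is over the limit: 44%
/dev/sda5 is over the limit: 47%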
A regular script in pure bash:
#!/bin/bash
THRESHOLD=20
FILE_TO_TEST="/your/inputfile/here"

{
    read                # skip the header line
    while read DISK USED
    do
        [[ ${USED/'%'/} -gt ${THRESHOLD} ]] && echo "$DISK" "$USED"
    done
} < "$FILE_TO_TEST"
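Run with FILE_TO_TEST pointing at the sample file above and THRESHOLD=20, it prints the three filesystems over the threshold (check_disks.sh is a hypothetical name for the saved script):

$ bash check_disks.sh
/dev/sda1 44%
/dev/sda7 35%
/dev/sda5 47%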