Incomplete output variable stored - bash

I'm working with a small script that uses a command from an EMC NAS storage system. The main idea is to store a command's output in a variable and use it in another command.
nameserver="$(nas_server -list -all | awk 'NR == 3 {print $6}')"
serverparam1="$(server_param "$nameserver" -facility NDMP -list)"
echo "$serverparam1"
So, this command, nas_server -list -all | awk 'NR == 3 {print $6}', returns "server_3".
The idea is to store the name "server_3" and use it in this other command:
server_param server_3 -facility NDMP -list
The problem with all this is that the printed output is not "server_3"; I only get "ver_3", and I don't know why this is happening.
This is the output of the terminal:
[nasadmin@xxxx ~]$ ./test.sh
: ver_3
: unknown hostver_3
This is the output from server_param
[nasadmin@xxxx ~]$ server_param server_3 -facility NDMP -list
server_3 :
param_name facility default current configured
maxProtocolVersion NDMP 4 4
scsiReserve NDMP 0 0
DHSMPassthrough NDMP 0 0
CDBFsinfoBufSizeInKB NDMP 1024 1024
noxlt NDMP 0 0
bufsz NDMP 128 128
convDialect NDMP 8859-1 8859-1
concurrentDataStreams NDMP 4 4
includeCkptFs NDMP 1 1
md5 NDMP 1 1
snapTimeout NDMP 5 5
dialect NDMP
forceRecursiveForNonDAR NDMP 0 0
excludeSvtlFs NDMP 1 1
tapeSilveringStr NDMP ts ts
portRange NDMP 1024-65535 1024-65535
snapsure NDMP 0 0
v4OldTapeCompatible NDMP 1 1
[nasadmin@xxxx ~]$ nas_server -list -all
id type acl slot groupID state name
1 1 0 2 0 server_2
2 4 0 3 0 server_3
id acl server mountedfs rootfs name
1 0 1 17 13 TEST_VDM-1
2 0 1 16 14 TEST_VDM-2
Thanks

This worked for me:
nameserver="$(nas_server -list -all | awk 'NR == 5 {print $6}')"
nameserver1="$(printf '%s' "$nameserver" | dos2unix)"   # dos2unix filters stdin to stdout when given no file argument
serverparam1="$(server_param "$nameserver1" -facility NDMP -list)"
echo "$serverparam1"
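The truncated "ver_3" is the classic symptom of a trailing carriage return (\r) in the captured value: when the value is printed, the \r sends the cursor back to the start of the line and the text that follows overwrites the first characters. A minimal sketch that strips it with plain parameter expansion instead of dos2unix (assuming the stray character really is a trailing \r):
nameserver="$(nas_server -list -all | awk 'NR == 5 {print $6}')"
nameserver="${nameserver%$'\r'}"   # drop one trailing carriage return, if present
server_param "$nameserver" -facility NDMP -list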

Related

Splitting a large file containing multiple molecules

I have a file that contains 10,000 molecules. Each molecule ends with the keyword $$$$. I want to split the main file into 10,000 separate files so that each file has only one molecule. Each molecule has a different number of lines. I have tried sed on test_file.txt as:
sed '/$$$$/q' test_file.txt > out.txt
input:
$ cat test_file.txt
ashu
vishu
jyoti
$$$$
Jatin
Vishal
Shivani
$$$$
output:
$ cat out.txt
ashu
vishu
jyoti
$$$$
I can loop through the whole main file to create 10,000 separate files, but how do I delete from the main file the last molecule that was just moved to a new file? Or please suggest a better method for this, which I believe there is. Thanks.
Edit1:
$ cat short_library.sdf
untitled.cdx
csChFnd80/09142214492D
31 34 0 0 0 0 0 0 0 0999 V2000
8.4660 6.2927 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
8.4660 4.8927 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1.2124 2.0951 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
2.4249 2.7951 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
1 2 2 0 0 0 0
2 3 1 0 0 0 0
30 31 1 0 0 0 0
31 26 1 0 0 0 0
M END
> <Mol_ID> (1)
1
> <Formula> (1)
C22H24ClFN4O3
> <URL> (1)
http://www.selleckchem.com/products/Gefitinib.html
$$$$
Dimesna.cdx
csChFnd80/09142214492D
16 13 0 0 0 0 0 0 0 0999 V2000
2.4249 1.4000 0.0000 S 0 0 0 0 0 0 0 0 0 0 0 0
3.6415 2.1024 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
4.8540 1.4024 0.0000 C 0 0 0 0 0 0 0 0 0 0 0 0
5.4904 1.7512 0.0000 Na 0 3 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
2 3 1 0 0 0 0
1 14 2 0 0 0 0
M END
> <Mol_ID> (2)
2
> <Formula> (2)
C4H8Na2O6S4
> <URL> (2)
http://www.selleckchem.com/products/Dimesna.html
$$$$
Here's a simple solution with standard awk:
LANG=C awk '
    { mol = (mol == "" ? $0 : mol "\n" $0) }
    /^\$\$\$\$\r?$/ {
        outFile = "molecule" ++fn ".sdf"
        print mol > outFile
        close(outFile)
        mol = ""
    }
' input.sdf
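Each molecule is buffered in mol and flushed to its own numbered file when the $$$$ terminator line is seen; the close(outFile) call keeps the number of simultaneously open files bounded, which matters when writing 10,000 of them.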
If you have csplit from GNU coreutils:
csplit -s -z -n5 -fmolecule test_file.txt '/^$$$$$/+1' '{*}'
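For reference: -s suppresses the byte counts csplit normally prints, -z elides empty output files, -n5 pads the numeric suffix to five digits, -f sets the molecule filename prefix, and '{*}' repeats the pattern as many times as the input allows.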
This will do the whole job directly in bash:
molsplit.sh
#!/bin/bash
filenum=0
end=1
while read -r line; do
    if [[ $end -eq 1 ]]; then
        end=0
        filenum=$((filenum + 1))
        exec 3>"molecule${filenum}.sdf"
    fi
    echo "$line" 1>&3
    if [[ "$line" = '$$$$' ]]; then
        end=1
        exec 3>&-
    fi
done
Input is read from stdin, though that would be easy enough to change. Something like this:
./molsplit.sh < test_file.txt
ADDENDUM
From subsequent commentary, it seems that the input file being processed has Windows line endings, whereas the processing environment's native line ending format is UNIX-style. In that case, if the line-termination style is to be preserved then we need to modify how the delimiters are recognized. For example, this variation on the above will recognize any line that starts with $$$$ as a molecule delimiter:
#!/bin/bash
filenum=0
end=1
while read -r line; do
    if [[ $end -eq 1 ]]; then
        end=0
        filenum=$((filenum + 1))
        exec 3>"molecule${filenum}.sdf"
    fi
    echo "$line" 1>&3
    case $line in
        '$$$$'*) end=1; exec 3>&-;;
    esac
done
The same statement that sets the current output file name also closes the previous one. close(_)^_ here is the same as close(_)^0, which always evaluates to 1, so the filename increments for the next file even if the close() call returned an error.
If the output file naming scheme allows for leading zeros, change that bit to close(_)^(_<_), which ALWAYS results in 1 for any possible string or number, including all forms of zero, the empty string, infinities, and NaNs.
mawk2 'BEGIN { getline __ < (_ = "/dev/null")
               ORS = RS = "[$][$][$][$][\r]?" (FS = RS)
               __ *= gsub("[^$\n]+", __, ORS)
} NF {
    print > (_ = "mol" (__ += close(_)^_) ".txt") }' test_file.txt
The first part, the getline from /dev/null, neither sets $0 / NF nor modifies NR / FNR, but its existence ensures that the first time close(_) is called it doesn't error out.
gcat -n mol12345.txt
1 Shivani
2 jyoti
3 Shivani
4 $$$$
It was reasonably speedy: from a 5.60 MB synthetic test file it created 187,710 files in 11.652 secs.

Merging many files based on matching column

I have many files (I posted 5 as an example). If there is no match with the 1st file, then 0 should be appended in the output.
file1
1001 1 2
1002 1 2
1003 3 5
1004 6 7
1005 8 9
1009 2 3
file2
1002 7
1003 8
file3
1001 5
1002 3
file4
1002 10
1004 60
1007 4
file5
1001 102
1003 305
1005 809
output desired
1001 1 2 0 5 0 102
1002 1 2 7 3 10 0
1003 3 5 8 0 0 305
1004 6 7 0 0 60 0
1005 8 9 0 0 0 809
1007 0 0 0 0 4 0
1009 2 3 0 0 0 0
Using the code below I can merge two files, BUT how do I merge them all?
awk 'FNR==NR{a[$1]=$2;next}{print $0,a[$1]?a[$1]:"0"}' file2 file1
1001 1 2 0
1002 1 2 7
1003 3 5 8
1004 6 7 0
1005 8 9 0
Thanks in advance
GNU Join to the rescue!
$ join -a1 -a2 -e '0' -o auto file1 file2 \
| join -a1 -a2 -e '0' -o auto - file3 \
| join -a1 -a2 -e '0' -o auto - file4 \
| join -a1 -a2 -e '0' -o auto - file5
The options -a1 and -a2 tell join to also print unpairable lines from file 1 and file 2, and -e '0' tells it to fill the missing fields with a zero. The output format is specified with -o auto, which takes all fields.
When you have a large number of files, you cannot use the pipeline construct, but you can use a simple for loop:
out=output
tmp=$(mktemp)
: > "$out"   # start from an empty (but existing) file
for file in f*; do
    join -a1 -a2 -e0 -o auto "$out" "$file" > "$tmp"
    mv "$tmp" "$out"
done
cat "$out"
Or, if you really like the pipeline:
pipeline="cat /dev/null"
for file in f*; do pipeline="$pipeline | join -a1 -a2 -e0 -o auto - $file"; done
eval "$pipeline"
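Note that eval runs the constructed string through the shell, so this variant is only safe when the filenames contain no whitespace or shell metacharacters.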
Very much of interest here: Is there a limit on how many pipes I can use?
Remark: the usage of auto is extremely useful in this case, but it is not part of the POSIX standard; it is a GNU extension that ships with GNU coreutils. A pure POSIX version reads a bit more cumbersomely:
$ join -a1 -a2 -e '0' -o 0 1.2 2.2 file1 file2 \
| join -a1 -a2 -e '0' -o 0 1.2 1.3 2.2 - file3 \
| join -a1 -a2 -e '0' -o 0 1.2 1.3 1.4 2.2 - file4 \
| join -a1 -a2 -e '0' -o 0 1.2 1.3 1.4 1.5 2.2 - file5
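In these -o lists, 0 denotes the join field itself and m.n denotes field n of file m, so each successive join has to enumerate one more pass-through field than the previous one.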
More information in man join.
With GNU awk for true multi-dimensional arrays and sorted_in:
$ cat tst.awk
FNR==1 { numCols = colNr }
{
    key = $1
    for (i=2; i<=NF; i++) {
        colNr = numCols + i - 1
        val = $i
        lgth = length(val)
        vals[key][colNr] = val
        wids[colNr] = (lgth > wids[colNr] ? lgth : wids[colNr])
    }
}
END {
    numCols = colNr
    PROCINFO["sorted_in"] = "#ind_num_asc"
    for (key in vals) {
        printf "%s", key
        for (colNr=1; colNr<=numCols; colNr++) {
            printf "%s%*d", OFS, wids[colNr], vals[key][colNr]
        }
        print ""
    }
}
$ awk -f tst.awk file*
1001 1 2 0 5 0 102
1002 1 2 7 3 10 0
1003 3 5 8 0 0 305
1004 6 7 0 0 60 0
1005 8 9 0 0 0 809
1007 0 0 0 0 4 0
1009 2 3 0 0 0 0
Using GNU awk
awk '
NR>FNR && FNR==1{
    colcount+=cols
}
{
    for(i=2;i<=NF;i++){
        rec[$1][colcount+i-1]=$i
    }
}
{
    cols=NF-1
}
END{
    colcount++
    for(ind in rec){
        printf "%s%s",ind,OFS
        for(i=1;i<=colcount;i++){
            printf "%s%s",rec[ind][i]?rec[ind][i]:0,OFS
        }
        print ""
    }
}' file{1..5} | sort -k1 | column -t
Output
1001 1 2 0 5 0 102
1002 1 2 7 3 10 0
1003 3 5 8 0 0 305
1004 6 7 0 0 60 0
1005 8 9 0 0 0 809
1006 0 0 0 0 0 666
Note: this will work for the case mentioned here and for any type of values.

Add lines with 0 for missing values in a datatable

I have a dataset counting occurrences of bins, for instance:
1 10
2 15
3 1
5 50
8 990
As you can see, I am missing bins in the first column. As I want to plot this data, I'm looking for a way to add those missing values with a 0 in the second column, e.g. if I know my bins go up to 10:
1 10
2 15
3 1
4 0
5 50
6 0
7 0
8 990
9 0
10 0
I'm looking for a unix/bash solution as it fits my pipeline and my files are rather big, but maybe R is more suited for this?
EDIT: Thanks to karafaka sir; adding solutions which will capture the very first line's digits too.
awk -v value=10 '$1-prev>1{while(++prev<$1){print prev,"0"}} {prev=$1;print} END{if(prev<value){while(prev<=value){print prev,"0";prev++}}}' Input_file
Let's say the following is the Input_file:
cat Input_file
3 10
4 15
7 1
9 50
19 990
Then after running the above code we will get the following output.
1 0
2 0
3 10
4 15
5 0
6 0
7 1
8 0
9 50
10 0
11 0
12 0
13 0
14 0
15 0
16 0
17 0
18 0
19 990
Could you please try the following.
awk -v value=10 'prev && $1-prev>1{while(++prev<$1){print prev,"0"}} {prev=$1;print} END{if(prev<value){while(prev<=value){print prev,"0";prev++}}}' Input_file
Adding a non-one-liner form of the solution too now.
awk -v value=10 '
prev && $1-prev>1{
    while(++prev<$1){
        print prev,"0"
    }
}
{
    prev=$1
    print
}
END{
    if(prev<value){
        while(prev<=value){
            print prev,"0"
            prev++
        }
    }
}' Input_file
We can combine seq and awk to make the task easier:
awk 'NR==FNR{a[$1]=$0;next}{print $1 in a?a[$1]:$1 FS 0}' file <(seq 10)
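Here file is read first (the NR==FNR block) to index the existing bins by number; then each line of seq 10 is printed either from that index or, for a missing bin, as the bin number followed by 0.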
You can do this as well:
awk 'NR==FNR{a[$1]=$0;next}{print $1 in a?a[$1]:$0}' f <(seq -f '%g 0' 10)
Test with your data:
kent$ cat f
1 10
2 15
3 1
5 50
8 990
kent$ awk 'NR==FNR{a[$1]=$0;next}{print $1 in a?a[$1]:$1 FS 0}' f <(seq 10)
1 10
2 15
3 1
4 0
5 50
6 0
7 0
8 990
9 0
10 0
Using Bash and join:
$ join -a 1 --nocheck-order -e 0 -o 1.1,2.2 <(seq 10) file
Output:
1 10
2 15
3 1
4 0
5 50
6 0
7 0
8 990
9 0
10 0
another awk
$ awk -v mx=10 '{while(++k<$1) print k,0}1;
END {while(k++<mx) print k,0}' file
This will fill in the first records as well, if missing.
$ awk '{n[$1]=$2} END{for (i=1;i<=10;i++) print i,n[i]+0}' file
1 10
2 15
3 1
4 0
5 50
6 0
7 0
8 990
9 0
10 0
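In that last solution, the +0 in n[i]+0 coerces a missing array entry (an empty string) to the number 0, which is exactly what fills the gaps.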

How to find sum of elements in column inside of a text file (Bash)

I have a log file with lots of unnecessary information. The only important part of that file is a table which describes some statistics. My goal is to have a script which will accept a column name as an argument and return the sum of all the elements in the specified column.
Example log file:
.........
Skipped....
........
WARNING: [AA[409]: Some bad thing happened.
--- TOOL_A: READING COMPLETED. CPU TIME = 0 REAL TIME = 2
--------------------------------------------------------------------------------
----- TOOL_A statistics -----
--------------------------------------------------------------------------------
NAME Attr1 Attr2 Attr3 Attr4 Attr5
--------------------------------------------------------------------------------
AAA 885 0 0 0 0
AAAA2 1 0 2 0 0
AAAA4 0 0 2 0 0
AAAA8 0 0 2 0 0
AAAA16 0 0 2 0 0
AAAA1 0 0 2 0 0
AAAA8 0 0 23 0 0
AAAAAAA4 0 0 18 0 0
AAAA2 0 0 14 0 0
AAAAAA2 0 0 21 0 0
AAAAA4 0 0 23 0 0
AAAAA1 0 0 47 0 0
AAAAAA1 2 0 26 0
NOTE: Some notes
......
Skipped ......
The expected usage: script.sh Attr1
Expected output:
888
I've tried to find something with sed/awk but failed to figure out a solution.
tldr;
$ cat myscript.sh
#!/bin/sh
logfile=${1}
attribute=${2}
field=$(grep -o "NAME.\+${attribute}" ${logfile} | wc -w)
sed -nre '/NAME/,/NOTE/{/NAME/d;/NOTE/d;s/\s+/\t/gp;}' ${logfile} | \
cut -f${field} | \
paste -sd+ | \
bc
$ ./myscript.sh mylog.log Attr3
182
Explanation:
assign command-line arguments ${1} and ${2} to the logfile and attribute variables, respectively.
with wc -w, count the quantity of words within the line that contains both NAME and ${attribute} (the field index) and assign it to field
with sed
suppress automatic printing (-n) and enable extended regular expressions (-r)
find lines between the NAME and NOTE lines, inclusive
delete the lines that match NAME and NOTE
translate each contiguous run of whitespace to a single tab and print the result
cut using the field index
paste all numbers as an infix summation
evaluate the infix summation via bc
Quick and dirty (without any other spec)
awk -v CountCol=2 '/^[^[:blank:]]/ && NF == 6 { S += $(CountCol) } END { print S + 0 }' YourFile
With the column name:
awk -v ColName='Attr1' '
    /^[[:blank:]]/ && NF == 6 { for (i=1; i<=NF; i++) { if ($i == ColName) CountCol = i } }
    /^[^[:blank:]]/ && NF == 6 && CountCol { S += $(CountCol) }
    END { print S + 0 }
' YourFile
You should add a header/trailer filter to avoid noisy lines (a flag suits this perfectly), but for lack of info about the structure to set this flag, I use the simple field count (assuming text fields have 0 as their numeric value, so they don't change the sum when taken into account).
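For illustration, here is a sketch of such a flag-based filter, assuming the table is bracketed by the NAME header and the NOTE trailer as in the sample log (the column name is just a placeholder):
awk -v ColName='Attr1' '
    $1 == "NAME" {                # header: locate the requested column, raise the flag
        for (i = 1; i <= NF; i++) if ($i == ColName) CountCol = i
        inTable = 1
        next
    }
    /^NOTE/ { inTable = 0 }       # trailer: lower the flag
    inTable && CountCol && $CountCol ~ /^[0-9]+$/ { S += $CountCol }
    END { print S + 0 }
' YourFile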
$ awk -v col='Attr3' '/NAME/{for (i=1;i<=NF;i++) f[$i]=i} col in f{sum+=$(f[col]); if (!NF) {print sum+0; exit} }' file
182

How to Extract some Fields from Real Time Output of a Command in Bash script

I want to extract some fields from the output of the command xentop. It's like the top command; it provides an ongoing look at CPU usage, memory usage, etc., in real time.
If I run this command in batch mode, I get its output in a file, as you can see:
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
Domain-0 -----r 13700 33.0 7127040 85.9 no limit n/a 8 0 0 0 0 0 0 0 0 0 0
fed18 -----r 738 190.6 1052640 12.7 1052672 12.7 3 1 259919 8265 1 0 82432 22750 2740966 1071672 0
and running this
cat file | tr '\r' '\n' | sed 's/[0-9][;][0-9][0-9][a-Z]/ /g' | col -bx | awk '{print $1,$4,$6}'
on this file gives me what I want
NAME CPU(%) MEM(%)
Domain-0 33.0 85.9
fed18 190.6 12.7
but my script doesn't work on the real-time output of xentop. I even tried to run xentop just once by setting the iterations option to 1 (xentop -i 1), but it does not work!
How can I pipe the output of xentop to my script as non-real-time text?
It may not be sending any output to the standard output stream. There are several ways of sending output to the screen without using stdout. A quick Google search didn't provide much information about how it works internally.
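A plausible explanation (not verified against the xentop source) is that the interactive display is drawn through a curses-style interface that writes straight to the terminal; batch mode, shown in the next answer, is what produces plain text on stdout that can be piped.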
I use xentop version 1.0 on XenServer 7.0, like this:
[root@xen] xentop -V
xentop 1.0
[root@xen] cat /etc/centos-release
XenServer release 7.0.0-125380c (xenenterprise)
If you want to save the xentop output, you can do it with the '-b' (batch mode) and '-i' (number of iterations before exiting) options:
[root@xen] xentop -b -i 1
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
Domain-0 -----r 132130 0.0 4194304 1.6 4194304 1.6 16 0 0 0 0 0 0 0 0 0 0
MY_VM --b--- 5652 0.0 16777208 6.3 16915456 6.3 4 0 0 0 1 - - - - - 0
[root@xen] xentop -b -i 1 > output.txt
[root@xen] cat output.txt
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
Domain-0 -----r 132130 0.0 4194304 1.6 4194304 1.6 16 0 0 0 0 0 0 0 0 0 0
MY_VM --b--- 5652 0.0 16777208 6.3 16915456 6.3 4 0 0 0 1 - - - - - 0
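Combining this with the field extraction from the question, a minimal sketch (column positions taken from the header above):
xentop -b -i 1 | awk '{print $1, $4, $6}'   # NAME, CPU(%), MEM(%)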
