Using bc in awk - bash

I am trying to use bc in an awk script. In the code below, I am trying to convert a hexadecimal number to binary and store it in a variable.
#!/bin/awk -f
{
binary_vector = $(bc <<< "ibase=16;obase=2;FF")
}
Where do I go wrong?

Not saying it's a good idea but:
$ awk 'BEGIN {
cmd = "bc <<< \"ibase=16;obase=2;FF\""
rslt = ((cmd | getline line) > 0 ? line : -1)
close(cmd)
print rslt
}'
11111111
Also see http://gnu.org/software/gawk/manual/gawk.html#Bitwise-Functions and http://gnu.org/software/gawk/manual/gawk.html#Nondecimal-Data
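For what it's worth, those manual sections point at a bc-free alternative if you have gawk - a minimal sketch, assuming gawk since strtonum(), and() and rshift() are gawk extensions:
$ gawk 'BEGIN {
n = strtonum("0xFF")   # nondecimal data: parse the hex input
while (n > 0) { b = and(n, 1) b; n = rshift(n, 1) }   # peel bits off the low end
print (b == "" ? "0" : b)
}'
11111111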

The following one-liner Awk script should do what you want:
awk -vVAR=$(read -p "Enter number: " -u 0 num; echo $num) \
'BEGIN{system("echo \"ibase=16;obase=2;"VAR"\"|bc");}'
Explanation:
-vVAR Passes the variable VAR into Awk
-vVAR=$(read -p ... ) Sets the variable VAR from the
shell to the user input.
system("echo ... |bc") Uses the Awk system built in command to execute the shell commands. Notice how the quoting stops at the variable VAR and then continues just after it, thats so that Awk interprets VAR as an Awk variable and not as part of the string put into the system call.
Update - to use it in an Awk variable:
awk -vVAR=$(read -p "Enter number: " -u 0 num; echo $num) \
'BEGIN{s="echo \"ibase=16;obase=2;"VAR"\"|bc"; s | getline awk_var;\
close(s); print awk_var}'
s | getline awk_var will put the output of the command s into the Awk variable awk_var. Note that the string is built before being sent to getline - otherwise (unless you parenthesize the string concatenation) Awk will try to send it to getline in separate pieces: %s VAR %s.
The close(s) closes the pipe - although for bc it doesn't matter, and Awk automatically closes pipes upon exit anyway, it is best to explicitly close the pipe if you put this into a more elaborate Awk script. According to the Awk documentation, some commands, such as mail, will wait on the pipe to close prior to completing.
http://www.staff.science.uu.nl/~oostr102/docs/nawk/nawk_39.html
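For completeness, the parenthesized concatenation can be fed to getline directly, without building the command in a variable first - a minimal sketch of that form (close() is omitted only because the one-shot BEGIN block exits immediately):
$ awk -vVAR=FF 'BEGIN{ ("echo \"ibase=16;obase=2;" VAR "\" | bc") | getline awk_var; print awk_var }'
11111111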

By the way you wrote your example, it looks like you want to convert an awk record (line) into an associative array. Here's an awk executable script that allows that by running the bc command over values in a split() type array:
#!/usr/bin/awk -f
{
# initialize the a array
cnt = split($0, a, FS)
if( convertArrayBase(10, 2, a, cnt) > -1 ) {
# use the array here
for(i=1; i<=cnt; i++) {
print a[i]
}
}
}
# Destructively updates input array, converting numbers from ibase to obase
#
# #ibase: ibase value for bc
# #obase: obase value for bc
# #a: a split() type associative array where keys are numeric
# #cnt: size of a ( number of fields )
#
# #return: -1 if there's a getline error, else cnt
#
function convertArrayBase(ibase, obase, a, cnt, i, b, cmd) {
cmd = sprintf("echo \"ibase=%d;obase=%d", ibase, obase)
for(i=1; i<=cnt; i++ ) {
cmd = cmd ";" a[i]
}
cmd = cmd "\" | bc"
i = 0 # reset i
while( (cmd | getline b) > 0 ) {
a[++i] = b
}
close( cmd )
return i==cnt ? cnt : -1
}
When used with an input of:
1 2 3
4 s 1234567
this script outputs the following:
1
10
11
100
0
100101101011010000111
The convertArrayBase function operates on split() type arrays. So you have to initialize the input array (a here) with the full row (as shown) or a field's subfields (not shown) before calling it. It destructively updates the array.
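For example, to run the conversion over comma-separated subfields of $3 instead of the whole record, the initialization could look like this (hypothetical variation, not in the script above):
cnt = split($3, a, ",")
if( convertArrayBase(10, 2, a, cnt) > -1 ) {
# a[] now holds the converted subfields
}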
You could instead call bc directly with some helper files to get similar output. I didn't find that bc supports - (stdin as a file name), so it's a little more involved than I'd like.
Making a start_cmds file like this:
ibase=10;obase=2;
and a quit_cmd like:
;quit
Given an input file (called data.semi) where the data is separated by a ;, like this:
1;2;3
4;s;1234567
you can run bc like:
$ bc -q start_cmds data.semi quit_cmd
1
10
11
100
0
100101101011010000111
which is the same data that the awk script outputs, but calling bc only a single time with all of the inputs. Now, while that data isn't in an awk associative array in a script, the bc output can be piped as stdin input to awk and reassembled into an array like:
bc -q start_cmds data.semi quit_cmd | awk 'FNR==NR {a[FNR]=$1; next} END { for( k in a ) print k, a[k] }' -
1 1
2 10
3 11
4 100
5 0
6 100101101011010000111
where the final dash tells awk to treat stdin as an input file and lets you add other files later for processing.

Related

To split and arrange numbers in single inverted commas

I have around 65,000 product codes in a text file. I want to split those numbers into groups of 999 each, and then have each group of 999 numbers wrapped in single quotes and separated by commas.
Could you please suggest how I can achieve the above scenario through a Unix script?
87453454
65778445
.
.
.
.
Till 65,000 product codes
They need to be arranged in the below pattern:
'87453454','65778445',
With awk:
awk '
++c == 1 { out = "\047" $0 "\047"; next }
{ out = out ",\047" $0 "\047" }
c == 999 { print out; c = 0 }
END { if (c) print out }
' file
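To see the grouping logic at a smaller scale, here is the same script with the group size dropped from 999 to 3 and fed a toy input:
$ seq 7 | awk '
++c == 1 { out = "\047" $0 "\047"; next }
{ out = out ",\047" $0 "\047" }
c == 3 { print out; c = 0 }
END { if (c) print out }
'
'1','2','3'
'4','5','6'
'7'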
Or, with GNU sed:
sed "
:a
\$bb
N
0~999{
:b
s/\n/','/g
s/^/'/
s/$/'/
b
}
ba" file
With Perl:
perl -ne '
sub pq { chomp; print "\x27$_\x27" } pq;
for (1 .. 998) {
if (defined($_ = <>)) {
print ",";
pq
}
}
print "\n"
' < file
Credit to Mauke of #perl on Libera.Chat.
65000 isn't that many lines for awk - just do it all in one shot:
mawk 'BEGIN { FS = RS; RS = "^$"; OFS = (_="\47")(",")_
} gsub(/^|[^0-9]*$/,_, $!(NF = NF))'
'66771756','69562431','22026341','58085790','22563930',
'63801696','24044132','94255986','56451624','46154427'
That's for grouping them all in one line. To make groups of 999, try
jot -r 50 10000000 99999999 |
# change "5" to "999" here
rs -C= 0 5 |
mawk 'sub(".*", "\47&\47", $!(NF -= _==$NF ))' FS== OFS='\47,\47'
'36452530','29776340','31198057','36015730','30143632'
'49664844','83535994','86871984','44613227','12309645'
'58002568','31342035','72695499','54546650','21800933'
'38059391','36935562','98323086','91089765','65672096'
'17634208','14009291','39114390','35338398','43676356'
'14973124','19782405','96782582','27689803','27438921'
'79540212','49141859','25714405','42248622','25589123'
'11466085','87022819','65726165','86718075','56989625'
'12900115','82979216','65469187','63769703','86494457'
'26544666','89342693','64603075','26102683','70528492'
_==$NF checks whether the rightmost column is empty or not - i.e. whether there's a trailing separator that needs to be trimmed.
If your input file only contains short codes as shown in your example, you could use the following hack:
xargs -L 999 bash -c "printf \'%s\', \"\$@\"; echo" . <inputFile >outputFile
Alternatively, you can use this sed command:
sed -Ene"s/(.*)/'\1',/;H" -e{'0~999','$'}'{z;x;s/\n//g;p}' <inputFile >outputFile
s/(.*)/'\1',/ wraps each line in '...',
but does not print it (-n)
instead, H appends the modified line to the so-called hold space; basically a helper variable storing a single string.
(This also adds a line break as a separator, but we remove that later).
Every 999 lines (0~999) and at the end of the input file ($) ...
... the hold space is then printed and cleared (z;x;...;p)
while deleting all delimiter-linebreaks (s/\n//g) mentioned earlier.
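Scaled down to groups of 3 (and GNU sed assumed, since 0~999 and z are GNU extensions), the effect is easy to see:
$ seq 7 | sed -Ene"s/(.*)/'\1',/;H" -e{'0~3','$'}'{z;x;s/\n//g;p}'
'1','2','3',
'4','5','6',
'7',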

Saving lines between "start"s and "end"s to different arrays

I have a file that consists of a bunch of things, but what I need are the numbers between start and end strings. For example:
ghghgh
start
23
34
22
12
end
ghbd
wodkkh
234
start
14
56
74
end
So, I need two arrays here: one containing 23,34,22,12 and one containing 14,56,74. What's the best command to use?
If I only had one start and one end I would be able to use mapfile and awk to obtain that array, but there are many starts and ends in the file and I need to save all the arrays.
You can do it with sed.
sed -n '/start/{:a;N;/end/!ba;s/\n/, /g;s/, [^,][a-z][^,]*//Ig;s/start, //p}'
The code will iterate through all chunks between 'start' and 'end' lines. It will remove all items with non-digit symbols and output each "array" on a separate line.
Here is output from your data sample:
23, 34, 22, 12
14, 56, 74
You need to implement a small state machine - switching between in block and out of block:
awk '/end/{block = 0; print a; a = ""} (block) {a = a " " $0} /start/{block = 1}'
If at end, leave block, print and empty the accumulator. If in block, accumulate current line. If at start, mark that we're inside a block.
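Run against the sample input above, it prints each block on its own line (note the leading space left by the accumulator):
$ awk '/end/{block = 0; print a; a = ""} (block) {a = a " " $0} /start/{block = 1}' file
 23 34 22 12
 14 56 74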
You can tell awk to change the output file every time a new sequence starts
awk '/start/{i++;f=1;next} /end/{f=0} f{print > "arr"i}' file
For the example file, this will create files: arr1, arr2. Then you can create separated arrays with the lines of these files:
for i in $( ls arr* ); do readarray -t $i < $i; done
note: I have assumed that all lines between matching patterns are numeric and acceptable as in the example.
If you trust your input files enough for an eval:
$ cat tst.sh
eval $(
awk '
f {
if ( /end/ ) {
print "declare arr" ++cnt "=(" vals " )"
vals = ""
f = 0
}
else {
vals = vals OFS $0
}
}
/start/ { f = 1 }
' "$1"
)
printf "arr1:%s\n" "${arr1[#]}"
printf "arr2:%s\n" "${arr2[#]}"
$ ./tst.sh file
arr1:23
arr1:34
arr1:22
arr1:12
arr2:14
arr2:56
arr2:74
Check the quoting and all other shell gotchas...

Batch Convert IP Addresses into Decimals?

I have a large file that contains 2 IPs per line - and there are about 3 million lines total.
Here's an example of the file:
1.32.0.0,1.32.255.255
5.72.0.0,5.75.255.255
5.180.0.0,5.183.255.255
222.127.228.22,222.127.228.23
222.127.228.24,222.127.228.24
I need to convert each IP to an IP Decimal, like this:
18874368,18939903
88604672,88866815
95682560,95944703
3732923414,3732923415
3732923416,3732923416
I'd prefer a way to do this strictly via command line. I'm okay with perl or python being used, as long as it doesn't require extra modules to be installed.
I thought I had come across a way that someone converted IPs like this using sed but can't seem to find that tutorial anymore. Any help would be appreciated.
If you have gnu awk installed (for the RT variable), you could use this one-liner:
awk -F. -v RS='[\n,]' '{printf "%d%s", (($1*256+$2)*256+$3)*256+$4, RT}' file
18874368,18939903
88604672,88866815
95682560,95944703
3732923414,3732923415
3732923416,3732923416
Here is a Python 2 solution that uses only the standard modules re and sys:
import re
import sys
def multiplier_generator():
""" Cyclic generator of powers of 256 (from 256**3 down to 256**0)
The mulitpliers tupple could be replaced by inline calculation
of power, but this approach has better performance.
"""
multipliers = (
256**3,
256**2,
256**1,
256**0,
)
idx = 0
while True:
yield multipliers[idx]
idx = (idx + 1) % 4
def replacer(match_object):
"""re.sub replacer for ip group"""
multiplier = multiplier_generator()
res = 0
for i in xrange(1,5):
res += multiplier.next()*int(match_object.group(i))
return str(res)
if __name__ == "__main__":
std_in = ""
if len(sys.argv) > 1:
with open(sys.argv[1],'r') as f:
std_in = f.read()
else:
std_in = sys.stdin.read()
print re.sub(r"([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)", replacer, std_in )
This solution replaces every IP address that can be found in the text from standard input or from a file passed as the first parameter, i.e.:
python convert.py < input_file.txt, or
python convert.py file.txt, or
echo "1.2.3.4, 5.6.7.8" | python convert.py.
With bash:
ip2dec() {
set -- ${1//./ } # split $1 with "." to $1 $2 $3 $4
declare -i dec # set integer attribute
dec=$1*256*256*256+$2*256*256+$3*256+$4
echo -n $dec
}
while IFS=, read -r a b; do ip2dec $a; echo -n ,; ip2dec $b; echo; done < file
Output:
18874368,18939903
88604672,88866815
95682560,95944703
3732923414,3732923415
3732923416,3732923416
With bash and using shift (one CPU instruction) instead of multiply (a lot of instructions):
ip2dec() { local IFS=.
set -- $1 # split $1 with "." to $1 $2 $3 $4
printf '%s' "$(( ($1<<24) + ($2<<16) + ($3<<8) + $4 ))" # parentheses needed: + binds tighter than << in shell arithmetic
}
while IFS=, read -r a b; do
printf '%s,%s\n' "$(ip2dec $a)" "$(ip2dec $b)"
done < file
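A quick spot check of the function against the sample data:
$ ip2dec 222.127.228.22; echo
3732923414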

Sorting strings from array takes a long time

Reading a text file into an array, extracting elements and sorting them is taking a very long time.
The text file is ffmpeg console output for R128 audio analysis. I need to get the highest M and S values. Example:
[Parsed_ebur128_0 @ 0x7fd32a60caa0] t: 4.49998 M: -22.2 S: -29.9 I: -27.0 LUFS LRA: 9.8 LU FTPK: -12.4 dBFS TPK: -9.7 dBFS
[Parsed_ebur128_0 @ 0x7fd32a60caa0] t: 4.69998 M: -22.5 S: -28.6 I: -25.9 LUFS LRA: 11.3 LU FTPK: -12.7 dBFS TPK: -9.7 dBFS
The text file can be hundreds or thousands of lines long, depending on the duration of the audio file being analysed.
I want to find the highest M (-22.2) and S (-28.6) values and assign them to the variables M and S.
This is what I am using currently:
ARRAY=()
while read LINE
do
ARRAY+=("$LINE")
done < $tempDir/text.txt
for LINE in "${ARRAY[@]}"
do
echo "$LINE" | sed -n '/B:/p' | sed 's/S:.*//' | sed -n -e 's/^.*M://p' | sed -n -e 's/-//p' >>$tempDir/R128M.txt
done
for LINE in "${ARRAY[@]}"
do
echo "$LINE" | sed -n '/M:/p' | sed 's/I:.*//' | sed -n -e 's/^.*S://p' | sed -n -e 's/-//p' >>$tempDir/R128S.txt
done
cat $tempDir/R128M.txt
M=( $(sort $tempDir/R128M.txt) )
cat $tempDir/R128S.txt
S=( $(sort $tempDir/R128S.txt) )
Is there a faster way of doing this?
Rather than reading the whole file into memory, writing bits of it out to separate files, and reading those in again, just parse it and pick out the largest values:
$ awk '$7 > m || m == "" { m = $7 } $9 > s || s == "" { s = $9 } END { print m, s }' data
-22.2 -28.6
In your data, fields 7 and 9 contain the values of M and S. The awk script will update its m and s variables if it finds larger values in these fields, and then print the largest found at the end. The m == "" and s == "" tests are needed to trigger initialization of the variables when no value has been read yet.
Another way with awk, which may look cleaner:
$ awk 'FNR == 1 { m = $7; s = $9; next } $7 > m { m = $7 } $9 > s { s = $9 } END { print m, s }' data
To assign them to M and S in the shell:
$ declare $( awk 'FNR == 1 { m = $7; s = $9; next } $7 > m { m = $7 } $9 > s { s = $9 } END { printf("M=%f S=%f\n", m, s) }' data )
$ echo $M $S
-22.200000 -28.600000
Adjust the printf() format to use %s instead of %f if you want the original strings instead of float values, or set the number of decimals you might want with, e.g., %.2f in place of %f.
First of all, a three-process pipe is a bit redundant for a single value extraction, especially taking into account that you reinstantiate it anew for every line.
Next, you save all the values into a file and then sort that file, while all you need is the maximum value. You can easily find it during the very first (value extraction) loop, in O(N) running time, instead of paying the I/O overhead plus O(N log N) sorting expenses. See ARITHMETIC EXPANSION and conditional expressions in the bash manual, and the sketch below.
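A minimal sketch of that approach, assuming the fixed field layout shown in the question and values with exactly one decimal place (the decimal point is stripped so plain integer arithmetic can compare them):
max_m=-9999.9 max_s=-9999.9          # sentinels below any real LUFS value
while read -r _ _ _ _ _ _ m _ s _; do    # fields 7 and 9 are the M: and S: values
(( ${m/./} > ${max_m/./} )) && max_m=$m
(( ${s/./} > ${max_s/./} )) && max_s=$s
done < "$tempDir/text.txt"
M=$max_m S=$max_s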

Expand a range of numbers with grep

I work in telecoms and regularly need to expand number ranges.
For example, 6121234567X [note that there are 10 numbers preceding the X] is shorthand for:
61212345670
61212345671
61212345672....... etc (a 10 number range)
and 612123456X [note that there are only 9 numbers preceding the X] is shorthand for
61212345600
61212345601....... etc (a 100 number range)
So I need a grep command that...
reads how many characters there are in the line preceding the X (to determine how many suffixes)
writes the appropriate number of lines (10, 100, or 1000) with ascending suffixes
hopefully removes the original line
Below is a Python script that does it; the file name is the expected first argument. Example usage: python script.py file.in > file.out
#!/usr/bin/env python
import sys
def generate(pattern):
p = pattern.lower().find('x')
ret = ""
for i in range(10**(10-p+1)):
ret += pattern[:p] + str(i).zfill(10-p+1) + " "
return ret
if __name__ == "__main__":
if len(sys.argv) <= 1:
print("Filename needed!")
else:
with open(sys.argv[1]) as f:
for ln in f:
print(generate(ln.rstrip()))
You can do this in awk quite quickly:
awk -v val=$a -v max=11 \
'BEGIN {
gsub("X","",val)
items=max - length(val)
for (i=0; i<10^items; i++)
print val*(10^items)+i
}'
This works as an example. To do the same reading from a file, you just need to use $1 (the first field of each line) instead of val and move all the code from BEGIN into the main block, as sketched below.
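A sketch of that file-reading variant (same logic as above, with max=11 for the question's 11-digit numbers):
$ awk -v max=11 '{
val = $1
gsub("X","",val)
items = max - length(val)
for (i=0; i<10^items; i++) print val*(10^items)+i
}' file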
Explanation
-v val=$a -v max=11 pass parameters: $a is the variable containing the string of the form 12345678X, and max contains the maximum number of digits the expanded number will have (11 in your case).
BEGIN {} perform all these actions up front, without reading any input.
gsub("X","",val) remove X from val.
items=max - length(val) count the size of the variable without the X.
for (i=0; i<=10^items; i++) print val*(10^items)+i loop from 0 to 10^remaining_size. This means from 0 to 10 or from 0 to 100... depending on the result of 10 - size without X.
Test
With 9 as maximum:
$ a=12345678X
$ awk -v val=$a -v max=9 'BEGIN {gsub("X","",val); items=max - length(val); for (i=0; i<10^items; i++) print val*(10^items)+i}'
123456780
123456781
123456782
123456783
123456784
123456785
123456786
123456787
123456788
123456789
echo 6121234567X | perl -nE 'm/(.*)X/;
say $1. $_ foreach (0..10**(11-length $1)-1)'
61212345670
61212345671
61212345672
61212345673
61212345674
61212345675
61212345676
61212345677
61212345678
61212345679
It's a little uglier to get the zero padded format:
echo 611234567X | perl -wne 'm/(.*)X/; $b=$1; $r=11 - length $b;
$fmt="%0" . $r . "s\n";
printf "$b$fmt", $_ foreach (0..10**$r-1) '
