Argument list is too long. How to chunk it up? - bash

I have this Python code which takes a filename and a set of comma-separated offsets, and reads the lines at those offsets. I call it in a loop like this:
do
    python fileOffset.py /mnt/media1/file $offsets >> tmpfile
done
$offsets is a comma-separated string containing the file pointers (e.g. 12,123,121134). This works fine until I get a very long string of offsets, which throws an "argument list too long" error. As a workaround I have written the following code, which splits the offsets and calls fileOffset.py once per offset.
IFS=', ' read -a array <<< "$offsets"
for element in "${array[@]}"
do
    python fileOffset.py /mnt/media1/$file $element >> tmpfile
done
But this makes processing of the file very slow. How could I make it faster?

You can use xargs to batch the offsets instead of passing the whole list at once (a sketch, assuming fileOffset.py can accept multiple offsets as separate arguments):
tr ',' '\n' <<< "$offsets" | xargs -n 1000 python fileOffset.py /mnt/media1/file >> tmpfile
However, I'm with @FrederikPihil's comment: do the whole thing in Python, since you are already spawning a Python process on each iteration.
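If fileOffset.py insists on a single comma-separated argument, here is a minimal pure-bash sketch that chunks the offsets into batches (the batch size of 1000 is arbitrary) and re-joins each batch with commas, so the script runs once per batch instead of once per offset:
IFS=', ' read -r -a array <<< "$offsets"
batch=1000
for ((i = 0; i < ${#array[@]}; i += batch)); do
    chunk=("${array[@]:i:batch}")          # next slice of up to $batch offsets
    printf -v joined '%s,' "${chunk[@]}"   # join with commas (leaves one trailing comma)
    python fileOffset.py /mnt/media1/file "${joined%,}" >> tmpfile   # strip the trailing comma
done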

Related

parse .csv data into matrix or 2 dimensional array bash/shell awk

I have a comma-delimited CSV file named 'itrs.csv' which I want to parse into a matrix or 2D array using a bash or shell script:
Loads\PostDate,schedule,seta,eeta,2019-11-05,2019-11-06,2019-11-07,2019-11-08
BANAMEX,7,1:18:10,1:23:45,G,G,C,C
EMEA,5,0:21:00,1:01:00,G,G,G,C
I have tried the following:
declare -A matrix
eval matrix=($(awk -f, itrs.csv))
for ((i=0;i<=2;i++))
do
    for ((j=0;j<=$6;j++))
    do
        echo ${matrix[$i,$j]}" "
    done
    echo
done
but the above code throws errors. I would also like to know how to check the number of columns and rows while parsing, because the CSV file size may change.
Well, you can do this: declare an associative array, iterate over the lines while keeping a counter of the current line, then iterate over the fields and fill the array with indexes as requested.
i=0
declare -A matrix
while IFS=, read -r -a line; do
    for ((j = 0; j < ${#line[@]}; ++j)); do
        matrix[$i,$j]=${line[$j]}
    done
    ((i++))
done < itrs.csv
Afterwards, declare -p matrix would output:
declare -A matrix=([1,5]="G" [1,4]="G" [1,7]="C" [1,6]="C" [1,1]="7" [1,0]="BANAMEX" [1,3]="1:23:45" [1,2]="1:18:10" [0,4]="2019-11-05" [0,5]="2019-11-06" [0,6]="2019-11-07" [0,7]="2019-11-08" [0,0]="Loads\\PostDate" [0,1]="schedule" [0,2]="seta" [0,3]="eeta" [2,6]="G" [2,7]="C" [2,4]="G" [2,5]="G" [2,2]="0:21:00" [2,3]="1:01:00" [2,0]="EMEA" [2,1]="5" )
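Since the question also asks how to determine the number of rows and columns while parsing, a small variant of the same loop can track both (a sketch; it assumes the widest line defines the column count):
rows=0 cols=0
while IFS=, read -r -a line; do
    (( ${#line[@]} > cols )) && cols=${#line[@]}   # the widest row determines the column count
    (( rows++ ))
done < itrs.csv
echo "rows=$rows cols=$cols"
For the sample itrs.csv this prints rows=3 cols=8.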
See BashFAQ 001: How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?
Don't use eval. eval is evil. Don't do eval arr=($(..)) unless you know what you are doing. In your case, using eval makes little to no sense.
The error comes from awk. awk is invoked as awk [options] script [file]; you could run awk -F, '{print $0}' itrs.csv, but it would make little sense here. With awk -f, itrs.csv, awk tries to parse the input as its script; since it makes no sense as an awk script, the tool throws an error.
To read, for example, only the first line into an array split on commas, you can use IFS=, line=($(head -n1 itrs.csv)). The -F, option affects how awk parses the file, not how bash creates an array; for that, use IFS.
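For the sample file, that gives something like the following (note that it permanently changes IFS for the rest of the session):
$ IFS=, line=($(head -n1 itrs.csv))
$ echo "${line[0]} ${line[4]}"
Loads\PostDate 2019-11-05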

Bash command to read a line based on the parameters I pass - perform column-based lookups

I have a file links.txt:
1 a.sh
3 b.sh
6 c.sh
4 d.sh
So, if I pass 1,4 as parameters to another script (master.sh), a.sh and d.sh should be stored in a variable.
sed '3!d' would print the 3rd line, but not the line that starts with 3. For that, you need sed '/^3 /!d'. The problem is you can't combine them for more lines, as this means "Delete everything that doesn't start with a 3", which means all other lines will be missed. So, use sed -n '/^3 /p' instead, i.e. don't print by default and tell sed what lines to print, not what lines to delete.
You can loop over the argument and create a sed script from them that prints the lines, then run sed using this output:
#!/bin/bash
file=$1
shift
for id in "$@" ; do
    echo "/^$id /p"
done | sed -nf- "$file"
Run as script.sh filename 3 4.
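For script.sh links.txt 3 4, the loop generates this sed script and feeds it to sed on stdin:
/^3 /p
/^4 /p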
If you want to remove the id from the output, you can either use
cut -f2 -d' '
or you can modify the generated sed script to do the work
echo "/^$id /s/.* //p"
i.e. only print if the substitution was successful.
This loops through each argument and greps for it in the links file. The result is piped into cut, where we specify the delimiter as a space with the -d flag and the field number as 2 with the -f flag. Finally, this is appended to the array called files.
links="links.txt"
files=()
for arg in $#; do
files=("${files[#]}" `grep "^$arg" "$links" | cut -d" " -f2`)
done;
echo ${files[#]}
Usage:
$ ./master.sh 1 4
a.sh d.sh
Edit:
As pointed out by mklement0, the solution above reads the file once per arg. The following first builds the pattern then reads the file just once.
links="links.txt"
pattern="^$1\s"
for arg in ${#:2}; do
pattern+="|^$arg\s"
done
files=$(grep -E "$pattern" "$links" | cut -d" " -f2)
echo ${files[#]}
Usage:
$ ./master.sh 1 4
a.sh d.sh
Here is another example with grep and cut:
#!/bin/bash
for line in $(grep "$1\|$2" links.txt | cut -d' ' -f2)
do
    echo $line
done
Example of usage:
./master.sh 1 4
a.sh
d.sh
Why not just store the values and recall them at will:
items=()
while read -r num file
do
    items[num]="$file"
done < links.txt
for arg
do
    echo "${items[arg]}"
done
Now you can use the items array any time you like :)
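For example, with the links.txt above:
$ ./master.sh 3 6
b.sh
c.sh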
The following awk solution:
preserves the argument order; that is, the results reflect the order in which the lookup values were specified (as opposed to the order in which the lookup values happen to occur in the file).
If that is not important (i.e., if outputting the results in file order is acceptable), the readarray technique below can be combined with this one-liner, which is a generalized variant of Panta's answer:
grep -f <(printf "^%s\n" "$@") links.txt | cut -d' ' -f2-
performs well, because the input file is only read once; the only requirement is that all key-value pairs fit into memory as a whole (as a single associative Awk array (dictionary)).
works with any lookup values that don't have embedded whitespace.
Similarly, the assumption is that the output column values (containing values such as a.sh in the sample input) have no embedded whitespace. awk doesn't handle quoted fields well, so more work would be needed.
#!/bin/bash
readarray -t files < <(
  awk -v idList="$*" '
    BEGIN { count=split(idList, idArr); for (i in idArr) idDict[idArr[i]]++ }
    $1 in idDict { idDict[$1] = $2 }
    END { for (i=1; i<=count; ++i) print idDict[idArr[i]] }
  ' links.txt
)
# Print results.
printf '%s\n' "${files[@]}"
readarray -t files reads stdin input (<) line by line into array variable files.
Note: readarray requires Bash v4+; on Bash 3.x, such as on macOS, replace this part with
IFS=$'\n' read -d '' -ra files
<(...) is a Bash process substitution that, loosely speaking, presents the output from the enclosed command as if it were a (self-deleting) temporary file.
This technique allows readarray to run in the current shell (as opposed to a subshell if a pipeline had been used), which is necessary for the files variable to remain defined in the remainder of the script.
The awk command breaks down as follows:
-v idList="$*" passes the space-separated list of all command-line arguments as a single string to Awk variable idList.
Note that this assumes that the arguments have no embedded spaces, which is indeed the case here and also generally the case with identifiers.
BEGIN { ... } is only executed once, before the individual lines are processed:
split(idList, idArr) splits the input ID list into an array by whitespace and stores the result in idArr.
for (i in idArr) idDict[idArr[i]]++ then converts the (conceptually regular) array into associative array idDict (dictionary), whose keys are the input IDs - this enables efficient lookup by ID later, and also allows storing the lookup result for each ID.
$1 in idDict { idDict[$1] = $2 } is processed for every input line:
Pattern $1 in idDict returns true if the line's first whitespace-separated field ($1) - e.g., 6 - is among the keys (in) of associative array idDict, and, if so, executes the associated action ({...}).
Action { idDict[$1] = $2 } then assigns the second field ($2) - e.g., c.sh - to the idDict entry for key $1.
END { ... } is executed once, after all input lines have been processed:
for (i=1; i<=count; ++i) print idDict[idArr[i]] loops over all input IDs in order and prints each ID's lookup result, which is the value of the dictionary entry with that ID.
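For example, with the sample links.txt, the results come back in argument order rather than file order:
$ ./master.sh 6 1
c.sh
a.sh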

Reading numeric values from grep output in bash

I have a file filled with rows of text. I'm interested in a group of these; every line starts with the same word, and each line contains two numbers I have to process later, always in the same position. For example:
Round trip time was 49.9721 milliseconds in repetition 5 tcp_ping received 128 bytes back
I was thinking of using grep to grab the wanted rows into a new file, and then putting the content of this new file into an array to easily access it during processing, but this isn't working. Any tips?
#!/bin/bash
InputFile="../data/N.dat"
grep "Round" ../data/tcp_16.out > "$InputFile"
IFS=' ' read -a array <<< "$InputFile"
If they're all you care about, you can read only the numbers in.
I'd also strongly suggest extracting the values you're going to be analyzing into arrays, like so, rather than storing the full lines as strings:
ms_time_arr=( ) # array: map repetitions to ms_time
bytes_arr=( )   # array: map repetitions to bytes
while read -r ms_time repetition bytes_back _; do
    # log to stderr to show that we read the data
    echo "At $ms_time ms, repetition $repetition, got $bytes_back back" >&2
    ms_time_arr[$repetition]=$ms_time
    bytes_arr[$repetition]=$bytes_back
done < <(grep -e 'Round' <../data/N.dat | tr -d '[:alpha:]_')
# more logging, to show that array contents survive the loop
declare -p ms_time_arr bytes_arr
This works by using tr to delete all alphabetic characters (plus the underscore, so a token like tcp_ping disappears entirely instead of leaving a stray _ field), leaving only numbers, punctuation and whitespace.
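For the sample line in the question, the final declare -p prints:
declare -a ms_time_arr=([5]="49.9721")
declare -a bytes_arr=([5]="128")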

How can I set a variable = null in for loop?

I have this code in Elastix2.5 (CentOS):
for variable in $(while read line; do myarray[ $index]="$line"; index=$(($index+1)); echo "$line"; done < prueba);
This extracts the value of each line from the "prueba" file.
The prueba file contains passwords like this:
Admin1234
Hello543
Chicken5444
Dino6759
3434Cars4
Adminis5555
But $variable only gets values from lines that contain characters; I need it to get NULL (empty) values from the blank lines. How can I do that?
Your problem is use of a for loop with a command substitution ($(...)); let's look at this simple example:
$ for v in $(echo 'line_1'; echo ''; echo 'line_3'); do echo "$v"; done
line_1
line_3
Note how the empty string produced by the 2nd echo command is effectively discarded.
Analogously, any empty lines produced by your while loop are discarded.
The solution is to avoid for loops altogether for parsing command output:
In your case, simply use only the while loop for iterating over the input file:
while read -r line; do
    myarray[index++]="$line"
done < prueba
printf '%s\n' "${myarray[@]}"
-r was added to ensure that read doesn't modify the input (doesn't try to interpret \-prefixed sequences) - this is good practice in general.
Note how incrementing the index was moved directly into the array subscript (index++).
printf '%s\n' "${myarray[@]}" prints all array elements after the file's been read, demonstrating that empty lines were read as well.
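For instance, if prueba had a blank line between Hello543 and Chicken5444, the array would keep it as an empty element:
$ declare -p myarray
declare -a myarray=([0]="Admin1234" [1]="Hello543" [2]="" [3]="Chicken5444" [4]="Dino6759" [5]="3434Cars4" [6]="Adminis5555")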
You can use the is_null function:
is_null($a)
http://php.net/manual/en/function.is-null.php

Save a newline separated list into several bash variables

I'm relatively new to shell scripting and am writing a script to organize my music library. I'm using awk to parse the id3 tag info and am generating a newline separated list like so:
Kanye West
College Dropout
All Falls Down
I want to store each field in a separate variable so I can easily compose some mkdir and mv commands. I've tried piping the output to IFS=$'\n' read artist album title but each variable remains empty. I'm open to producing a different output from awk, but I still want to know how to parse a newline separated list using bash.
Edit:
It turns out that by piping directly to read by doing:
id3info "$filename" | awk "$awkscript" | {read artist; read album; read title;}
WILL NOT WORK. It results in the variables existing in a different scope. I found that using a herestring works best:
{read artist; read album; read title;} <<< "$(id3info "$filename" | awk "$awkscript")"
read normally reads one line at a time. So, if your id3 info is in the file testfile.txt, you can read it in as follows:
{ read artist ; read album ; read song ; } <testfile.txt
echo "artist='$artist' album='$album' song='$song'"
# insert your mkdir and mv commands....
When run on your test file, the above outputs:
artist='Kanye West' album='College Dropout' song='All Falls Down'
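With the three variables populated, the mkdir and mv composition the question mentions might look like this (a sketch; $filename and the target directory layout are assumptions):
mkdir -p "$artist/$album"
mv "$filename" "$artist/$album/$title.mp3"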
You can just read the file into a bash array and loop through the array like so:
IFS=$'\r\n' content=($(cat ${filepath}))
for ((idx = 0; idx < ${#content[@]}; idx+=3)); do
    artist=${content[idx]}
    album=${content[idx+1]}
    title=${content[idx+2]}
done
Or read three lines in a loop.
yourscript |
while read artist; do # read first line of input
    read album # read second line of input
    read song # read third line of input
    : self-destruct if the genre is rap
done
This loop will consume input lines in groups of three. If there is not an even multiple of three lines of input, the reads after that inside the loop will simply fail and the variables will be empty.
You can read the output from awk into an array. E.g.
readarray -t array <<< "$(printf '%s\n' 'Kanye West' 'College Dropout' 'All Falls Down')"
for ((i=0; i<${#array[@]}; i++ )) ; do
    echo "array[$i]=${array[$i]}"
done
Produces:
array[0]=Kanye West
array[1]=College Dropout
array[2]=All Falls Down
