I have a text file like this (fields separated by spaces):
10 18 12 14 1
11 45 11 34 2
I want it to look like this:
1,1,10
2,1,18
3,1,12
4,1,14
5,1,1
1,2,11
2,2,45
3,2,11
4,2,34
5,2,2
In the new output the first column is the column number in the file and the second one is the row number. The third is the value. Do you have an idea how to do it?
Bash does not support multi-dimensional arrays. It only supports one-dimensional arrays.
You can use awk to create a col, row, val stream:
awk '{for(i=1;i<=NF;i++) print i "," NR "," $i}' yourfile.txt
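For comparison, a pure-bash sketch that produces the same col,row,val stream (assuming the input file is named yourfile.txt, as above):
#!/bin/bash
row=0
while read -ra fields; do
    (( row++ ))
    for i in "${!fields[@]}"; do
        printf '%d,%d,%s\n' "$((i + 1))" "$row" "${fields[$i]}"   # col,row,value
    done
done < yourfile.txt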
Related
declare -A a
for ((i=0; i<2; i++)); do
    for ((j=0; j<5; j++)); do
        read a[$i,$j]
    done
done
I want to take the inputs on the same line, but this input
1 2 3 4 5
6 7 8 9 5
is not working; I have to enter all 10 integers on separate lines.
Can I read multiple variables on the same line in Bash (if all are integers)?
You can use -a to put multiple fields into an array:
#!/bin/bash
echo "Enter some numbers:"
read -ra myarray
echo "There were ${#myarray[#]} numbers and index 4 was ${myarray[4]}"
If you enter 4 8 15 16 23 42 the output is:
There were 6 numbers and index 4 was 23
The simple answer is that we cannot do it directly, as there is no provision for 2D arrays in Bash.
Input:
1 2 3 4 5
6 7 8 9 5
The following code reads the input as an array of strings (each whole line as a single element) and then converts each string into an indexed array (a somewhat cumbersome step, since each string holds several integers), assuming the array dimension is 2x5, and finally prints the 2D array:
#!/bin/bash
declare -A b                      # associative array used as a 2D array
for ((i=0; i<2; i++)); do
    read -r a[$i]                 # read each whole line as a single string
done
for ((i=0; i<2; i++)); do
    array=(${a[$i]})              # split the string into an indexed array
    for ((j=0; j<5; j++)); do
        b[$i,$j]=${array[$j]}
    done
done
for ((i=0; i<2; i++)); do
    for ((j=0; j<5; j++)); do
        printf '%s ' "${b[$i,$j]}"
    done
    echo
done
Hence we can conclude that it is easier to take the input on multiple lines; otherwise we have to follow the steps above.
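A slightly shorter variant of the same idea, sketched under the same 2x5 assumption, uses read -ra (as in the earlier answer) to split each line as it is read, which avoids the intermediate string array:
#!/bin/bash
declare -A b
for ((i=0; i<2; i++)); do
    read -ra row                  # split the current input line into fields
    for ((j=0; j<5; j++)); do
        b[$i,$j]=${row[$j]}
    done
done
for ((i=0; i<2; i++)); do         # print the stored 2D array
    for ((j=0; j<5; j++)); do
        printf '%s ' "${b[$i,$j]}"
    done
    echo
done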
I have a file delimited by pipes. I am not sure which bash tool would be most appropriate (I am thinking either awk or sed) to find, on each line, the listed number nearest to the number before the pipe.
my file looks like this:
2|1 1 4 5
8|1 2 2 3 10 14
5|1 50 100
and I would like to get the output:
1
10
1
Explanation: in the first row, the nearest to 2 in {1 1 4 5} is 1. In the same way, for the second row the nearest to 8 in {1 2 2 3 10 14} is 10, and for the third row the nearest to 5 is 1.
$ awk -F'[| ]' '{
    sq=($2-$1)*($2-$1); a=2          # squared distance of the first candidate
    for(i=3;i<=NF;i++){
        sqi=($i-$1)*($i-$1)
        if(sqi<=sq){sq=sqi; a=i}     # keep the field with the smallest squared distance
    }
    print $a
}' file
1
10
1
Given:
$ echo "$ns"
2|1 1 4 5
8|1 2 2 3 10 14
5|1 50 100
It is easy in Ruby:
$ echo "$ns" | ruby -lne 'a=$_.split(/[| \t]/)
a.map!{|e| Integer(e)}
n=a.shift
p a.min_by {|e| (e-n).abs}'
1
10
1
It could be done similarly in gawk by defining a custom sort function that compares the remaining values against the first one, sorting, and taking the first element; see the sketch below.
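A sketch of that idea in GNU awk only (it relies on gawk's PROCINFO["sorted_in"] custom-comparison support; the function name by_dist is just illustrative):
gawk -F'[| ]' '
function by_dist(i1, v1, i2, v2,    d1, d2) {
    # order array elements by squared distance from the global "target"
    d1 = (v1 - target) * (v1 - target)
    d2 = (v2 - target) * (v2 - target)
    return (d1 < d2) ? -1 : (d1 > d2)
}
{
    target = $1
    delete vals
    for (i = 2; i <= NF; i++) vals[i] = $i
    PROCINFO["sorted_in"] = "by_dist"
    for (v in vals) { print vals[v]; break }   # first element in sorted order = nearest
}' file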
This is a way of doing it with awk:
awk -F"[ \t|]" '{
n=$2;m=($1-$2)*($1-$2)
for(i=3;i<=NF;i++){
d=($1-$i)*($1-$i)
if(d<m){n=$i;m=d}
} print n
}' input
I have a file errorgot.log:
1 23 23
2 22 42
3 12 2
4 5 26
5 14 45
I want to sum the third number of every line with a shell script.
For the example above, 23 + 42 + 2 + 26 + 45 = 138.
Thanks in advance.
This should work:
awk '{sum += $3}END{print sum}' errorgot.log
How does it work?
awk reads the file line by line, splits each line on a separator (whitespace by default), and assigns the fields to numbered variables ($1 for the first column, $2 for the second, and so on)
for each line, awk executes the code between the braces ({sum += $3}); in our case we accumulate the total in the variable sum
after processing the file, awk executes the code in the END section, where we print the sum variable
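For comparison, a pure-bash sketch (assuming three whitespace-separated columns, as in the sample file):
#!/bin/bash
sum=0
while read -r _ _ third; do
    (( sum += third ))            # add the third field of each line
done < errorgot.log
echo "$sum"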
I encountered a problem with bash; I started using it recently.
I realize that a lot of magic can be done with just one line, as my previous question was solved that way.
This time the question is simple:
I have a file which has this format
2 2 10
custom
8 10
3 5 18
custom
1 5
Some of the lines are equal to the string custom (it can be any line!) and the other lines contain 2 or 3 numbers.
I want a file in which the number lines are expanded into sequences but the custom lines are kept (the order must also stay the same), so the desired output is
2 4 6 8 10
custom
8 9 10
3 8 13 18
custom
1 2 3 4 5
I also wish to overwrite the input file with this result.
I know that I can do the sequencing with seq, but I would like an elegant way to do it on the whole file.
You can use awk like this:
awk '/^([[:blank:]]*[[:digit:]]+){2,3}[[:blank:]]*$/ {
j = (NF==3) ? $2 : 1
s=""
for(i=$1; i<=$NF; i+=j)
s = sprintf("%s%s%s", s, (i==$1)?"":OFS, i)
$0=s
} 1' file
2 4 6 8 10
custom
8 9 10
3 8 13 18
custom
1 2 3 4 5
Explanation:
/^([[:blank:]]*[[:digit:]]+){2,3}[[:blank:]]*$/ - match only lines with 2 or 3 numbers.
j = (NF==3) ? $2 : 1 - set the step j to $2 if there are 3 columns, otherwise set j to 1
for(i=$1; i<=$NF; i+=j) - run a loop from the value in the 1st column to the value in the last column, incrementing by j
sprintf is used to build the generated sequence into a single string
the trailing 1 is the default awk action, which prints each line
This might work for you (GNU sed, seq and paste):
sed '/^[0-9]/s/.*/seq & | paste -sd\\ /e' file
If a line begins with a digit, use the line's values as parameters for the seq command, whose output is then piped to the paste command to join it back onto one line. The RHS of the substitute command is evaluated as a shell command using the e flag (GNU sed specific).
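To also overwrite the input file as requested, a plain bash loop around seq is another option. This is a sketch that assumes the file is named file and that GNU seq and paste are available; the temporary-file handling is illustrative:
#!/bin/bash
tmp=$(mktemp)
while IFS= read -r line; do
    if [[ $line =~ ^[[:blank:]]*([0-9]+[[:blank:]]+){1,2}[0-9]+[[:blank:]]*$ ]]; then
        seq $line | paste -sd' ' -        # 2 or 3 numbers: expand with seq, join onto one line
    else
        printf '%s\n' "$line"             # keep lines like "custom" untouched
    fi
done < file > "$tmp" && mv "$tmp" file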
Using bash, how can I get the longest line in a file?
$ cat file
12
3241234
123
3775
874
62693289429834
8772168376123
I want to get 62693289429834.
sort -V file | tail -n1
works on your example input, where every line is a number, so the longest line is also the numerically largest. It will not find the longest line for arbitrary input, though.
awk '{ if (length > max) { max = length; line = $0 } } END { print line }' file
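Another common approach, shown here as a sketch, prefixes each line with its length, sorts numerically, and strips the prefix from the last (longest) line:
awk '{ print length, $0 }' file | sort -n | tail -n1 | cut -d' ' -f2-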