Read a file line by line and assign the values to a variable as comma-separated - bash

I have the following a.txt file:
abc,
def,
ghi
I want to read it line by line and store the values in a variable, comma-separated:
var1=abc,def,ghi
I am new to shell scripting; please help.
My try:
name="file.txt"
while IFS= read -r line
do
  names=`echo $line`
done < "$name"
It only ends up with the last value, ghi, in the variable.

You're not concatenating, you're replacing the names variable each time through the loop.
There's no need to use echo when assigning the variable.
name="file.txt"
names=
while IFS=read -r line
do
names="$names$line"
done < "name"
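If the lines in the file did not already end with commas (say it contained just abc, def, and ghi on separate lines), you could skip the loop and join them with paste instead. A minimal sketch, assuming the input file is a.txt:
names=$(paste -sd, a.txt)   # -s joins all lines, -d, uses a comma as the delimiter
echo "$names"               # abc,def,ghi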

Related

How to assign each of multiple lines in a file to a different variable?

This is probably a very simple question. I looked at other answers but couldn't come up with a solution. I have a 365-line date file, like the one below:
01-01-2000
02-01-2000
I need to read this file line by line and assign each day to a separate variable. like this,
d001=01-01-2000
d002=02-01-2000
I tried while read commands but couldn't get them to work, and assigning the values one by one takes a lot of time. How can I do it quickly?
Creating hundreds of individually named variables is a waste of time, and dynamically named variables aren't really supported in bash. Better to use an associative array:
#!/bin/bash
declare -A array
while read -r line; do
  printf -v key 'd%03d' $((++c))   # build keys d001, d002, ...
  array[$key]=$line
done < file
Output
for i in "${!array[#]}"; do echo "key=$i value=${array[$i]}"; done
key=d001 value=01-01-2000
key=d002 value=02-01-2000
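To look up a single entry directly:
echo "${array[d001]}"   # 01-01-2000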
Assumptions:
an array is acceptable
array index should start with 1
Sample input:
$ cat sample.dat
01-01-2000
02-01-2000
03-01-2000
04-01-2000
05-01-2000
One bash/mapfile option:
unset d # make sure variable is not currently in use
mapfile -t -O1 d < sample.dat # load each line from file into separate array location
This generates:
$ typeset -p d
declare -a d=([1]="01-01-2000" [2]="02-01-2000" [3]="03-01-2000" [4]="04-01-2000" [5]="05-01-2000")
$ for i in "${!d[@]}"; do echo "d[$i] = ${d[i]}"; done
d[1] = 01-01-2000
d[2] = 02-01-2000
d[3] = 03-01-2000
d[4] = 04-01-2000
d[5] = 05-01-2000
In OP's code, references to $d001 now become ${d[1]}.
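For example:
echo "${d[1]}"   # 01-01-2000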
A quick one-liner would be:
eval $(awk 'BEGIN{cnt=0}{printf "d%3.3d=\"%s\"\n",cnt,$0; cnt++}' your_file)
eval makes the shell variables known inside your script or shell. Use echo $d000 to show the first one of the newly defined variables. There should be no shell special characters (like * and $) inside your_file. Remove eval $() to see the result of the awk command. The \" quoted %s is to allow spaces in the variable values. If you don't have any spaces in your_file you can remove the \" before and after %s.
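If you'd rather avoid eval, here is a rough equivalent sketch using declare (same your_file and d000-style names as above):
c=0
while IFS= read -r line; do
  declare "d$(printf '%03d' $((c++)))=$line"   # creates d000, d001, ...
done < your_file
echo "$d000"   # first line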

Read a multi-variable CSV in bash and build multi-line files from it

I had what I thought was a simple concept which I could easily implement, as I've done something similar before.
I have an input file input.csv
1a,1b
2a,2b
I would like the following output
Output file 1
This is variable 1 named 1a ok
This is variable 2 named 1b ok
Output file 2
This is variable 1 named 2a ok
This is variable 2 named 2b ok
I thought I could do something similar to below
i=1
while IFS=, read var1 var2; do
  echo This is variable 1 named "var1" > filenamei
  echo This is variable 2 named "var2" >> filenamei
  i=i+1
done </inputfile.csv
I previously wrote code to take a single variable from a long file and write the output to a single file, and it worked fine, like below.
Input file
a
b
Single output file
This is A
This is B
Script was:
while read p; do
  echo this is "$p" >> outputfile
done < inputfile
Been through lots of different errors but getting nowhere.
This is easy with a double loop: an outer loop to iterate over the lines and an inner one over the comma-separated fields. How about:
#!/bin/bash
i=1
while read -r line; do
  ifs_back="$IFS"
  IFS=","
  set -- $line   # split the line on commas into $1, $2, ...
  for ((j=1; j<=$#; j++)); do
    echo This is variable "$j" named "${!j}" ok >> "filename${i}"
  done
  IFS="$ifs_back"
  i=$((i+1))
done < "inputfile.csv"
Explanations:
In order to split the input line on commas, we temporarily set IFS to ",", then assign the fields to the positional parameters $1, $2, ...
The loop counter j for the inner loop starts at 1 and ends at $#, the number of fields.
We can access the value of a positional parameter via ${!j}.
As a cleanup at the end of the inner loop, we restore IFS and increment i for the next line.
The code above is flexible with respect to the number of lines and fields, so it would work with the input:
1a,1b
2a,2b
3a,3b
as well as with:
1a,1b,1c
2a,2b,2c
3a,3b,3c
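An alternative sketch that splits each line directly into an array with read -a, so the global IFS never has to be saved and restored (same inputfile.csv assumed):
#!/bin/bash
i=1
while IFS=, read -r -a fields; do
  for ((j=0; j<${#fields[@]}; j++)); do
    echo "This is variable $((j+1)) named ${fields[j]} ok" >> "filename${i}"
  done
  i=$((i+1))
done < inputfile.csv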
Hope this helps.

Read 2 lines at a time from a text file and assign them to variables in a shell script

I have a text file with 50 numbers. I want to read 2 lines at a time and assign them to variables.
input.txt:
129260
129288
129356
129384
Read input.txt (2 lines at a time):
VALUE1=129260
VALUE2=129288
Then loop to keep doing this until the entire file is read.
This reads in two lines at a time from input.txt, assigns each line to a variable, and displays the values of the variables:
$ while read -r val1 && read -r val2; do echo "val1=$val1 and val2=$val2"; done <input.txt
val1=129260 and val2=129288
val1=129356 and val2=129384
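Note that if the file has an odd number of lines, the && makes the loop stop before the final unpaired value is processed. A sketch that reports it instead (same input.txt):
while read -r val1; do
  if read -r val2; then
    echo "val1=$val1 and val2=$val2"
  else
    echo "unpaired final value: $val1"   # odd line count
  fi
done < input.txt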

How to iterate over a text file having multiple words per line using a shell script?

I know how to iterate over lines of text when the text file has contents as below:
abc
pqr
xyz
However, what if the contents of my text file are as below,
abc xyz
cdf pqr
lmn rst
and I need to get the value "abc" stored in one variable and "xyz" stored in another. How would I do that?
read splits the line on $IFS into as many variables as you pass to it:
while read -r var1 var2; do
  echo "var1: ${var1} var2: ${var2}"
done < file   # assuming the words are in a file named "file"
If you pass var1 and var2, the two columns go into separate variables. But note that if a line contained more columns, var2 would hold the whole remaining line, not just column 2.
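With the sample input above, the loop prints:
var1: abc var2: xyz
var1: cdf var2: pqr
var1: lmn var2: rst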
Type help read for more info.
If the delimiter is a space then you can do:
#!/bin/bash
ALLVALUES=()
while read -r line
do
  ALLVALUES+=( $line )   # deliberately unquoted: word splitting puts each word in its own element
done < "/path/to/your/file"
Afterwards, you can just reference an element by ${ALLVALUES[0]} or ${ALLVALUES[1]}, etc.
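With the sample input above, for instance:
echo "${ALLVALUES[0]}"   # abc
echo "${ALLVALUES[3]}"   # pqr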
If you want to read every word in a file into a single array you can do it like this:
arr=()
while read -r -a _a; do
  arr+=("${_a[@]}")
done < infile
This uses -r to keep read from interpreting backslashes in the input and -a to have it split the words (on $IFS) into an array. It then appends all the elements of that array to the accumulating array while staying safe against globbing and other metacharacters.
This awk command reads the input word by word:
$ awk -v RS='[[:space:]]+' '1' file
abc
xyz
cdf
pqr
lmn
rst
To populate a shell array, use the awk command in a process substitution:
arr=()
while read -r w; do
  arr+=("$w")
done < <(awk -v RS='[[:space:]]+' '1' file)
And print the array content:
declare -p arr
declare -a arr='([0]="abc" [1]="xyz" [2]="cdf" [3]="pqr" [4]="lmn" [5]="rst")'
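A shorter variant of the same idea, sketched with tr producing one word per line and mapfile (bash 4+) collecting them:
mapfile -t arr < <(tr -s '[:space:]' '\n' < file)
declare -p arr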

Comment out a line in a multi-line string in a bash script

I'm trying to write a script that exports an array to loop through in the following way:
export fields="
a,1
b,2
c,3
...
"
for i in $fields
do
  IFS=","
  set $i
  ...
done
Is there a way to only comment out a single line in the list of field "tuples" that I'm using? In other words, if I want to run this and skip "b,2" is there a way to comment this line out without deleting the line?
First, define an array that has one line per element (no need to export it):
fields=(
  a,1
  # b,2
  c,3
)
Note you can intersperse comment lines with the rest of the elements.
Then, iterate over the contents of the array and use the read command to split each element into two fields:
for line in "${fields[@]}"; do
  IFS=, read -r f1 f2 <<< "$line"
  ...
done
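A quick usage sketch, with echo standing in for the elided loop body:
for line in "${fields[@]}"; do
  IFS=, read -r f1 f2 <<< "$line"
  echo "f1=$f1 f2=$f2"
done
This prints f1=a f2=1 and f1=c f2=3, skipping the commented-out b,2.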
