No such file or directory on bash assignment

I'm trying to write a script that saves each line from the file "test" in a variable: line1, line2, and so on...
x=$(cat test | wc -l) #here i have how many lines
i="1"
while [ "$i" -lt "$x" ]
do
line$i=$(sed '$iq;d' test) #i try to get the number $i line one by one
i=$[$i+1]
done
Could you please help me?
Thanks!

To read each line of the file test into an array called lines, use:
mapfile -t lines <test
Example
Consider this file:
$ cat test
dogs and cats
lions and tigers
bears
Execute this statement:
$ mapfile -t lines <test
We can now see the value of lines using declare -p:
$ declare -p lines
declare -a lines='([0]="dogs and cats" [1]="lions and tigers" [2]="bears")'
Each line of file test is now an element of array lines and we can access them by number:
$ echo "${lines[0]}"
dogs and cats
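To walk the whole array afterwards, you can loop over the indices. A small sketch (the here-string just stands in for the test file):

```shell
# Build the array from the sample data
mapfile -t lines <<< $'dogs and cats\nlions and tigers\nbears'

# Iterate over the indices so we can print the element number too
for i in "${!lines[@]}"; do
    printf '%d: %s\n' "$i" "${lines[$i]}"
done
```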

line$i should be line[$i] (or google eval), but chances are you're going about this completely wrong and you should be writing it in awk, e.g.:
awk '{lines[NR] = $0} END{print NR; for (i=1;i<=NR;i++) print i, lines[i]}' test

Let's take advantage of available tooling to see what's wrong:
$ shellcheck linereader
In linereader line 1:
x=$(cat test | wc -l) #here i have how many lines
^-- SC2148: Shebang (#!) missing. Assuming Bash.
^-- SC2002: Useless cat. Consider 'cmd < file | ..' or 'cmd file | ..' instead.
In linereader line 6:
line$i=$(sed '$iq;d' test) #i try to get the number $i line one by one
^-- SC1067: For indirection, use (associative) arrays or 'read "var$n" <<< "value"'
^-- SC2034: line appears unused. Verify it or export it.
^-- SC2016: Expressions don't expand in single quotes, use double quotes for that.
In linereader line 7:
i=$[$i+1]
^-- SC2007: Use $((..)) instead of deprecated $[..]
That's a good start:
#!/bin/bash
# 1. Shebang added
# 2. Redirection for reading files as suggested (stylistic)
x=$(wc -l < test) #here i have how many lines
i="1"
while [ "$i" -lt "$x" ]
do
# 3. Using arrays as suggested
# 4. Using double quotes for sed as suggested
# Additionally, use ${i} so we reference $i and not $iq
line[$i]=$(sed "${i}q;d" test) #i try to get the number $i line one by one
# 5. Use $((..)) as suggested (stylistic)
i=$((i+1))
done
# Print a line to show that it works:
echo "Line #2 is ${line[2]}"
And now let's try:
$ cat test
foo
bar
baz
$ ./linereader
Line #2 is bar
We could also have done this more easily (and in O(n) time) with mapfile:
#!/bin/bash
mapfile -t line < test
echo "Line #2 (zero-based index 1) is: ${line[1]}"


Detect double new lines with bash script

I am attempting to return the line numbers of lines that are blank (a double newline). An input example:
2938
383

3938
3

383
33333
But my script is not working and I can't see why. My script:
input="./input.txt"
declare -i count=0
while IFS= read -r line;
do
((count++))
if [ "$line" == $'\n\n' ]; then
echo "$count"
fi
done < "$input"
So I would expect 3 and 6 as output.
I just receive a blank response in the terminal when I execute, so there isn't a syntax error; something else is wrong with my approach. I'm a bit stumped and grateful for any pointers.
Also "just use awk" doesn't help me. I need this structure for additional conditions (this is just a preliminary test) and I don't know awk syntax.
The issue is that "$line" == $'\n\n' will never match: read strips the newline delimiter, so an empty line in the input arrives as an empty string. Instead you can match an empty line with the regex pattern ^$:
if [[ "$line" =~ ^$ ]]; then
Now it should work.
It's also much easier with an awk command:
$ awk '$0 == ""{ print NR }' test.txt
3
6
As Roman suggested, a line read by read is terminated by a delimiter, and that delimiter will not show up in the line the way you're testing for.
If the pattern you are searching for looks like an empty line (which I infer is how a "double newline" always manifests), then you can just test for that:
while read -r; do
((count++))
if [[ -z "$REPLY" ]]; then
echo "$count"
fi
done < "$input"
Note that IFS is for field-splitting data on lines, and since we're only interested in empty lines, IFS is moot.
Or if the file is small enough to fit in memory and you want something faster:
mapfile -t -O1 foo < input.txt
for n in "${!foo[@]}"; do
if [[ -z "${foo[$n]}" ]]; then
echo "$n"
fi
done
Reading the file all at once (mapfile) then stepping through an array may be easier on resources than stepping through a file line by line.
You can also just use GNU awk:
gawk -v RS= -F '\n' 'length(RT) > 1 { print (i += NF) + 1; i += length(RT) - 1 }' input.txt
Using FS = ".+" ensures that only the numbers of truly zero-length lines (i.e. $0 == "") get printed, while rows consisting entirely of [[:space:]] characters are skipped:
echo '2938
383
3938
3
383
33333' |
{m,g,n}awk -F'.+' '!NF && $!NF = NR'
3
6
This sed one-liner should do the job at once:
sed -n '/^$/=' input.txt
It simply writes the current line number (the = command) if the line read is empty (/^$/ matches an empty line).
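grep can do the same job: -n prefixes each match with its line number, and since the matched lines are empty, only the number and the colon remain. A sketch against the question's input.txt:

```shell
# -n prints "N:" before each match; the match itself is empty here,
# so cutting on ":" leaves just the line number
grep -n '^$' input.txt | cut -d: -f1
```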

How to insert in bash a char at a specific line and position in line

I have a file, where I want to add a * char on specific line, and at a specific location in that line.
Is that possible?
Thank you
You can use an external text-manipulation tool such as sed or awk, either directly from your command line or inside your bash script.
Example:
$ a="This is a test program that will print
Hello World!
Test programm Finished"
$ sed -E '2s/(.{4})/&*/' <<<"$a" #Or <file
#Output:
This is a test program that will print
Hell*o World!
Test programm Finished
In the above test, we insert an asterisk after the 4th char of line 2.
If you want to operate on a file and make changes directly on the file then use sed -E -i '....'
The same result can also be achieved with GNU awk:
awk 'BEGIN{OFS=FS=""}NR==2{sub(/./,"&*",$4)}1' <<<"$a"
In pure bash you can achieve above output with something like this:
while read -r line;do
let ++c
[[ $c == 2 ]] && printf '%s*%s\n' "${line:0:4}" "${line:4}" || printf '%s\n' "${line}"
# alternative:
# [[ $c == 2 ]] && echo "${line:0:4}*${line:4}" || echo "${line}"
done <<<"$a"
#Alternative for file read:
# done <file >newfile
If your variable is just a single line, you don't need the loop. You can do it directly like:
printf '%s*%s\n' "${a:0:4}" "${a:4}"
# Or even
printf '%s\n' "${a:0:4}*${a:4}" #or echo "${a:0:4}*${a:4}"
I suggest using sed. If you want to insert an asterisk on the 2nd line after the 5th column:
sed -r "2s/^(.{5})(.*)$/\1*\2/" myfile.txt
2s says you are going to perform a substitution on the 2nd line. ^(.{5})(.*)$ says you are taking 5 characters from the beginning of the line and all characters after it. \1*\2 says you are building the string from the first match (i.e. 5 beginning characters) then a * then the second match (i.e. characters until the end of the line).
If your line and column are in variables you can do something like that:
_line=5
_column=2
sed -r "${_line}s/^(.{${_column}})(.*)$/\1*\2/" myfile.txt

Trying to take an input file and text line from a given file and save it to another, using bash

What I have is a file (let's call it 'xfile'), containing lines such as
file1 <- this line goes to file1
file2 <- this goes to file2
and what I want to do is run a script that does the work of actually taking the lines and writing them into the file.
The way I would do that manually could be like the following (for the first line)
(echo "this line goes to file1"; echo) >> file1
So, to automate it, this is what I tried to do
IFS=$'\n'
for l in $(grep '[a-z]* <- .*' xfile); do
$(echo $l | sed -e 's/\([a-z]*\) <- \(.*\)/(echo "\2"; echo)\>\>\1/g')
done
unset IFS
But what I get is
-bash: file1(echo "this content goes to file1"; echo)>>: command not found
-bash: file2(echo "this goes to file2"; echo)>>: command not found
(on OS X)
What's wrong?
This solves your problem on Linux
awk -F ' <- ' '{print $2 >> $1}' xfile
Take care to choose the field separator such that the new file names do not have leading or trailing spaces.
Give this a try on OSX
You can use the regex capabilities of bash directly. When you use the =~ operator to compare a variable to a regular expression, bash populates the BASH_REMATCH array with matches from the groups in the regex.
re='(.*) <- (.*)'
while read -r; do
if [[ $REPLY =~ $re ]]; then
file=${BASH_REMATCH[1]}
line=${BASH_REMATCH[2]}
printf '%s\n' "$line" >> "$file"
fi
done < xfile

Bash script get item from array

I'm trying to read a file line by line in bash.
Every line has the format text|number.
I want to produce a file with the format text,text,text etc., so the new file would have just the text parts from the previous file, separated by commas.
Here is what I've tried and couldn't get it to work :
FILENAME=$1
OLD_IFS=$IFS
IFS=$'\n'
i=0
for line in $(cat "$FILENAME"); do
array=(`echo $line | sed -e 's/|/,/g'`)
echo ${array[0]}
i=i+1;
done
IFS=$OLD_IFS
But this prints both the text and the number, just in a different format: text number
here is sample input :
dsadadq-2321dsad-dasdas|4212
dsadadq-2321dsad-d22as|4322
here is sample output:
dsadadq-2321dsad-dasdas,dsadadq-2321dsad-d22as
What did I do wrong?
Not pure bash, but you could do this in awk:
awk -F'|' 'NR>1{printf(",")} {printf("%s",$1)}'
Alternately, in pure bash and without having to strip the final comma:
#!/bin/bash
# You can get your input from somewhere else if you like. Even stdin to the script.
input=$'dsadadq-2321dsad-dasdas|4212\ndsadadq-2321dsad-d22as|4322\n'
# Output should be reset to empty, for safety.
output=""
# Step through our input. (I don't know your column names.)
while IFS='|' read left right; do
# Only add a field if it exists. Salt to taste.
if [[ -n "$left" ]]; then
# Append data to output string
output="${output:+$output,}$left"
fi
done <<< "$input"
echo "$output"
No need for arrays and sed:
while IFS='' read line ; do
echo -n "${line%|*}",
done < "$FILENAME"
You just have to remove the last comma :-)
Using sed:
$ sed ':a;N;$!ba;s/|[0-9]*\n*/,/g;s/,$//' file
dsadadq-2321dsad-dasdas,dsadadq-2321dsad-d22as
Alternatively, here is a bit more readable sed with tr:
$ sed 's/|.*$/,/g' file | tr -d '\n' | sed 's/,$//'
dsadadq-2321dsad-dasdas,dsadadq-2321dsad-d22as
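cut and paste offer yet another way to build the same comma-joined list (paste -s joins all input lines into one, and -d sets the delimiter):

```shell
# Keep the text before "|" on each line, then join the lines with commas
cut -d'|' -f1 file | paste -sd, -
```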
Choroba has the best answer (imho) except that it does not handle blank lines and it adds a trailing comma. Also, mucking with IFS is unnecessary.
This is a modification of his answer that solves those problems:
while read line ; do
if [ -n "$line" ]; then
if [ -n "$afterfirst" ]; then echo -n ,; fi
afterfirst=1
echo -n "${line%|*}"
fi
done < "$FILENAME"
The first if is just to filter out blank lines. The second if and the $afterfirst flag prevent the extra comma: a comma is echoed before every entry except the first one. ${line%|*} is bash parameter expansion that deletes the end of a parameter if it matches some pattern: line is the parameter, % is the symbol that indicates a trailing pattern should be deleted, and |* is the pattern to delete.
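A quick demo of that expansion in isolation (the sample value is made up):

```shell
line='dsadadq-2321dsad-dasdas|4212'
# %  -> delete the shortest matching suffix
# |* -> a literal "|" followed by anything
echo "${line%|*}"
```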

Search and replace variables in a file using bash/sed

I am trying to write a bash script(script.sh) to search and replace some variables in input.sh file. But I need to modify only the variables which are present in variable_list file and leave others as it is.
variable_list
${user}
${dbname}
input.sh
username=${user}
password=${password}
dbname=${dbname}
Expected output file
username=oracle
password=${password} <- This line won't be changed as this variable (${password}) is not in the variable_list file
dbname=oracle
Following is the script I am trying to use but I am not able to find the correct sed expression
script.sh
export user=oracle
export password=oracle123
export dbname=oracle
variable='variable_list'
while read line ;
do
if [[ -n $line ]]
then
sed -i 's/$line/$line/g' input.sh > output.sh
fi
done < "$variable"
This could work:
#!/bin/bash
export user=oracle
export password=oracle123
export dbname=oracle
variable='variable_list'
while read line ;
do
if [[ -n $line ]]
then
exp=$(sed -e 's/\$/\\&/g' <<< "$line")
var=$(sed -e 's/\${\([^}]\+\)}/\1/' <<< "$line")
sed -i "s/$exp/${!var}/g" input.sh
fi
done < "$variable"
The first sed expression escapes the $ which is a regex metacharacter. The second extracts just the variable name, then we use indirection to get the value in our current shell and use it in the sed expression.
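For reference, here is the indirection step on its own (variable names taken from the question):

```shell
user=oracle
var=user        # the variable's *name*, extracted from ${user}
echo "${!var}"  # expands to the value of the variable named by $var
```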
Edit
Rather than rewriting the file so many times, it's probably more efficient to do it like this, building the arguments list for sed:
#!/bin/bash
export user=oracle
export password=oracle123
export dbname=oracle
while read var
do
exp=$(sed -e 's/\$/\\&/g' <<< "$var")
var=$(sed -e 's/\${\([^}]\+\)}/\1/' <<< "$var")
args+=("-e s/$exp/${!var}/g")
done < "variable_list"
sed "${args[@]}" input.sh > output.sh
user=oracle
password=oracle123
dbname=oracle
variable_list=( '${user}' '${dbname}' )
while IFS="=$IFS" read variable value; do
for subst_var in "${variable_list[#]}"; do
if [[ $subst_var = $value ]]; then
eval "value=$subst_var"
break
fi
done
printf "%s=%s\n" "$variable" "$value"
done < input.sh > output.sh
Here is a script.sh that works:
#!/bin/bash
user=oracle
password=oracle123
dbname=oracle
variable='variable_list'
text=$(cat input.sh)
while read line
do
value=$(eval echo $line)
text=$(sed "s/$line/$value/g" <<< "$text")
done < "$variable"
echo "$text" > output.sh
Note that your original version has single quotes around the sed string, which prevents the value of $line from being inserted. sed then looks for the literal pattern $line: an end-of-line anchor followed by the text line, which will never match anything.
Since you are looking for the value of the variable in $line, you need to do an eval to get this.
Also, since there are multiple variables you are looping over, the intermediate text variable stores the result as it loops.
The export keyword is also unnecessary in this script, unless it is being used in some sub-process not shown.
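If GNU gettext is installed, envsubst covers this exact use case: its optional SHELL-FORMAT argument restricts substitution to the listed variables and leaves everything else untouched. A sketch, assuming the contents of variable_list are known up front:

```shell
export user=oracle dbname=oracle
# Only ${user} and ${dbname} are substituted; ${password} survives as-is
envsubst '${user} ${dbname}' < input.sh > output.sh
```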
TXR solution. Build a filter dynamically. The filter is implemented internally as a trie data structure, which gives us a lex-like state machine which matches the entire dictionary at once as the input is scanned. For simplicity, we include the ${ and } as part of the variable name.
#(bind vars (("${user}" "oracle")
("${dbname}" "oracle")
("${password}" "letme1n")))
#(next "variable_list")
#(collect)
#entries
#(end)
#(deffilter subst . #(mapcar (op list #1 (second [find vars #1 equal first]))
entries))
#(next "input.sh")
#(collect)
#line
# (output :filter subst)
#line
# (end)
#(end)
Run:
$ txr subst.txr
username=oracle
password=${password}
dbname=oracle
input.sh: (as given)
username=${user}
password=${password}
dbname=${dbname}
variable_list: (as given)
${user}
${dbname}
