There is a file called test.txt that contains:
ljlkfjdslkfldjfdsajflkjf word:test1
dflkdjflkdfdjls word:test2
dlkdj word:test3
word:test4
word:NewYork
dljfldflkdflkdjf word:test7
djfkd word:young
dkjflke word:lisa
amazonwle word:NewYork
dlksldjf word:test10
What we want is to extract the string after the colon on each line and, if the same value appears more than once, print it; in this case that is "NewYork".
Here is the script, which lists the elements, but when I try to push them into an array and compare them it fails. Please point out my mistakes.
#!/usr/bin/sh
input="test.txt"
cat $input | while read line; do output= $(echo $line | cut -d":" -f2); done
for (( i = 0 ; i < "${#output[@]}" ; i++ ))
{
echo ${output[i]}
}
Error obtained:
./compare.sh
./compare.sh: line 11: test1: command not found
./compare.sh: line 11: test2: command not found
./compare.sh: line 11: test3: command not found
./compare.sh: line 11: test4: command not found
./compare.sh: line 11: NewYork: command not found
./compare.sh: line 11: raghav: command not found
./compare.sh: line 11: young: command not found
./compare.sh: line 11: lisa: command not found
./compare.sh: line 11: NewYork: command not found
./compare.sh: line 11: test10: command not found
output= $(..) first executes the command inside $(..) and grabs its output. It then sets the variable output to an empty string, as if you had written output="", exports that variable into the environment of the next command, and executes the output of $(..) as a command name. Remove the space after =.
You are setting output on the right side of a pipe, inside a subshell. The change will not be visible outside: output is unset once the pipe terminates. Use redirection instead: while ...; done < file.
Also, output is not an array but a normal variable. There is no ${output[i]} (well, except ${output[0]}), as it's not an array (and output is unset anyway, as explained above). Append elements with output+=("$(...)").
#!/usr/bin/sh is problematic: sh may not be bash and may not support bash arrays. To use bash extensions, run the script with bash and use a bash shebang: #!/bin/bash.
Now stylistic:
The for ... { ... } form has been supported in bash for a long time, but it is essentially undocumented syntax kept for people used to programming in C. Prefer the standard do ... done.
The expansions $input and ${output[i]} are unquoted; quote them as "$input" and "${output[i]}".
read will trim leading and trailing whitespace and also interpret backslash escape sequences (see the -r option to read).
echo $line makes $line undergo word splitting: runs of whitespace are collapsed to a single space.
Using cut after read can be written more simply as a read with IFS=:, splitting during the read itself. Also, if you're using cut anyway, you could run cut -d: -f2 file on the whole file instead of cutting one line at a time.
Re-read a basic bash introduction and note that bash is space-aware: spaces around = matter. See BashFAQ on how to read a file field by field, on subshells and environments, on "I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates? Or, why can't I pipe data to read?", and an introduction to bash arrays.
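Putting those fixes together, here is a corrected sketch. It assumes the goal is to print any extracted value that occurs more than once (a sample test.txt is created inline so the snippet is self-contained):

```shell
#!/bin/bash
# Sample input in the question's format (abbreviated)
cat > test.txt <<'EOF'
ljlkfjdslkfldjfdsajflkjf word:test1
word:NewYork
amazonwle word:NewYork
dlksldjf word:test10
EOF

output=()
while IFS=: read -r _ value; do   # split on ":" during the read itself
    output+=("$value")            # works: redirection below means no subshell
done < test.txt                   # redirect the file instead of piping cat

printf '%s\n' "${output[@]}" | sort | uniq -d   # prints values seen more than once
```

Run against the full file from the question, this prints NewYork.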
I'm writing a small script in which I want to set the value of a variable to the output of a command. However, the command in question is a call to another script with command-line arguments. I'm using backticks, as you normally would in this scenario, but the problem is that the shell gives an error in which it tries to interpret the command-line arguments as commands.
#!/bin/bash
filename="$1"
while read p; do
echo "This is the gene we are looking at: ""$p"
lookIn= `./findGeneIn "$p" burgdorferi afzelii garinii hermsii miyamotoi parkeri`
echo "$lookIn"
#grep "$p" "$lookIn""/""prokka_""$lookIn""/*.tsv" | awk '{print $1}'
done < $filename
I'm trying to set variable lookIn equal to the output of ./findGeneIn "$p" burgdorferi afzelii garinii hermsii miyamotoi parkeri, where ./findGeneIn is a script, and the words burgdorferi,...,parkeri are command line arguments for ./findGeneIn.
The issue, is that I get an error saying "burgdorferi: command not found". So it's trying to interpret those arguments as commands. How do I get it to not do that?
lookIn= `./findGeneIn "$p" burgdorferi afzelii garinii hermsii miyamotoi parkeri`
^
Delete the extra space. Assignments must not have spaces around the equal sign.
With the space there, Bash parses the line as var=value command, which runs command with $var temporarily set to "value" in its environment. In this case, it interprets the result of the backticks as a command name and lookIn= as an empty variable assignment.
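A minimal sketch of that parsing rule (greeting is a hypothetical variable name):

```shell
# "var=value command" sets var only in that one command's environment:
greeting=hello bash -c 'echo "$greeting"'   # prints: hello
echo "${greeting:-unset}"                   # prints: unset (the assignment was temporary)
```

So lookIn= `...` runs the backtick output as a command name, with lookIn set to the empty string in its environment, which is exactly the reported error.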
Here's my bash function to get a command's output as parameter and then return an array of output lines.
function get_lines {
while read -r line; do
echo $line
done <<< $1
}
SESSIONS=`loginctl list-sessions`
get_lines "$SESSIONS"
Actual output of loginctl list-sessions is:
SESSION UID USER SEAT
c2 1000 asif seat0
c7 1002 sadia seat0
But the while loop only runs once printing all output in a single line. How can I get an array of lines and return it?
You could use readarray and avoid the get_lines function:
readarray SESSIONS < <(loginctl --no-legend list-sessions)
This creates the array SESSIONS, with each line of the command's output mapped to an element of the array.
The value of this answer is in explaining the problem with the OP's code.
- The other answers show the use of Bash v4+ builtin mapfile (or its effective alias, readarray) for directly reading input line by line into the elements of an array, without the need for a custom shell function.
- In Bash v3.x, you can use IFS=$'\n' read -r -d '' -a lines < <(...), but note that empty lines will be ignored.
Your primary problem is that unquoted (non-double-quoted) use of $1 makes the shell apply word-splitting to its contents, which effectively normalizes all runs of whitespace - including newlines - to a single space each, resulting in a single input line to the while loop.
Secondarily, echoing $line unquoted applies this word-splitting again on output.
Finally, by using read without setting $IFS, the internal field separator, to the empty string (via IFS= read -r line), leading and trailing whitespace is trimmed from each input line.
That said, you can simplify your function to read directly from stdin rather than taking arguments:
function get_lines {
while IFS= read -r line; do
printf '%s\n' "$line"
done
}
which you can then invoke as follows, using a process substitution:
get_lines < <(loginctl list-sessions)
Using a pipeline would work too, but get_lines would then run in a subshell, which means that it can't set variables visible to the current shell:
loginctl list-sessions | get_lines
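A small self-contained sketch of the difference, with printf standing in for loginctl:

```shell
lines=()
printf 'a\nb\n' | while IFS= read -r l; do lines+=("$l"); done
echo "${#lines[@]}"   # prints 0: the loop ran in the pipeline's subshell

while IFS= read -r l; do lines+=("$l"); done < <(printf 'a\nb\n')
echo "${#lines[@]}"   # prints 2: process substitution keeps the loop in the current shell
```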
here's a way in bash v4+:
SESSIONS=`loginctl list-sessions`
mapfile -t myArray <<< "$SESSIONS"
ref:
Creating an array from a text file in BASH with mapfile
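For example, with a here-string standing in for the loginctl output:

```shell
SESSIONS=$'c2 1000 asif seat0\nc7 1002 sadia seat0'
mapfile -t myArray <<< "$SESSIONS"   # -t strips the trailing newline from each element
echo "${#myArray[@]}"                # prints: 2
echo "${myArray[1]}"                 # prints: c7 1002 sadia seat0
```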
I intend to read the lines of a short .txt file and assign each line to variables containing the line number in the variable name.
File example.txt looks like this:
Line A
Line B
When I run the following code:
i=1
while read line; do
eval line$i="$line"
echo $line
((i=i+1))
done < example.txt
What I would expect during execution is:
Line A
Line B
and afterwards being able to call
$ echo $line1
Line A
$ echo $line2
Line B
However, the code above results in the error:
-bash: A: command not found
Any ideas for a fix?
Quote removal happens twice with eval: your double quotes are removed before eval even runs. I'm not going to answer that part directly, because there are better ways to do this:
readarray line < example.txt # bash 4
echo "${line[0]}"
Or, to do exactly what you were doing, with a different variable for each line:
i=1
while read line$((i++)); do
:
done < example.txt
Also check out printf -v varname "%s" value for a better / safer way to assign by reference.
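A quick sketch of printf -v applied to the same task, avoiding eval entirely:

```shell
i=1
while IFS= read -r line; do
    printf -v "line$i" '%s' "$line"   # assign to the dynamically named variable line1, line2, ...
    ((i++))
done <<< $'Line A\nLine B'

echo "$line1"   # prints: Line A
echo "$line2"   # prints: Line B
```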
Check out the bash-completion code if you want to see some complicated call-by-reference bash shenanigans.
Addressing your comment: if you want to process lines as they come in, but still save previous lines, I'd go with this construct:
lines=()
while read l;do
lines+=( "$l" )
echo "my line is $l"
done < "$infile"
This way you don't have to jump through any syntactic hoops to access the current line (vs. having to declare a reference-variable to line$i, or something.)
Bash arrays are really handy, because you can access a single element by index, or you can do "${#lines[@]}" to get the line count. Beware that unset lines[4] leaves a gap rather than renumbering lines[5] onward. See the "Arrays" section in the bash man page. To find the part of the manual that documents the ${#...} expansion, and other stuff, search in the manual for ##; the Parameter Expansion section is the first hit for that in the bash 4.3 man page.
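To illustrate the count and the gap left by unset:

```shell
lines=(a b c d e f)
echo "${#lines[@]}"   # prints: 6
unset 'lines[4]'      # removes element "e" but does not renumber the rest
echo "${#lines[@]}"   # prints: 5
echo "${!lines[@]}"   # prints: 0 1 2 3 5  (index 4 is now a gap)
```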
eval line$i="$line" tells bash to evaluate the string line1=Line A, which attempts to invoke a command named A with the variable line1 set to Line in its environment. You probably want eval "line$i='$line'" (though even that breaks if the line contains a single quote).
I am trying to read three similar files with different names into different arrays. Because I didn't want to duplicate code, I am trying to create a function that accepts array names as parameters, but I am getting a 'command not found' error.
hello.sh file code:
#!/bin/bash
declare -a row_1
declare -a row_2
declare -a row_3
load_array()
{
ROW="$2"
let i=0
while read line; do
for word in $line; do
$ROW[$i]=$word
((++i))
done
done < $1
}
load_array $1 row_1
load_array $2 row_2
load_array $3 row_3
Calling this file from terminal with: sh hello.sh 1.txt 2.txt 3.txt
List of errors i am getting:
hello.sh: line 13: row_1[0]=9: command not found
hello.sh: line 13: row_1[1]=15: command not found
hello.sh: line 13: row_1[2]=13: command not found
hello.sh: line 13: row_2[0]=12: command not found
hello.sh: line 13: row_2[1]=67: command not found
hello.sh: line 13: row_2[2]=63: command not found
hello.sh: line 13: row_3[0]=75: command not found
hello.sh: line 13: row_3[1]=54: command not found
hello.sh: line 13: row_3[2]=23: command not found
In the assignment syntax, what is to the left of the equal sign must be either a variable name (when assigning to a scalar), or a variable name followed by a word in square brackets (when assigning to an array element). In your code, $ROW[$i]=$word doesn't match this syntax (there's a $ at the beginning, so it can't possibly be an assignment); it's just a word that happens to contain the character =.
In bash, you can use the declare builtin to assign to a variable whose name is the result of some computation such as a variable expansion. Note that unlike for a straight assignment, you do need double quotes around the value part to prevent word splitting and filename expansion on $word. Pass the option -g if you're doing the assignment in a function and you want the value to remain after the function returns.
declare -g $ROW[$i]="$word"
If you're running an old version of bash that doesn't have declare -g, you can use eval instead. This would be the method to use to assign to a dynamically-named variable in plain sh. Take care of quoting things properly: if ROW is e.g. row_1, then the string passed as the argument to eval must be row_1[$i]=$word (the shell command to parse and execute).
eval "$ROW[\$i]=\$word"
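A small sketch of both forms (ROW, i, and word are hypothetical values):

```shell
ROW=row_1
i=0
word='hello world'

declare -g "$ROW[$i]=$word"   # whole argument quoted here to be safe against globbing
i=1
eval "$ROW[\$i]=\$word"       # $ROW expands now; $i and $word expand when eval runs

echo "${row_1[0]}"   # prints: hello world
echo "${row_1[1]}"   # prints: hello world
```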
The ideal way to do this with modern (bash 4.3+) syntax is thus:
load_array() {
declare -n _load_array__row=$2
declare -a _load_array__line
_load_array__row=( )
while read -r -a _load_array__line; do
_load_array__row+=( "${_load_array__line[@]}" )
done <"$1"
}
(The variable names are odd to reduce the chance of collisions with the calling function; the other answers given here will have trouble if asked to load content into a variable named ROW or line, for instance, as they'll be referring to their own local variables rather than the caller's.)
A similar mechanism compatible with bash 3.2 (the ancient release shipped by Apple), avoiding the performance hit of inner loops and the bugs associated with glob expansion (see what your original code does to a line containing *!), follows:
load_array() {
local -a _load_array__array=( )
local -a _load_array__line
while read -r -a _load_array__line; do
_load_array__array+=( "${_load_array__line[@]}" )
done <"$1"
eval "$2=( \"\${_load_array__array[@]}\" )"
}
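A condensed usage sketch of the nameref version (the sample file, its path, and its contents are hypothetical):

```shell
load_array() {
    declare -n _row=$2      # nameref: _row aliases the caller's array variable
    _row=()
    local -a _words
    while read -r -a _words; do
        _row+=( "${_words[@]}" )
    done < "$1"
}

printf '9 15 13\n12 67\n' > /tmp/nums.txt
load_array /tmp/nums.txt row_1
echo "${#row_1[@]}"   # prints: 5
echo "${row_1[3]}"    # prints: 12
```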
Given a text file with multiple lines, I would like to iterate over each line in a Bash script. I had attempted to use cut, but cut does not accept \n (newline) as a delimiter.
This is an example of the file I am working with:
one
two
three
four
Does anyone know how I can loop through each line of this text file in Bash?
I found myself facing the same problem, and this works for me:
cat file.cut | cut -d$'\n' -f1
Or:
cut -d$'\n' -f1 file.cut
Use cat for concatenating or displaying. No need for it here.
file="/path/to/file"
while read line; do
echo "${line}"
done < "${file}"
Simply use:
echo -n `cut ...`
This suppresses the \n at the end
cat FILE|while read line; do # 'line' is the variable name
echo "$line" # do something here
done
or (see comment):
while read line; do # 'line' is the variable name
echo "$line" # do something here
done < FILE
So, some really good (possibly better) answers have already been provided. But given the phrasing of the original question, wanting to use a Bash for loop, it surprised me that nobody mentioned a solution that changes the field separator IFS. It's a pure bash solution, just like the accepted read line answer.
old_IFS=$IFS
IFS=$'\n'   # $'\n' is an actual newline; plain '\n' would set IFS to backslash and "n"
for field in $(<filename)
do your_thing;
done
IFS=$old_IFS
If you are sure that the output will always be newline-delimited, use head -n 1 in lieu of cut -f1 (note that you mentioned a for loop in a script and your question was ultimately not script-related).
Many of the other answers, including the accepted one, unnecessarily span multiple lines. There is no need to do this over multiple lines or to change the default delimiter on the system.
Also, the solution provided by Ivan with -d$'\n' did not work for me on either Mac OS X or CentOS 7. Since his answer is four years old, I assume something must have changed in the handling of the $ character for this situation.
While loop with input redirection and read command.
You should not be using cut to perform a sequential iteration of each line in a file as cut was not designed to do this.
Print selected parts of lines from each FILE to standard output.
— man cut
TL;DR
You should use a while loop with the read -r command, redirecting standard input from your file, inside a function scope where IFS is set to the empty string, and use -E when using echo.
processFile() {                 # Function scope to avoid changing IFS globally
    file="$1"                   # Any file that exists
    local IFS=                  # Empty IFS preserves leading/trailing spaces and tabs
    while read -r line; do      # read exits non-zero at EOF; -r preserves backslashes
        echo -E "$line"         # -E prints backslashes literally instead of interpreting them
    done < "$file"              # Input redirection lets the loop read the file from stdin
}
processFile /path/to/file
Iteration
In order to iterate over each line of a file, we can use a while loop. This will let us iterate as many times as we need to.
while <condition>; do
<body>
done
Getting our file ready to read
We can use the read command to store a single line from standard input in a variable. Before we can use it to read a line from our file, we need to redirect standard input to point at our file. We can do this with input redirection. According to the bash man pages, the syntax for redirection is [fd]<file, where fd defaults to standard input (a.k.a. file descriptor 0). For a compound command such as a while loop, the redirection is placed after the closing done keyword:
while <condition>; do
<body>
done < /path/to/file
Reading the file and ending the loop
Now that our file can be read from standard input, we can use read. The syntax for read in our context is read [-r] var..., where -r preserves backslash characters instead of treating them as escape sequences, and var is the name of the variable to store the input in. You can have multiple variables to store pieces of the input, but we only need one to read an entire line. Along with this, to preserve any backslashes in output from echo, you will likely need to use the -E flag to disable its interpretation of backslash escapes. If you have any leading indentation (spaces or tabs) that you want to keep, you will need to temporarily set the IFS (internal field separator) variable to the empty string; normally it is set to " \t\n", which makes read trim leading and trailing whitespace.
main() {
    local IFS=      # empty IFS preserves leading/trailing whitespace
    read -r line
    echo -E "$line"
}
main
How do we use read to end our while loop?
There is really only one reliable way that I know of to determine when you've finished reading a file with read: check its exit value. If the exit value of read is 0, we successfully read a line; if it is 1 or higher, we reached EOF (end of file). With that in mind, we can place the call to read in our while loop's condition.
processFile() {
    # Could be any file you want, hardcoded or dynamic
    file="$1"
    local IFS=      # empty IFS preserves leading/trailing whitespace
    while read -r line; do
        # Process the line here
        echo -E "$line"
    done < "$file"
}
processFile /path/to/file1
processFile /path/to/file2
A visual breakdown of the above code via Explain Shell.
If I am executing a command and want to cut the output, but the output has multiple lines, I found it helpful to do
echo $([command]) | cut [....]
This puts all the output of [command] on a single line, which can be easier to process. Note that it also collapses runs of whitespace within the output into single spaces.
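For example, the unquoted expansion is what flattens the lines:

```shell
multiline=$(printf 'one\ntwo\nthree')
echo $multiline    # unquoted: word splitting collapses the newlines; prints: one two three
echo "$multiline"  # quoted: the newlines are preserved
```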
My understanding is that cut processes its input line by line, using '\n' as the record separator.
If you want to use cut, I have two ways:
cut -d^M -f1 file_cut
I type ^M by pressing Ctrl+V and then Enter. Another way is
cut -c 1- file_cut
Does that help?