Passing values of one file into another file as inputs - ruby

I have a file that contains the following information in a tab-separated manner:
abscdfr 2 5678
bgbhjgy 7 8756
ptxfgst 5 6783
Let's call this file A; it contains 2000 lines.
I have another file, B, a Ruby script
that takes these values as command-line input:
f_id = ARGV[0]
lane = ARGV[1].to_i
sample_id = ARGV[2].to_i
puts " #{f_id}_#{lane}_#{sample_id}.bw"
I execute file B by providing it the information from file A:
./fileB.rb abscdfr 2 5678
I want to know how I can pass the values of file A as input to file B, one line at a time.
If it were one value it would be easy, but I am confused by the three values.
Kindly help me write a wrapper around these two files, either in bash or ruby.
Thank you.

The following command will do the job in bash:
while read line; do ./fileB.rb $line; done < fileA
This reads each line into the variable line and then runs ./fileB.rb $line for each one. $line is expanded before the command line is evaluated, so each word in the line is passed as its own argument; it is important that there is no quoting like "$line" here. read reads from stdin and would usually wait for user input, but with < fileA the contents of fileA are redirected to stdin, so read takes its input from there.
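For example, with the three sample lines above, each run of ./fileB.rb receives its line's three fields. The expected output would be (a sketch; the leading space comes from the puts format string in fileB.rb):
$ while read line; do ./fileB.rb $line; done < fileA
 abscdfr_2_5678.bw
 bgbhjgy_7_8756.bw
 ptxfgst_5_6783.bw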

You could use a little bash script to loop through each line in the file and output the contents as arguments to another script.
while read line; do
eval "./fileB.rb $line"
done < fileA
This will evaluate the line in the quotes as if you had typed it into the shell yourself.

You can also use a Ruby one-liner:
ruby -ne 'system( "./fileB.rb #{$_}" )' < fileA
Explanation:
-e allows us to specify a script on the command line.
-n (somewhat like sed -n or awk) tells Ruby to wrap the script in a loop that reads the input, or input file, line by line.
$_ is where Ruby stores the current input line by default.

Related

During a while loop file read, where is the first line of stdin lost?

Assume we have a file with the numbers 1 to 5 written down line by line.
When I open a file for reading as standard input and use 'while read', commands that can read stdin are unable to read the first line of that file.
$ while read x; do sed ''; done<file
2
3
4
5
It makes no difference which command you use: sed, awk, cat, etc. The problem occurs with any command that is able to read from stdin. There is also no difference between the shells I use: I tried the same thing in sh, bash, and zsh, and the results are identical.
It's worth noting that the loop iterates five times, once for each line. For example:
$ while read x; do printf 'something\n'; done<file
something
something
something
something
something
I understand that if I want to read all lines correctly, I must specify a variable in the read command and then pass it to the command. But I'm trying to figure out what's going on here. Why does this problem occur when I do not specify input for a command directly?
Perhaps it is a side effect with no functional purpose.
I couldn't find any information about this behavior of the 'while read' statement, and neither did I find anyone who had a similar problem.
Your code only iterates once.
while read x; do sed ''; done<file
...behaves as follows:
file is opened and attached to stdin
read consumes the first line of the file from stdin and puts it into $x
sed '' consumes the entire rest of the file from stdin and prints it to stdout without changes.
read sees there's no more data (because sed consumed it all), and the loop ends.
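You can see this concretely with the same 1-to-5 file: read consumes the first line, sed prints the rest, and the loop ends.
$ while read x; do echo "read consumed: $x"; sed ''; done <file
read consumed: 1
2
3
4
5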
If you want sed to operate on only the one line that read x consumed, and to safeguard against other bugs, you might instead write:
while IFS= read -r x; do printf '%s\n' "$x" | sed ''; done <file
The changes:
Using IFS= prevents leading or trailing whitespace from being deleted by read.
Using the -r argument prevents backslashes from being consumed by read.
Piping from printf '%s\n' "$x" into sed changes sed's stdin, such that instead of containing the rest of the file, it only contains the one line. Thus, this ensures that sed is processing the line that was consumed by read, instead of ignoring that line and processing the entire rest of the file. (Using printf instead of echo is a correctness concern; see Why is printf better than echo? on UNIX & Linux Stack Exchange).
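For example, with the same file, a hypothetical substitution makes the per-line processing visible (a sketch):
$ while IFS= read -r x; do printf '%s\n' "$x" | sed 's/$/!/'; done <file
1!
2!
3!
4!
5!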
The first line of stdin is not lost; it is consumed by read when the '<' operator redirects the file to the while loop's stdin, and the command inside the loop then consumes the remaining lines. To avoid this, you can bind read to a separate file descriptor using '<&', so the loop body keeps its own stdin:
$ while read x <&3; do sed ''; done 3<file

WHILE loop - read line of a file one by one -- Not working the No. of times the file has lines in it

I'm using a "while" loop within a shell script (bash) to read a file line by line. Unfortunately, it is not running the number of times the file has lines in it.
Here's the summary:
$ cat inputfile.txt
1
2
3
4
5
Now, the shell script content is pretty simple as shown below:
#!/bin/bash
while read line
do
echo $line ----------;
done < inputfile.txt;
The above script works just fine :). It shows all 5 lines from inputfile.txt.
Now, I have another script whose code is like:
#!/bin/bash
while read line
do
echo $line ----------;
somevariable="$(ssh sshuser@sshserver "hostname")";
echo $somevariable;
done < inputfile.txt;
Now, in this script, the while loop shows only the line "1 ----------" and exits the loop after showing a valid value for "$somevariable".
Any idea what I'm missing here? I didn't try redirecting through some other descriptor N (i.e., done <&N with N<inputfile.txt), but I'm curious why this simple script does not run N times when all I added was a simple variable assignment that performs an "ssh" operation in a child shell.
Thanks.
You might want to add the -n option to the ssh command. This prevents it from "swallowing" your inputfile.txt as its standard input.
Alternatively, you might just redirect ssh stdin from /dev/null, eg:
somevariable="$(ssh sshuser@sshserver "hostname" </dev/null)";
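For example, using -n (which makes ssh itself redirect its stdin from /dev/null):
somevariable="$(ssh -n sshuser@sshserver "hostname")";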

How do I iterate over each line in a file with Bash?

Given a text file with multiple lines, I would like to iterate over each line in a Bash script. I had attempted to use cut, but cut does not accept \n (newline) as a delimiter.
This is an example of the file I am working with:
one
two
three
four
Does anyone know how I can loop through each line of this text file in Bash?
I found myself with the same problem; this works for me:
cat file.cut | cut -d$'\n' -f1
Or:
cut -d$'\n' -f1 file.cut
Use cat for concatenating or displaying. No need for it here.
file="/path/to/file"
while read line; do
echo "${line}"
done < "${file}"
Simply use:
echo -n `cut ...`
This suppresses the \n at the end
cat FILE|while read line; do # 'line' is the variable name
echo "$line" # do something here
done
or (see comment):
while read line; do # 'line' is the variable name
echo "$line" # do something here
done < FILE
Some really good (possibly better) answers have been provided already. But looking at the phrasing of the original question, which wanted to use a bash for loop, it amazed me that nobody mentioned a solution that changes the Field Separator, IFS. It's a pure bash solution, just like the accepted read line answer:
old_IFS=$IFS
IFS=$'\n'   # note $'\n': a plain '\n' would set IFS to the two characters \ and n
for field in $(<filename)
do your_thing;
done
IFS=$old_IFS
If you are sure that the output will always be newline-delimited, use head -n 1 in lieu of cut -f1 (note that you mentioned a for loop in a script and your question was ultimately not script-related).
Many of the other answers, including the accepted one, have multiple lines unnecessarily. There is no need to do this over multiple lines or to change the default delimiter on the system.
Also, the solution provided by Ivan with -d$'\n' did not work for me on either Mac OS X or CentOS 7. Since his answer is four years old, I assume something must have changed in how the $ character is handled in this situation.
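For instance, a minimal sketch of the head -n 1 approach:
$ printf 'first\nsecond\n' | head -n 1
first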
While loop with input redirection and read command.
You should not be using cut to perform sequential iteration over each line in a file, as cut was not designed to do this.
Print selected parts of lines from each FILE to standard output.
— man cut
TL;DR
You should use a while loop with the read -r command, redirecting standard input from your file, inside a function scope where IFS is set to a newline; use -E when using echo.
processFile() { # Function scope to prevent overwriting IFS globally
file="$1" # Any file that exists
local IFS=$'\n' # Preserves spaces and tabs; note $'\n', not "\n"
while read -r line; do # read exits with 1 when done; -r preserves \
echo -E "$line" # -E prints \ literally instead of interpreting it
done < "$file" # Input redirection lets us read the file from stdin
}
processFile /path/to/file
Iteration
In order to iterate over each line of a file, we can use a while loop. This will let us iterate as many times as we need to.
while <condition>; do
<body>
done
Getting our file ready to read
We can use the read command to store a single line from standard input in a variable. Before we can use it to read a line from our file, we need to redirect standard input to point to our file. We can do this with input redirection. According to the man pages for bash, the syntax for redirection is [fd]<file, where fd defaults to standard input (a.k.a. file descriptor 0). For a compound command such as a while loop, the redirection is placed after the closing done (placing it before the loop is not valid bash syntax for compound commands):
while <condition>; do
<body>
done < /path/to/file
Reading the file and ending the loop
Now that our file can be read from standard input, we can use read. The syntax for read in our context is read [-r] var..., where -r preserves the \ (backslash) character instead of treating it as an escape-sequence character, and var is the name of the variable to store the input in. You can supply multiple variables to store pieces of the input, but we only need one to read an entire line. Along with this, to preserve any backslashes in output from echo, you will likely need to use the -E flag to disable the interpretation of backslash escapes. If your lines have indentation (spaces or tabs) that you want to keep, you will need to temporarily change the IFS (Internal Field Separator) variable to just a newline, $'\n'; normally it is set to space, tab, and newline (" \t\n").
main() {
local IFS=$'\n'
read -r line
echo -E "$line"
}
main
How do we use read to end our while loop?
There is really only one reliable way that I know of to determine when you've finished reading a file with read: check the exit value of read. If the exit value is 0, we successfully read a line; if it is 1 or higher, we reached EOF (end of file). With that in mind, we can place the call to read in our while loop's condition section.
processFile() {
# Could be any file you want, hardcoded or dynamic
file="$1"
local IFS=$'\n'
while read -r line; do
# Process line here
echo -E "$line"
done < "$file"
}
processFile /path/to/file1
processFile /path/to/file2
A visual breakdown of the above code via Explain Shell.
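As a quick check of the exit values described above (a sketch; run in bash):
$ printf 'one\n' | { read -r l; echo "read #1: $?"; read -r l; echo "read #2: $?"; }
read #1: 0
read #2: 1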
If I am executing a command and want to cut its output, but the output has multiple lines, I found it helpful to do:
echo $([command]) | cut [....]
This puts all the output of [command] on a single line, which can be easier to process.
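For example (a sketch): cut normally operates line by line, but echoing an unquoted command substitution collapses the newlines into spaces first:
$ printf 'a b\nc d\n' | cut -d ' ' -f1
a
c
$ echo $(printf 'a b\nc d\n') | cut -d ' ' -f1
a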
Note that cut's default field delimiter is actually the tab character, not '\n'.
If you still want to use cut, I have two ways:
cut -d^M -f1 file_cut
where I type ^M by pressing Ctrl+V and then Enter. Another way is:
cut -c 1- file_cut
Does that help?

How to read entire line from bash

I have a file file.txt with contents like
i love this world
I hate stupid managers
I love linux
I have MS
When I do the following:
for line in `cat file.txt`; do
echo $line
done
It gives output like
I
love
this
world
I
..
..
But I need the output as entire lines, like below. Any thoughts?
i love this world
I hate stupid managers
I love linux
I have MS
while read -r line; do echo "$line"; done < file.txt
As @Zac noted in the comments, the simplest solution to the question as posted is simply cat file.txt, so I must assume there is something more interesting going on. I have therefore put two options that solve the question as asked:
You can either set IFS (the Internal Field Separator) to a newline and use your existing code, or use read (or the older line command) in a while loop:
IFS="
"
or
(while read line ; do
# do something with "$line"
done) < file.txt
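Putting the first option together with the question's original for loop (a sketch; note that the unquoted expansion is still subject to filename globbing):
IFS="
"
for line in `cat file.txt`; do
echo "$line"
done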
I believe the question was how to read an entire line at a time. The simple script below will do this. If you don't specify a variable name for read, it will put the entire line into the variable $REPLY.
cat file.txt | while read; do echo "$REPLY"; done
You can do it by using read if the file is coming into stdin. If you need to do it in the middle of a script that already uses stdin for other purposes, you can temporarily reassign the stdin file descriptor.
#!/bin/bash
file=$1
# save stdin to usually unused file descriptor 3
exec 3<&0
# connect the file to stdin
exec 0<"$file"
# read from stdin
while read -r line
do
echo "[$line]"
done
# when done, restore stdin
exec 0<&3
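For example, assuming the script above is saved as readlines.sh (a hypothetical name) and run against the file.txt from the question:
$ ./readlines.sh file.txt
[i love this world]
[I hate stupid managers]
[I love linux]
[I have MS]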
Try
(while read l; do echo "$l"; done) < temp.txt
read: Read a line from the standard input and split it into fields.
Reads a single line from the standard input, or from file descriptor FD if the -u option is supplied. The line is split into fields as with word splitting, and the first word is assigned to the first NAME, the second word to the second NAME, and so on, with any leftover words assigned to the last NAME. Only the characters found in $IFS are recognized as word delimiters.

Error in bash script while reading a file

The following is a script I wrote to run an executable, ./runnable, on the argument/input file input.
It takes standard input from another file called final_file and writes its output to a file called outfile. There are 91 lines in final_file (i.e., 91 different space-delimited standard inputs), and therefore the bash script should call ./runnable input 91 times.
But I am not sure why it is calling it only once. Any suggestions on what's going wrong?
#!/bin/bash
OUTFILE=outfile
(
a=0
while read line
do
    ./runnable input
    echo "This is line number: $a"
    a='expr $a+ 1'
done < final_file
) > $OUTFILE
To clarify, the final_file looks like
__DATA__
2,9,2,9,10,0,38
2,9,2,10,11,0,0
2,9,2,11,12,0,0
2,9,2,12,13,0,0
2,9,2,13,0,1,4
2,9,2,13,3,2,2
and so on. Each line, one at a time, is the standard input. The number of lines in final_file corresponds to the number of times the standard input is given, so in the above case the script should run six times, as there are six lines.
I'll hazard that ./runnable seeks all the way through stdin. With no input left to read, the while loop ends after one iteration.
Reasoning: your example Works For Me (TM), substituting a file I happen to have (/etc/services) for final_file and commenting out the line that invokes ./runnable.
On the other hand, if I replace the ./runnable invocation with a one-liner that simply seeks and discards standard input (e.g., cat - > /dev/null or perl -ne 1), I get the behavior you describe.
(Note that you want backticks or $() around the call to expr.)
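If that diagnosis is right, the fix is the same idea as redirecting ssh's input in the question above: detach ./runnable from the loop's stdin. A sketch, with the expr call also corrected:
while read line
do
    ./runnable input < /dev/null
    echo "This is line number: $a"
    a=$(expr $a + 1)
done < final_file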
Run your shell script with the -x option for some debugging output.
Add echo $line after your while read line; do
Note that while read line; do echo $line; done does not read space-separated input; it reads line-separated input.
