Ruby: Switch Between File and Standard Input

How would you create a variable that can be read from? It should read from a certain file if that file exists, and otherwise read from standard input. Something like:
input = File.open("file.txt") || in
This doesn't work, but I imagine this need comes up pretty often, and I can't find a nice way to do it.

Does this work for you?
input = File.exist?("file.txt") ? File.open("file.txt") : STDIN

See also: run against stdin if no argument is given; otherwise read the input file named in ARGV.

I think Ruby can treat command-line arguments that haven't been consumed before standard input is first read as the names of files to be fed into standard input (this is what ARGF does).
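That argument handling is what ARGF provides. For the simpler file-or-STDIN fallback asked about above, a minimal sketch (pick_input is a hypothetical helper name, not part of any library):

```ruby
# Return an IO for the given path if the file exists, otherwise STDIN.
def pick_input(path)
  File.exist?(path) ? File.open(path) : STDIN
end

io = pick_input("file.txt")
# io.each_line { |line| ... }  # read from whichever source was chosen
```

Either way the caller gets back an IO-like object and never needs to care which source it came from.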

Related

Difference between cat file_name | sort, sort < file_name, sort file_name in bash

Although they give the same results, I wonder whether there is some difference between them, and which is the most appropriate way to sort something contained in a file.
Another thing that intrigues me is the use of delimiters: I noticed that the sort filter only works if you separate the strings with a newline. Is there any way to do this without having to write each new string on a separate line?
The sort(1) command reads lines of text, analyzes and sorts them, and writes out the result. The command is intended to read lines, and lines in unix/linux are terminated by a newline.
The command takes its first non-option argument as the file to read; if none is given, it reads standard input. So:
sort file_name
is a command line with such an argument. The other two examples, "... | sort" and "sort < ...", do not name the file directly to sort(1); they feed it through its standard input instead. As far as sort(1) is concerned, the effect is the same.
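A quick check of that claim, comparing all three forms on the same throwaway file:

```shell
# All three invocations should produce identical sorted output.
f=$(mktemp)
printf 'banana\napple\ncherry\n' > "$f"
a=$(sort "$f")
b=$(sort < "$f")
c=$(cat "$f" | sort)
[ "$a" = "$b" ] && [ "$b" = "$c" ] && echo "all three agree"
rm -f "$f"
```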
ways to do this without having to write the new strings in a separate line
Ultimately no. But if you want, you can feed sort through another filter (a program) that reads the non-newline-separated file and emits lines for sort to consume. If such a program exists and is named "myparse", you can do:
myparse non-linefeed-separated-file | sort
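For simple delimiters you don't even need a custom "myparse": tr can turn the separators into newlines first. A sketch, assuming comma-separated input:

```shell
# Rewrite commas as newlines, then sort the resulting lines.
printf 'banana,apple,cherry' | tr ',' '\n' | sort
```

This prints apple, banana, and cherry, each on its own line.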
The solution using cat creates a second process unnecessarily. This could be a performance issue if you perform many such operations in a loop.
When you redirect input from the file, the shell sets up the association of the file with standard input. If the file does not exist, the shell complains about the file being missing.
When you pass the file name as an explicit argument, the sort process itself has to open the file and report an error if there is a problem accessing it.

How to read a file from command line using < operator and read user input afterwards?

I am writing a program in which I am taking in a csv file via the < operator on the command line. After I read in the file I would also like to ask the user questions and have them input their response via the command line. However, whenever I ask for user input, my program skips right over it.
When I searched stack overflow I found what seems to be the python version here, but it doesn't really help me since the methods are obviously different.
I read my file using $stdin.read. And I have tried to use regular gets, STDIN.gets, and $stdin.gets. However, the program always skips over them.
Sample invocation: ruby ./bin/kata < items.csv
Current file:
require 'csv'
n = $stdin.read
arr = CSV.parse(n)
input = ''
while true
  puts "What is your choice: "
  input = $stdin.gets.to_i
  if input.zero?
    break
  end
end
My expected result is to have "What is your choice:" displayed and then wait for user input. However, the phrase is displayed over and over in an infinite loop. Any help would be appreciated!
You can't read both the file and user input from stdin; you must choose. But since you want both, how about this:
Instead of piping the file content to stdin, pass just the filename to your script. The script will then open and read the file. And stdin will be available for interaction with the user (through $stdin or STDIN).
Here is a minor modification of your script:
require 'csv'
arr = CSV.parse(ARGF) # the important part
input = ''
while true
  puts "What is your choice: "
  input = STDIN.gets.to_i
  if input.zero?
    break
  end
end
And you can call it like this:
ruby ./bin/kata items.csv
You can read more about ARGF in the documentation: https://ruby-doc.org/core-2.6/ARGF.html
This has nothing to do with Ruby. It is a feature of the shell.
A file descriptor is connected to exactly one file at any one time. The file descriptor 0 (standard input) can be connected to a file or it can be connected to the terminal. It can't be connected to both.
So, therefore, what you want is simply not possible. And it is not just not possible in Ruby, it is fundamentally impossible by the very nature of how shell redirection works.
If you want to change this, there is nothing you can do in your program or in Ruby. You need to modify how your shell works.

Bash difference between pipeline and parameters

I need to write a script which gets a file from stdin and runs over its lines.
My question is: can I do something like this:
TheFile= /dev/stdin
while read line; do
{
....
}
done<"$(TheFile)"
or can I write done<"$1"? That is, if I send the function a parameter which is a file, will it be fed to the while loop?
Where to start... Are you sure you're up for this?
What are you trying to do with the lines of the file? You might be better off not iterating like your example, just using sed, awk, or grep on it like this example:
sed -e 's/apple/banana/' $TheFile
That will output the contents of $TheFile, replacing all occurrences of "apple" with "banana". That's a trivial example, but you could do much more.
If you really want to loop, then change "$(TheFile)" to "$TheFile" in your example: $(...) is command substitution (it would try to run TheFile as a command), while $TheFile expands the variable. Also, you cannot have a space after the = in an assignment.
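Putting both fixes together, a minimal sketch of the loop, falling back to /dev/stdin when no file argument is given:

```shell
#!/bin/sh
# Read from the file named in $1, or from standard input if no argument.
TheFile="${1:-/dev/stdin}"
while IFS= read -r line; do
  printf 'line: %s\n' "$line"
done < "$TheFile"
```

IFS= and -r keep read from trimming whitespace or mangling backslashes in each line.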

Is it possible to do standard input AND pass a command line argument in the same line?

I have a program whose path is stored in the variable programfile, to which I want to pass command line arguments (the contents of the variable varargs). I also want it to read its stdin from the contents of file p. I then want to store the final output in the variable output.
This is what I have:
"$programfile" "${varargs}" < "${p}" > "$output"
I'm not sure whether this is correct; I think my syntax is off somewhere.
Looks fine to me, as long as you meant that you are storing the final output in a file whose name is in the variable output. If you wanted to put the output into a variable, you should use backticks or $().
As you have it, the output goes to a file named after the value of $output, not into the variable itself. You could do something like:
output=$("$programfile" "${varargs}" < "${p}")
The redirector > is usually used to redirect the output to a file or device. For example,
ls > list.txt
But to store the result in a variable you need to do:
result=`ls`
The usage of < is correct.
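Both pieces side by side, using tr as a stand-in for the program and a temp file in place of p (names chosen just for the demo):

```shell
# Run a program with arguments, feed it a file on stdin, capture the result.
p=$(mktemp)
printf 'hello world\n' > "$p"
output=$(tr 'a-z' 'A-Z' < "$p")   # capture into a variable with $()
echo "$output"
rm -f "$p"
```

This prints HELLO WORLD: the redirection supplies the input, and $() captures the output instead of sending it to a file.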

Simple map for pipeline in shell script

I'm dealing with a pipeline of predominantly shell and Perl files, all of which pass parameters (paths) to the next. I decided it would be better to use a single file to store all the paths and have every script read from it. The issue is that I am using awk to grab the paths at the beginning of each file, which turns out to be a lot of repetition.
My question is: is there a way to store key-value pairs in a file so that the shell can natively look up a key and return its value? It needs to be an external file, because the pipeline uses many scripts, and a map defined inside one specific script would mean passing parameters everywhere. Is there some little quirk I do not know of that performs a map lookup on an external file?
You can make a file of env var assignments and source that file as needed, i.e.
$ cat myEnvFile
path1=/x/y/z
path2=/w/xy
path3=/r/s/t
otherOpt1="-x"
Inside your script you can source it with either . myEnvFile or the more verbose version of the same feature, source myEnvFile (assuming bash shell), i.e.
$ cat myScript
#!/bin/bash
. /path/to/myEnvFile
# main logic below
....
# references to defined vars
if [[ -d $path2 ]] ; then
  cd $path2
else
  echo "no path2=$path2 found, can't continue" 1>&2
  exit 1
fi
Based on how you've described your problem, this should work well and provide a one-stop shop for all of your variable settings.
IHTH
In bash, there's mapfile, but that reads the lines of a file into a numerically-indexed array. To read a whitespace-separated file into an associative array, I would
declare -A map
while read key value; do
  map[$key]=$value
done < filename
However, this sounds like an XY problem. Can you give us an example (in code) of what you're actually doing? When I see long pipelines of grep|awk|sed, there's usually a way to simplify. For example, is passing data by parameters better than passing via stdout|stdin?
In other words, I'm questioning your statement "I decided it would be better..."
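A self-contained sketch of the associative-array approach, using a here-document in place of the external file (requires bash 4+ for declare -A):

```shell
#!/bin/bash
# Load whitespace-separated key/value pairs into an associative array.
declare -A map
while read -r key value; do
  [ -n "$key" ] && map[$key]=$value
done <<'EOF'
path1 /x/y/z
path2 /w/xy
path3 /r/s/t
EOF
echo "${map[path2]}"
```

Lookups are then just "${map[$somekey]}", with no repeated awk parsing in every script.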
