Use standard input in a shell script - shell

We have to make a script that interacts with standard input, which is a file, and we pass a keystring to it. The difficulty of this exercise is that in some cases the file can't be found, so we have to save the filename to a variable:
key.sh (keystring) < (filename)
How can I save the filename into a variable?

In key.sh, you want to have a script like this:
#!/bin/sh
# assign the input string to the variable filename
filename="$1"
Then you would actually call the script with key.sh filename
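A fuller sketch of that idea (hypothetical: it assumes the keystring stays as the first argument, the filename becomes the second, and that the script greps the file for the keystring):
#!/bin/sh
# Sketch only: the argument order and the grep are assumptions, not from the question.
keystring="$1"
filename="$2"
if [ ! -f "$filename" ]; then
echo "file not found: $filename" >&2
exit 1
fi
grep "$keystring" "$filename"
Called as key.sh mykey myfile, the filename stays available in $filename, so the script can report it even when the file can't be found.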

Related

How to make my script use a CSV file that was given in the terminal as a parameter

I tried to google this, but can't really find the right words to get to my solution. So maybe someone here can help me out.
I have a script (let's call it script.rb) that uses File.read to read a CSV file called somefile.csv, and I have another CSV file called somefileV2.csv.
Script.rb
csv_text = File.read('/home/XXX/XXX/XXX/somefile.csv')
Right now it uses somefile.csv as the default, but I would like to know if it is possible to make my script use a CSV file that was given in the terminal as a parameter, like:
Terminal
home$ script.rb somefileV2
so that instead of reading the file that is hard-coded in the script, it reads the other CSV file (somefileV2.csv) in the directory. It is kind of annoying to change the file manually every time in the script itself.
You can access the parameters (arguments) using the ARGV array.
So your program could be like:
default = "/home/XXX/XXX/XXX/somefile.csv"
csv_text = File.read(ARGV[0] || default)
which gives you the possibility to supply a filename or, if not supplied, use the default value.
ARGV[0] refers to the first, ARGV[1] to the second argument and so on.
ruby myscript.rb foo bar baz would result in ARGV being
´["foo", "bar", "baz"]´. Note that the elements will always be strings. So if you want anything else (Numbers, Date, ...) you need to process it accordingly in your program.

Execute bash script on multiple input files, ignoring other input variables

I'm using Mac's Automator to perform a bash script on files that are dropped onto a droplet. Inputs are gathered from Automator actions and are passed to a bash script as arguments. The script works with a single input file, but I'm having trouble handling multiple files.
My ultimate goal is to accept several dropped video files, prompt for a number, and extract that number of frames from each video file (using FFmpeg).
I can loop through the inputs by using the special parameter $@, like so:
for f in "$@"; do
# perform an action
done
However, my script also prompts the user for text input that I don't want included in this loop.
I can access each input file individually by using $1, $2, etc. But I'd like to use a loop instead of referencing each file individually. Also, the quantity of input files is unpredictable and I'm not sure how to distinguish between input files and input text.
How can I loop through only the file inputs without including the text input?
Here is a description of my current workflow:
Get Specified Movies
one.mov
two.mov
Set Value of Variable (accepts input)
source_files
Ask For Text (ignores input)
Enter a Number:
Set Value of Variable (accepts input)
number
Get Value of Variable (ignores input)
source_files
Get Value of Variable (accepts input)
number
Run Shell Script (accepts input, pass "as arguments")
#!/bin/bash
for f in "$@"; do
echo "$f"
done
OUTPUT:
/folder/one.mov
/folder/two.mov
8
I was hoping to have one variable set to the multiple inputs (so I could loop through it) and another variable set to the number, but it doesn't seem to work that way.
How can I loop through each input file without referencing the text input?
Just change the order of the args: get the value of variable number first,
then get the value of variable source_files,
so that number will be the first parameter. Then:
echo "number: $1"
for f in "${@:2}"; do
echo "file: $f"
done
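An equivalent approach is to shift the number off the positional parameters before looping. A minimal sketch (the echo stands in for the FFmpeg call, which is not spelled out in the question):
#!/bin/bash
number="$1"
shift    # drop the number; "$@" now holds only the file paths
for f in "$@"; do
echo "extracting $number frames from $f"
# the FFmpeg invocation using "$number" and "$f" would go here
done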

Is it possible to do standard input AND pass a command line argument in the same line?

I have a program stored in programfile in which I want to pass command line arguments (with the contents of the file of varargs). I also want to take input on stdin from the contents of file p. I then want to store the final output into variable output.
This is what I have:
"$programfile" "${varargs}" < "${p}" > "$output"
I'm not sure if this is correct, as I think my syntax may be off somewhere.
Looks fine to me, as long as you meant that you are storing the final output into a file whose name is in the variable output. If you wanted to put the output into a variable, you should use backticks or $().
As you have it, the output would go to a file named after the value of $output, not into the variable itself. You could do something like:
output=$("$programfile" "${varargs}" < "${p}")
The redirection operator > is used to redirect output to a file or device. For example,
ls > list.txt
But to store the result in a variable, you need to do:
result=`ls`
The usage of < is correct.
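As a side note, the backtick and $( ) forms are equivalent; $( ) is generally preferred because it nests without escaping:
result=`ls`       # legacy backtick form
result=$(ls)      # modern form, e.g. result=$(basename $(pwd)) nests cleanly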

Using bash wildcards with prefix

I am trying to write a bash script that takes a variable number of file names as arguments.
The script is processing those files and creating a temporary file for each of those files.
To access the arguments in a loop I am using
for filename in "$@"
do
...
generate "t_${filename}"
done
After the loop is done, I want to do something like cat t_$* .
But it's not working. So, if the arguments are a b c, it is catting t_a, b and c.
I want to cat the files t_a, t_b and t_c.
Is there any way to do this without having to save the list of names in another variable?
You can use parameter expansion:
cat "${@/#/t_}"
Here / means substitute and # anchors the match at the beginning, so t_ is prepended to each positional parameter.
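A quick demo of that expansion, using set -- to fake the positional parameters:
set -- a b c
echo "${@/#/t_}"    # prints: t_a t_b t_c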

Simple map for pipeline in shell script

I'm dealing with a pipeline of predominantly shell and Perl files, all of which pass parameters (paths) to the next. I decided it would be better to use a single file to store all the paths and just call that for every file. The issue is I am using awk to grab the files at the beginning of each file, and it's turning out to be a lot of repetition.
My question: is there a way to store key-value pairs in a file so that the shell can natively take a key and return its value? It needs to be an external file, because the pipeline uses many scripts, and a map kept inside one specific script would mean passing parameters everywhere. Is there some little quirk I don't know of that performs a map lookup on an external file?
You can make a file of env var assignments and source that file as needed, i.e.
$ cat myEnvFile
path1=/x/y/z
path2=/w/xy
path3=/r/s/t
otherOpt1="-x"
Inside your script you can source it with either . myEnvFile or the more verbose version of the same feature, source myEnvFile (assuming a bash shell), i.e.
$ cat myScript
#!/bin/bash
. /path/to/myEnvFile
# main logic below
....
# references to defined var
if [[ -d $path2 ]] ; then
cd "$path2"
else
echo "no path2=$path2 found, can't continue" 1>&2
exit 1
fi
Based on how you've described your problem, this should work well and provide a one-stop shop for all of your variable settings.
IHTH
In bash there's mapfile, but that reads the lines of a file into a numerically-indexed array. To read a whitespace-separated file into an associative array, I would do:
declare -A map
while read -r key value; do
map[$key]=$value
done < filename
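With that in place, a lookup is just an associative-array reference; for example, if filename contained the line path1 /x/y/z:
echo "${map[path1]}"    # prints /x/y/z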
However, this sounds like an XY problem. Can you give us an example (in code) of what you're actually doing? When I see long pipelines of grep|awk|sed, there's usually a way to simplify. For example, is passing data by parameters better than passing via stdout|stdin?
In other words, I'm questioning your statement "I decided it would be better..."
