Passing arguments to a shell script via stdin multiple times - bash

I have a script StartProcess.sh that accepts two inputs on stdin: a number, 3, and a filename, test.xml.
If I run the script below, it executes correctly and then waits for input again.
I want some way to pass 3 and test.xml to StartProcess.sh n times. How do I achieve this?
./StartProcess.sh << STDIN -o other --options
3
test.xml
STDIN

You can run a loop that writes the two input lines as many times as needed and feed the script over a pipeline. That way, the script is launched only once and the inputs are sent over stdin any number of times:
count=3
for (( iter = 0; iter < count; iter++ )); do
    printf '%s\n' 3 test.xml
done | ./StartProcess.sh
But I'm not fully sure whether you wanted to pass the literal string test.xml on stdin or the contents of the file.
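If the repetition count and the inputs should be parameters themselves, a small wrapper function keeps this reusable. A minimal sketch, assuming the StartProcess.sh invocation from the question; the function name feed_n_times is made up for illustration:
feed_n_times() {
    local n=$1 value=$2 file=$3
    for (( i = 0; i < n; i++ )); do
        printf '%s\n' "$value" "$file"   # one input per line, n times
    done
}
feed_n_times 5 3 test.xml | ./StartProcess.sh -o other --options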

Related

How to get output values in bash array by calling other program from bash?

I am stuck in a peculiar situation: from Python I print two strings, one per line, and read them in a bash script (which calls the Python code).
I expect the array size to be 2, but somehow bash also treats spaces as element separators and returns a size of 3.
Example scripts
multi_line_return.py file has following content
print("foo bar")
print(5)
multi_line_call.sh has following content
#!/bin/bash
PYTHON_EXE="ABSOLUTE_PATH TO PYTHON EXECUTABLE IN LINUX"
CURR_DIR=$(cd $(dirname ${BASH_SOURCE[0]}) && pwd)/
array=()
while read line ; do
    array+=($line)
done < <(${PYTHON_EXE} ${CURR_DIR}multi_line_return.py)
echo "array length --> ${#array[@]}"
echo "each variable in new line"
for i in "${array[@]}"
do
    printf $i
    printf "\n"
done
Now keep both of the above files in the same directory and make the following call to see the result:
bash multi_line_call.sh
As you can see in the result, I am getting:
array length = 3
elements: 1. foo, 2. bar, 3. 5
The expectation is one complete line of Python output (stdout) per bash array element:
array length = 2
elements: 1. foo bar, 2. 5
Put quotes around $line to prevent it from being split:
array+=("$line")
You can also do it without a loop using readarray (with -t to strip the trailing newline from each element):
readarray -t array < <(${PYTHON_EXE} ${CURR_DIR}multi_line_return.py)
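To check the fix in isolation, here is a minimal self-contained sketch, with the Python call replaced by a printf stand-in:
readarray -t array < <(printf '%s\n' "foo bar" 5)
echo "array length --> ${#array[@]}"   # prints 2
for i in "${array[@]}"; do
    printf '%s\n' "$i"                 # "foo bar", then "5"
done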

Synchronized Output With Bash's Process Substitution

I have to repeatedly call an inflexible external tool that takes as arguments some input data and an output file to which it will write the processed data, for example:
some_prog() { echo "modified_$1" > "$2"; }
For varying input, I want to call some_prog, filter the output and write the output of all calls into the same file "out_file". Additionally, I want to add a header line to the output file before each call of some_prog. Given the following dummy filter:
slow_filter() {
    read input; sleep "0.000$(($RANDOM % 10))"; echo "filtered_$input"
}
I wrote the following code:
rm -f out_file
for input in test_input{1..8}; do
    echo "#Header_for_$input" >> "out_file"
    some_prog "$input" >( slow_filter >> "out_file" )
done
However, this will produce an out_file like this:
#Header_for_test_input1
#Header_for_test_input2
#Header_for_test_input3
#Header_for_test_input4
#Header_for_test_input5
#Header_for_test_input6
#Header_for_test_input7
#Header_for_test_input8
filtered_modified_test_input4
filtered_modified_test_input1
filtered_modified_test_input2
filtered_modified_test_input5
filtered_modified_test_input6
filtered_modified_test_input3
filtered_modified_test_input8
filtered_modified_test_input7
The output I expected was:
#Header_for_test_input1
filtered_modified_test_input1
#Header_for_test_input2
filtered_modified_test_input2
#Header_for_test_input3
filtered_modified_test_input3
#Header_for_test_input4
filtered_modified_test_input4
#Header_for_test_input5
filtered_modified_test_input5
#Header_for_test_input6
filtered_modified_test_input6
#Header_for_test_input7
filtered_modified_test_input7
#Header_for_test_input8
filtered_modified_test_input8
I realized that the >( ) process substitution forks the shell. Is there a way to synchronize the output of the subshells? Or is there another elegant solution to this problem? I want to avoid the obvious approach of writing to a different file in each iteration because, in my code, the for loop has a few hundred thousand iterations.
Write the header inside the process substitution, specifically in a command group with the filter, so that each iteration's header and filtered output are written to out_file as a single stream. (Bash still doesn't wait for the substitution to finish, so strictly speaking iterations can interleave; the answers below address that.)
rm -f out_file
for input in test_input{1..8}; do
    some_prog "$input" >( { echo "#Header_for_$input"; slow_filter; } >> "out_file" )
done
As process substitution is truly asynchronous and there doesn't appear to be a way to wait for it to complete before executing the next iteration of the loop, I would use an explicit named pipe.
rm -f out_file pipe
mkfifo pipe
for input in test_input{1..8}; do
    # Run in the background: opening a FIFO for writing blocks
    # until a reader (slow_filter below) opens the other end.
    some_prog "$input" pipe &
    echo "#Header_for_$input" >> out_file
    slow_filter < pipe >> out_file
done
(If some_prog doesn't work with a named pipe for some reason, you can use a regular file. In that case, you shouldn't run the command in the background.)
Since chepner's approach using a named pipe seems to be very slow in my "real world script" (about 10 times slower than this solution), the easiest and safest way to achieve what I want seems to be a temporary file:
rm -f out_file
tmp_file="$(mktemp --tmpdir my_temp_XXXXX.tmp)"
for input in test_input{1..8}; do
    some_prog "$input" "$tmp_file"
    {
        echo "#Header_for_$input"
        slow_filter < "$tmp_file"
    } >> out_file
done
rm "$tmp_file"
This way, the temporary file tmp_file is overwritten in each iteration, so it can stay in memory if the system's temp directory is a RAM disk.
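On many Linux systems /dev/shm is such a RAM-backed tmpfs; assuming yours has one, you can point mktemp at it explicitly:
tmp_file="$(mktemp --tmpdir=/dev/shm my_temp_XXXXX.tmp)"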

How do I prepend to a stream in Bash?

Suppose I have the following command in bash:
one | two
one runs for a long time producing a stream of output and two performs a quick operation on each line of that stream, but two doesn't work at all unless the first value it reads tells it how many values to read per line. one does not output that value, but I know what it is in advance (let's say it's 15). I want to send a 15\n through the pipe before the output of one. I do not want to modify one or two.
My first thought was to do:
echo "$(echo 15; one)" | two
That gives me the correct output, but it doesn't stream through the pipe at all until the command one finishes. I want the output to start streaming right away through the pipe, since it takes a long time to execute (months).
I also tried:
echo 15; one | two
Which, of course, outputs 15, but doesn't send it through the pipe to two.
Is there a way in bash to pass '15\n' through the pipe and then start streaming the output of one through the same pipe?
You just need the shell grouping construct:
{ echo 15; one; } | two
The spaces around the braces and the trailing semicolon are required.
To test:
one() { sleep 5; echo done; }
two() { while read line; do date "+%T - $line"; done; }
{ printf "%s\n" 1 2 3; one; } | two
16:29:53 - 1
16:29:53 - 2
16:29:53 - 3
16:29:58 - done
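Incidentally, a subshell group works just as well here and has laxer syntax (no required inner spaces or trailing semicolon); since in bash each part of a pipeline runs in its own subshell anyway, there is no extra cost:
( echo 15; one ) | two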
You could do this with sed:
Here is an example 'one' script that emits one line per second, to show that it is line-buffered and running:
#!/bin/bash
while true; do
    echo "TICK $(date)"
    sleep 1
done
Then pipe that through this sed command. Note that for your specific example, 'ArbitraryText' would be the number of fields; I used ArbitraryText here so that it's obvious which line is the inserted text. On OS X, -l makes sed line-buffered; with GNU sed the equivalent option is -u (unbuffered).
$ ./one | sed -l '1i\
> ArbitraryText
> '
This instructs sed to insert one line before processing the rest of the input; everything else passes through untouched.
The end result is processed line by line, without chunk buffering (that is, without waiting for the input script to finish):
ArbitraryText
TICK Fri Jun 28 13:26:56 PDT 2013
...etc
You should then be able to pipe that into 'two' as you normally would.
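With GNU sed, the same insertion fits on one line (the single-line form of i is a GNU extension), with -u keeping the stream unbuffered:
./one | sed -u '1i ArbitraryText' | two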

Bash - Redirection with wildcards

I'm testing to do redirection with wildcards. Something like:
./TEST* < ./INPUT* > OUTPUT
Anyone have any recommendations? Thanks.
Say you have the following 5 files: TEST1, TEST2, INPUT1, INPUT2, and OUTPUT. In the command line
./TEST* < ./INPUT* > OUTPUT
the glob ./TEST* expands in the command position, so bash would try to run ./TEST1 with ./TEST2 as its argument. The input redirection is the real problem: because ./INPUT* matches more than one file, bash refuses to run the command at all and reports an 'ambiguous redirect' error.
To address what you are probably trying to do, you can only specify a single file using input redirection. To send input to TEST from both of the INPUT* files, you would need to use something like the following, using process substitution:
./TEST1 < <(cat ./INPUT*) > OUTPUT
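Equivalently, since the redirection above only feeds the program's standard input, a plain pipe does the same job:
cat ./INPUT* | ./TEST1 > OUTPUT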
To run each of the programs that match TEST* on all the input files that match INPUT*, use the following loop. It collects the output of all the commands into a single file, OUTPUT.
for test in ./TEST*; do
    cat ./INPUT* | "$test"
done > OUTPUT
There is a program called TEST* that has to get its input redirected from files called INPUT*, but the thing is there are many TEST programs, each with a different number, e.g. TEST678. What I'm trying to do is push all the INPUT files into all the TEST programs.
You can write:
for program in TEST*           # e.g., program == 'TEST678'
do
    suffix="${program#TEST}"   # e.g., suffix == '678'
    input="INPUT$suffix"       # e.g., input == 'INPUT678'
    "./$program" < "$input"    # e.g., run './TEST678 < INPUT678'
done > OUTPUT
for test in ./TEST*; do
    for inp in ./INPUT*; do
        "$test" < "$inp" >> OUTPUT
    done
done

bash: calling a script with a double-quoted argument

I have a bash script which accepts an argument enclosed in double quotes, and which creates a shapefile of a map within the given boundaries, e.g.
$ export_map "0 0 100 100"
Within the script, there are two select statements:
select ENCODING in UTF8 WIN1252 WIN1255 ISO-8859-8;
...
select NAV_SELECT in Included Excluded;
Naturally, these two statements require a number as input. This can be bypassed by piping the numbers, each followed by a newline, to the script.
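For illustration, select reads its answer from stdin, so a piped number picks the corresponding menu entry. A minimal sketch, separate from the scripts in question:
printf '1\n' | bash -c 'select ENCODING in UTF8 WIN1252 WIN1255 ISO-8859-8; do
    echo "chose $ENCODING"; break
done'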
In order to save time, I would like to have a script that would create 8 maps, one for each combination of ENCODING (4 options) and NAV_SELECT (2 options).
I have written another bash script, create_map, to serve as a wrapper:
#!/bin/bash
for nav in 1 2 3 4;    # first piped value: answers the ENCODING select (4 options)
do
    for enc in 1 2;    # second piped value: answers the NAV_SELECT select (2 options)
    do
        printf "$nav\n$enc\n" | /bin/bash -c "./export_map.sh \"0 0 100 100\""
    done
done
This works (thanks, Brian!), but I can't find a way to have the numeric argument "0 0 100 100" passed in from outside the outer script.
Basically, I'm looking for way to accept an argument within double quotes to a wrapper bash script, and pass it - with the double quotes - to an inner script.
CLARIFICATIONS:
export_map is the main script, being called from create_map 8 times.
Any ideas?
Thanks,
Adam
If I understand your problem correctly (which I'm not sure about; see my comment), you should probably add another \n to your printf; printf does not add a trailing newline by default the way that echo does. This will ensure that the second value is read properly by the select command, which I'm assuming appears in export_map.sh.
printf "$nav\n$enc\n" | /bin/bash -c "./export_map.sh \"100 200 300 400\""
Also, I don't think that you need to add the /bin/bash -c and quote marks. The following should be sufficient, unless I'm missing something:
printf "$nav\n$enc\n" | ./export_map.sh "100 200 300 400"
edit Thanks for the clarification. In order to pass an argument from your wrapper script into the inner script, keeping it as a single argument, you can pass in "$1", where the quotes indicate that you want to keep it grouped as one argument, and $1 is the first parameter to your wrapper script. If you want to pass all parameters from your outer script into your inner script, each kept as a single argument, you can use "$@" instead.
#!/bin/bash
for nav in 1 2 3 4;
do
    for enc in 1 2;
    do
        printf "$nav\n$enc\n" | ./export_map.sh "$1"
    done
done
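Invoked, hypothetically, like this, the boundary string travels into export_map.sh intact as a single argument:
./create_map "0 0 100 100"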
Here's a quick example of how "$@" works. First, inner.bash:
#!/bin/bash
for str in "$@"
do
    echo "$str"
done
outer.bash:
#!/bin/bash
./inner.bash "$@"
And invoking it:
$ ./outer.bash "foo bar" baz "quux zot"
foo bar
baz
quux zot
