I need to execute a command-line tool within a shell script that requires user input. Usually, I would write something like:
echo $input | command
however, $input must be read from the standard error of command itself, which I'm able to get, e.g., using:
input=$(command 2>&1 > /dev/null | awk '/some.*regexp$/ {print $2}')
But I don't want to execute the command twice (once for extracting and saving the content of $input and once for feeding command that input).
So, how can I get around that circular dependency and combine these two things into a single bash call with only one execution of command?
Many thanks in advance!
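For what it's worth, here is one hedged sketch (not an answer from the thread, and cmd.sh below is a made-up stand-in for the real command): a named pipe plus bash process substitution can close the loop, so the command runs only once. Its stderr flows through awk, and the extracted field is fed back into its stdin through the FIFO, while stdout is left untouched.

```shell
# Hypothetical stand-in for the real command: it prints the needed
# value on stderr, then demands that value back on stdin.
cat > cmd.sh <<'EOF'
#!/bin/sh
echo "value secret123" >&2
read -r input
echo "received: $input"
EOF
chmod +x cmd.sh

mkfifo loop.fifo
# stderr goes to awk (process substitution); awk's extracted field is
# written into the FIFO, which is the command's stdin.
./cmd.sh < loop.fifo 2> >(awk '/^value/ {print $2; exit}' > loop.fifo)
rm loop.fifo cmd.sh
```

This is fragile (it depends on the command flushing the relevant stderr line before blocking on stdin), but it avoids the double execution.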
Related
I have a script that stores the output of commands, functions, and other scripts in a log file.
I want to avoid capturing user input.
The line that is in charge of storing the output of the commands to a logfile is this one:
$command 2>&1 | tee /dev/tty | ruby -pe 'print Time.now.strftime("[%s] ") if !$stdin.tty?' >> "$tempfile"
If the command is a function or a script that asks for user input and prints that data back out, the input is stored in the temporary file. I would like to avoid that, since I don't want to capture sensitive data.
I can't modify the commands, functions that I'm wrapping.
Your command only saves program output, not user input. The problem you're seeing is that the command has chosen to output whatever the user inputs, merging it into its own output that is then obviously logged.
There is no good way around this. Please fix your command.
Anyways. Here's a bad, fragile, hacky way around it:
tempfile=test.txt
command='read -ep Enter_some_input: '
$command 2>&1 |
tee /dev/tty |
python3 -c $'import os\nwhile s:=os.read(0, 1024):\n if len(s) > 3: os.write(1, s)' |
ruby -pe 'print Time.now.strftime("[%s] ") if !$stdin.tty?' >> "$tempfile"
The Python command drops all reads of 3 bytes or fewer. This aims to remove the character-by-character echo that occurs in the most basic cases of a user typing into readline and the like, while hopefully not removing too much intentional output.
I am trying to build a shell script. One of the commands used in this script apparently uses the read command, demanding a parameter to complete its execution. I want to pass the same argument every time. Can I automate this?
In short, how do I automate the read command from a shell script?
For certain reasons I cannot share the actual script.
If read is reading from standard input, you can just redirect from a file containing the necessary data:
$ cat foo.txt
a
b
$ someScript.sh < foo.txt
or pipe the data from another command:
$ printf 'a\nb\n' | someScript.sh
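You can also inline the data with a here-document, which keeps the canned answers next to the call. A sketch, where someScript.sh is a hypothetical stand-in for the script that couldn't be shared:

```shell
# Hypothetical stand-in for a script that demands input via `read`:
cat > someScript.sh <<'EOF'
#!/bin/sh
read -r first
read -r second
echo "got: $first $second"
EOF
chmod +x someScript.sh

# A here-document supplies the same answers on every run:
./someScript.sh <<'EOF'
a
b
EOF
```

This prints `got: a b` every time, without a separate data file.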
I have a compiled program which I run from the shell; as I run it, it asks me for an input file on stdin. I want to run that program in a bash loop with a predefined input file, such as
for i in $(seq 100); do
input.txt | ./myscript
done
but of course this won't work. How can I achieve that? I cannot edit the source code.
Try
for i in $(seq 100); do
./myscript < input.txt
done
Pipes (|) are inter-process. That is, they stream between processes. What you're looking for is file redirection (e.g. <, > etc.)
Redirection simply means capturing output from a file, command,
program, script, or even code block within a script and sending it as
input to another file, command, program, or script.
You may see cat used for this e.g. cat file | mycommand. Given the above, this usage is redundant and often the winner of a 'Useless use of cat' award.
You can use:
./myscript < input.txt
to send the content of input.txt to the stdin of myscript.
Based on your comments, it looks like myscript prompts for a file name and you want to always respond with input.txt. Did you try this?
for i in $(seq 100); do
echo input.txt | ./myscript
done
You might want to just try this first:
echo input.txt | ./myscript
just in case.
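To make the distinction between the two answers concrete, here is a sketch; myscript below is a hypothetical stand-in that just echoes the first line it reads. Redirection with < feeds the file's contents to stdin, while echo input.txt | feeds the file's name:

```shell
# Hypothetical stand-in for myscript: echo the first line of stdin.
cat > myscript <<'EOF'
#!/bin/sh
read -r line
echo "stdin gave me: $line"
EOF
chmod +x myscript
echo hello > input.txt

./myscript < input.txt       # feeds the contents: "stdin gave me: hello"
echo input.txt | ./myscript  # feeds the name: "stdin gave me: input.txt"
```

If the program prompts for a file name, the second form is what you want; if it reads the data itself, the first.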
I'm writing a bash script called 'run' that tests programs with pre-defined inputs.
It takes in a file as the first parameter, then a program as a second parameter.
The call would look like
./run text.txt ./check
for example, the program 'run' would run 'check' with text.txt as the input. This will save me lots of testing time with my programs.
right now I have
$2 < text.txt > text.created
So it takes text.txt and redirects it as input into the program specified as the second argument, then dumps the result into text.created.
I have the input in text.txt and I know what the output should look like, but when I cat text.created, it's empty.
Does anybody know the proper way to run a program with a file as the input? This seems intuitive to me, but could there be something wrong with the 'check' program rather than what I'm doing in the 'run' script?
Thanks! Any help is always appreciated!
EDIT: the file text.txt contains multiple lines, each naming a file that holds an input for the program 'check'.
That is, text.txt could contain
asdf1.txt
asdf2.txt
asdf3.txt
I want to test check with each file asdf1.txt, asdf2.txt, asdf3.txt.
A simple test with
#!/bin/sh
# the whole loop reads $1 line by line
while IFS= read -r file
do
    # run $2 with the contents of the file named on the line just read
    xargs "$2" < "$file"
done < "$1"
works fine. Call that file "run" and run it with
./run text.txt ./check
I get the program ./check executed once for each file listed in text.txt, with that file's contents as the parameters. Don't forget to chmod +x run to make it executable.
This is the sample check program that I use:
#!/bin/sh
echo "This is check with parameters $1 and $2"
Which prints the given parameters.
My file text.txt is:
textfile1.txt
textfile2.txt
textfile3.txt
textfile4.txt
and the files textfile1.txt, ... contain one line each for every instance of "check", for example:
lets go
or
one two
The output:
$ ./run text.txt ./check
This is check with parameters lets and go
This is check with parameters one and two
This is check with parameters uno and dos
This is check with parameters eins and zwei
The < operator redirects the contents of the file to the standard input of the program. This is not the same as using the file's contents for the arguments of the program, which seems to be what you want. For that, do
./program $(cat file.txt)
in bash (or in plain old /bin/sh, use
./program `cat file.txt`
).
This won't manage multiple lines as separate invocations, which your edit indicates is desired. For that you're probably going to want some kind of scripting language (perl, awk, python...) which makes parsing a file line by line easy.
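That said, a plain shell loop can also handle the one-invocation-per-line case. A sketch, with ./program standing in for the asker's binary:

```shell
# Hypothetical stand-in that just echoes its arguments:
cat > program <<'EOF'
#!/bin/sh
echo "args: $*"
EOF
chmod +x program
printf 'asdf1.txt\nasdf2.txt\n' > list.txt

# One invocation per line, the line's text passed as an argument:
while IFS= read -r line; do
    ./program "$line"
done < list.txt
```

Each iteration prints one `args:` line, one per line of list.txt.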
I have used the time command to execute a program (like "xxx$ time ./a.out"), with output as follows:
real 0m7.250s
user 0m10.395s
sys 0m0.026s
What I want to get is the 0 and the 7.250 from 0m7.250s. I have tried "awk '{print $2}'", but without success; nothing is output there.
PS: I have tried putting the output of the time command into a file using ">", also without success.
Try:
pax$ tm=$((time sleep 1) 2>&1 | awk '/^real/{print $2}') ; echo $tm
0m1.002s
(substituting your own a.out command of course, sleep 1 was just used for an example).
It creates a subshell for the time command and ensures that its standard error is sent to standard output instead (time specifically outputs its information to standard error so that it's kept separate from normal output of the program it's timing).
The awk command then captures the line starting with real and outputs the second field (the time).
The time command sends its output to standard error, so as not to interfere with normal program output. You will want to use 2>&1 to redirect it where it can be captured.
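If you then need the 0 and the 7.250 as separate values, POSIX parameter expansion can split the captured string without any extra processes; shown here on the sample value from the question:

```shell
tm=0m7.250s          # the value captured from time's "real" line
mins=${tm%%m*}       # strip from the first "m" onward  -> 0
secs=${tm#*m}        # strip through the first "m"      -> 7.250s
secs=${secs%s}       # drop the trailing "s"            -> 7.250
echo "$mins $secs"   # -> 0 7.250
```

The same expansions work on whatever value the awk pipeline above assigns to tm.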