I have a program that can output its results only to files, with -o option. This time I need to output it to console, i.e. stdout. Here is my first try:
myprog -o /dev/stdout input_file
But it says:
/dev/ not writable
I've found this question that's similar to mine, but /dev/stdout is obviously not going to work without some additional magic.
Q: How to redirect output from file to stdout?
P.S. Conventional methods without any specialized software are preferable.
Many tools interpret a - as stdin or stdout, depending on the context in which it is used. However, this convention is not part of the shell; it depends entirely on the program being run.
In your case the following could solve your problem:
myprog -o - input_file
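As a quick illustration of the convention (using common tools, not myprog itself): cat treats a lone - as "read from stdin", and tar treats -f - as "write the archive to stdout":

```shell
# cat treats '-' as "read from stdin"
printf 'hello\n' | cat -

# tar treats '-f -' as "write the archive to stdout",
# which is what makes pipelines like this work:
#   tar -cf - somedir | gzip > somedir.tar.gz
```

Whether myprog follows the same convention is up to myprog; trying it is the only way to find out.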
If the program can only write to a file, then you could use a named pipe:
pipename=/tmp/mypipe.$$
mkfifo "$pipename"
./myprog -o "$pipename" &
while IFS= read -r line
do
    echo "output from myprog: $line"
done < "$pipename"
rm "$pipename"
First we create the pipe. We put it in /tmp to keep it out of the way of backup programs. The $$ is the shell's PID, which makes the name unique at runtime.
We run the program in background, and it should block trying to write to the pipe. Some programs use a technique called "memory mapping" in which case this will fail, because a pipe cannot be memory mapped (a good program would check for this).
Then we read the pipe in the script as we would any other file.
Finally we delete the pipe.
You can cat the contents of the file written by myprog.
myprog -o tmpfile input_file && cat tmpfile
This would have the described effect -- allowing you to pipe the output of myprog to some subsequent command -- although it is a different approach than you had envisioned.
In the circumstance that the output of myprog (perhaps more aptly notmyprog) is too big to write to disk, this approach would not be good.
A solution that cleans up the temp file in the same line and still pipes the contents out at the end would be this
myprog -o tmpfile input_file && contents=$(cat tmpfile) && rm tmpfile && echo "$contents"
This stores the contents of the file in a variable so that they can still be accessed after the file is deleted. Note the quotes around the argument to echo: they are important for preserving the newlines in the file contents. (Command substitution does strip trailing newlines, however.)
Related
I want to read text file contents into a Bash variable, suppressing the error message if that file does not exist. In POSIX, I would do
var=$(cat file 2>/dev/null)
But I read (e.g. at How to read a file into a variable in shell) that it's a Useless Use of Cat in Bash. So, I'm trying those:
var=$(< file 2>/dev/null)
var=$(< file) 2>/dev/null
But the first doesn't read an existing file, and both print -bash: file: No such file or directory if the file does not exist. Why doesn't this work? (Especially: What completely breaks the first one?)
What does work is this:
{ var=$(< file); } 2>/dev/null
But it's ugly and cumbersome. So, is there a nicer syntax, or is this a valid use of cat after all?
With bash it is easy to test whether the file exists before reading it:
[ -e file ] && var=$(< file )
As Inian points out in the comments, the file might exist without the user having sufficient rights to read it. The -r test takes care of that:
[ -r file ] && var=$(< file )
var=$(cat file 2>/dev/null)
This is a Useful Use of Cat. It enforces an order of operations: stderr is redirected before file is opened. It serves a purpose. Don't feel bad about using it.
Redirection applies to the output of a command line. When the cat command produces an error, you can redirect the stderr as you've done.
But when you generate the error within the command line itself, through input redirection from a file that doesn't exist, the shell itself produces the error, and you're not redirecting its stderr anywhere special. You've successfully discovered the "ugly and cumbersome" workaround: the curly braces group the command so the redirection applies to the whole group (without spawning a subshell, as parentheses would).
Personally, I'd use an if construct, but if you really prefer shorter code over fewer calls to external programs, this is a perfectly valid use of cat.
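For reference, the if construct mentioned here might look like the sketch below (the empty-string fallback is an assumption about what you want when the file is missing or unreadable):

```shell
# Read the file into var only if it is readable; otherwise leave var empty.
if [ -r file ]; then
    var=$(< file)
else
    var=""
fi
printf '%s\n' "$var"
```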
Define a command block with curly braces and redirect stderr to /dev/null for the whole block:
{ IFS= read -rd '' var <file;} 2>/dev/null
I am trying to read contents of a file given from standard input into a script. Any ideas how to do that?
Basically what I want is:
someScript.ksh < textFile.txt
Inside the ksh, I am using a binary which will read data from "textFile.txt" if the file is given on the standard input.
Any ideas how do I "pass" the contents of the given input file, if any, to another binary inside the script?
You haven't really given us enough information to answer the question, but here are a few ideas.
If you have a script that you want to accept data on stdin, and that script calls something else that expects data to be passed in as a filename on the command line, you can take stdin and dump it to a temporary file. Something like:
#!/bin/sh
tmpfile=$(mktemp tmpXXXXXX)
cat > "$tmpfile"
/some/other/command "$tmpfile"
rm -f "$tmpfile"
(In practice, you would probably use trap to clean up the temporary file on exit).
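The trap variant might look like this sketch, with wc -l standing in for the real command (which is an assumption for illustration):

```shell
#!/bin/sh
# Dump stdin to a temp file, hand the file to another command,
# and let an EXIT trap do the cleanup even if the script is interrupted.
tmpfile=$(mktemp tmpXXXXXX)
trap 'rm -f "$tmpfile"' EXIT
cat > "$tmpfile"
wc -l < "$tmpfile"   # stand-in for /some/other/command
```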
If instead the script is calling another command that also expects input on stdin, you don't really have to do anything special. Inside your script, stdin of anything you call will be connected to stdin of the calling script, and as long as you haven't previously consumed the input you should be all set.
E.g., given a script like this:
#!/bin/sh
sed s/hello/goodbye/
I can run:
echo hello world | sh myscript.sh
And get:
goodbye world
I need a simple way to capture both the issued command (from a script) and the resultant output to a log file.
Here's a simple example:
Command:
grep '^#PermitRootLogin' /etc/ssh/sshd_config
Output:
#PermitRootLogin no
Required result:
grep '^#PermitRootLogin' /etc/ssh/sshd_config
PermitRootLogin no
By redirecting stdin I seem to be stomping on stdout; it shouldn't be so difficult but it's eluding me for some reason.
Using tee just creates a log file with extraneous noise; and I'd like to use the file for a report at the end (no noise).
Thanks in advance,
TT
Wrap your desired behaviour in a function, i.e.
function stomp {
    echo "$@"
    "$@"
}
then call it like so
stomp grep '^#PermitRootLogin' /etc/ssh/sshd_config
There's the script utility, which will record everything you type, plus everything any program writes to stdout, in a file named typescript. However, it is very thorough: it also records all the carriage returns and line feeds plus all the prompts printed by the shell, so most likely you will want to post-process the typescript.
Maybe it's easier to just
echo "grep '^#PermitRootLogin' /etc/ssh/sshd_config" > file
grep '^#PermitRootLogin' /etc/ssh/sshd_config >> file
Then you have the command and its output in file.
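If this comes up often, the echo-then-run pair can be wrapped in a small helper (run_log is a made-up name; note that "$*" cannot reproduce the original quoting of the command exactly):

```shell
# Append both the command line and its output to the same log file.
logfile=$(mktemp)
run_log() {
    printf '%s\n' "$*" >> "$logfile"   # record the command as typed (quoting is not preserved)
    "$@" >> "$logfile" 2>&1            # record its output, stderr included
}

run_log echo hello
cat "$logfile"
```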
I have a proprietary command-line program that I want to call in a bash script. It has several options in a .conf file that are not available as command-line switches. I can point the program to any .conf file using a switch, -F.
I would rather not manage a .conf file separate from this script. Is there a way to create a temporary document to use a .conf file?
I tried the following:
echo setting=value | my_prog -F -
But it does not recognize the - as stdin.
You can try /dev/stdin instead of -.
You can also use a here document:
my_prog -F /dev/stdin <<OPTS
opt1 arg1
opt2 arg2
OPTS
Finally, you can let bash allocate a file descriptor for you (if you need stdin for something else, for example):
my_prog -F <(cat <<OPTS
opt1 arg1
opt2 arg2
OPTS
)
When writing this question, I figured it out and thought I would share:
exec 3< <(echo setting=value)
my_prog -F /dev/fd/3
The program reads from file descriptor 3 (via /dev/fd/3), and I don't need to manage any permissions or worry about deleting a file when I'm done.
You can use process substitution for this:
my_prog -F <(command-that-generates-config)
where command-that-generates-config can be something like echo setting=value or a function.
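For example, with make_conf as an arbitrary function name and cat standing in for my_prog (just to show what the program would see on the generated path):

```shell
# A function that emits the desired .conf contents:
make_conf() {
    echo "setting=value"
    echo "other_setting=other_value"
}

# In place of my_prog -F <(make_conf), cat shows what my_prog would read:
cat <(make_conf)
```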
It sounds like you want to do something like this:
#!/bin/bash
MYTMPFILE=/tmp/_myfilesettings.$$
cat <<-! > "$MYTMPFILE"
somekey=somevalue
someotherkey=somevalue
!
my_prog -F "$MYTMPFILE"
rm -f "$MYTMPFILE"
This uses what is known as a "here" document: everything between the "cat <<-!" line and the closing ! line is read verbatim as stdin. The '-' tells the shell to strip all leading tabs.
You can use anything as the "To here" marker, e.g., this would work as well:
cat <<-EOF > somewhere
stuff
more stuff
EOF
I am looking for a way to dump input into my terminal from a file, but when EOF is reached I would like input returned back to my keyboard. Is there a way to do this with Bash (or any other commonly-available *nix shell)?
Details:
I am debugging a server program which executes a fork to start a child process. Every time I start a debugging session with gdb I have to type set follow-fork-mode child. I would like to use some sort of input redirection to have this pre-populated. There are other uses as well that I can think of, so I'd prefer a general solution - hence the reason this question is not about gdb.
Solution:
start-server.sh
#!/bin/bash
cat run-server.txt - |/bin/bash
run-server.txt
gdb ./Server
set follow-fork-mode child
run
You can do this:
cat input_file - | program
That will concatenate input_file followed by stdin to program, which I think is what you want.
maybe expect is what you want
Maybe use an intermediate file? Assuming you want to run the script myscript.sh:
INPUT_FILE=input.txt
TEMP_FILE=$(mktemp -t input.XXXXXX)
myscript.sh < "$TEMP_FILE" &
cat "$INPUT_FILE" >> "$TEMP_FILE"
cat >> "$TEMP_FILE"