I have a proprietary command-line program that I want to call in a bash script. It has several options in a .conf file that are not available as command-line switches. I can point the program to any .conf file using a switch, -F.
I would rather not manage a .conf file separate from this script. Is there a way to create a temporary document to use as the .conf file?
I tried the following:
echo setting=value | my_prog -F -
But it does not recognize the - as stdin.
You can try /dev/stdin instead of -.
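For example, a minimal sketch of that variant (assuming my_prog reads the config file sequentially):
echo setting=value | my_prog -F /dev/stdin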
You can also use a here document:
my_prog -F /dev/stdin <<OPTS
opt1 arg1
opt2 arg2
OPTS
Finally, you can let bash allocate a file descriptor for you (if you need stdin for something else, for example):
my_prog -F <(cat <<OPTS
opt1 arg1
opt2 arg2
OPTS
)
When writing this question, I figured it out and thought I would share:
exec 3< <(echo setting=value)
my_prog -F /dev/fd/3
The program reads from file descriptor 3, and I don't need to manage any file permissions or worry about deleting anything when I'm done.
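If you want to tidy up afterwards, you can close the descriptor once my_prog has finished:
exec 3<&-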
You can use process substitution for this:
my_prog -F <(command-that-generates-config)
where command-that-generates-config can be something like echo setting=value or a function.
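For instance, a sketch using a shell function as the generator (the name make_config is only an illustration):
make_config() {
echo setting=value
echo other_setting=other_value
}
my_prog -F <(make_config)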
It sounds like you want to do something like this:
#!/bin/bash
MYTMPFILE=/tmp/_myfilesettings.$$
cat <<-! > $MYTMPFILE
somekey=somevalue
someotherkey=somevalue
!
my_prog -F $MYTMPFILE
rm -f $MYTMPFILE
This uses what is known as a "here" document: everything between the "cat <<-!" and the closing "!" is read verbatim as stdin. The '-' tells the shell to strip leading tabs from those lines.
You can use anything as the "To here" marker, e.g., this would work as well:
cat <<-EOF > somewhere
stuff
more stuff
EOF
Related
I want to create a script which can be run in either verbose or silent mode, as preferred. My idea was to create a variable which contains "&> /dev/null", and if the script is run silently I just clear it. The silent mode works fine, but if I want to pass this as the last "argument" to something (see my example below), it does not work.
Here is an example:
I want to zip something (I know there is a -q option; the question is more theoretical), and if I write this it works as I intended:
$ zip bar.zip foo
adding: foo (stored 0%)
$ zip bar.zip foo &> /dev/null
$
But when I declare the &> /dev/null as a variable I get this:
$ var="&> /dev/null"
$ zip bar.zip foo "$var"
zip warning: name not matched: &> /dev/null
updating: foo (stored 0%)
I saw other solutions that intentionally redirect parts of a script, but I'm curious whether my idea can work and, if not, why not.
It would be more convenient to use something like this in my script, because I only need to redirect some lines or commands, but there are too many to surround each one with an if.
To be clear, I'm trying something like this:
if verbose; then
verbose="&> /dev/null"
else
verbose=
fi
command1 $1 $2
command2 $1
command3 $1 $2 $3 "$verbose"
command4
bash will recognize a file name of /dev/stdout as standard output, when used with a redirection, whether or not your file system has such an entry.
# If an argument is given to your script, use that value
# Otherwise, use /dev/stdout
out=${1:-/dev/stdout}
zip bar.zip foo > "$out"
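For example (calling the script myscript here just for illustration):
./myscript              # output goes to the terminal
./myscript /dev/null    # output is discarded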
Your issue is the order of expansion performed by the shell. The variable is expanded AFTER the command line has already been split into arguments, so the redirection is passed to zip as a literal argument.
You need the shell to interpret that variable BEFORE invoking the zip command. To do this you can use eval, which takes a first pass at expanding variables and redirections.
eval zip bar.zip foo "$var"
However, be wary of using eval, as unchecked values can lead to exploits, and its use is generally discouraged.
You could consider setting up your own file descriptor and pointing it at /dev/null, a real file, or even stdout. Then each command whose output should go there would be:
zip bar.zip foo >&3
http://www.tldp.org/LDP/abs/html/io-redirection.html#FDREF
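A minimal sketch of that approach (the descriptor number 3 and the silent variable are only examples):
# decide once, near the top of the script, where descriptor 3 points
if [ "$silent" = yes ]; then
exec 3> /dev/null
else
exec 3>&1
fi
# then send individual commands there
zip bar.zip foo >&3
# and close the descriptor when finished
exec 3>&-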
I'm writing a very small bash script to merge some files in a directory.
Say I have a directory full of files:
deb_1
deb_2
deb_3
deb_4
...
I want to write a small bash script to merge them all into a file, and delete the originals
So I would run mrg deb* outputfile, and the resulting directory would look like:
outputfile
Containing all of the deb files merged. The way I do it normally is cat deb* > outputfile && rm deb* -f
However trying to convert this to a bash script doesn't quite work out:
#!/bin/bash
cat $1 > $2 && rm $1 -f
The shell expands the wildcard before the script sees it, so $1 -> deb_1, $2 -> deb_2, and so on.
Keep your script as is:
#!/bin/bash
cat $1 > $2 && rm $1 -f
But apply single quotes to the first argument when calling it:
bash myscript.sh 'deb*' outputfile
Along the lines of what @Eugeniu mentioned in his comment, something like
$ myscript outputfile files*
would be possible, given the following definition of myscript:
#!/bin/bash
OUTPUT="$1";shift
cat "$@" > "$OUTPUT" && rm "$@" -f
$@ is the list of all command line arguments,
and "$@" is a list of separately-quoted command line arguments: "deb_1" "deb_2" "deb_3".
shift is used to remove the output file from the list of parameters so that it does not appear in the expansion of $@.
cat concatenates the files together into your output file,
and rm removes the originals.
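As a quick illustration of how shift and "$@" behave (a standalone example, not part of the script):
set -- outputfile deb_1 deb_2 deb_3
echo "$1"    # outputfile
shift
echo "$@"    # deb_1 deb_2 deb_3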
Usage
The general form of the command becomes:
myscript OUTPUT [INPUT_FILE]...
In your case, you'd want to call this as
$ myscript outputfile deb_*
which passes every deb_ file in the directory as a command-line argument to myscript
Alternate implementation
Using the last argument as the output file is probably possible (using $# to locate it), but it requires more work to remove the last parameter from $@.
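If you did want that calling convention, one possible sketch (using bash's ${!#} for the last argument and an array slice for the rest):
OUTPUT="${!#}"
INPUTS=("${@:1:$#-1}")
cat "${INPUTS[@]}" > "$OUTPUT" && rm -f "${INPUTS[@]}"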
A simple way to overcome this — with the obvious modification to the script to use stdout — would be:
$ myscript input files... > outputfile
Since redirections are applied by the shell (truncating and opening outputfile for writing) before the command is run, the rm would still be safe in exactly the same sense that it is now -- which is questionable, IMO.
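For completeness, the stdout-based script could be as small as this sketch:
#!/bin/bash
cat "$@" && rm -f "$@"
called as myscript deb_* > outputfile.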
I have a command command that takes an input file as argument. Is there a way to call command without actually creating a file?
I would like to achieve the following behavior
$ echo "content" > tempfile
$ command tempfile
$ rm tempfile
if possible:
as a one-liner,
without creating a file,
using either a bash (or sh) feature or a "well-known" command (as standard as xargs)
It feels like there must be an easy way to do it but I can't find it.
Just use a process substitution.
command <(echo "content")
Bash will create a FIFO or an entry under /dev/fd connected to the standard output of whatever happens in the process, and pass that path to the command. For example:
$ echo <(echo hi)
/dev/fd/63
I have a program that can output its results only to files, using the -o option. This time I need to send the output to the console, i.e. stdout. Here is my first try:
myprog -o /dev/stdout input_file
But it says:
/dev/ not writable
I've found this question that's similar to mine, but /dev/stdout is obviously not going to work without some additional magic.
Q: How to redirect output from file to stdout?
P.S. Conventional methods without any specialized software are preferable.
Many tools interpret a - as stdin/stdout, depending on the context of its usage. However, this is not part of the shell and therefore depends on the program used.
In your case the following could solve your problem:
myprog -o - input_file
If the program can only write to a file, then you could use a named pipe:
pipename=/tmp/mypipe.$$
mkfifo "$pipename"
./myprog -o "$pipename" &
while read line
do
echo "output from myprog: $line"
done < "$pipename"
rm "$pipename"
First we create the pipe, we put it into /tmp to keep it out of the way of backup programs. The $$ is our PID, and makes the name unique at runtime.
We run the program in background, and it should block trying to write to the pipe. Some programs use a technique called "memory mapping" in which case this will fail, because a pipe cannot be memory mapped (a good program would check for this).
Then we read the pipe in the script as we would any other file.
Finally we delete the pipe.
You can cat the contents of the file written by myprog.
myprog -o tmpfile input_file && cat tmpfile
This would have the described effect -- allowing you to pipe the output of myprog to some subsequent command -- although it is a different approach than you had envisioned.
In the circumstance that the output of myprog (perhaps more aptly notmyprog) is too big to write to disk, this approach would not be good.
A solution that cleans up the temp file in the same line and still pipes the contents out at the end would be this
myprog -o tmpfile input_file && contents=`cat tmpfile` && rm tmpfile && echo "$contents"
Which stores the contents of the file in a variable so that it may be accessed after deleting the file. Note the quotes in the argument of the echo command. These are important to preserve newlines in the file contents.
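To see why the quotes matter (a standalone illustration):
contents=$'line one\nline two'
echo $contents      # word splitting collapses the newline: line one line two
echo "$contents"    # the newline is preserved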
When I create an Automator action in Xcode using Bash, all file and folder paths are passed on stdin.
How do I get the content of those files?
Whatever I try, I only get the filename in the output.
If I just select "Run shell script" I can choose whether I want everything passed to stdin or as arguments. Can this be done for an Xcode project too?
It's almost easier to use AppleScript, and let that run the Bash.
I tried something like
xargs | cat | MyCommand
What's the pipe between xargs and cat doing there? Try
xargs cat | MyCommand
or, better,
xargs -R -1 -I file cat "file" | MyCommand
to properly handle file names with spaces etc.
If, instead, you want MyComand invoked on each file,
IFS=$'\n'
while read -r filename; do
MyCommand < "$filename"
done
may also be useful.
read will read lines from the script's stdin; just make sure to set $IFS to something that won't interfere if the pathnames are sent without backslashes escaping any spaces:
OLDIFS="$IFS"
IFS=$'\n'
while read filename ; do
echo "*** $filename:"
cat -n "$filename"
done
IFS="$OLDIFS"