I want to set a command as a configuration option of a program.
There are parameters passed to the command.
But I want the command's output to be run through sed to do a replacement.
In principle I want to do something like this:
option = cmd <arg1> <arg2> | sed s/x/y/
But the option can only be set this way:
option = cmd
and the arguments are automatically appended by the program.
So I want to "reverse" the pipe like this:
option = sed s/x/y/ < <(cmd)
But since the program appends the arguments, the following would be executed:
sed s/x/y/ < <(cmd) <arg1> <arg2>
but I wanted
sed s/x/y/ < <(cmd <arg1> <arg2>)
Since there are no placeholders in the option for the arguments, I am not able to close the parenthesis properly.
Is there any way around this without writing a wrapper script?
If you're careful with quoting, you can do it like this:
option = bash -c 'cmd "$1" "$2" | sed "s/x/y/"' sub
The word "sub" at the end is arbitrary but necessary; it's there because $1 in the subshell created with bash -c is actually the second positional argument, not the first one.
I have a bash script which uses one multi-line sed command to change a file. From the command line, this line works:
sed -e '1a\' -e '/DELIMITER="|" \' -e '/RESTRICT_ADD_CHANGE=1 \' -e '/FIELDS=UPDATE_MODE|PRIMARYKEYVALUE|PATRONFLAGS' -e '1 d' general_update_01 > general_update_01.mp
I use the same bash script for a variety of files. So I need to pass all of the sed commands from the sending application to the bash script as a single parameter. However, when it passes in from the application, I get only -e.
In the bash script, I have tried a variety of ways to receive the variable as a complete string. None of these store the variable.
sed_instructions=$(echo $6)
sed_instructions=$6
sed_instructions=$(eval "$6")
and a few other configurations.
My command line would use the variable like this:
sed $sed_instructions $filename > $filename.mp
I assume you invoke your shell script like this
script.sh -e '1a' -e '/DELIMITER="|" ' -e '/RESTRICT_ADD_CHANGE=1 ' -e '/FIELDS=UPDATE_MODE|PRIMARYKEYVALUE|PATRONFLAGS' -e '1 d' general_update_01
What you want to do here is store the nth parameter as the filename, and the first n-1 parameters in an array
#!/usr/bin/env bash
n=$#
filename=${!n}
sed_commands=("${@:1:n-1}")
# Now, call sed
sed "${sed_commands[@]}" "$filename" > "${filename}.mp"
To demonstrate that code in action:
$ set -- one two three four
$ n=$#
$ filename=${!n}
$ args=("${@:1:n-1}")
$ declare -p filename args
declare -- filename="four"
declare -a args=([0]="one" [1]="two" [2]="three")
Take the following example:
ls -l | grep -i readme | ./myscript.sh
What I am trying to do is get ls -l | grep -i readme as a string variable in myscript.sh. So essentially I am trying to get the whole command before the last pipe to use inside myscript.sh.
Is this possible?
No, it's not possible.
At the OS level, pipelines are implemented with the pipe(), dup2(), fork() and execve() syscalls. This doesn't provide a way to tell a program what the commands connected to its stdin are. Indeed, there's not guaranteed to be a string representing a pipeline of programs being used to generate stdin at all, even if your stdin really is a pipe connected to another program's stdout; it could be that that pipeline was generated by programs calling execve() and friends directly.
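You can see this from the receiving end on Linux (a rough illustration; /proc is Linux-specific and the inode number will differ): the script's stdin is just an anonymous pipe, with no record of the command feeding it:
$ ls -l | grep -i readme | readlink /proc/self/fd/0
pipe:[123456]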
The best available workaround is to invert your process flow.
It's not what you asked for, but it's what you can get.
#!/usr/bin/env bash
printf -v cmd_str '%q ' "$@" # generate a shell command representing our arguments
while IFS= read -r line; do
    printf 'Output from %s: %s\n' "$cmd_str" "$line"
done < <("$@") # actually run those arguments as a command, and read from it
...and then have your script start the things it reads input from, rather than receiving them on stdin.
...thereafter, ./yourscript ls -l, or ./yourscript sh -c 'ls -l | grep -i readme'. (Of course, never use this except as an example; see ParsingLs).
It can't be done generally, but using the history command in bash it can maybe sort of be done, provided certain conditions are met:
history has to be turned on.
Only one shell has been running or accepting new commands (or, failing that, running myscript.sh) since the start of myscript.sh.
Since command lines with leading spaces are, by default, not saved to the history, the invoking command for myscript.sh must have no leading spaces; or that default must be changed -- see Get bash history to remember only the commands run with space prefixed.
The invoking command needs to end with a &, because without it the new command line wouldn't be added to the history until after myscript.sh was completed.
The script needs to be a bash script (it won't work with /bin/dash), and the calling shell needs a little prep work. Sometime before the script is first run, do:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n"
...this makes the bash history heritable. (Code swiped from unutbu's answer to a related question.)
Then myscript.sh might go:
#!/bin/bash
printf 'calling command was: %s\n' \
    "$(grep "$0" ~/.bash_history | tail -n 1)"
Test run:
echo googa | ./myscript.sh &
Output, (minus the "&" associated cruft):
calling command was: echo googa | ./myscript.sh &
The cruft can be halved by changing "&" to "& fg", but the resulting output won't include the "fg" suffix.
I think you should pass it as one string parameter, like this:
./myscript.sh "$(ls -l | grep -i readme)"
I think that it is possible; have a look at this example:
#!/bin/bash
result=""
while read line; do
result=$result"${line}"
done
echo $result
Now run this script using a pipe, for example:
ls -l /etc | ./script.sh
I hope that will be helpful for you :)
I am working with bash. I have a file F containing the command-line arguments for a Java program, and I need to store both outputs of the Java program, i.e., the output on standard output and the exit value. Storing the standard output works via
cat F | xargs java program > Output
But xargs does not give access to the exit-code of the Java program.
So well, I split it, running the program twice, once for standard output, once for the exit code --- but getting the exit code and running it correctly seems impossible. One might try
java program $(cat F)
but that doesn't work if F contains for example " ", namely one command-line argument for program which is a space. The problem is the expansion of the argument $(cat F).
Now I don't see a way to get around that problem. I don't want "$(cat F)", since I want $(cat F) to expand into many strings --- but I don't want further expansion of those strings.
If, on the other hand, there were a better xargs that gave access to the original exit value, that would solve the problem, but I am not aware of one.
Does this do what you want?
cat F | xargs bash -c 'java program "$@"; echo "Returned: $?"' - > Output
Or, as @rici correctly points out, avoid the UUOC
xargs bash -c 'java program "$@"; echo "Returned: $?"' - < F > Output
Alternatively, something like this (though I haven't thought through all the ramifications of doing it, so there may be a reason this is a bad idea):
{ sed -e 's/^/java program /' F | bash -s; echo "Returned $?"; } > Output
This lets you store the return code in a variable; the xargs versions do not (at least not outside the xargs-spawned shell).
sed -e 's/^/java program /' F | bash -s > Output; ret=$?
To use a ${program} shell variable just expand it directly.
xargs bash -c 'java '"${program}"' "$#"; echo "Returned: $?"' - < F > Output
sed -e 's/^/java '"${program}"' /' F | bash -s > Output; ret=$?
Just beware of characters that are "magic" in the replacement of the s/// command.
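For instance, if ${program} happened to contain a slash (a made-up value here, just to show the breakage), the /-delimited s/// would fall apart; switching the delimiter is one way around it, though & and \ in the value would still need escaping:
# hypothetical: a value containing '/' would terminate s/.../.../ early
program='com/example/Main'
sed -e "s|^|java ${program} |" F | bash -s > Output; ret=$?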
I'm afraid the question is really not very clear, so I will make the assumptions here explicit:
The file F has one argument per line, with all whitespace other than newline characters being significant, and with no need to replace backslash escapes such as \t.
You only need to invoke the java program once, with all of the arguments.
You need the exit status to be preserved.
This can be done quite easily in bash by reading F into an array with mapfile:
# Invoked as: do_the_work program < F > output
do_the_work() {
    local -a args
    mapfile -t args
    java "$@" "${args[@]}"
}
The status return of that function is precisely the status return of the java executable, so you could capture it immediately after the call:
do_the_work my_program
rc=$?
For convenience, the function allows you to also specify arguments on the command line; it uses "$@" to pass all the command-line arguments before passing the arguments read from stdin.
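So, assuming F holds one argument per line, an invocation with an extra command-line argument might look like this (the JVM flag is just a placeholder):
do_the_work -Xmx256m my_program < F > output
rc=$?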
If you have GNU Parallel (and do not mind extra output on STDERR):
cat F | parallel -Xj1 --halt 1 java program > Output
echo $?
I have a bash script that I wish to read from a file to get its arguments set. Basically my script reads arguments positionally ($1, $2, $3, etc.):
while test $# -gt 0; do
    case $1 in
        -h | --help)
            echo "Help cruft"
            exit 0
            ;;
    esac
    shift
done
One of the options I was hoping for is a config file that supplies arguments (for simple and easy config), so I was hoping the set -- command would work (-- to override the arguments). However, since they are defined in a file, I have to read it in and use xargs to pass them:
    -c | --config)
        cat $2 | xargs set --
        continue
        ;;
The trouble is that xargs buggers up the -- so I don't know how to accomplish this.
Note: I realize I could use source config_file and have it set variables; that might be the final option. I wanted to know if I could do it like above and simplify the documentation.
A simplified example script:
# foo.sh
echo "x y z" | xargs set --
echo $*
# Command line
$ bash foo.sh a b c
xargs: set: No such file or directory
a b c
xargs can't execute set because:
set is a shell built-in, not an external command. xargs only knows how to execute commands. (Some shell built-ins shadow commands with the same name, such as printf, true, and [. So xargs can execute those commands, but the semantics might not be identical to the built-in.)
Even if xargs could execute set, it would have no effect because xargs does not run inside of the shell's environment; every command executed by xargs is a separate process. So you will get no error if you do this:
echo a b c | xargs bash -c 'set -- "${@}"' _
But it also won't do anything useful. (Substitute set with echo and you'll see that it does actually invoke the command.)
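For example, something like:
$ echo a b c | xargs bash -c 'echo -- "${@}"' _
-- a b c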
How to read arguments from a file.
First, you need to answer the question: what does it mean to have arguments in a file? Are they individual whitespace-separated words with no mechanism to include whitespace in any argument? (That would also be required for xargs to work in its default mode, so it is not a totally unreasonable assumption, although it is almost certainly going to get you into trouble at some point.)
In that case you don't need xargs at all; you can just use command substitution:
set -- $(<file)
While that will work fine, this won't:
echo a b c | set -- $(</dev/stdin)
because the pipeline (created by the | operator) causes the processes on either side to be run in subshells, and consequently the set doesn't modify the current shell's positional parameters.
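A quick demonstration of both points, using a throwaway file:
$ echo 'a b c' > /tmp/args
$ set -- $(</tmp/args)
$ echo "$1 / $2 / $3"
a / b / c
$ echo x y z | set -- $(</dev/stdin)    # the set runs in a subshell here...
$ echo "$1 / $2 / $3"                   # ...so the positional parameters are unchanged
a / b / c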
A more robust solution
Suppose that each argument is in a single line in the file, which makes it possible to include whitespace in an argument, but not a newline. Then we could use the useful mapfile built-in to read the arguments into an array, and set the positional arguments from the array. (Or just use the array directly, but that would be a different question.)
mapfile -t args < file
set -- "${args[#]}"
Again, watch out for piping into mapfile; it won't work, for the same reason that it didn't work with set.
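If the arguments come from a command rather than a file, process substitution avoids that subshell problem; a small sketch, with printf standing in for the real producer:
mapfile -t args < <(printf '%s\n' 'first arg' second)
set -- "${args[@]}"
echo "$#"    # prints 2: the space inside 'first arg' is preserved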
When I create an Automator action in XCode using Bash, all file and folder paths are printed to stdin.
How do I get the content of those files?
Whatever I try, I only get the filenames in the output.
If I just select "Run shell script" I can choose whether I want everything on stdin or as arguments. Can this be done for an XCode project too?
It's almost easier to use Applescript, and let that run the Bash.
I tried something like
xargs | cat | MyCommand
What's the pipe between xargs and cat doing there? Try
xargs cat | MyCommand
or, better,
xargs -R -1 -I file cat "file" | MyCommand
to properly handle file names with spaces etc.
If, instead, you want MyCommand invoked on each file,
IFS=$'\n'
while read -r filename; do
    MyCommand < "$filename"
done
may also be useful.
read will read lines from the script's stdin; just make sure to set $IFS to something that won't interfere if the pathnames are sent without backslashes escaping any spaces:
OLDIFS="$IFS"
IFS=$'\n'
while read -r filename; do
    echo "*** $filename:"
    cat -n "$filename"
done
IFS="$OLDIFS"