Is it possible to pass a here document as a bash function argument, and in the function have the parameter preserved as a multi-lined variable?
Something along the following lines:
function printArgs {
echo arg1="$1"
echo -n arg2=
cat <<EOF
$2
EOF
}
printArgs 17 <<EOF
18
19
EOF
or maybe:
printArgs 17 $(cat <<EOF
18
19
EOF)
I have a here document that I want to feed to ssh as the commands to execute, and the ssh session is called from a bash function.
One way to make that work is:
printArgs 17 "$(cat <<EOF
18
19
EOF
)"
But why would you want to use a heredoc for this? A heredoc is presented to the command as a file on standard input, so you have to (ab)use cat to get at its contents. Why not just do something like:
printArgs 17 "18
19"
Please keep in mind that it is better to put a script on the machine you want to ssh to and run that, rather than trying a hack like this, because bash will still expand variables and such inside your multiline argument.
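That said, if you do stick with the heredoc, quoting the delimiter keeps bash from expanding anything in the body; a minimal sketch:
# Quoting the delimiter ('EOF' instead of EOF) makes the body literal,
# so $variables and $(commands) are passed through unexpanded.
printArgs 17 "$(cat <<'EOF'
$HOME stays literal here
18
19
EOF
)"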
If you're not using something that will absorb standard input, then you will have to supply something that does it:
$ foo () { while read -r line; do var+=$line; done; }
$ foo <<EOF
a
b
c
EOF
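Note that var+=$line concatenates the lines without their newlines. A variant that preserves them (my sketch, not part of the original answer):
# Append an explicit newline per line, or simply slurp stdin with cat;
# command substitution strips only the trailing newlines.
foo () {
    local var=""
    while IFS= read -r line; do
        var+="$line"$'\n'
    done
    printf '%s' "$var"
}

bar () { local var; var=$(cat); printf '%s\n' "$var"; }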
Building on Ned's answer, my solution allows the function to take its input as an argument list or as a heredoc.
printArgs() (
[[ $# -gt 0 ]] && exec <<< $*
ssh -T remotehost
)
So you can do this
printArgs uname
or this
printArgs << EOF
uname
uptime
EOF
So you can use the first form for a single command and the heredoc form for multiple commands.
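To try the dispatch logic without a remote host, you can substitute something local for ssh; a sketch with cat standing in for ssh -T remotehost:
# Same pattern as above; cat echoes whatever arrives on stdin
printArgs() (
  [[ $# -gt 0 ]] && exec <<< "$*"
  cat
)

printArgs uname      # prints: uname
printArgs <<EOF      # prints: uname, then uptime
uname
uptime
EOF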
xargs should do exactly what you want. It converts standard input into arguments for a command (note that -0 preserves the newlines):
$ xargs -0 <<EOF printArgs 17
18
19
EOF
But for your special case, I suggest sending the commands on the standard input of ssh:
$ ssh host <<EOF
ls
EOF
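If the heredoc body contains $variables that should expand on the remote side rather than locally, quote the delimiter; a small sketch:
# Unquoted EOF: $HOME expands locally, before ssh runs.
# Quoted 'EOF': the body is sent verbatim and expands remotely.
ssh host <<'EOF'
echo "$HOME"   # the remote $HOME
ls
EOF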
One way to feed commands to ssh through a here doc and a function is like so:
#!/bin/sh
# define the function
printArgs() {
echo "$1"
ssh -T remotehost
}
# call it with a here document supplying its standard input
printArgs 17 <<EOF
uname
uptime
EOF
The results:
17
Linux remotehost 2.6.32-5-686 ...
Last login: ...
No mail.
Linux
16:46:50 up 4 days, 17:31, 0 users, load average: 0.06, 0.04, 0.01
Related
It is possible to extract a payload from a shell script file with the following technique (see this):
#!/bin/sh
tail -n +4 > package.tgz
exec tar zxvf package.tgz
# payload comes here...
This needs a file, so that tail can seek to the right place in it.
In my particular situation, to automate things further, I'm using the | sh - pattern, but that breaks the payload extraction, because pipes are not seekable.
I also tried to embed binary payload into a heredoc so I could make something like:
cat >package.tgz <<END
# payload comes here
END
tar zxvf package.tgz
But it confuses shells (both bash and NetBSD's /bin/sh) and just doesn't work.
I could use uuencode or base64 within the heredoc, but I just wanted to know if there is some shell wizardry that could be used to receive both the script and the binary data from stdin and extract the binary data out of the data received from stdin.
Edit:
When I say the shell gets confused, I mean it may just ignore null bytes or exhibit undefined behaviour, even within the heredoc. Try:
cat > /tmp/out <<EOF
$(echo 410041 | xxd -p -r)
EOF
xxd -p /tmp/out
Bash complains: line 2: warning: command substitution: ignored null byte in input.
If I literally embed the hex bytes 41 00 41 into the shell script and use a quoted heredoc, the result is different, but bash still just drops the null bytes.
echo '#!/bin/sh' > foo.sh
echo "cat > /tmp/out <<'EOF'" >> foo.sh
echo 410041 | xxd -p -r >> foo.sh
echo >> foo.sh
echo EOF >> foo.sh
echo 'xxd -p /tmp/out' >> foo.sh
bash foo.sh
41410a
bash (and other shells) tend to "think" in C-strings, which are null-terminated, and hence cannot contain nulls (that's what indicates the end of the string). To produce nulls, you pretty much have to run some program/command that takes some safely-encoded content and produces nulls, and have its output sent directly to a file or pipe without the shell looking at it in between.
The simplest way to do this will be to encode the file with something like base64, then pipe the output through base64 -D (-d with GNU coreutils). Something like this:
base64 -D <<'EOF' | tar xzv
H4sIAOzIHV8AA+y9DVxVVbowvs/hgAc8sY+Jhvl1VCoJBVQsETVgOIgViin2pSkq
....
EOF
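The encoded block itself can be generated locally; a sketch of the producing side (payload/ and extract.sh are hypothetical names; use -D instead of -d to decode on macOS):
# Emit a self-contained snippet: decoder line, encoded data, terminator
{
  printf "base64 -d <<'EOF' | tar xzv\n"
  tar czf - payload/ | base64
  printf "EOF\n"
} > extract.sh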
If you don't want to use base64, another option would be to use bash's printf builtin to print null-containing or otherwise weird output to a pipe. It might look something like this:
LC_ALL=C
printf '\037\213\010\000\354\310\035_\000\003\354\275\015\\UU....' | tar xzv
In the above example, I converted everything that wasn't printable ASCII to \octal codes. It should actually be OK to include almost everything as literal characters, except null, single-quote (it cannot appear inside a single-quoted string, so it's probably simplest to octal-encode it), backslash (just double it), and percent-sign (also double it, since the string is used as a printf format). I don't think it'll be a problem, but it might be safest to set LC_ALL=C first, so nothing freaks out about invalid UTF-8 in the input strings.
Here's a quick & dirty C program to do the encoding. Note that it sends output to stdout, and it may contain junk that'll mess up your Terminal; so be sure to direct output somewhere.
#include <stdio.h>
#include <stdlib.h>
int main( int argc, char *argv[] ) {
int ch;
FILE *fp;
if ( argc != 2 ) {
fprintf(stderr, "Usage: %s infile\n", argv[0]);
return 1;
}
fp = fopen(argv[1], "rb");
if (fp == NULL) {
fprintf(stderr, "Error opening %s\n", argv[1]);
return 1;
}
printf("#!/bin/bash\nLC_ALL=C\nprintf '");
while((ch = fgetc(fp)) != EOF) {
switch(ch) {
case '\000':
printf("\\000");
break;
case '\047':
printf("\\047");
break;
case '%':
case '\\':
printf("%c%c", ch, ch);
break;
default:
printf("%c", ch);
}
}
fclose(fp);
printf("' | tar xzv\n");
return 0;
}
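Compiling and using the encoder might look like this (the file names are hypothetical):
# Build the encoder and turn package.tgz into a self-running script
cc -o encode encode.c
./encode package.tgz > installer.sh   # emits: #!/bin/bash + printf '...' | tar xzv
bash installer.sh                     # unpacks the embedded archive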
if there is some shell wizardry that could be used to receive both the script and binary data from stdin and extract the binary data out of the data received from stdin.
Given a script like this:
cat <<'EOF' >script.sh
#!/bin/sh
hostname
echo "What is you age?"
if ! IFS= read -r ans; then
echo "Read failed!"
else
echo "You are $ans years old."
fi
xxd -p
EOF
You can pipe to the remote ssh shell via a process substitution, with the here document followed by any data you want:
{
echo 123
echo "This is the input"
echo 001122 | xxd -r -p
} | {
u=$(uuidgen)
# Remote shell is started with a process substitution
# terminated with a unique marker
echo "bash <(cat <<'$u'"
cat script.sh
# Note - script.sh may not read all of the input,
# and anything left over would be executed as commands;
# read it here and make sure nothing leaks
echo 'cat >/dev/null'
echo "$u"
echo ")"
# the process substitution is followed by the input;
# note that because the inner bash consumes all of the input,
# none of it gets executed as commands by the outer shell.
cat
} | ssh host
Sample execution:
host
What is your age?
You are 123 years old.
546869732069732074686520696e7075740a001122
u
Now, as to:
My real problem is: I'm dynamically generating a shell script for remote configuration management, so I do createsh | ssh host. I'm embedding binary data into the shell script so it can be extracted on the remote host.
While you could separate the two streams with a unique marker (note that sed may read ahead and buffer input past the marker, so this is fragile):
u=$(uuidgen); { cat script.sh; echo; echo "$u"; cat binarydata.txt; } | ssh host bash -c 'sed -n "/$1/q;p" >script.sh; cat > binarydata.txt' _ "$u"
that is just reinventing the wheel - it already exists and is called tar:
tar -cf - script.sh binarydata.txt | ssh host bash -c 'cd /tmpdir; <unpack tar>; ./script.sh binarydata.txt; rm /tmpdir'
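One possible expansion of the <unpack tar> placeholder, using a throwaway directory (all names hypothetical):
# Sketch: unpack into a temporary directory on the remote side,
# run the script against the binary file, then clean up
tar -cf - script.sh binarydata.txt | ssh host '
  d=$(mktemp -d)
  tar -xf - -C "$d"
  (cd "$d" && sh script.sh binarydata.txt)
  rm -rf "$d"
'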
I want to replace the normal echo function in ubuntu bash with a function that additionally uses espeak to say something every time echo is used.
I came up with an alias for my .bashrc
alias ghostTalk='espeak -v +whisper -s 80 -p 100 "$(myFun)"& /bin/echo $1'
(in my final version I would replace ghostTalk with echo)
But this gives as output:
~$ ghostTalk 123
[2] 5685
123
[1] Done espeak -v +whisper -s 80 -p 100 "$(myFun)"
How can I avoid this and have the normal echo output e.g. only 123 while its talking in the background?
Backgrounding notifications can be suppressed with a double fork: the job is launched from a subshell, so the interactive shell never registers it as a job of its own and prints no [1] Done message:
ghostTalk() {
( espeak -v +whisper -s 80 -p 100 "$(myFun)" & )
builtin echo "$@"
}
I want to use variables set in an ssh session in my shell script.
Suppose I have some variable a whose value I set inside the ssh session, and now I want to use that variable outside the ssh session, in the shell itself. How can I do this?
ssh my_pc2 <<EOF
<.. do some operations ..>
a=$(ls -lrt | wc -l)
echo \$a
EOF
echo $a
In the above example, the first echo (inside the ssh session) prints 10, but the second echo $a prints nothing.
I would refine the last answer by defining some special syntax for passing the required settings back, e.g. "#SET var=value"
We could put the commands (that we want to run within the ssh session) in a cmdFile file like this:
a=`id`
b=`pwd`
echo "#SET a='$a'"
echo "#SET b='$b'"
And the main script would look like this:
#!/bin/bash
# SSH, run the remote commands, and filter anything they passed back to us
ssh user@host <cmdFile | grep "^#SET " | sed 's/#SET //' >vars.$$
# Source the variable settings that were passed back
. vars.$$
rm -f vars.$$
# Now we have the variables set
echo "a = $a"
echo "b = $b"
If you're doing this for lots of variables, you can add a function to cmdFile, to simplify/encapsulate your special syntax for passing data back:
passvar()
{
var=$1
val=$2
val=${val:-${!var}}
echo "#SET ${var}='${val}'"
}
a=`id`
passvar a
b=`pwd`
passvar b
You might need to play with quotes when the values include whitespace.
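Alternatively, bash's printf %q can do the quoting for you; a sketch of passvar using it (my variation, not from the answer above):
# %q emits a shell-quoted form of the value, so whitespace and
# special characters survive being sourced on the other side
passvar()
{
var=$1
val=${2-${!var}}
echo "#SET ${var}=$(printf '%q' "$val")"
}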
A script like this could be used to store all the output from SSH into a variable:
#!/bin/bash
VAR=$(ssh user@host << _EOF
id
_EOF
)
echo "VAR=$VAR"
it produces the output:
VAR=uid=1000(user) gid=1000(user) groups=1000(user),4(adm),10(wheel)
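For a single value, plain command substitution over ssh is often all that's needed; a sketch using the earlier question's my_pc2:
# Run the remote command and capture its output locally
a=$(ssh my_pc2 'ls -lrt | wc -l')
echo "$a"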
I have some scripts that take parameters. They work just fine, but I would like them to also be able to read from stdin, from a pipe for example. Suppose this example is called read:
#!/bin/bash
function read()
{
echo $*
}
read $*
Now this works with read "foo" "bar", but I would like to use it as:
echo "foo" | read
How do I accomplish this?
It's a little tricky to write a function which can read standard input, but works properly when no standard input is given. If you simply try to read from standard input, it will block until it receives any, much like if you simply type cat at the prompt.
In bash 4, you can work around this by using the -t option to read with an argument of 0. It succeeds if there is any input available, but does not consume any of it; otherwise, it fails.
Here's a simple function that works like cat if it has anything from standard input, and echo otherwise.
catecho () {
if read -t 0; then
cat
else
echo "$*"
fi
}
$ catecho command line arguments
command line arguments
$ echo "foo bar" | catecho
foo bar
This makes standard input take precedence over command-line arguments, i.e., echo foo | catecho bar would output foo. To make arguments take precedence over standard input (echo foo | catecho bar outputs bar), you can use the simpler function
catecho () {
if [ $# -eq 0 ]; then
cat
else
echo "$*"
fi
}
(which also has the advantage of working with any POSIX-compatible shell, not just certain versions of bash).
You can use <<< to get this behaviour. read <<< echo "text" should do it.
To test, define readly (I prefer not to use reserved words):
function readly()
{
echo $*
echo "this was a test"
}
$ readly <<< echo "hello"
hello
this was a test
With pipes, based on this answer to "Bash script, read values from stdin pipe":
$ echo "hello bye" | { read a; echo $a; echo "this was a test"; }
hello bye
this was a test
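One caveat with that pipe form: each segment of a bash pipeline runs in a subshell, so a is gone once the { ...; } group exits. Two ways to keep the variable, sketched below (lastpipe needs bash 4.2+ and job control off, as in non-interactive scripts):
# Option 1: a herestring keeps read in the current shell
read a <<< "hello bye"
echo "$a"            # hello bye

# Option 2: run the last pipeline segment in the current shell
shopt -s lastpipe    # bash >= 4.2, scripts only
echo "hello bye" | read b
echo "$b"            # hello bye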
To combine a number of other answers into what worked for me (this contrived example turns lowercase input to uppercase):
uppercase() {
local COMMAND='tr [:lower:] [:upper:]'
if [ -t 0 ]; then
if [ $# -gt 0 ]; then
echo "$*" | ${COMMAND}
fi
else
cat - | ${COMMAND}
fi
}
Some examples (the first has no input, and therefore no output):
:; uppercase
:; uppercase test
TEST
:; echo test | uppercase
TEST
:; uppercase <<< test
TEST
:; uppercase < <(echo test)
TEST
Step by step:
test if file descriptor 0 (/dev/stdin) was opened by a terminal
if [ -t 0 ]; then
tests for CLI invocation arguments
if [ $# -gt 0 ]; then
echo all CLI arguments to command
echo "$*" | ${COMMAND}
else if stdin is piped (i.e. not terminal input), output stdin to command (cat - and cat are shorthand for cat /dev/stdin)
else
cat - | ${COMMAND}
Here is an example implementation of a sprintf function in bash that uses printf and standard input:
sprintf() { local stdin; read -d '' -u 0 stdin; printf "$@" "$stdin"; }
Example usage:
$ echo bar | sprintf "foo %s"
foo bar
This would give you an idea how function can read from standard input.
Late to the party here. Building off of @andy's answer, here's how I define my to_uppercase function.
if stdin is not empty, use stdin
if stdin is empty, use args
if args are empty, do nothing
to_uppercase() {
local input="$([[ -p /dev/stdin ]] && cat - || echo "$@")"
[[ -n "$input" ]] && echo "$input" | tr '[:lower:]' '[:upper:]'
}
Usages:
$ to_uppercase
$ to_uppercase abc
ABC
$ echo abc | to_uppercase
ABC
$ to_uppercase <<< echo abc
ABC
Bash version info:
$ bash --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin17)
I've discovered that this can be done in one line using test and awk...
test -p /dev/stdin && awk '{print}' /dev/stdin
The test -p checks whether standard input is a pipe, i.e. whether input is arriving via stdin. Only if input is present do we run awk, since otherwise it would hang indefinitely, waiting for input that will never come.
I've put this into a function to make it easy to use...
inputStdin () {
test -p /dev/stdin && awk '{print}' /dev/stdin && return 0
### accepts input if any but does not hang waiting for input
#
return 1
}
Usage...
_stdin="$(inputStdin)"
Another function uses awk without the test, to wait for command-line input...
inputCli () {
local _input=""
local _prompt="$1"
#
[[ "$_prompt" ]] && { printf "%s" "$_prompt" > /dev/tty; }
### no prompt at all if none supplied
#
_input="$(awk 'BEGIN {getline INPUT < "/dev/tty"; print INPUT}')"
### accept input (used in place of 'read')
### put in a BEGIN section so will only accept 1 line and exit on ENTER
### WAITS INDEFINITELY FOR INPUT
#
[[ "$_input" ]] && { printf "%s" "$_input"; return 0; }
#
return 1
}
Usage...
_userinput="$(inputCli "Prompt string: ")"
Note that the > /dev/tty on the first printf seems to be necessary to get the prompt to print when the function is called in a command substitution $(...).
This use of awk allows the elimination of the quirky read command for collecting input from keyboard or stdin.
Yet another version that:
works by passing text through a pipe or from arguments
easy to adapt by changing the command in the last line
works in bash, zsh
# Prints a text in a decorated ballon
function balloon()
{
(test -p /dev/stdin && cat - || echo "$@") | figlet -t | cowsay -n -f eyes | toilet -t --gay -f term
}
Usage:
# Using with a pipe
$ fortune -s | balloon
# Passing text as parameter
balloon "$(fortune -s )"
I have a bash script containing multiple echo calls:
#!/bin/bash
echo 'a'
echo 'b'
echo 'c'
I want to prepend a default text to all of these echo calls to get an output like this:
default_text: a
default_text: b
default_text: c
Is there a way to do this globally inside the script without adding the default text to each one of the echo calls?
Note: Below are two very good answers to this question. The accepted one resolves the problem specifically for echo commands; the second resolves it globally, inside the script, for any command that writes to stdout.
Define a function:
function echo {
builtin echo 'default_text:' "$@" ;
}
The builtin is needed, otherwise the function would be recursive.
This bash technique will work for any command that emits text to stdout:
exec 1> >(sed 's/^/default text: /')
$ echo foo
default text: foo
$ date
default text: Wed Jul 24 07:43:38 EDT 2013
$ ls
default text: file1
default text: file2
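If you later need unprefixed output again, save and restore the original stdout around the redirection; a sketch:
exec 3>&1                             # save the original stdout on fd 3
exec 1> >(sed 's/^/default text: /')  # prefix everything from here on
echo foo                              # -> default text: foo
exec 1>&3 3>&-                        # restore stdout, close fd 3
echo bar                              # -> bar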
Try this:
shopt -s expand_aliases
alias echo="echo 'default_text:'"
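A quick demonstration of the alias approach (in a script, expand_aliases must be set before the aliased command is first parsed):
#!/bin/bash
shopt -s expand_aliases
alias echo="builtin echo 'default_text:'"
echo 'a'   # -> default_text: a
echo 'b'   # -> default_text: b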