How to pass arguments to a bash script through a pipe from the terminal

I have a bash script, shown below, in a file called test.sh:
#!/usr/bin/env bash
echo $1
echo "execution done"
When I execute this script using
Case 1
./test.sh "started"
started
execution done
the output shows properly.
Case 2
If I execute it with
bash test.sh "started"
I get the same output:
started
execution done
But I would like to execute this using a cat or wget command, with arguments.
For example:
Q1
cat test.sh | bash
Or using a command like
Q2
wget -qO - "url contain bash" | bash
So in Q1 and Q2, how do I pass arguments?
Something similar to what is shown in this GitHub repository:
https://github.com/creationix/nvm
(please refer to the installation script).

$ bash <(curl -Ls url_contains_bash_script) arg1 arg2
Explanation:
$ echo -e 'echo "$1"\necho "done"' >test.sh
$ cat test.sh
echo "$1"
echo "done"
$ bash <(cat test.sh) "hello"
hello
done
$ bash <(echo -e 'echo "$1"\necho "done"') "hello"
hello
done
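As a side note (this is not part of the answer above): if you want to keep the pipe itself, bash's -s option reads the script from standard input, and operands after it become the positional parameters, so Q1 and Q2 can be written as:
# -s makes bash read the script from stdin; everything after "--"
# becomes $1, $2, ... inside the script
cat test.sh | bash -s -- "started"
wget -qO - "url contain bash" | bash -s -- arg1 arg2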

You don't need to pipe to bash; bash runs as standard in your terminal.
If I have a script and I have to use cat, this is what I'll do:
cat script.sh > file.sh; chmod 755 file.sh; ./file.sh arg1 arg2 arg3
script.sh is the source script. You can replace that call with anything you want.
This has security implications, though: you are running arbitrary code in your shell, especially with wget, where the code comes from a remote location.


Argument evaluation

Look at this bash script (script.sh):
#!/bin/bash
echo "aaa"
echo "bbb"
...
echo $1
...
Now, I am trying to run this script this way:
./script.sh $(cat file1)
I have a problem: "cat file1" is run before script.sh, because bash evaluates all arguments before running script.sh.
I would like to run "cat file1" inside script.sh, on the "echo $1" line.
How can I do this?
I have tried this:
./script.sh $(eval 'cat file1')
But it gives me the same result...
Thanks
I think there is no way unless you can modify script.sh.
If you can modify script.sh, you can write your script like this:
#!/bin/bash
echo "$($1)"
Then call it in this way:
./script.sh "cat file1"
This means, you pass the command to be executed to the script and you echo the result of the executed command in the script.
But the above script is a little cumbersome. This one is simpler and would do the same:
#!/bin/bash
$1
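A quick demonstration of the simpler version (the file name and its content are made up for the example):
$ echo "hello from file1" > file1
$ ./script.sh "cat file1"
hello from file1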

Redirect copy of stdin to file from within bash script itself

In reference to https://stackoverflow.com/a/11886837/1996022 (whose title I also shamelessly stole), where the question is how to capture a script's output, I would like to know how I can additionally capture the script's input, mainly so that scripts which also take user input produce complete logs.
I tried things like
exec 3< <(tee -ia foo.log <&3)
exec <&3 <(tee -ia foo.log <&3)
But nothing seems to work. I'm probably just missing something.
Maybe it'd be easier to use the script command? You could either have your users run the script with script directly, or do something kind of funky like this:
#!/bin/bash
main() {
    read -r -p "Input string: "
    echo "User input: $REPLY"
}

if [ "$1" = "--log" ]; then
    # If the first argument is "--log", shift the arg
    # out and run main
    shift
    main "$@"
else
    # If run without --log, re-run this script within a
    # script command so all script I/O is logged
    script -q -c "$0 --log $*" test.log
fi
Unfortunately, you can't pass a function to script -c, which is why the double call is necessary in this method.
If it's acceptable to have two scripts, you could also have a user-facing script that just calls the non-user-facing script with script:
script_for_users.sh
--------------------
#!/bin/sh
script -q -c "/path/to/real_script.sh" <log path>
real_script.sh
---------------
#!/bin/sh
<Normal business logic>
It's simpler:
#! /bin/bash
tee ~/log | your_script
The wonderful thing is your_script can be a function, command or a {} command block!
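For instance, a minimal sketch using a command block in place of your_script (the ~/log path and messages follow the answer's example):
#!/bin/bash
# tee duplicates this script's stdin into ~/log before the
# command block consumes it, so user input is captured too
tee -a ~/log | {
    read -r line
    echo "You typed: $line"
}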

Replacing 'source file' with its content, and expanding variables, in bash

In a script.sh,
source a.sh
source b.sh
CMD1
CMD2
CMD3
how can I replace the source *.sh with their content (without executing the commands)?
I would like to see what the bash interpreter executes after sourcing the files and expanding all variables.
I know I can use set -n -v or run bash -n -v script.sh 2>output.sh, but that would not replace the source commands (and even less if a.sh or b.sh contain variables).
I thought of using a subshell, but that still doesn't expand the source lines. I tried a combination of set +n +v and set -n -v before and after the source lines, but that still does not work.
I'm going to send that output to a remote machine using ssh.
I could use < output.sh to feed the content into the ssh command, but I can't log in as root on the remote machine; I am, however, a sudoer.
Therefore, I thought I could create the script and send it as a base64-encoded string (using this clever trick):
base64 script | ssh remotehost 'base64 -d | sudo bash'
Is there a solution?
Or do you have a better idea?
You can do something like this:
inline.sh:
#!/usr/bin/env bash
while IFS= read -r line; do
    # match lines of the form "source file" or ". file"
    if [[ "$line" =~ ^(\.|source)[[:space:]]+(.+) ]]; then
        file="$(echo "$line" | cut -d' ' -f2)"
        cat "$file"
    else
        echo "$line"
    fi
done < "$1"
Note this assumes the sourced files exist, and doesn't handle errors. You should also handle possible shebangs. If the sourced files themselves contain source lines, you need to apply the script recursively, e.g. something like this (a temporary file is needed, since redirecting straight back to main.sh would truncate it before it is read):
while egrep -q '^(source|\.)[[:space:]]' main.sh; do
    bash inline.sh main.sh > main.tmp.sh && mv main.tmp.sh main.sh
done
Let's test it:
main.sh:
source a.sh
. b.sh
echo cc
echo "$var_a $var_b"
a.sh:
echo aa
var_a="stack"
b.sh:
echo bb
var_b="overflow"
Result:
bash inline.sh main.sh
echo aa
var_a="stack"
echo bb
var_b="overflow"
echo cc
echo "$var_a $var_b"
bash inline.sh main.sh | bash
aa
bb
cc
stack overflow
BTW, if you just want to see what bash executes, you can run
bash -x [script]
or remotely
ssh user@host -t "bash -x [script]"
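Combining this with the base64 trick from the question (remotehost is a placeholder), the flattened script can be shipped and run with sudo on the remote side:
# inline the source lines, then send the result base64-encoded so it
# passes through ssh quoting intact and runs under sudo remotely
bash inline.sh script.sh | base64 | ssh remotehost 'base64 -d | sudo bash'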

Bash script: how to get the whole command line which ran the script

I would like to run a bash script and be able to see the command line used to launch it:
sh myscript.sh arg1 arg2 1> output 2> error
in order to know if the user used the "std redirection" '1>' and '2>', and therefore adapt the output of my script.
Is it possible with built-in variables?
Thanks.
On Linux and some Unix-like systems, /proc/self/fd/1 and /proc/self/fd/2 are symlinks to wherever your standard streams are pointing. Using readlink, we can check whether they were redirected by comparing them to the parent process's file descriptors.
We will however not use self but $$, because command substitutions such as $(readlink /proc/self/fd/1) run in a subshell, so self would refer to that subshell rather than to the current bash script; $$ still expands to the script's PID.
$ cat test.sh
#!/usr/bin/env bash
#errRedirected=false
#outRedirected=false
parentStderr=$(readlink /proc/"$PPID"/fd/2)
currentStderr=$(readlink /proc/"$$"/fd/2)
parentStdout=$(readlink /proc/"$PPID"/fd/1)
currentStdout=$(readlink /proc/"$$"/fd/1)
[[ "$parentStderr" == "$currentStderr" ]] || errRedirected=true
[[ "$parentStdout" == "$currentStdout" ]] || outRedirected=true
echo "$0 ${outRedirected:+>$currentStdout }${errRedirected:+2>$currentStderr }$#"
$ ./test.sh
./test.sh
$ ./test.sh 2>/dev/null
./test.sh 2>/dev/null
$ ./test.sh arg1 2>/dev/null # You will lose the argument order!
./test.sh 2>/dev/null arg1
$ ./test.sh arg1 2>/dev/null >file ; cat file
./test.sh >/home/camusensei/file 2>/dev/null arg1
$
Do not forget that the user can also redirect to a 3rd file descriptor which is open on something else...!
Not really possible. You can check whether stdout and stderr point to a terminal: [ -t 1 -a -t 2 ]. But if they do, it doesn't necessarily mean they weren't redirected (think >/dev/tty5). And if they don't, you can't distinguish between the streams being closed and being redirected. Even if you know for sure they are redirected, you can't tell from within the script where they point.
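A minimal sketch of that -t test (the messages are just illustrative):
#!/bin/bash
# -t FD tests whether the given file descriptor is attached to a terminal
if [ -t 1 ] && [ -t 2 ]; then
    echo "stdout and stderr look like a terminal"
else
    echo "at least one of stdout/stderr is redirected, piped, or closed"
fi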

script doesn't see arg in '$ ssh bash script arg'

I'd like to see both commands print hello
$ bash -l -c "/bin/echo hello"
hello
$ ssh example_host bash -l -c /bin/echo hello
$
How can hello be passed as a parameter in the ssh command?
The bash -l -c is needed, so login shell startup scripts are executed.
Getting ssh to start a login shell would solve the problem too.
When you pass extra args after -c, they're put into the argv of the shell while that command is executing. You can see that like so:
bash -l -c '/bin/echo "$0" "$@"' hello world
...so, those arguments aren't put on the command line of echo (unless you go out of your way to make it so), but instead are put on the command line of the shell which you're telling to run echo with no arguments.
That is to say: When you run
bash -l -c /bin/echo hello
...that's the equivalent of this:
(exec -a hello bash -c /bin/echo)
...which puts hello into $0 of a bash which runs only /bin/echo. Since running /bin/echo doesn't look at $0, of course it's not going to print hello.
Now, because executing things via ssh means you're going through two steps of shell expansion, it adds some extra complexity. Fortunately, you can have the shell handle that for you automatically, like so:
printf -v cmd_str '%q ' bash -l -c '/bin/echo "$0" "$@"' hello world
ssh remote_host "$cmd_str"
This tells bash (printf %q is a bash extension, not available in POSIX printf) to quote your command such that it expands to itself when processed by a shell, then feeds the result into ssh.
All that said -- treating $0 as a regular parameter is bad practice, and generally shouldn't be done absent a specific and compelling reason. The Right Thing is more like the following:
printf -v cmd '%q ' /bin/echo hello world # your command
printf -v cmd '%q ' bash -l -c "$cmd" # your command, in a login shell
ssh remotehost "$cmd" # your command, in a login shell, in ssh
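To see what %q buys you, here is the quoting when an argument contains a space (the argument is made up for the example):
$ printf '%q ' /bin/echo 'hello world'; echo
/bin/echo hello\ world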
