Pass args for script when going through pipe - bash

I have this example shell script:
echo /$1/
So I may call
$ . ./script 5
# output: /5/
I want to pipe the script into sh(ell), but can I pass the arg too?
cat script | sh
# output: //

You can pass arguments to the shell using the -s option:
cat script | bash -s 5

Use bash -s -- <args>
e.g., installing the Google Cloud SDK:
curl https://sdk.cloud.google.com | bash -s -- --disable-prompts

cat script | sh -s -- 5
The -s argument tells sh to take commands from standard input and not to require a filename as a positional argument. (Otherwise, without -s, the next non-flag argument would be treated as a filename.)
The -- tells sh to stop processing further arguments so that they are only picked up by the script (rather than applying to sh itself). This is useful in situations where you need to pass a flag to your script that begins with - or -- (e.g.: --dry-run) that must be ignored by sh. Also note that a single - is equivalent to --.
cat script | sh -s - --dry-run
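For instance, applying this back to the original script (a sketch; the inline echo simply stands in for the script file):
# feed the one-line script on stdin and pass 5 as its $1
echo 'echo "/$1/"' | sh -s -- 5
# output: /5/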

Related

Running a for loop as some other user

I am trying to execute a command as some other user. Here is my code:
sudo -i -u someuser bash -c 'for i in 1 2 3; do echo $i; done'
I am expecting the output 1 2 3, but executed as someuser. The above code prints blank lines. I tried some other commands:
sudo -i -u someuser bash -c 'for i in 1 2 3; do ls; done'
somefile1.txt somefile2.txt
somefile1.txt somefile2.txt
somefile1.txt somefile2.txt
If I run the loop as the current user it gives the expected output:
for i in 1 2 3; do echo $i; done
1
2
3
It looks like bash is unable to resolve the variable $i inside the for loop. I tried the escape character \ but it does not help.
TL;DR: Don't use sudo -i with bash -c
The usual way to use sudo -i is without any arguments, in which case it simply starts an interactive login shell.
If you really must have a login shell for some reason (which isn't good practice for running scripts), it's much saner to simply add the extra arguments needed to make your shell a login shell to the bash command itself, and keep sudo out of the business of changing the arguments you pass it:
sudo -u someuser bash -lic 'for i in 1 2 3; do echo "$i"; done'
...or...
sudo -u someuser -i <<'EOF'
for i in 1 2 3; do echo "$i"; done
EOF
The Gory Details
When you use sudo -i with arguments, sudo rewrites the argument list it was given, concatenating the arguments into a single command string that is placed after -c, so you end up with something like {"sh", "-c", "bash -c ..."}. While concatenating the arguments, sudo uses the parse_args logic for MODE_LOGIN_SHELL, adding an escape character before every character that is not alphanumeric, _, - or $; keeping $ out of this list was introduced in commitish 6484574f, tagged as a fix for bug #564 (which was itself introduced by the fix to bug #413 -- personally, I think we would all be better off if bug #413 had been left in place rather than making any attempt to fix it).
See also sh -c does not expand positional parameters if I run it from sudo --login over at Unix & Linux Stack Exchange.
Since this behavior was deliberately put in place in 2013, I doubt there's any fixing it at this point -- any change to sudo's escaping behavior has the potential to modify the security properties of existing scripts.
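As a rough illustration of the effect (not sudo itself, but the same double-expansion problem in miniature, assuming no variable i is set in the outer environment):
# the outer sh expands $i (empty) inside the double quotes,
# so the inner bash only ever runs 'echo ;' three times -- blank lines
sh -c 'bash -c "for i in 1 2 3; do echo $i; done"'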

output redirection inside bsub command

Is it possible to use output redirection inside bsub command such as:
bsub -q short "cat <(head -2 myfile.txt) > outputfile.txt"
Currently this bsub execution fails. My attempts to escape the redirection sign and the parentheses all failed as well, such as:
bsub -q short "cat \<\(head -2 myfile.txt\) > outputfile.txt"
bsub -q short "cat <\(head -2 myfile.txt\) > outputfile.txt"
Note: I'm well aware that the redirection in this simple command is not necessary, as the command could easily be written as:
bsub -q short "head -2 myfile.txt > outputfile.txt"
and then it would indeed execute properly (without errors). I am, however, interested in using the '<' redirection within the context of a more complex command, and am bringing this simple command here only as an example.
<(...) is process substitution -- a bash extension not available on baseline POSIX shells. system(), subprocess.Popen(..., shell=True) and similar calls use /bin/sh, which is not guaranteed to have such extensions.
As a mechanism that works for any command, without needing to worry about how to correctly escape it into a string, you can wrap the command in a function and export that function (and any variables it uses) through the environment:
# for the sake of example, moving filenames out-of-band
in_file=myfile.txt
out_file=outputfile.txt
mycmd() { cat <(head -2 <"$in_file") >"$out_file"; }
export -f mycmd # export the function into the environment
export in_file out_file # and also any variables it uses
bsub -q short 'bash -c mycmd' # ...before telling bsub to invoke bash to run the function
<(...) is a bash feature while your command runs with sh.
Invoke bash explicitly to handle your bash-only features:
bsub -q short "bash -c 'cat <(head -2 myfile.txt) > outputfile.txt'"

What does bash -s do?

I'm new to bash and am trying to understand what the script below is doing. I know -e means exit (on error), but I'm not sure what -se does, or what $delimiter is for.
$delimiter = 'EOF-MY-APP';
$process = new SSH(
"ssh $target 'bash -se' << \\$delimiter".PHP_EOL
.'set -e'.PHP_EOL
.$command.PHP_EOL
.$delimiter
);
The -s option is usually used along with the curl $script_url | bash pattern. For example,
curl -L https://chef.io/chef/install.sh | sudo bash -s -- -P chefdk
-s makes bash read its commands (the "install.sh" code downloaded by "curl") from stdin, while still accepting positional parameters.
-- lets bash treat everything which follows as positional parameters instead of options.
bash will set the variables $1 and $2 of the "install.sh" code to -P and to chefdk, respectively.
Reference: https://www.experts-exchange.com/questions/28671064/what-is-the-role-of-bash-s.html
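A minimal stand-in for that pattern (no network involved) shows how the inline script receives its arguments:
# the inline script sees -P and chefdk as its $1 and $2
printf 'echo "1=$1 2=$2"\n' | bash -s -- -P chefdk
# output: 1=-P 2=chefdk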
From man bash:
-s If the -s option is present, or if no arguments remain after
option processing, then commands are read from the standard
input. This option allows the positional parameters to be
set when invoking an interactive shell.
From help set:
-e Exit immediately if a command exits with a non-zero status.
So, this tells bash to read the script to execute from Standard Input, and to exit immediately if any command in the script (from stdin) fails.
The delimiter is used to mark the start and end of the script. This is called a Here Document or a heredoc.
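Here is a local sketch of the same idea, without the ssh or PHP wrapping (the failing command is only there to show the effect of -e):
# -s: read the script from stdin; -e: stop at the first failing command
bash -se <<'EOF-MY-APP'
echo "first step"
false            # -e makes the shell exit here with a non-zero status
echo "never reached"
EOF-MY-APP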

How to execute a bash script line by line?

If I use the bash -x option, it shows every line, but the script still executes normally.
How can I execute it line by line, so that I can check whether each step does the correct thing, or abort and fix the bug? The same effect could be achieved by putting a read after every line.
You don't need to put a read on every line; just add a trap like the following to your bash script. It has the effect you want, e.g.:
#!/usr/bin/env bash
set -x
trap read debug
< YOUR CODE HERE >
Works, just tested it with bash v4.2.8 and v3.2.25.
IMPROVED VERSION
If your script reads content from files, the approach listed above will not work. A workaround could look like the following example.
#!/usr/bin/env bash
echo "Press CTRL+C to proceed."
trap "pkill -f 'sleep 1h'" INT
trap "set +x ; sleep 1h ; set -x" DEBUG
< YOUR CODE HERE >
To stop the script you would have to kill it from another shell in this case.
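For example, from a second terminal (the script name here is hypothetical):
pkill -f your_script.sh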
ALTERNATIVE1
If you simply want to wait a few seconds before proceeding to the next command in your script the following example could work for you.
#!/usr/bin/env bash
trap "set +x; sleep 5; set -x" DEBUG
< YOUR CODE HERE >
I'm adding set +x and set -x within the trap command to make the output more readable.
The BASH Debugger Project is "a source-code debugger for bash that follows the gdb command syntax."
If your bash script is really a bunch of one-off commands that you want to run one by one, you could do something like this, which runs each command individually as you increment a variable LN corresponding to the line number you want to run. This makes it easy to run the last command again, and you then just increment the variable to go to the next command.
Assuming your commands are in a file "it.sh", run the following, one by one.
$ cat it.sh
echo "hi there"
date
ls -la /etc/passwd
$ $(LN=1 && cat it.sh | head -n$LN | tail -n1)
"hi there"
$ $(LN=2 && cat it.sh | head -n$LN | tail -n1)
Wed Feb 28 10:58:52 AST 2018
$ $(LN=3 && cat it.sh | head -n$LN | tail -n1)
-rw-r--r-- 1 root wheel 6774 Oct 2 21:29 /etc/passwd
Have a look at bash-stepping-xtrace.
It allows stepping xtrace.
xargs: can filter lines
cat .bashrc | xargs -0 -l -d \\n bash
-0 Treat the input as raw (no escaping)
-l Execute one command per input line (not the default, for performance reasons)
-d \\n Use the newline character as the line separator

How to remove inherited functions in sh (posix)

How do I ensure that there are no unexpected functions inherited from the parent when my script is run? If using bash,
#!/bin/bash -p
will do the trick, as will invoking the script through env -i. But I cannot rely on the user to invoke env, I don't want to rely on bash, I don't want to do an exec-hack and re-exec the script, and
#!/usr/bin/env -i sh
does not work.
So I'm looking for a portable way (portable == posix) to ensure that the user hasn't defined functions that will unexpectedly modify the behavior of the script. My current solution is:
eval $( env | sed -n '/\([^=]*\)=(.*/s//\1/p' |
while read -r name; do echo unset -f $name\;; done )
but that's pretty ugly and of dubious robustness. Is there a good way to get the functionality that 'unset -f -a' should provide?
edit
Slightly less ugly, but no better (I don't like parsing the output of env):
unset -f $( env | sed -n '/\([^=]*\)=(.*/s//\1/p' | tr \\012 \ )
#!/bin/bash --posix
results in:
SHELLOPTS=braceexpand:hashall:interactive-comments:posix
same as:
#!/bin/sh
SHELLOPTS=braceexpand:hashall:interactive-comments:posix
and "sh" is posix...
EDIT:
tested a few functions - unset was not required in my case...
EDIT2:
compare output of "set", not just "env"
EDIT3:
the following example - output of both "set|wc" also gives same results:
#!/bin/sh
set
set|wc
unset -f $( env | sed -n '/\([^=]*\)=(.*/s//\1/p' | tr \\012 \ )
set
set|wc
How about using the following env shebang line, which sets a reasonable PATH variable and invokes the sh interpreter:
#!/usr/bin/env -i PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/xpg4/bin sh
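A quick sanity check of that approach, run from bash (the function name f is just for illustration): exported functions travel in environment variables, so clearing the environment with env -i removes them too:
# define and export a function, then start sh with an empty environment
f() { echo "inherited"; }; export -f f
env -i sh -c 'command -v f || echo "no function f inherited"'
# output: no function f inherited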
