Passing arguments to tclsh via bash heredoc - bash

I have the following test case:
#!/bin/bash
tclsh <<EOF
puts "argv=$argv"
EOF
How can I pass arguments to tclsh? The arguments must come after the file name (per the tclsh man page):
SYNOPSIS
tclsh ?-encoding name? ?fileName arg arg ...?
Update:
First I will take bash command flags and use them to make arguments for tclsh:
tclarg1="....."
tclarg2="....."
Then I will have a string variable containing the Tcl:
SCRIPT='
proc test {arg1 arg2} {
some tcl commands
}
test ???? ????
'
And lastly I execute that string:
tclsh <<-HERE
${SCRIPT}
HERE
How do I pass "tclarg1" and "tclarg2" to the Tcl script?
The string could come from other sources (e.g. by sourcing another file), and the bash script can execute that string from multiple locations/functions.

Heredocs are sent to the program's standard input, so your command:
tclsh <<EOF
puts "argv=$argv"
EOF
invokes tclsh with no arguments — not even a filename — and writes puts "argv=" to tclsh's standard input. (Note that the $argv gets processed by Bash, so tclsh never sees it. To fix that, you need to write <<'EOF' instead of <<EOF.)
So in order to pass arguments to your tclsh script, you need to pass tclsh a filename argument, so that your arguments can go after that filename argument.
Since heredocs are sent to the program's standard input, the filename to use is just /dev/stdin:
tclsh /dev/stdin "$tclarg1" "$tclarg2" <<'EOF'
puts "argv=$argv"
EOF
Note that with this approach, tclsh won't implicitly run your .tclshrc at the start of your script anymore (because it only does that when it defaults to reading from standard input due to not being given any arguments). If you need anything from your .tclshrc, then you'll need to explicitly source it:
tclsh /dev/stdin "$tclarg1" "$tclarg2" <<'EOF'
source ~/.tclshrc
puts "argv=$argv"
EOF
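The /dev/stdin trick is interpreter-agnostic, so you can try the shape of it even without tclsh installed. A minimal sketch using bash itself as a stand-in interpreter (the argument values are just placeholders):

```shell
#!/bin/bash
# /dev/stdin stands in for the script file, so arguments can follow it,
# exactly as in the tclsh version above.
result=$(bash /dev/stdin "first" "second arg" <<'EOF'
echo "argv=$*"
EOF
)
echo "$result"
```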

#!/bin/bash
tclsh <<EOF
puts "argv=$@"
EOF
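This works because bash expands an unquoted heredoc before the child program ever runs, while a quoted delimiter ('EOF') passes the text through literally. A bash-only sketch of the difference (cat is used as a stand-in for tclsh):

```shell
#!/bin/bash
set -- one two   # fake positional parameters for this demo
# Unquoted delimiter: bash substitutes $@ before cat sees the text.
expanded=$(cat <<EOF
args=$@
EOF
)
# Quoted delimiter: the text reaches cat untouched.
literal=$(cat <<'EOF'
args=$@
EOF
)
echo "$expanded"
echo "$literal"
```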

This is a tricky little question, because heredocs are finicky about where they appear on a command line. Also, they end up being delivered to commands as file descriptors, so a little trickery is required.
#!/bin/bash
# Get the script into a variable. Note the backticks and the single quotes around EOF
script=`cat <<'EOF'
puts "argv=$argv"
EOF`
# Supply the script to tclsh as a file descriptor in the right place in the command line
tclsh <(echo "$script") "$@"
That seems to do the right thing.
bash$ /tmp/testArgPassing.sh a 'b c' d
argv=a {b c} d
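What makes this work is that <( ) expands to a /dev/fd/N path, so the interpreter receives a real filename argument and the positional arguments can follow it. The same shape, sketched with bash as the interpreter so it is easy to try:

```shell
#!/bin/bash
script='echo "argv=$*"'
# <(echo "$script") expands to a /dev/fd/N filename holding the script,
# so the arguments can follow it on the command line.
out=$(bash <(echo "$script") a "b c" d)
echo "$out"
```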
However, I'd definitely always use a separate .tcl file at the point where this sort of thing would otherwise be contemplated. Argument manipulation is at least as easy in Tcl as in Bash, and doing so enables various editors to provide sane syntax highlighting too.
And locating the right tclsh on the PATH is easy with the help of /usr/bin/env:
#!/usr/bin/env tclsh
puts "argv=$argv"


Reading input while also piping a script via stdin

I have a simple Bash script:
#!/usr/bin/env bash
read X
echo "X=$X"
When I execute it with ./myscript.sh it works. But when I execute it with cat myscript.sh | bash it actually puts echo "X=$X" into $X.
So this script prints "hello world" when executed with cat myscript.sh | bash:
#!/usr/bin/env bash
read X
hello world
echo "$X"
What's the benefit of executing a script with cat myscript.sh | bash? Why doesn't it do the same thing as when I execute it with ./myscript.sh?
How can I stop Bash from executing line by line, and instead have it execute all the lines once stdin has reached its end?
Instead of just running
read X
...instead replace it with...
read X </dev/tty || {
X="some default because we can't read from the TTY here"
}
...if you want to read from the console. Of course, this only works if you have a /dev/tty, but if you wanted to do something robust, you wouldn't be piping from curl into a shell. :)
Another alternative, of course, is to pass in your value of X on the command line.
curl https://some.place/with-untrusted-code-only-idiots-will-run-without-reading \
| bash -s "value of X here"
...and refer to "$1" in your script when you want X.
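The -s behaviour is easy to check locally: the script arrives on stdin, and the word after -s lands in $1 (the value here is just a placeholder):

```shell
#!/bin/bash
# bash -s: read the script from stdin; following words become $1, $2, ...
out=$(echo 'echo "X=$1"' | bash -s "value of X here")
echo "$out"
```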
(By the way, I sure hope you're at least using SSL for this, rather than advising people to run code they download over plain HTTP with no out-of-band validation step. Lots of people do it, sure, but that's making sites they download from -- like rvm.io -- big targets. Big, easy-to-man-in-the-middle-or-DNS-hijack targets).
When you cat a script to bash, the code to execute arrives on standard input.
Where does read read from? That's right: also standard input. This is why you can cat input to programs that take standard input (like sed, awk, etc.).
So you are not running "a script" per se when you do this. You are running a series of input lines.
Where would you like read to read data from in this setup?
You can manually do that (if you can define such a place). Alternatively you can stop running your script like this.
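The line-stealing behaviour described above is easy to reproduce: when the script itself is on stdin, read consumes the script's own next line.

```shell
#!/bin/bash
# Feed a three-line "script" to bash on stdin; `read X` swallows line 2,
# so the final echo reports it instead of anything typed by a user.
out=$(printf '%s\n' 'read X' 'hello world' 'echo "X=$X"' | bash)
echo "$out"
```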

What does !# (reversed shebang) mean?

From this link: http://scala.epfl.ch/documentation/getting-started.html
#!/bin/sh
exec scala "$0" "$@"
!#
object HelloWorld extends App {
println("Hello, world!")
}
HelloWorld.main(args)
I know that $0 is the script name and $@ expands to all the arguments passed to it, but what does !# mean? (Googling bash "!#" turns up no results.)
Does it mean to exit the script, with stdin coming from the remaining lines?
This is part of scala itself, not bash. Note what's happening: the exec command replaces the process with scala, which then reads the file given as "$0", i.e., the bash script file itself. Scala ignores the part between #! and !# and interprets the rest of the text as the scala program. They chose the "reverse shebang" as an appropriate counterpart to the shebang.
To see what I mean about exec replacing the process, try this simple script:
#!/bin/sh
exec ls
echo hello
It will not print "hello" since the process will be replaced by the ls process when exec is executed.
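The same effect can be seen without a script file at all; everything after a successful exec is unreachable:

```shell
#!/bin/bash
# exec replaces the child shell with echo, so the second echo never runs.
out=$(bash -c 'exec echo replaced; echo "never printed"')
echo "$out"
```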
Reference: http://www.scala-lang.org/files/archive/nightly/docs-2.10.2/manual/html/scala.html
As a side comment, consider a multiline script:
#!/bin/sh
SOURCE="$LIB1/app.jar:$LIB2/app2.jar"
exec scala -classpath $SOURCE -savecompiled "$0" "$@"
!#
Also note -savecompiled, which can speed up re-executions notably.

Is it possible to get the uninterpreted command line used to invoke a ruby script?

As per the title, is it possible to get the raw command line used to invoke a ruby script?
The exact behaviour I'm after is similar to SSH when invoking a command directly:
ssh somehost -- ls -l
SSH will run "ls -l" on the server. It needs to be unparsed because if the shell has already interpreted quotes and performed expansions etc the command may not work correctly (if it contains quotes and such). This is why ARGV is no good; quotes are stripped.
Consider the following example:
my-command -- sed -e"s/something/something else/g"
The ARGV for this contains the following:
--
sed
-es/something/something else/g
The sed command will fail as the quotes will have been stripped and the space in the substitution command means that sed will not see "else/g".
So, to re-iterate, is it possible to get the raw command line used to invoke a ruby script?
No, this is at the OS level.
You could try simply quoting the entire input:
my-command -- "sed -e\"s/something/something else/g\""
In Ruby, this could be used like this:
ruby -e "puts ARGV[0]" -- "sed -e\"s/something/something else/g\""
(output) sed -e"s/something/something else/g"
Or, in a file putsargv1.rb (with the contents puts ARGV[1]):
ruby -- "putsargv1.rb" "sed -e\"s/something/something else/g\""
(output) sed -e"s/something/something else/g"
Your example is misguided. ssh somehost -- ls * will expand * on localhost (into e.g. ls localfile1 localfile2 localfile3), then execute that on the remote host, with the result of lots and lots of ls: cannot access xxx: No such file or directory errors. ssh does not see the uninterpreted command line.
As you said, you would get -es/something/something else/g as a single parameter. That is exactly what sed would get, too. It is, in fact, identical to what you get if you write -e"s/something/something else/g", to "-es/something/something else/g", and to -es/something/something\ else/g.
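That equivalence is easy to verify from the shell. The helper below (printargs is just an illustrative name) prints each argument it receives bracketed, one per line, so word boundaries are visible; all three spellings deliver the identical single argument:

```shell
#!/bin/bash
# Print each received argument on its own line, bracketed, to expose
# exactly how the shell split the words.
printargs() { for a in "$@"; do printf '[%s]\n' "$a"; done; }
one=$(printargs -e"s/old/a b/g")
two=$(printargs "-es/old/a b/g")
three=$(printargs -es/old/a\ b/g)
printf '%s\n' "$one" "$two" "$three"
```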
Using this fact, you can use Shellwords.shellescape to "protect" the spaces and other unmentionables before handing them off to an external process. You can't get the original line, but you can make sure that you preserve the semantics.
Shellescape on the args worked but didn't quite mimic SSH. Take the following example (see below for test.rb contents):
ruby test.rb -- ls -l / \| sed -e's/root/a b c/g'
This will fail using the shellescape approach but succeed with SSH. I opted for manually escaping quotes and spaces. There may be some edge cases this doesn't capture but it seems to work for the majority of cases.
require 'shellwords'
unparsed = if ARGV.index('--')
  ARGV.slice(ARGV.index('--') + 1, ARGV.length)
end || []
puts "Unparsed args: #{unparsed}"
exit if unparsed.empty?
shellescaped = unparsed.map(&Shellwords.method(:shellescape)).join(" ")
quoted = unparsed.map do |arg|
  arg.gsub(/(["' ])/) { '\\' + $1 }
end.join(" ")
puts "Shellescaped: #{shellescaped}"
puts `bash -c #{shellescaped.shellescape}`
puts "Quoted: #{quoted}"
puts `bash -c #{quoted.shellescape}`
Thanks for your answers :)

Open a shell in the second process of a pipe

I'm having problems understanding what's going on in the following situation. I'm not familiar with UNIX pipes, or UNIX in general, but I have read the documentation and still can't understand this behaviour.
./shellcode is an executable that successfully opens a shell:
seclab$ ./shellcode
$ exit
seclab$
Now imagine that I need to pass data to ./shellcode via stdin, because it reads a string from the console and then prints "hello " plus that string. I do it in the following way (using a pipe), and the read and write work:
seclab$ printf "world" | ./shellcode
seclab$ hello world
seclab$
However, a new shell is not opened (or at least I can't see it and interact with it), and if I run exit I'm out of the system, so I'm not in a new shell.
Can someone give some advice on how to solve this? I need to use printf because I need to input binary data to the second process and I can do it like this: printf "\x01\x02..."
When you use a pipe, you are telling Unix that the output of the command before the pipe should be used as the input to the command after the pipe. This replaces the default output (screen) and default input (keyboard). Your shellcode command doesn't really know or care where its input is coming from. It just reads the input until it reaches the EOF (end of file).
Try running shellcode and pressing Control-D. That will also exit the shell, because Control-D sends an EOF (your shell might be configured to say "type exit to quit", but it's still responding to the EOF).
There are two solutions you can use:
Solution 1:
Have shellcode accept command-line arguments:
#!/bin/sh
echo "Arguments: $*"
exec sh
Running:
outer$ ./shellcode foo
Arguments: foo
$ echo "inner shell"
inner shell
$ exit
outer$
To feed the argument in from another program, instead of using a pipe, you could:
$ ./shellcode `echo "something"`
This is probably the best approach, unless you need to pass in multi-line data. In that case, you may want to pass in a filename on the command line and read it that way.
Solution 2:
Have shellcode explicitly redirect its input from the terminal after it's processed your piped input:
#!/bin/sh
while read input; do
echo "Input: $input"
done
exec sh </dev/tty
Running:
outer$ echo "something" | ./shellcode
Input: something
$ echo "inner shell"
inner shell
$ exit
outer$
If you see an error like this after exiting the inner shell:
sh: 1: Cannot set tty process group (No such process)
Then try changing the last line to:
exec bash -i </dev/tty

"< <(command-here)" shell idiom resulting in "redirection unexpected"

This command works fine:
$ bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer)
However, I don't understand how exactly stable is passed as a parameter to the shell script that is downloaded by curl. That's the reason why I fail to achieve the same functionality from within my own shell script - it gives me ./foo.sh: 2: Syntax error: redirection unexpected:
$ cat foo.sh
#!/bin/sh
bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer)
So, the questions are: how exactly this stable param gets to the script, why are there two redirects in this command, and how do I change this command to make it work inside my script?
Regarding the "redirection unexpected" error:
That's not related to stable, it's related to your script using /bin/sh, not bash. The <() syntax is unavailable in POSIX shells, which includes bash when invoked as /bin/sh (in which case it turns off nonstandard functionality for compatibility reasons).
Make your shebang line #!/bin/bash.
Understanding the < <() idiom:
To be clear about what's going on -- <() is replaced with a filename which refers to the output of the command it runs; on Linux, this is typically a /dev/fd/## type filename. Running < <(command), then, is taking that file and directing it to your stdin... which is pretty close to the behavior of a pipe.
To understand why this idiom is useful, compare this:
read foo < <(echo "bar")
echo "$foo"
to this:
echo "bar" | read foo
echo "$foo"
The former works, because the read is executed by the same shell that later echoes the result. The latter does not, because the read is run in a subshell that was created just to set up the pipeline and then destroyed, so the variable is no longer present for the subsequent echo.
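Both variants side by side as a runnable sketch (this must run under bash, since <( ) is a bashism):

```shell
#!/bin/bash
# Process substitution: `read` runs in the current shell, so $kept survives.
read kept < <(echo "bar")
# Pipe: `read` runs in a throwaway subshell, so $lost is empty afterwards.
echo "bar" | read lost
echo "kept=$kept lost=$lost"
```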
Understanding bash -s stable:
bash -s indicates that the script to run will come in on stdin. All arguments, then, are fed to the script in the $@ array ($1, $2, etc), so stable becomes $1 when the script fed in on stdin is run.
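Put together, the rvm one-liner can be mimicked with any stdin-fed script; the script text arrives via < <( ), and the word after -s becomes its $1:

```shell
#!/bin/bash
# The script comes in on stdin from the process substitution;
# "stable" becomes $1 inside it.
out=$(bash -s stable < <(echo 'echo "version=$1"'))
echo "$out"
```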
