From this link: http://scala.epfl.ch/documentation/getting-started.html
#!/bin/sh
exec scala "$0" "$@"
!#
object HelloWorld extends App {
  println("Hello, world!")
}
HelloWorld.main(args)
I know that $0 is the script name and $@ expands to all arguments passed to the script, but what does !# mean (googling for bash "!#" seems to show no results)?
Does it mean "exit from the script, and stdin comes from the remaining lines"?
This is part of scala itself, not bash. Note what's happening: the exec command replaces the process with scala, which then reads the file given as "$0", i.e., the bash script file itself. Scala ignores the part between #! and !# and interprets the rest of the text as the scala program. They chose the "reverse shebang" as an appropriate counterpart to the shebang.
To see what I mean about exec replacing the process, try this simple script:
#!/bin/sh
exec ls
echo hello
It will not print "hello" since the process will be replaced by the ls process when exec is executed.
Reference: http://www.scala-lang.org/files/archive/nightly/docs-2.10.2/manual/html/scala.html
A side comment, consider multiline script,
#!/bin/sh
SOURCE="$LIB1/app.jar:$LIB2/app2.jar"
exec scala -classpath "$SOURCE" -savecompiled "$0" "$@"
!#
Also note -savecompiled, which can speed up subsequent executions considerably.
Related
I have a bash script that runs a bunch of other commands (e.g. docker). I want the script to be able to capture all the output into a variable and then echo out a custom return at the end.
Example:
#!/usr/bin/env bash
set -euo pipefail
# Capture into this (PSEUDO CODE)
declare CapturedOutput
$(Capture Output > CapturedOutput)
# Run some commands like...
docker-compose ... up -d
# Stop capturing (PSEUDO CODE)
$(Stop Capture Output > CapturedOutput)
echo "something"
So if someone called my script like ./runit.sh and the docker command had output, they wouldn't see it but would only see:
> ./runit.sh
something
The most straightforward way to capture output into a variable is to use command substitution. You can easily wrap that around a large chunk of script:
#!/usr/bin/env bash
set -euo pipefail
# To capture standard error, too:
# exec 3>&2 2>&1
CapturedOutput=$(
# Run some commands like...
docker-compose ... up -d
)
# To restore standard error:
# exec 2>&3-
echo "something"
The caveat is that the commands from which output is being captured run in a subshell, but I'm having trouble coming up with an alternative for capturing into a variable in the same shell in which the commands themselves run.
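To make the caveat concrete, here's a minimal sketch (placeholder commands and made-up variable names) showing that an assignment made inside the command substitution never reaches the parent shell:

```shell
#!/usr/bin/env bash
count=0
CapturedOutput=$(
    count=5                          # runs in a subshell; the parent never sees it
    echo "output from the subshell"  # this is what gets captured
)
echo "CapturedOutput: $CapturedOutput"
echo "count: $count"                 # still 0 in the parent shell
```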
I think you want to look at the "Command Substitution" section on bash's man page.
To translate the pseudocode in your question to bash's format, something like:
CapturedOutput=$(docker-compose ... up -d)
CapturedOutput="$CapturedOutput $(docker-compose ... up -d)"
...
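If the output has to be captured piecemeal across several commands rather than in one block, bash's `+=` assignment can accumulate it into the variable (a sketch with plain echo commands standing in for docker-compose):

```shell
#!/usr/bin/env bash
CapturedOutput=""
CapturedOutput+=$(echo "first command output")
CapturedOutput+=$'\n'   # command substitution strips the trailing newline, so add one back
CapturedOutput+=$(echo "second command output")
printf '%s\n' "$CapturedOutput"
```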
I have the following test case:
#!/bin/bash
tclsh <<EOF
puts "argv=$argv"
EOF
How can I pass arguments to tclsh? The arguments must come after the file name (as per the tclsh man page):
SYNOPSIS
tclsh ?-encoding name? ?fileName arg arg ...?
Update:
First, I take the bash command-line flags and use them to build arguments for tclsh:
tclarg1="....."
tclarg2="....."
Then I have a string variable holding the Tcl code:
SCRIPT='
proc test {arg1 arg2} {
some tcl commands
}
test ???? ????
'
And lastly I execute that string:
tclsh <<-HERE
${SCRIPT}
HERE
How do I pass "tclarg1" and "tclarg2" to the Tcl script?
The string could come from other sources (e.g. by sourcing another file), and the bash script can execute it from multiple locations/functions.
Heredocs are sent to the program's standard input, so your command:
tclsh <<EOF
puts "argv=$argv"
EOF
invokes tclsh with no arguments — not even a filename — and writes puts "argv=" to tclsh's standard input. (Note that the $argv gets processed by Bash, so tclsh never sees it. To fix that, you need to write <<'EOF' instead of <<EOF.)
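The difference is easy to demonstrate side by side; with an unquoted delimiter, bash expands $-variables before the text ever reaches the child program:

```shell
#!/usr/bin/env bash
argv="expanded-by-bash"
cat <<EOF
unquoted delimiter: $argv
EOF
cat <<'EOF'
quoted delimiter: $argv
EOF
# prints:
#   unquoted delimiter: expanded-by-bash
#   quoted delimiter: $argv
```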
So in order to pass arguments to your tclsh script, you need to pass tclsh a filename argument, so that your arguments can go after that filename argument.
Since heredocs are sent to the program's standard input, the filename to use is just /dev/stdin:
tclsh /dev/stdin "$tclarg1" "$tclarg2" <<'EOF'
puts "argv=$argv"
EOF
Note that with this approach, tclsh won't implicitly run your .tclshrc at the start of your script anymore (because it only does that when it defaults to reading from standard input due to not being given any arguments). If you need anything from your .tclshrc, then you'll need to explicitly source it:
tclsh /dev/stdin "$tclarg1" "$tclarg2" <<'EOF'
source ~/.tclshrc
puts "argv=$argv"
EOF
#!/bin/bash
tclsh <<EOF
puts "argv=$argv"
EOF
This is a tricky little question, because heredocs are finicky about where they appear on a command line. Also, they end up being delivered to commands as file descriptors, so a little trickery is required.
#!/bin/bash
# Get the script into a variable. Note the backticks and the single quotes around EOF
script=`cat <<'EOF'
puts "argv=$argv"
EOF`
# Supply the script to tclsh as a file descriptor in the right place in the command line
tclsh <(echo "$script") "$@"
That seems to do the right thing.
bash$ /tmp/testArgPassing.sh a 'b c' d
argv=a {b c} d
However, I'd definitely always use a separate .tcl file at the point where this sort of thing would otherwise be contemplated. Argument manipulation is at least as easy in Tcl as in Bash, and doing so enables various editors to provide sane syntax highlighting too.
And locating the right tclsh on the PATH is easy with the help of /usr/bin/env:
#!/usr/bin/env tclsh
puts "argv=$argv"
I'm trying to understand the -c option for bash better. The man page says:
-c: If the -c option is present, then commands are read from the first non-option argument command_string. If there are arguments after the command_string, they are assigned to the positional parameters, starting with $0.
I'm having trouble understanding what this means.
If I do the following command with and without bash -c, I get the same result (example from http://www.tldp.org/LDP/abs/html/abs-guide.html):
$ set w x y z; IFS=":-;"; echo "$*"
w:x:y:z
$ bash -c 'set w x y z; IFS=":-;"; echo "$*"'
w:x:y:z
bash -c isn't as interesting when you're already running bash. Consider, on the other hand, the case when you want to run bash code from a Python script:
#!/usr/bin/env python
import subprocess
fileOne = 'hello'
fileTwo = 'world'
p = subprocess.Popen(['bash', '-c', 'diff <(sort "$1") <(sort "$2")',
                      '_',      # this is $0 inside the bash script above
                      fileOne,  # this is $1
                      fileTwo,  # and this is $2
                      ])
print p.communicate()  # run that bash interpreter, and print its stdout and stderr
Here, because we're using bash-only syntax (<(...)), you couldn't run this with anything that used POSIX sh by default, which is the case for subprocess.Popen(..., shell=True); using bash -c thus provides access to capabilities that wouldn't otherwise be available without playing with FIFOs yourself.
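You can also watch the positional-parameter assignment described in the man page from an ordinary shell; the first argument after the command string becomes $0, the next $1, and so on:

```shell
# Arguments after the -c string fill $0, $1, $2 ... inside the child shell
bash -c 'echo "\$0=$0 \$1=$1 \$2=$2"' scriptname first second
# prints: $0=scriptname $1=first $2=second
```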
Incidentally, this isn't the only way to do that: One could also use bash -s, and pass code in on stdin. Below, that's being done not from Python but POSIX sh (/bin/sh, which likewise is not guaranteed to have <(...) available):
#!/bin/sh
# ...this is POSIX sh code, not bash code; you can't use <() here
# ...so, if we want to do that, one way is as follows:
fileOne=hello
fileTwo=world
bash -s "$fileOne" "$fileTwo" <<'EOF'
# the inside of this heredoc is bash code, not POSIX sh code
diff <(sort "$1") <(sort "$2")
EOF
The -c option finds its most important uses when bash is launched by another program, and especially when the code to be executed may or does include redirections, pipelines, shell built-ins, shell variable assignments, and/or non-trivial lists. On POSIX systems where /bin/sh is an alias for bash, it is specifically what backs the C library's system() function.
Equivalent behavior is much trickier to implement on top of fork / exec without using -c, though not altogether impossible.
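A quick way to see why -c maps so directly onto system(): the entire command line, pipeline, redirection and all, travels as one argv string, and the invoked shell does all the parsing:

```shell
# One argv element carries a pipeline, a redirection, and an && list
bash -c 'printf "%s\n" one two three | wc -l > count.txt && cat count.txt'
# count.txt now contains the line count (3)
```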
How do you execute bash code from outside the bash shell?
The answer: use the -c option, which makes bash execute whatever has been passed as the argument to -c.
So, yes, this is the purpose of the option: to execute arbitrary bash code, just delivered in another way.
In short, I'd like to abstract this shebang so I can literally copy and paste it into other .ML files without having to specify the filename each time:
#!/usr/bin/env ocamlscript -o hello
print_endline "Hello World!"
I realize I could just drop the -o hello bit, but I'd like all the binaries to have UNIX names (hello), instead of Windows names (hello.ml.exe).
You need a complex shebang to do this. A Clojure example that has the desired behavior:
":";exec clj -m `basename $0 .clj` $0 ${1+"$@"}
":";exit
Clojure is Java-based, which is why clj needs the basename of the file (something, not something.clj). In order to get the basename, you need a multiline shebang, because a single-line shebang can only handle a single, simple, static command-line argument. In order to do multiline shebangs, you need a syntax which simultaneously:
Sends shell commands to the shell
Hides the shell commands from the main language
Does anyone know of OCaml trickery to do this? I've tried the following with no success:
(*
exec ocamlscript -o `basename $0 .ml` $0 ${1+"$@"}
exit
*)
let rec main = print_endline "Hello World!"
What you're looking for is a shell and Objective Caml polyglot (where the shell part invokes an ocaml interpreter to perform the real work). Here's a relatively simple one. Adapt to use ocamlscript if necessary, though I don't see the point.
#!/bin/sh
"true" = let exec _ _ _ = "-*-ocaml-*- vim:set syntax=ocaml: " in
exec "ocaml" "$0" "$@"
;;
(* OCaml code proper starts here *)
print_endline "hello"
After some trials, I found this shebang:
#!/bin/sh
"true" = let x' = "" in (*'
sh script here
*) x'
It is sort of an improvement on Gilles' proposal, as it permits writing a full shell script inside the OCaml comment without being bothered by syntax incompatibilities.
The script must terminate (e.g. with exec or exit) without reaching the end of the comment; otherwise a syntax error will occur. This could be fixed easily, but it is not very useful given the intended use of such a trick.
Here is a variant that incurs zero runtime overhead on the OCaml side but declares a new type name (choose an arbitrarily complicated one if that bothers you):
#!/bin/sh
type int' (*' >&- 2>&-
sh script here
*)
For example, here is a script that executes the OCaml code with modules Str and Unix, and can also compile it when passed the parameter --compile:
#!/bin/sh
type int' (*' >&- 2>&-
if [ "$1" = "--compile" ]; then
  name="${0%.ml}"
  ocamlopt -pp 'sed "1s/^#\!.*//"' \
    str.cmxa unix.cmxa "$name.ml" -o "$name" \
    || exit
  rm "$name".{cm*,o}
  exit
else
  exec ocaml str.cma unix.cma "$0" "$@"
fi
*)
I do not think that ocamlscript supports this. It may be worth submitting a feature request to the author to allow customization of the compiled binary's extension without specifying the full output basename.
Is there a better way to save a command line before it is executed?
A number of my /bin/bash scripts construct a very long command line. I generally save the command line to a text file for easier debugging and (sometimes) execution.
My code is littered with this idiom:
echo >saved.txt cd $NEW_PLACE '&&' command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
cd $NEW_PLACE && command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
Obviously, updating code in two places is error-prone. Less obvious is that certain parts need to be quoted in the first line but not in the second, so I cannot do the update with a simple copy-and-paste. If the command includes quotes, it gets even more complicated.
There has got to be a better way! Suggestions?
How about creating a helper function which logs and then executes the command? "$@" will expand to whatever command you pass in.
log() {
    echo "$@" >> /tmp/cmd.log
    "$@"
}
Use it by simply prepending log to any existing command. It won't handle && or || though, so you'll have to log those commands separately.
log cd $NEW_PLACE && log command.py --flag $FOO $LOTS $OF $OTHER $VARIABLES
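If you want the whole command line, && and || included, to go through one logging step, a variant (logrun is a made-up name here) can take it as a single string and eval it; note that eval re-parses the string, with all the usual quoting hazards:

```shell
logrun() {
    printf '%s\n' "$1" >> /tmp/cmd.log  # log the command line verbatim
    eval "$1"                           # then run it, && and || included
}

logrun 'cd "$NEW_PLACE" && command.py --flag "$FOO"'
```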
Are you looking for set -x (or bash -x)? It writes every command to standard error just before executing it.
Use script and you will get everything archived.
Use -x for tracing your script, e.g. run it as bash -x script_name args....
Use set -x in your current bash (you will get your commands echoed with substituted globs and variables).
Combine 2 and 3 with 1.
If you just execute the command file immediately after creating it, you will only need to construct the command once, with one level of escapes.
If that would create too many discrete little command files, you could create shell procedures and then run an individual one.
(echo fun123 '()' {
echo echo something important
echo }
) > saved.txt
. saved.txt
fun123
It sounds like your goal is to keep a good log of what your script did so that you can debug it when things go bad. I would suggest using the -x parameter in your shebang like so:
#!/bin/sh -x
# the -x above makes bash print out every command before it is executed.
# you can also use the -e option to make bash exit immediately if any command
# returns a non-zero return code.
Also, see my answer on a previous question about redirecting all of this debug output to a log when --log is passed into your shell script. This will redirect all stdout and stderr. Occasionally, you'll still want to write to the terminal to give the user feedback. You can do this by saving stdout to a new file descriptor and using that with echo (or other programs):
exec 3>&1 # save stdout to fd 3
# perform log redirection as per above linked answer
# now all stdout and stderr will be redirected to the file and console.
# remove the `tee` command if you want it to go just to the file.
# now if you want to write to the original stdout (i.e. terminal)
echo "Hello World" >&3
# "Hello World" will be written to the terminal and not the logs.
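Putting those pieces together, a self-contained sketch of the idea (the --log flag handling and the script.log file name are made up for illustration; the linked answer shows the tee-based variant):

```shell
#!/usr/bin/env bash
exec 3>&1                        # fd 3 keeps pointing at the original terminal
if [ "${1:-}" = "--log" ]; then
    exec >script.log 2>&1        # all normal output now goes to the log file
fi
echo "detailed progress output"  # lands in the log when --log was given
echo "user-facing status" >&3    # always reaches the terminal via fd 3
```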
I suggest you look into the xargs command. It was made to solve the problem of programmatically building up argument lists and passing them off to executables for batch processing.
http://en.wikipedia.org/wiki/Xargs
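For instance, xargs can read a generated argument list from standard input and split it across as many invocations of a command as needed:

```shell
# Feed three generated arguments to echo, at most two per invocation
printf '%s\n' alpha beta gamma | xargs -n 2 echo
# prints:
#   alpha beta
#   gamma
```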