Strange stack bash error: --: command not found

I have a very simple Haskell project with a single executable, my-exec. All it does is print "Hello, world!" to the console.
I want to create a script file, bin/setup.sh, which runs my executable and also echoes some text:
#!/usr/bin/env stack
-- stack exec bash
echo Echo printing
my-exec
When I run it I get
$ ./bin/setup.sh
./bin/setup.sh: line 2: --: command not found
Echo printing
Hello, world!
I don't understand what the issue with this file is, or why it says --: command not found but still works as expected.
I understand that in this simple example I could write it in a much simpler form, but in my real situation I have to make around 10 non-trivial exec calls and don't want to duplicate stack exec each time.
So what can I do to get rid of this error?

Here's the problem. The first line:
#!/usr/bin/env stack
is interpreted by your operating system (e.g., the Linux kernel) as indicating that the script should be invoked using the equivalent of the shell command:
$ /usr/bin/env stack setup.sh
or, since env is just there to search the path for stack, the equivalent of:
$ stack setup.sh
If you run this manually, you'll get the same error. That's because, when stack is invoked this way, it reads the indicated file, searching for a line of the form:
-- stack blah blah whatever blah blah
after the first #! line. Normally, this line looks something like:
-- stack --resolver lts-10.0 script
which tells stack to run the script as if you had run the shell command:
$ stack --resolver lts-10.0 script setup.sh
which interprets setup.sh as a Haskell program, instead of a shell script, but runs it using the lts-10.0 resolver, and all is well.
However, you've told stack to use the command stack exec bash, so stack invokes your script with the equivalent of:
$ stack exec bash setup.sh
which is basically the same as running:
$ bash setup.sh
after setting up the stack paths and so on.
FINALLY, then, the shell bash is running your script. Bash ignores the first line, because it starts with the # character, which indicates a shell comment. But when Bash tries to interpret the second line, it's as if you entered the following command at the shell prompt:
$ -- stack exec bash
Bash looks for a program named -- to run with arguments stack exec bash, and you get an error message. The script keeps running, though, so the echo and my-exec lines get run as expected.
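You can reproduce this behavior in isolation with plain bash, no stack required. A failed command lookup doesn't abort the script, so the following lines still run:

```shell
# Reproducing the error with plain bash: the inner shell reports
# "--: command not found" for the first line, then carries on with the
# next line, exactly as in the question.
bash <<'EOF'
-- stack exec bash
echo Echo printing
EOF
```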
Wow.
Here's one way that may work for you. You can use:
#!/bin/bash
exec stack exec bash <<EOF
echo Echo printing
my-exec
EOF
This shell script will invoke stack exec bash using a so-called "here doc", basically passing everything up to the EOF as a script file for stack exec bash to run.
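To see the here-doc mechanism in isolation, here is a minimal sketch with plain bash standing in for stack exec bash (in the real script, the stack invocation also puts project executables such as my-exec on the child shell's PATH):

```shell
#!/usr/bin/env bash
# Sketch: feed the commands to a child shell via a here-doc.
# Plain `bash` stands in for `stack exec bash` here so the sketch runs
# anywhere; the second echo stands in for the my-exec call.
bash <<'EOF'
echo Echo printing
echo "Hello, world!"
EOF
```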

Related

What does `exec 200>lockfile` do?

I'm not familiar with the exec command. A bash tutorial about how to lock files shows this:
exec 200>lockfile
flock 200
...
flock -u 200
I gather that it creates a file named lockfile and assigns it FD 200. The second command then locks that file, and when the work is done, the last command unlocks it.
That way, any other concurrent instance of the same script will wait at that second line until the first instance unlocks the file. Cool.
Now, what I don't understand is what is exec doing.
Directly from the bash command line, both options seem to work:
exec 200>lockfile
200>lockfile
But when the second option is used in the script, a "Bad file descriptor" error is raised.
Why is exec needed to avoid the error?
--- edit ---
After some more "serious research", I've found an answer here. The exec command makes the FD persist for the entire script or the current shell.
So doing:
200>lockfile flock 200
would work, because the redirection applies to that one flock command. But a later flock -u 200 would raise a "Bad FD" error, since FD 200 is no longer open by then.
The manual seems to mention replacing the shell with a given command. What does that have to do with file descriptors?
This is explained in the second sentence:
exec: exec [-cl] [-a name] file [redirection ...]
Exec FILE, replacing this shell with the specified program.
If FILE is not specified, the redirections take effect in this
shell. [...]
Essentially, doing exec 42> foo.txt from inside myscript.sh opens foo.txt for writing on FD 42 in the current process.
This is similar to running ./myscript.sh 42> foo.txt from a shell in the first place, or using open and dup2 in a C program.
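A minimal sketch of the difference (using FD 3 instead of 200, and leaving flock out so it runs anywhere):

```shell
#!/usr/bin/env bash
# Sketch: exec with only redirections opens the FD in the *current*
# shell, so it stays open for all later commands.
tmpfile=$(mktemp)
exec 3>"$tmpfile"    # open the file for writing on FD 3
echo first  >&3      # works: FD 3 is open
echo second >&3      # still works: exec made it persist
exec 3>&-            # close FD 3 when done

# Without exec, a redirection lasts only for one command:
echo once 3>/dev/null   # FD 3 is open only while this echo runs
# echo again >&3        # would now fail with "Bad file descriptor"
cat "$tmpfile"
```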

How can I capture the raw command that a shell script is running?

As an example, I am trying to capture the raw commands that are output by the following script:
https://github.com/adampointer/go-deribit/blob/master/scripts/generate-models.sh
I have tried following a previous answer:
BASH: echoing the last command run
but the output I am getting is as follows:
last command is gojson -forcefloats -name="${struct}" -tags=json,mapstructure -pkg=${p} >> models/${p}/${name%.*}_request.go
What I would like to do is capture the raw command, in other words have variables such as ${struct}, ${p} and ${p}/${name%.*} replaced by the actual values that were used.
How do I do this?
At the top of the script, after the hashbang #!/usr/bin/env bash or #!/bin/bash (if there is one), add set -x.
set -x: Print commands and their arguments as they are executed.
Run the script in debug mode which will trace all the commands in the script: https://stackoverflow.com/a/10107170/988525.
You can do that without editing the script by running bash -x generate-models.sh.
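As a small sketch of what the trace looks like (the variable name struct comes from the question; the value and the echoed command are made up for illustration):

```shell
#!/usr/bin/env bash
# Sketch: with set -x, bash prints each command to stderr *after*
# expansion, so ${struct} appears in the trace with its actual value.
set -x
struct="MyStruct"               # made-up value for illustration
echo "gojson -name=${struct}"
```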

Script not working as Command line

I've created a simple bash script that does the following:
#!/usr/bin/env bash
cf ssh "$1"
When I run the command from the CLI, like cf ssh myapp, it runs as expected, but when I run the script like
. myscript.sh myapp
I get the error: App not found
I don't understand the difference; I've provided the app name when invoking the script. What could be missing here?
Update
When I run the script with the following it works. Any idea why "$1" is not working?
#!/usr/bin/env bash
cf ssh myapp
When you do this:
. myscript.sh myapp
You don't run the script, but you source the file named in the first argument. Sourcing means reading the file, so it's as if the lines in the file were typed on the command line. In your case what happens is this:
myscript.sh is treated as the file to source, and the myapp argument is ignored.
This line:
#!/usr/bin/env bash
is treated as a comment and skipped.
This line:
cf ssh "$1"
is read as it stands. "$1" takes the value of $1 in the calling shell. Possibly, and most likely in your case, it's blank.
Now you should know why it works as expected when you source this version of your script:
#!/usr/bin/env bash
cf ssh myapp
There's no $1 to resolve, so everything goes smoothly.
To run the script and be able to pass arguments to it, you need to make the file executable and then execute it (as opposed to sourcing it). You can execute the script, for example, this way:
./myscript.sh arg1 arg2
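A minimal sketch of the difference between executing and sourcing (the file and variable names are made up):

```shell
#!/usr/bin/env bash
# Sketch: an executed script runs in a child process, so its variables
# (and positional parameters) are separate; a sourced file runs in the
# current shell.
tmp=$(mktemp)
echo 'MYVAR=from_script' > "$tmp"

bash "$tmp"                            # executed: MYVAR set only in the child
echo "after executing: ${MYVAR:-unset}"

. "$tmp"                               # sourced: MYVAR set in this shell
echo "after sourcing: $MYVAR"
```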

How can I monitor a bash script?

I am running a bash script that takes hours. I was wondering if there is a way to monitor what it is doing: which part of the script is currently running, how long the whole script took to run, at which line it stopped if it crashed, and so on. I just want to receive feedback from the script. Thanks!!!
From the man page for bash:
set -x
After expanding each simple command, for command, case command, select command, or arithmetic for command, display the expanded value of PS4, followed by the command and its expanded arguments or associated word list.
Add these to the start of your script:
export PS4='+{${BASH_SOURCE}:$LINENO} '
set -x
Example:
#!/bin/bash
export PS4='+{${BASH_SOURCE}:$LINENO} '
set -x
echo Hello World
Result:
+{helloworld.sh:4} echo Hello World
Hello World
Make a status or log file. For example add this inside your script:
echo $(date) - Ok >> script.log
Or for real monitoring you can use strace on Linux to see system calls, for example:
$ while true ; do sleep 5 ; done &
[1] 27190
$ strace -p 27190
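If you also want the script to report where it failed, one option is a trap on ERR (bash-specific); this is a minimal sketch, with false standing in for a real failing command:

```shell
#!/usr/bin/env bash
# Sketch: report the line number of any failing command via an ERR trap.
# Without set -e the script keeps running after reporting the failure.
trap 'echo "command failed at line $LINENO" >&2' ERR
echo "step 1"
false          # simulated failure; the trap fires here
echo "step 2"
```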

Dash -x fails with Bad substitution error

I'm trying to learn how to write portable shell scripts, so I'm starting to migrate my personal utilities from bash to sh (dash on my system). However, there is an error I get in every case when I try to run a script in debugging mode with $ dash -x script.
For instance, on this script:
#!/bin/sh
echo hi
If I run it as $ dash script, I get the 'hi' string; however, if I run it as $ dash -x script, or if I add the set -x command before the echo:
#!/bin/sh
set -x
echo hi
It fails with the error:
script.sh: 3: script.sh: Bad substitution
This makes it very difficult to debug my scripts. I'm running Ubuntu 12.04 with dash 0.5.7-2ubuntu2.
Just as I finished writing my question, I realized I was using a customized PS4 (which is used in xtrace mode). My PS4 was defined as:
>>(${BASH_SOURCE}:${LINENO}): ${FUNCNAME[0]:+${FUNCNAME[0]}(): }
I temporarily changed it to PS4=">>" and everything went fine; I can now debug my scripts in dash. Hope this helps someone.
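If you want a PS4 that still shows line numbers but works in both shells, a sketch is to stick to $LINENO, which is standard, while ${BASH_SOURCE} and ${FUNCNAME[0]} are bash-only arrays that dash cannot substitute:

```shell
#!/bin/sh
# Sketch: a portable PS4 using only $LINENO. Recent dash versions as
# well as bash expand LINENO; the bash-only variables are left out.
PS4='+ line $LINENO: '
set -x
echo hi
```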