Why is Success an error for bash?

Working in bash I got an error:
user@host:~$ cd ..
bash: cd: write error: Success
It happened once, and the next time I tried to cd everything went fine. But I do not want this error to repeat, so I have 2 questions about it:
Why did bash try to write something while changing directory?
And, more intriguingly, why could Success be an error?

Why did bash try to write something while changing directory?
Bash keeps a history of every command you run, which ultimately gets recorded in ~/.bash_history. It's likely that the attempted write was related to that.
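If you want to check where your history actually goes, and whether that location can still be written to, a quick inspection (using the default locations) looks something like this:
echo "$HISTFILE"          # where bash writes the history, normally ~/.bash_history
df -h "$HOME"             # a full (or read-only) filesystem is a common cause of write errors
history -a                # append this session's history right now and watch for an error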
And, more intriguingly, why could Success be an error?
That's a display bug. Success is not an error.
If you want the developer's take on how it happens, I'm pretty confident in saying that:
bash detected an error, probably via the return code of an I/O function, and
it called the C perror() function to print an explanatory message. By the time it did so, however,
the C errno variable had been reset, if ever it had been set in the first place.
Usually such a reset of errno happens when you call another library function between calling the one that signaled the error and calling perror(). Looking at the actual error message, it is plausible that the bash implementation called sprintf() to format part of the error message, but in doing so clobbered errno.

Related

Bash: error handling in "export" instruction

I'm using bash for writing a script.
I use set -e in order to have the script exit (in error) when a line results in an error (I like this approach, which I find less error-prone). I almost always combine it with set -u to also "raise" an error when an undefined variable is read.
It has worked like a charm for a long time. But today I found a bug in my script that I could not catch with "set -eu" before, because it does not seem to raise an error as usual.
For the following, I will use the false command, which always returns an error (in my script this is another personal command, but false makes it easier for you to reproduce).
set -eu
A=$(false)
The code above exits bash with an error. Cool, this is what I expect.
BUT :
set -eu
export A=$(false)
does not raise any error; bash is totally fine with it!
It even sets the variable A to an empty string, so further reads of $A will not raise an error either!
This is really weird to me. Is there a way to get the expected behaviour, maybe another option for set?
I can do this to have it raise an error, but I would have to write it this way every time, so it is less useful for catching bugs:
set -eu
A=$(false)
export A=$A
God bash is complicated :-)
You're not getting an error because the export itself is successful. When a command substitution only supplies arguments to another command, its exit status is discarded: the status of the whole line is that of export, which returns 0, so set -e has nothing to react to. Bash only uses the status of the last command substitution when the line consists of assignments with no command name, which is why the bare A=$(false) form does fail.
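A quick way to see the difference (using false as a stand-in for any failing command):
A=$(false);        echo $?    # prints 1: a bare assignment takes the substitution's exit status
export B=$(false); echo $?    # prints 0: export succeeded, and that is all set -e looks at
# the usual workaround is to split assignment and export so the failure is not masked:
set -eu
C=$(false)                    # set -e aborts here if the command fails
export C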

What does each field in a typical sh error message mean? Ex. "sh: 1: ipconfig: not found"

I have been receiving Shell error messages, typically due to a bad command, etc., but have been unable to debug the cause of these messages due to not understanding the information being provided.
I have been looking for documentation, but have been unable to find any regarding the format of an sh error message.
Example:
If I use the following command, it will fail, due to no 'ipconfig' command being available:
$ sh -c "ipconfig"
sh: 1: ipconfig: not found
What I'd like to understand is what each 'field' in that message refers to. I assume it is:
[interpreter]: [???]: [command]: [error related to command]
I can't for the life of me determine what the number refers to, and I can't be sure if my understanding of the other fields is accurate.
Context:
I am debugging a Python2.7 pytest script used for automation testing, and there are numerous points where this script is executing shell commands. However, the output I receive is:
(32512, 'sh: 2: 2: not found')
I know the function being used to execute the shell command returns a tuple with a status code and output. I know that status code essentially means 'command not found', and the error message states the same. Another function returns a string which is used for this command, and I assume that somewhere along the way a bad argument was passed and the script is now attempting to execute what would basically be sh -c "2". I can't be sure though, as these are a lot of assumptions made from a limited understanding of this error message.
If anyone could please enlighten me as to what the fields in this error ACTUALLY mean I'd be forever grateful!!
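One way to probe what that number is, assuming a dash-style /bin/sh (the default on Debian and Ubuntu), is to hand sh a script with more than one line and see which line the error points at:
# the failing command sits on the second line of the -c script;
# a dash-style sh typically reports: sh: 2: nosuchcmd: not found
sh -c 'echo first line
nosuchcmd'
That reading would also fit the pytest output above: a command literally named 2 sitting on line 2 of whatever string ended up being passed to sh -c.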

How can I make a local Git hook run a Windows executable and wait for it to return?

I'm working in a Windows environment. I have a Git repository and am writing a custom pre-commit hook. I am much more comfortable writing a quick and dirty console application in C# than trying to figure out Perl syntax, so that's the route I'm going.
My .git/hooks/pre-commit file looks like this:
#!/bin/sh
start MyHelperApp.exe
And this works, somewhat. As you can see, I have a compiled helper application in the root of the repo directory (and it is .gitignore'd), and this does indeed launch my application successfully when I call git commit. However, it doesn't wait for the process to finish, nor does it seem to care what the return code of the process is. I assume this is because start is asynchronous and returns a 0 exit code every time.
I have reason to suspect that the start process which is getting called here is not the native Windows start command, because I tried changing it to start /wait MyHelperApp.exe but this had no effect. Also trying to call MyHelperApp.exe directly gives a "command not found" error, and so does changing start to call. I suspect that start is an emulated bash command and it's running the bash version instead of the Windows version?
Anyways, my helper app does return different exit codes depending on different conditions, so it'd be great if those could be used. (Pre-commit hooks fail if a program in the script returns any exit code besides zero.) How might I go about utilizing this?
Call the executable directly, don't use start.
Also trying to call MyHelperApp.exe directly gives a "command not found" error
If the PATH variable doesn't contain a . entry, bash won't look in the current directory to find executables. Call ./MyHelperApp.exe to make it explicit that it should be run from the current directory.
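A minimal version of the hook along those lines (assuming MyHelperApp.exe sits in the root of the working tree, which is where Git runs the hook from):
#!/bin/sh
# run the helper synchronously; the hook exits with the status of the last
# command, so a non-zero exit code from the helper aborts the commit
./MyHelperApp.exe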

Bash script - run process & send to background if good, or else

I need to start up a Golang web server and leave it running in the background from a bash script. If the script in question is syntactically correct (as it will be most of the time), this is simply a matter of issuing a
go run /path/to/index.go &
However, I have to allow for the possibility that index.go is somehow erroneous. I should explain that in Golang this can happen for something as "trivial" as importing a package that you then fail to use. In this case the go run /path/to/index.go bit will return an error message. In the terminal this would be something along the lines of
index.go:4:10: expected...
What I need to be able to do is to somehow change that command above so I can funnel any error messages into a file for examination at a later stage. I tried variants on go run /path/to/index.go >> errors.txt with the terminating & in different positions but to no avail.
I suspect that there is a bash way to do this by altering the priority of evaluation of the command via some judiciously used braces/brackets etc. However, that is way beyond my bash capabilities. I would be most obliged to anyone who might be able to help.
Update
A few minutes later... After a few more experiments I have found that this works
go run /path/to/index.go &> errors.txt &
Quite apart from the fact that I don't actually understand why it works, there remains the issue that it produces a 0-byte errors.txt file when the command runs to completion without Golang throwing up any error messages. Can someone shed light on what is going on and how it might be improved?
Taken from man bash.
Redirecting Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be redirected to the file whose name is the expansion of word.
There are two formats for redirecting standard output and standard error:
&>word
and
>&word
Of the two forms, the first is preferred. This is semantically equivalent to
>word 2>&1
Appending Standard Output and Standard Error
This construct allows both the standard output (file descriptor 1) and the standard error output (file descriptor 2) to be appended to the file whose name is the expansion of word.
The format for appending standard output and standard error is:
&>>word
This is semantically equivalent to
>>word 2>&1
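Applied to the command from the question, one possible shape (a sketch; the one-second pause is an arbitrary grace period for go run to compile and launch):
go run /path/to/index.go &>> errors.txt &   # &>> appends both stdout and stderr to errors.txt
server_pid=$!                               # PID of the background job
sleep 1                                     # give go run a moment to compile and start
if ! kill -0 "$server_pid" 2>/dev/null; then
    echo "server failed to start; see errors.txt" >&2
fi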
Narūnas K's answer covers why the &> redirection works.
The reason the file is created anyway is that the shell creates the file before it even runs the command in question.
You can see this by trying no-such-command > file.out and seeing that, even though the shell errors out because no-such-command doesn't exist, the file still gets created (using &> on that test will put the shell's error message in the file).
This is why you can't do things like sed 'pattern' file > file to edit a file in place.
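The test from the previous paragraph, spelled out (no-such-command is, as the name says, assumed not to exist):
no-such-command > file.out    # the shell prints "command not found" on the terminal...
ls -l file.out                # ...yet file.out already exists, as an empty file
no-such-command &> file.out   # with &> the shell's own error message ends up in the file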

Receiving error '__variables_definition:57: bad option: -n' when changing directory in Zsh terminal

In my zsh terminal on OSX, I receive the error __variables_definition:57: bad option: -n twice in a row whenever I use cd and when I first open the terminal. I tried Googling the error, and received no results. I'm hoping it looks familiar to someone on here. I was told to see if cd was aliased to anything, but by typing alias, it doesn't appear to be.
This doesn't seem to be causing any actual problems; it's just a slight annoyance, and I'd like to know what's causing it.
It looks like the chpwd hook is set to a script with an error.
This hook is called every time the working directory is changed. There are two ways to set this hook:
by defining a function named chpwd. To check this, run whence -c chpwd. It will either print the function body or "chpwd not found"
by defining an array with the name chpwd_functions, which contains a list of functions that are to be called. Run echo $chpwd_functions to get the list and then whence -c name for each name to get the function bodies (or just for func in $chpwd_functions; do whence -c $func; done to do it in one go).
Most likely it is the second case here, and the culprit is a function named __variables_definition. On line 57 of this function there is a faulty call to a command that does not understand the option -n. Considering the name of the surrounding function, it is probably typeset or one of its equivalents: declare, float, integer, local or readonly.
You will have to look in your zsh configuration to find where __variables_definition is defined and fix the error there.
Note: the output of whence -c name is not always entirely identical to the definition of the function, as, among other things, empty lines are removed. Since the line number in the error message refers to the original definition (including empty lines), the numbering may be off compared to the output of whence -c name.
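To track the function down, something along these lines usually works (the listed startup files are just the common ones; adjust for your setup, e.g. a plugin manager's directory):
whence -v __variables_definition     # on recent zsh versions this can also name the file the function was loaded from
grep -n '__variables_definition' ~/.zshrc ~/.zshenv ~/.zprofile ~/.zlogin 2>/dev/null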
