When I start a new bash shell, if I run the command echo $? as the first thing, I get 1. How can I run bash with the "default" exit code being 0?
Context: I am running msys2 in a terminal window in VS Code. If I start msys2, and then realize I didn't need a shell now and just type exit, bash exits with 1, causing VS Code to pop up an annoying warning.
Most likely something in your profile is failing and setting the status code to 1. Since status codes are overwritten by each process that runs, it'll probably be something towards the end.
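If you want to track down the offending command, one approach (a sketch, assuming the usual ~/.bash_profile / ~/.bashrc startup files) is to trace a login shell and look at the last few commands it runs, or simply make sure the profile ends with a command that succeeds:
# trace everything the profile runs; the last traced command before
# the prompt appears is the one that left $? set to 1
bash -l -x -c 'true' 2>&1 | tail -n 20
# crude workaround: end ~/.bashrc with a command that always succeeds
echo 'true' >> ~/.bashrc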
Related
I'm pretty new to bash and want to run a second bash script in a second terminal,
but for some reason I'm not able to do this.
I'm using gnome-terminal and I've already set my preference to "hold terminal open".
If I just type in the terminal:
gnome-terminal --window-with-profile=shit -e "./Test.sh"
I get an error that says:
The child process exited normally with status 0 (and sometimes 2)
The test bash script is one line that says:
echo "hoi"
If anyone has the answer, please let me know.
Thanks in advance
"The child process exited normally with status 0" is not an error message; it indicates that everything went well in your script.
When "hold terminal open" is selected, this message appears as a pop-up at the top of the new window when the command exits. Your output is simply too short for you to see it under the pop-up message. If you add more lines to your output, you should be able to see something.
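For example, a Test.sh along these lines (just a sketch with a few extra lines of output) makes the script's output easy to spot below the pop-up:
#!/usr/bin/env bash
# print several lines so the output is visible under the pop-up banner
for i in 1 2 3 4 5; do
    echo "hoi $i"
done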
I have a bash script that runs a list of small programs, mostly written in C and Python. Since the programs themselves are NOT bug free and might crash or run into an infinite loop, I run each program in a subshell in the bash script so that it won't break the main shell. Here is what it looks like:
#!/usr/bin/env bash
set -e
# py_program_list and c_program_list are assumed to be defined earlier
for py_p in "${py_program_list[@]}"; do
  (python "$py_p") || echo "terminated!"
done
for c_p in "${c_program_list[@]}"; do
  ("$c_p") || echo "terminated!"
done
The problem is that the bash script isn't affected by any error in the Python programs, which is what I expected. However, the bash script exits immediately if any C program exits with an error.
UPDATE:
I am using bash 3.2 on OS X 10.9.5.
UPDATE 2:
Updated the question to make it clearer, sorry for the confusion. The problem I have is with the C programs; the Python part confirms that an error in a subshell won't affect the main shell, but the C programs break that rule.
The Python scripts are fine: no matter whether I use Ctrl + C or they crash, they won't stop the main shell from running, which is what I expect. But the C programs don't behave that way; typing Ctrl + C while a C program is running will exit the bash script.
Python handles the interrupt signal itself (outputting Traceback …KeyboardInterrupt) and then terminates normally, returning the exit status 1 to bash.
Your C programs evidently don't handle the signal, so the default action is taken, to terminate the process; bash is informed that the program was terminated by signal SIGINT.
Now bash behaves differently depending on the kind of the child program's termination (normal or signaled): In the first case, it continues execution with || echo "terminated!", in the second case, it terminates itself, as you observed.
You can change that behavior by trapping the signal in your script, e.g. by inserting
trap "echo interrupted" INT
somewhere before the for c_p loop.
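Putting it together, the relevant part of the script might look like this (a sketch based on the code above; the trap message is arbitrary):
# catch SIGINT in the main script so Ctrl+C on a C program doesn't abort the whole loop
trap "echo interrupted" INT
for c_p in "${c_program_list[@]}"; do
  ("$c_p") || echo "terminated!"
done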
Everything depends on the Python programs' exit status. Maybe they return the same value regardless of whether their execution was successful or not. So basically, you cannot rely on their exit status.
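If in doubt, you can check what a given script actually returns straight from the shell ($? holds the exit status of the last command; some_script.py is just a placeholder):
python some_script.py
echo "exit status: $?"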
I'm trying to write a script in Fish that runs a Make recipe and then executes all of the resultant binaries. The problem I'm having is that I would like to have the script exit with an error code if the make command encounters an error. Whenever I try to capture Make's return value, I end up with its output log instead.
For example:
if test (make allUnitTests) -eq 0
    echo "success"
else
    echo "fail"
end
returns an error because test is seeing the build output, not the exit status.
I wrote the script so that I could easily make Jenkins run all my unit tests whenever I trigger a build. Since I haven't been able to get the above section of the script working correctly, I've instead instructed Jenkins to run the make command as a separate command, which does exactly what I want: halting the entire build process without executing any binaries if anything fails to compile. Thus, at this point my question is more of an academic exercise, but I would like to add building the unit test binaries into the script (and have it cleanly terminate on a build error) for the benefit of any humans who might check out the code and would like to run the unit tests.
I played a little with something like:
if test (count (make allUnitTests | grep "Stop")) -eq 0
but this has two problems:
I'm apparently piping stdout when I need to pipe stderr. (Come to think of it, if I could just check to see if anything was written to stderr, then I wouldn't need grep at all.)
Grep is swallowing all the log data piped to it, which I really want to be visible on the console.
You are misunderstanding the parentheses - these run a command substitution. What this does is capture the output of the process running in the substitution, which it will then use as arguments (separated by newlines by default) to the process outside.
This means your test will receive the full output of make.
What you instead want to do is just run if make allUnitTests without any parens, since you are just interested in the return value.
If you would like to do something between running make and checking its return value, the "$status" variable always contains the return value of the last command, so you can save that:
make allUnitTests
set -l makestatus $status
# Do something else
if test $makestatus -eq 0
    # Do the if-thing
else
    # Do the else-thing
end
I have a shell script that runs a bunch of commands (on OS X 10.7) as part of a build step for Xcode. The script removes a bunch of files and copies a bunch of files.
The problem I have right now is that if the cp command fails, the build still 'succeeds' according to Xcode, presumably because the script still returns with an exit status of 0. How can I capture the result of the cp? I looked up the man page and it doesn't seem to return a value.
cp will return an error code (non-zero) on failure, but your script probably ignores it and proceeds to the next command.
Unless you explicitly check the return code of each command in a multi-step script, then the shell will carry on.
See Aborting a shell script if any command returns a non-zero value? for how to exit a script on any error.
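For reference, a minimal sketch of both approaches for the build-step script (the paths and the DEST_DIR variable below are placeholders):
#!/bin/sh
# Option 1: abort the whole script as soon as any command fails
set -e
# Option 2: check an individual cp explicitly and fail the build yourself
if ! cp build/MyApp.app "$DEST_DIR"; then
    echo "copy failed" >&2
    exit 1
fi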
I want to run an executable from a ruby rake script, say foo.exe
I want the STDOUT and STDERR outputs from foo.exe to be written directly to the console I'm running the rake task from.
When the process completes, I want to capture the exit code into a variable. How do I achieve this?
I've been playing with backticks, Process.spawn, and system, but I can't get all the behaviour I want, only parts of it.
Update: I'm on Windows, in a standard command prompt, not cygwin
system gives you the STDOUT behaviour you want. It also returns true for a zero exit code, which can be useful.
$? is populated with information about the last executed child process, so you can check it for the exit status:
system 'foo.exe'
$?.exitstatus
I've used a combination of these things in Runner.execute_command for an example.
Backticks will capture stdout into the resulting string.
foo.exe suggests you are running Windows - do you have anything like Cygwin installed? If you run your script within a unixy shell you can do this:
result = `foo.exe 2>&1`
status = $?.exitstatus
Quick googling says this should also work in a native Windows shell, but I can't test this assumption.