Stop execution of PL/SQL when error without using raiserror - oracle

See this code:
test.sh
#!/bin/bash
echo "hello!"
klsdslkdsd
echo "bye"
When I run it I get:
hello!
/tmp/test.sh: line 3: klsdslkdsd command not found
bye
Although there is an error, the execution goes on (it is a script, after all).
Now, if I do:
testWithStop.sh
#!/bin/bash
set -e
echo "hello!"
klsdslkdsd
echo "bye"
I get:
hello!
/tmp/test.sh: line 5: klsdslkdsd command not found
The execution stops because every executed line returns an exit status; if the status is != 0, the script aborts.
I would like to replicate this behavior in a set of (Oracle) PL/SQL (and plain SQL) scripts.
At the moment, even when there is an error, Oracle handles it gracefully and doesn't stop the execution of the SQL statements. I would like the program to abort when an error (syntactic or semantic) is found, but without touching the logic of the programs (otherwise I would have to change several hundred PL/SQL scripts, which is not possible).
I know I could do it using raiserror, or by creating a macro anonymous block that encapsulates portions of my code so I could catch the exceptions. As I said, that may not be feasible with several hundred (possibly thousands of) isolated (and logically complex) PL/SQL scripts.
Is there an equivalent to set -e in SQL or an elegant way to do it?
EDIT:
In particular, I'm executing (and calling) the PL/SQL scripts via a shell script (bash).
For example, the sqlbash.sh file:
#!/bin/bash
sqlplus ..... <<EOF
select * from table;
sdsdsfdsf <--- intentional error!
select * from table2;
EOF
If I call the script, I get the syntax error, but the execution goes on and doesn't abort. I would like it to abort, for example, mimicking the behavior of an exit 1.
Something like:
#!/bin/bash
sqlplus ..... <<EOF
select * from table;
sdsdsfdsf <--- intentional error!
exit 1 <--- it will abort and the script WILL FAIL at this point
select * from table2;
EOF
I'm working with a scheduler, so it is necessary for me to see that the script failed, not that it executed and gave me warnings.

You're looking for the WHENEVER SQLERROR and/or WHENEVER OSERROR SQL*Plus commands. They allow you, among other things, to exit if something bad happens in the middle of the script.
Put something like
whenever oserror exit 1
whenever sqlerror exit 2
at the top of your scripts, and you'll be able to tell from your shell script that something failed.
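Applied to the sqlbash.sh example from the question, a minimal sketch might look like this (the connection string and table names are placeholders, and this is untested without an Oracle instance):

```shell
#!/bin/bash
# Sketch: the WHENEVER directives must come before the statements they guard.
# "user/pass@db" is a placeholder connection string.
sqlplus -s user/pass@db <<EOF
whenever oserror exit 1
whenever sqlerror exit 2
select * from table1;
select * from table2;
EOF

rc=$?                          # sqlplus propagates the exit code chosen above
if [ "$rc" -ne 0 ]; then
    echo "SQL failed with exit code $rc" >&2
    exit "$rc"                 # so the scheduler sees a real failure
fi
```

With whenever sqlerror exit 2, the first failing statement ends the sqlplus session immediately, much like set -e ends a bash script.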

Related

SQLplus Error handling for nested scripts

I'll try to explain the best I can.
I want to run a script through sqlplus. I know that I can use @ or @@ or START.
I also know that to stop and exit in case of errors, I can use
WHENEVER SQLERROR EXIT SQL.SQLCODE ROLLBACK
What I need, and don't know how to do, is this: is it possible to break the execution of a nested script in case of error, but continue in the main script? CONTINUE in the WHENEVER command is not what I want; the nested script must stop processing in case of error.
It's for CI/CD automation script I'm writing.
like this:
SET ECHO OFF
WHENEVER SQLERROR EXIT SQL.SQLCODE ROLLBACK
START SCRIPT_WITH_ERRO.sql
PROMPT "This line should be prompted even if there was an error before"
SHOW ERRORS
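One possible workaround (my own sketch, not guaranteed): since WHENEVER SQLERROR EXIT terminates the whole SQL*Plus session, even when the error happens inside a script run with START, you can launch each nested script in its own SQL*Plus process from a driving shell script, so only the child session dies:

```shell
#!/bin/bash
# Hypothetical driver: every nested script gets its own sqlplus session,
# so "whenever sqlerror exit" kills only that session, not the whole run.
# CONN is a placeholder connection string.
CONN="user/pass@db"

run_script() {
    sqlplus -s "$CONN" <<EOF
whenever sqlerror exit sql.sqlcode rollback
start $1
EOF
}

run_script SCRIPT_WITH_ERRO.sql
echo "This line is reached even if the nested script failed (exit code $?)"
```

The trade-off is that each nested script runs in its own session, so it cannot share session state or an open transaction with the main script.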

sqlplus not properly ended script

I have a PL/SQL script which is not properly terminated:
create or replace package mypackage
... (no semicolon/forward slash at the end)
The script is executed with sqlplus (12.1) in Windows PowerShell:
sqlplus user/pass@host @my_not_properly_ended_file.pks
I would expect sqlplus to terminate with exit code 1 and an error message, but instead it prompts for input.
How can I get an error message and exit code in this situation?
Edit: the solution should also work with DML statements that are not terminated with a semicolon.
You can use shell redirection instead of @:
sqlplus user/pass@host < my_not_properly_ended_file.pks
This will also prevent it getting 'stuck' if the script doesn't end with an exit command.
However, it won't return an error code to the shell in either case. As far as SQL*Plus is concerned you put the incomplete statement into its buffer but never attempted to execute it (as there was no slash); and as it didn't run, it didn't error. So setting whenever sqlerror or whatever won't make any difference either.
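If you do need a non-zero exit code, one untested idea along the same lines: append a slash (which tells SQL*Plus to run whatever is in the buffer) and an exit before piping the file in, so the incomplete statement is actually executed and can trip whenever sqlerror. Note that PL/SQL compilation errors only produce a warning, so this mainly helps with the DML case mentioned in the edit:

```shell
# Sketch: prepend error handling and force execution of the buffer.
# Connection string and file name are taken from the question.
{
    echo "whenever sqlerror exit 1"
    cat my_not_properly_ended_file.pks
    printf '\n/\nexit\n'       # "/" runs the buffered statement, then quit
} | sqlplus -s user/pass@host
echo "sqlplus exit code: $?"
```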

parallel execution in shell scripting hangs

My requirement is to run multiple shell scripts at a time.
After searching on Google, I concluded that I can use "&" at the end of the command when triggering the run, like:
sh file.sh &
the thing is I have for loop which generates the values and gives runtime parameters for the shell script:
sample code:
declare -a arr=("1" "2")
for ((i=0;i<${#arr[@]};++i));
do
sh fileto_run.sh ${arr[i]}
done
this successfully triggers fileto_run.sh in parallel, but then it just hangs. Imagine I have an echo statement in the script; this is how the session looks when it hangs:
-bash-x.x$ 1
2
Until I press Ctrl+C, the execution won't exit.
I thought of using a break statement, but that breaks the loop.
Am I doing anything wrong here?
Please correct me.
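For reference, the usual fix (a sketch with a stand-in for fileto_run.sh): put & on the command inside the loop to background each job, then call wait so the parent script returns to the prompt only after every child has finished:

```shell
#!/bin/bash
# worker stands in for the real fileto_run.sh
worker() {
    echo "processing $1"
}

declare -a arr=("1" "2")
for ((i = 0; i < ${#arr[@]}; ++i)); do
    worker "${arr[i]}" &       # run each job in the background
done

wait                           # block until all background jobs are done
echo "all done"
```

wait is what keeps the interleaved output and the returning prompt from looking like a hang; the loop itself is unchanged apart from the trailing &.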

Catch outcome of jenkins job in shell command variable

I have a job in Jenkins that uses two commands in an Execute Shell Command build step.
The first runs a test job, the second creates a report out of it. It looks a little bit like this:
node_modules/gulp/bin/gulp.js run-cucumber-tests
node_modules/gulp/bin/gulp.js create-cucumber-report
If there are test failures, the first command will exit with code 1. This means the second command won't even be fired. But even though the first command failed, I still want the report to be created!
What I've tried is to do this:
node_modules/gulp/bin/gulp.js run-cucumber-tests || true
node_modules/gulp/bin/gulp.js create-cucumber-report
Now the report does get created, but the build is marked as succeeded. That's not what I want: I want the Jenkins build job to eventually fail, but with the reports created.
I was wondering, maybe I can catch the outcome of the first command in a variable, continue with the second and then throw it after the second command.
You can use set +e to let the script continue even if an error occurs, then use $? to capture the result of the last command. With exit you can force the result code of the script to the previously captured value.
set +e                          # don't abort the shell step on the first failure
node_modules/gulp/bin/gulp.js run-cucumber-tests
RESULT=$?                       # capture the test outcome
node_modules/gulp/bin/gulp.js create-cucumber-report
exit $RESULT                    # report the saved outcome to Jenkins

How to get output of sqlite command in shell script

I am trying to roll back if something goes wrong during the execution of my SQL CRUD operations, or if the commit fails, using shell scripting.
I have test2.sh and test.sh
test.sh:
#!/bin/sh
sqlite3 dB.sqlite << EOF
begin;
select * from Table1;
and test2.sh
#!/bin/sh
if echo `./test.sh` | grep -q "SQL error"; then
    rollback;
else
    err=commit;
    if echo $err | grep -q "error"; then
        rollback;
    fi
fi
There is no table called Table1, and I expected to get the SQL error output of test.sh and roll back.
But it gives this error: rollback: command not found.
How can I catch the error and roll back? Or is the approach I'm following right?
Your test2.sh script fails because the shell is trying to execute a program called rollback, hence the "command not found". You want the sqlite instruction rollback, which means you'd at least have to do this:
sqlite3 dB.sqlite << EOF
rollback;
EOF
But I don't think this will work. As I see it, the rollback has to occur within the same sqlite session as the one started by test.sh to have any effect. Your test.sh script already makes sqlite plow through the commands until it reaches EOF; by the time you grep the errors in test2.sh, it's already too late. There's no way you can roll back from the test2.sh script, which sits outside that sqlite session.
So, to complete my answer, I recommend stepping away from shell scripting and using a programming language, which will let you open a sqlite session, execute commands, and control the transactions. Quick example in Python:
import sqlite3

conn = sqlite3.connect('dB.sqlite')
c = conn.cursor()
try:
    c.execute('INSERT INTO Table1 VALUES(1)')
    conn.commit()
except sqlite3.Error:
    # any sqlite error undoes the whole transaction
    conn.rollback()
finally:
    c.close()
    conn.close()
