Is static SQL to be preferred over dynamic SQL in PostgreSQL stored procedures? - performance

I am not sure whether, in the case of stored procedures, PostgreSQL treats static SQL any differently from a query submitted as a quoted string.
When I create a stored procedure in PostgreSQL using static SQL, there seems to be no validation of table names, columns, or column types; the problems, if any, are only listed when I run the procedure.
open ref_cursor_variable for
select usr_name from usres_master;
-- This is a typing mistake. The table name should be users_master. But the stored procedure is created and the error is thrown only when I run the procedure.
When I run the procedure I (naturally) get some error like :
table usres_master - invalid table name
The above is a trivial example. The real procedures we use at work combine several tables and run to at least a few hundred lines. In a PostgreSQL stored procedure, is there then no advantage to using static SQL over dynamic SQL, i.e. something like open ref_cursor_variable for EXECUTE select_query_string_variable?

Static SQL should be preferred almost always - dynamic SQL should be used only when it is necessary (see the sketch below):
for performance reasons (dynamic SQL doesn't reuse execution plans, and a one-shot plan can sometimes be better, or even necessary), or
when it can reduce a lot of code.
In every other case use static SQL. Benefits:
readability
reuse of execution plans
it is safe against SQL injection by default
static checking is available
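A minimal sketch of the difference, assuming an illustrative users_master table and a made-up function name; the static form is a normal statement whose plan can be cached and which tools can check, while the dynamic form is just a string that is planned on every call:
create or replace function get_user_names(p_dynamic boolean)
returns refcursor language plpgsql as $$
declare
  ref_cursor_variable refcursor := 'user_names_cur';
begin
  if not p_dynamic then
    -- static SQL: part of the function body, plan can be cached and reused
    open ref_cursor_variable for
      select usr_name from users_master;
  else
    -- dynamic SQL: just a string, re-planned on each call, no static check
    open ref_cursor_variable for execute
      'select usr_name from users_master';
  end if;
  return ref_cursor_variable;
end;
$$;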

The source of a function is just a string to Postgres. The main reason for this is that Postgres (unlike other DBMSs) supports many, even installable, languages for functions and procedures. As the Postgres core can't possibly know the syntax of all languages, it cannot validate the "inner" part of a function. To my knowledge the "language API" does not contain any "validate" method (in theory this would probably be possible, though).
If you want to statically validate your PL/pgSQL functions (and, since Postgres 11, procedures) you could use e.g. https://github.com/okbob/plpgsql_check/
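For example, assuming the extension is installed and a function such as the hypothetical get_user_names above exists, the check reports problems like a misspelled table name without executing the function:
create extension if not exists plpgsql_check;
select * from plpgsql_check_function('get_user_names(boolean)');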

Related

PLSQL "give" script but not allow users to read

I'm new to PL/SQL and my question is: is it possible to "compile" a script in SQL*Plus or SQL Developer and give the file to another person, so that they can execute the code but not read it?
It sounds like you are talking about the Oracle wrap utility (a separate command-line application that is part of your Oracle client install and not a part of SQL Developer) or the dbms_ddl.wrap function, which you could invoke from SQL Developer. These create obfuscated statements that will create a stored procedure (or package or function) that behaves normally but whose text in the data dictionary is not human readable. The wrap utility doesn't provide perfect security: there are unwrapping tools and presentations on the internet that would let an attacker unwrap the code you hand them. And you can often figure out what the wrapped code is really doing by looking at other data dictionary views (v$sql will show the unwrapped SQL statements that are executed, for example) or by tracing a session.
It also depends on the definition of the word give. You can store PL/SQL code in the database, give users the right to execute it and to see the source code of the package header, but not to see the source of the package body. But of course DBAs can read it, and they can also trace it (even if it is wrapped).
Also note that PL/SQL packages are wrapped in a different way than PL/SQL procedures. As of 11g, packages are wrapped using a simple one-to-one byte substitution, while for PL/SQL procedures what is stored is obfuscated bytecode for the DIANA virtual machine. AFAIK there is no publicly available unwrapper for PL/SQL procedures; they are much harder to reverse engineer.
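As a rough sketch of the dbms_ddl route (the procedure name and body here are purely illustrative), you can create the wrapped object directly and then hand over only the wrapped source:
begin
  dbms_ddl.create_wrapped(
    'create or replace procedure secret_proc as begin null; end;');
end;
/
-- the data dictionary now shows only the obfuscated text
select text from user_source where name = 'SECRET_PROC' order by line;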

Same stored procedure acts differently on two/(three) different IDEs

I just created a stored procedure in an MS SQL DB using TOAD.
What it does is accept an ID that some records are associated with, then insert those records into a table.
The next part of the stored procedure uses the ID input to search the table where the items were inserted and returns them as the result set, just to confirm that the information was inserted.
In TOAD, it does what is expected: it inserts the data and returns the information using just the stored procedure.
In Oracle SQL Developer, however, it does the insert and ends at that. It seems not to execute the second part of the stored procedure, which is a select statement.
I have a feeling this is because of the JDBC adapter. The other reason I'm asking is that I'm using the reporting tool Pentaho Report Designer, and it would make things much easier if I could do the two things at the same time. Pentaho Report Designer also uses JDBC adapters; maybe that's not a coincidence?
But if there are other things I can tweak, I'd really appreciate any suggestions.
This is a guess, but worth considering...
There are things called "batches", which are sets of SQL statements that are all sent to the server at once and executed by the server as one set of statements, within a single server-side session. Sending a set of SQL statements to the server as a batch will often give different results than sending them one at a time, where each statement is executed in its own session.
I haven't used TOAD (or Oracle) in a while, but as I recall, it dealt with batches differently than the other IDE I used. If the second statement in your set relies on being in the same session as the first, and in one IDE it runs in a separate session, then this might explain what is happening.
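To make the batch idea concrete, here is a rough T-SQL-style sketch of the kind of procedure described in the question (all names are illustrative); whether the rows from the second statement show up depends on how the client handles the extra result set coming back from that single batch:
create procedure insert_and_return @id int
as
begin
    -- first statement: copy the records associated with the ID
    insert into order_items (order_id, item)
    select order_id, item from staging_items where order_id = @id;

    -- second statement in the same batch/session: return what was just inserted
    select order_id, item from order_items where order_id = @id;
end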

Use Parameter with no need of creating stored procedures

Is it possible to use or add parameters on a simple query without the need of creating stored procedures or function? Are Bind Variables possible without creating a stored procedures?
Where are you running the code from? If you're running it from a language like Java or VB, you would use a stored procedure or a prepared statement.
If you are using the SQL*Plus terminal or another SQL UI:
SQL> variable deptno number
SQL> exec :deptno := 10
SQL> select * from emp where deptno = :deptno;
From a high-level language like Java or VB, use stored procedures; the following is from the article you linked to, so I'm not sure why you are asking this:
In fact, the answer to this is actually quite simple. When you put
together an SQL statement using Java, or VB, or whatever, you usually
use an API for accessing the database; ADO in the case of VB, JDBC in
the case of Java. All of these APIs have built-in support for bind
variables, and it's just a case of using this support rather than just
concatenating a string yourself and submitting it to the database.
For example, Java has PreparedStatement, which allows the use of bind
variables, and Statement, which uses the string concatenation
approach. If you use the method that supports bind variables, the API
itself passes the bind variable value to Oracle at runtime, and you
just submit your SQL statement as normal. There's no need to
separately pass the bind variable value to Oracle, and actually no
additional work on your part. Support for bind variables isn't just
limited to Oracle - it's common to other RDBMS platforms such as
Microsoft SQL Server, so there's no excuse for not using them just
because they might be an Oracle-only feature.
Not sure why you are asking as this information is there on the site http://www.akadia.com/services/ora_bind_variables.html
Well, assuming you will perform queries from an external programming language, prepared statements will do the job.
See http://en.wikipedia.org/wiki/Prepared_statement

Writing an isnull() wrapper for nvl()

We are moving our database from Oracle to SQL Server. My queries make extensive use of Oracle's nvl function. In SQL Server, the function to use is isnull(). If possible, I'd like to start getting my queries ready by changing them to use isnull(), while still on Oracle. My idea is to create a wrapper function isnull() in my schema and change my queries to use that function instead. That way when we switch database platforms, my queries are already using the new function.
Is there a way I can create a wrapper function in Oracle called isnull() that accepts and returns any datatype? Or do I just have to have multiple isnull() declarations, overloaded for all the expected data types?
Another approach might be to use COALESCE instead of NVL, since the syntax for COALESCE is the same in both Oracle and SQL Server. Still, the goal (if it is your goal) of having identical SQL that works efficiently (or even works at all) in both Oracle and SQL Server may not be realistic.
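For example, this hypothetical query runs unchanged on both platforms:
select coalesce(ename, 'unknown') from emp;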
The only way in PL/SQL to have multiple overloads for the same function would be to create them in a package. You can create a package that includes a number of different overloaded IsNull functions that accept and return different data types and use those in your queries. Of course, that does mean that you will have to include the package name in your code. It's potentially easy enough to remove the package name when you move to SQL Server but it won't be an exact migration.
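A minimal sketch of that approach, with an illustrative package name and a small set of overloads (each one simply delegates to nvl):
create or replace package compat as
  function isnull(p_val varchar2, p_default varchar2) return varchar2;
  function isnull(p_val number,   p_default number)   return number;
  function isnull(p_val date,     p_default date)     return date;
end compat;
/
create or replace package body compat as
  function isnull(p_val varchar2, p_default varchar2) return varchar2 is
  begin
    return nvl(p_val, p_default);
  end;
  function isnull(p_val number, p_default number) return number is
  begin
    return nvl(p_val, p_default);
  end;
  function isnull(p_val date, p_default date) return date is
  begin
    return nvl(p_val, p_default);
  end;
end compat;
/
-- usage: select compat.isnull(ename, 'unknown') from emp;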

Oracle sql types over dblink

I have two schemas: A and B (Oracle 9). On A there is a dblink to B. On B there is a package that I call from A. Procedures in the B package can return a varying number of results, and I think returning a collection is the better way to handle this.
create type B.tr_rad as object (
name varchar2(64)
,code number
,vendor number
,val varchar2(255)
,num number
);
create type B.tt_rad as varray(256) of B.tr_rad;
But from schema A I cannot use the tt_rad type, because using SQL types over a dblink is not supported. DBMS_SQL does not support cursors. Creating types with the same OID is impossible.
I am thinking of using temporary tables. But firstly, it is not that good (after the remote function returns its value, the calling side must select the collection from the remote table). And I fear a slowdown from working with temporary tables.
Does anyone know an alternative way to do this interaction?
I've had similar problems in the past. Then I came to the conclusion that fundamentally Oracle's DB links are "broken" for anything but simple SQL types (especially UDTs; CLOBs may have problems, and XMLType may as well). If you can get the OID solution working, then good luck to you.
The solution I resorted to was to use a Java Stored procedure, instead of the DB Link.
Characteristics of the Java Stored Procedure:
It can return a "rich set of types", just about all of the complex types (UDTs, tables/arrays/varrays); see the Oracle online documentation for details. Oracle does a much better job of marshalling complex (or rich) types from Java than over a DB link.
Stored Java can acquire the "default connection" (it runs in the same session as the SQL connection to the DB, so there are no authentication issues).
The stored Java calls the PL/SQL proc on the remote DB, and the Java JDBC layer does the marshalling from the remote DB.
The stored Java packages up the result and returns it to the SQL or PL/SQL layer.
It's a bit of work, but if you know a bit of Java, you should be able to "cut and paste" a solution together from the Oracle documentation and samples.
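For reference, such a Java stored procedure is exposed to SQL and PL/SQL through a call spec along these lines (the class, method and return-type mapping are illustrative):
create or replace function fetch_remote_rad return tt_rad
as language java
name 'RemoteRadFetcher.fetchRad() return oracle.sql.ARRAY';
/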
I hope this helps.
See this existing discussion: referencing oracle user defined types over dblink
An alternative interaction is to have one database with schemas A and B instead of two databases with a database link.
My solution:
On side B I create a temporary table shaped like the collection record. On side A I have a DBMS_SQL wrapper that calls the procedure over the dblink. This procedure writes the result collection into the temporary table. After the remote procedure completes successfully, I select the results from the remote temporary table and transform them into a local collection type (see the sketch below).
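Roughly, and with all object names illustrative (the dblink is assumed to be called b_link, a_tr_rad/a_tt_rad are local copies of B's types, and the remote call is shown directly rather than through the DBMS_SQL wrapper):
-- on B: a temporary table shaped like the collection record
create global temporary table tmp_rad (
  name   varchar2(64),
  code   number,
  vendor number,
  val    varchar2(255),
  num    number
) on commit preserve rows;

-- on A: call the remote procedure, then pull the rows back into a local collection
declare
  l_result a_tt_rad;
begin
  b_pkg.fill_tmp_rad@b_link(p_id => 1);
  select a_tr_rad(name, code, vendor, val, num)
    bulk collect into l_result
    from tmp_rad@b_link;
end;
/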
Limitations:
1. the need to keep the objects on both sides permanently synchronized.
2. the A-side procedure (the one that calls the remote procedure) cannot be used in a SQL query.
3. the overall complexity of use.
