load SQL statements from a file using clojure.java.jdbc - jdbc

The REST call sends the branchId and emplId to this exec-sql-file method, and I pass them along as parameters. I am not able to execute the SQL statement when I pass branch_id = #branchid and empl_id = #emplid, but when I hardcode branch_id = 'BR101' and empl_id = 123456 it works. Any suggestions on how to get the branchid and emplid into my some-statements.sql?
(defn exec-sql-file
  [branchid emplid]
  (sql/with-db-connection [conn (db-conn)]
    (sql/db-do-prepared conn
      [branchid emplid (slurp (resource "sql/some-statements.sql"))])))
some-statements.sql contains this query:
DELETE from customer where branch_id = #branchid and empl_id = #emplid;
I am executing this from the REPL as
(exec-sql-file "BR101" 123456)
I grabbed the code snippet from the post below:
Is it possible to batch load SQL statements from a file using clojure.java.jdbc?

There is no simple way to do this, as your approach requires providing parameters to multiple SQL statements in one run. Another issue is that Java's PreparedStatement (used under the hood by clojure.java.jdbc) doesn't support named parameters, so even if multiple SQL statements could be run as a single prepared statement, the parameters would have to be supplied positionally for every placeholder (?).
I would suggest the following solutions:
use multiple prepared statements (separate clojure.java.jdbc/execute! calls), one per SQL statement, wrapped in a single transaction (each SQL statement could be read from its own file; see the sketch after this list). You could also use a helper library like YeSQL to load your SQL statements from external files and expose them as ordinary Clojure functions you can call. This is simple, but if the number of statements you want to execute changes, you need to change your code
create a stored procedure and call it from Clojure, providing the parameters - this defines an interface for the DB logic on the DB side. As long as the stored procedure's interface doesn't change, you can modify its implementation without touching your Clojure code or redeploying
implement your own logic for interpolating named parameters into your "multistatement" SQL file. The issue is escaping the parameter values properly so your code is not vulnerable to SQL injection. I would discourage this solution.
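Since clojure.java.jdbc sits on top of JDBC's PreparedStatement, here is the first option sketched at the plain-JDBC level rather than in Clojure, just to show the positional ? placeholders and the single transaction; the method name, connection handling, and hardcoded values are illustrative, not part of the original answer.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch (plain JDBC, not Clojure): one PreparedStatement per SQL statement,
// positional ? placeholders, everything inside a single transaction.
// clojure.java.jdbc/execute! issues essentially these calls under the hood.
static void deleteCustomer(Connection conn, String branchId, int emplId) throws SQLException {
    conn.setAutoCommit(false);
    try (PreparedStatement del = conn.prepareStatement(
            "DELETE FROM customer WHERE branch_id = ? AND empl_id = ?")) {
        del.setString(1, branchId);
        del.setInt(2, emplId);
        del.executeUpdate();
        // ...further statements read from other files would each get their own PreparedStatement...
        conn.commit();
    } catch (SQLException e) {
        conn.rollback();
        throw e;
    }
}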

Related

CREATE TABLE statement in Oracle with the existence check

This question is inspired by this.
As stated, I don't want a PL/SQL solution. I want one or two SQL statements that will check for the table's existence and, if it does not exist, create it.
Such statement(s) will be plugged into a C++ application (not a script), so I want a plain SQL solution. If no such solution exists (please say so), I'd like a simple string I can plug into my C++ code and execute using either SQLExecute() or the native Oracle client API.
Googling for a solution, I only get results that apply to a shell script or a stored procedure. As I explained here and in the previous question, my situation is different - I am working in C++ and want a solution appropriate to that.
There is no single SQL statement that will create a table only if it does not exist in Oracle 11g.
It is not obvious to me why you're objecting to a PL/SQL based solution. If you're using raw ODBC calls in C++, you can pass a PL/SQL block to SQLPrepare just as you would pass a plain SQL statement. Given that PL/SQL blocks work almost exactly like a pure SQL statement, it would be unusual to categorically reject a PL/SQL based solution.
If you are going to categorically reject PL/SQL, you can certainly take the logic from any of the PL/SQL based solutions and implement it in a couple of SQL statements executed from your application. For example, you can query dba_tables, all_tables, or user_tables (depending on your privileges, whether you are creating tables in other schemas, etc.) to determine whether the table exists, and then conditionally execute your DDL:
select owner, table_name
from dba_tables
where owner = <<schema that will own the table>>
and table_name = <<name of the table>>
If that returns no rows you can then execute your DDL.
Of course, you can also just execute your DDL statement and catch the "ORA-00955: name is already used by an existing object" error in C++.
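The question is about C++ with SQLExecute()/OCI and that code isn't shown here, so purely as an illustration of the flow, here is the same check-then-create logic (with the ORA-00955 fallback) sketched in JDBC; the table name and DDL are placeholders, and the two statements map one-to-one onto SQLExecute() calls.

import java.sql.*;

// Illustrative only: check user_tables, create the table if it is missing,
// and treat ORA-00955 (vendor error code 955) as "already exists" if we lose a race.
static void createTableIfMissing(Connection conn) throws SQLException {
    try (PreparedStatement ps = conn.prepareStatement(
            "select count(*) from user_tables where table_name = ?")) {
        ps.setString(1, "MY_TABLE");               // placeholder table name
        try (ResultSet rs = ps.executeQuery()) {
            rs.next();
            if (rs.getInt(1) > 0) {
                return;                            // table already exists
            }
        }
    }
    try (Statement ddl = conn.createStatement()) {
        ddl.executeUpdate("create table MY_TABLE (id number primary key)");
    } catch (SQLException e) {
        if (e.getErrorCode() != 955) {             // 955 = name already used
            throw e;
        }
    }
}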

Is static sql to be preferred over dynamic sql in postgresql stored procedures?

I am not sure whether, in the case of stored procedures, PostgreSQL treats static SQL any differently from a query submitted as a quoted string.
When I create a stored procedure in PostgreSQL using static SQL, there seems to be no validation of the table names, columns, or column types; the problems, if any, are only reported when I run the procedure.
open ref_cursor_variable for
select usr_name from usres_master;
-- This is a typing mistake. The table name should be users_master. But the stored procedure is created and the error is thrown only when I run the procedure.
When I run the procedure I (naturally) get an error like:
table usres_master - invalid table name
The above is a trivial example. The real procedures we use at work combine several tables and run to at least a few hundred lines. In PostgreSQL stored procedures, is there then no advantage to using static SQL over dynamic SQL, i.e. something like open ref_cursor_variable for EXECUTE select_query_string_variable?
Static SQL should be preferred almost all of the time - dynamic SQL should be used only when it is necessary:
for performance reasons (dynamic SQL doesn't reuse execution plans, and a one-shot plan can sometimes be better, or even necessary)
when it can significantly reduce the amount of code
In all other cases use static SQL. Benefits:
readability
reuse of execution plans
it is safe against SQL injection by default
static check is available
The source of a function is just a string to Postgres. The main reason for this is that Postgres (unlike other DBMS) supports many languages for functions and procedures, including installable ones. As the Postgres core can't possibly know the syntax of all those languages, it cannot validate the "inner" part of a function. To my knowledge the "language API" does not contain any "validate" method (in theory this would probably be possible, though).
If you want to statically validate your PL/pgSQL functions (and procedures since Postgres 11) you could use e.g. https://github.com/okbob/plpgsql_check/

Batch update using Spring

I am trying to run a batch of update queries. Each update query is different, but they all run on the same table and the WHERE clause is the same.
For example :
TABLE : Column A,B,C,D,ID
update A where ID=1
update B,C where ID=1
update D,B where ID=1 and so on ... ( all the combinations of A,B,C,D)
I have investigated Spring JDBC (JdbcTemplate and NamedParameterJdbcTemplate) and Querydsl, but such updates don't seem to be possible with them.
Is there any other method by which such a batch update is possible? I have to stick to Spring JDBC.
Do you want to use a prepared statement passing in the arguments for each update? If so, it's not possible to do this as a batch. You could batch multiple statements, but then you would have to create these statements without using placeholders for the arguments. In this scenario you would use the int[] JdbcTemplate.batchUpdate(String[] sql) method (http://docs.spring.io/spring/docs/4.0.3.RELEASE/javadoc-api/org/springframework/jdbc/core/JdbcTemplate.html#batchUpdate-java.lang.String:A-).
It's not possible to batch different prepared statements using the JDBC API. You can batch individual statements without arguments (http://docs.oracle.com/javase/7/docs/api/java/sql/Statement.html#addBatch(java.lang.String)) or batch multiple sets of arguments for a prepared statement (http://docs.oracle.com/javase/7/docs/api/java/sql/PreparedStatement.html#addBatch()), but the SQL statement would have to be the same for all sets of arguments.
You can still wrap multiple update calls in a transaction, but there would be multiple roundtrips to the database server.
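To make the two options concrete, here is a rough sketch with JdbcTemplate; the table name T and the hardcoded values are placeholders following the question's A/B/C/D/ID example, and the DataSource/JdbcTemplate wiring is assumed to exist elsewhere.

import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;

// Option 1: batch several *different* update statements.
// No placeholders are possible here, so the values are part of each SQL string.
static int[] batchDifferentUpdates(JdbcTemplate jdbc) {
    return jdbc.batchUpdate(
            "update T set A = 'x' where ID = 1",
            "update T set B = 'y', C = 'z' where ID = 1",
            "update T set D = 'w', B = 'y' where ID = 1");
}

// Option 2: batch multiple argument sets for the *same* statement.
static int[] batchSameUpdate(JdbcTemplate jdbc, List<Object[]> batchArgs) {
    return jdbc.batchUpdate("update T set A = ? where ID = ?", batchArgs);
}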
You can wrap your update with a stored proc, then you can batch round trips to the database.
Inside the stored proc you'll need to generate the update based on the arguments passed in. So you could test for null, or pass a separate flag for each column. If the flag is set, then generate SQL that updates that column.

Procedure to delete table rows

I am testing this Perl script, which basically calls a procedure that runs a DELETE on 2 tables.
Questions:
Is there any issue with the procedure, or with calling the procedure from Perl?
Can I use 2 DELETEs in a single procedure?
PROCEDURE delete (v_db_id IN NUMBER)
IS
BEGIN
  DELETE from TAB1
  where db_id = v_db_id;
  DELETE from TAB2
  where db_id = v_db_id;
END delete;
Perl script:
sub getdelete {
  my $dbID = shift;
  my $rs;
  my $SQL;
  $SQL = q{delete (?)};
  $rs = executeQuery($SQL, $dbID);
  $rs->fetchrow();
  $rs->finish();
}
The Perl script calls the getdelete subroutine as below:
&getdelete ($dbID);
Error:
DBD::Oracle::st execute failed: ORA-00900: invalid SQL statement (DBD Error: OCIStmtExecute)[for statement "delete"]
DELETE doesn't take an expression that results in a table identifier; it takes a table identifier literally. As such, you can't use a replaceable parameter for it. You need to construct the table identifier yourself, which $dbh->quote_identifier can do.
my $sql = 'DELETE '.$dbh->quote_identifier($dbID);
$dbh->do($sql);
You are using a module that wraps DBI in a very poor way.[1] I have no way of knowing if it'll give you access to the database handle or to the handle's quote_identifier method, but at least now you know what to look for.
Notes:
[1] There are three ways to wrap DBI that make sense:
To add functions or override some minor aspect of existing functions.
For example, DBI statement handles don't have the selectrow_* methods found on database handles. Adding these without restricting access to the rest of DBI is perfectly fine.
To provide a higher level abstraction of a database, such as an ORM like DBIx::Class. These are massive systems with thousands if not tens of thousands of lines.
If your wrapper provides a new database interface and its code fits on two screens, it's doing something wrong.
To centralise all DB code by providing application-specific functions like create_user, fetch_daily_report_data, etc. SQL isn't passed in from the rest of the application at all.
If your wrapper attempts to do this but provides functions that expect SQL, it's doing something wrong.
What doesn't make sense is to attempt to simplify DBI, and this appears to be what your wrapper does. DBI actually provides a very simple interface. Any attempt to simplify it is bound to leave out something critical.

Use Parameter with no need of creating stored procedures

Is it possible to use or add parameters to a simple query without creating stored procedures or functions? Are bind variables possible without creating a stored procedure?
Where are you running the code from? If you're running it from a language like Java or VB, you would use a stored procedure or a prepared statement.
If you're using the SQL*Plus terminal or another SQL UI:
SQL> variable deptno number
SQL> exec :deptno := 10
SQL> select * from emp where deptno = :deptno;
From a high-level language like Java or VB, use stored procedures. The following is from the article you linked to, so I'm not sure why you are asking this:
In fact, the answer to this is actually quite simple. When you put
together an SQL statement using Java, or VB, or whatever, you usually
use an API for accessing the database; ADO in the case of VB, JDBC in
the case of Java. All of these APIs have built-in support for bind
variables, and it's just a case of using this support rather than just
concatenating a string yourself and submitting it to the database.
For example, Java has PreparedStatement, which allows the use of bind
variables, and Statement, which uses the string concatenation
approach. If you use the method that supports bind variables, the API
itself passes the bind variable value to Oracle at runtime, and you
just submit your SQL statement as normal. There's no need to
separately pass the bind variable value to Oracle, and actually no
additional work on your part. Support for bind variables isn't just
limited to Oracle - it's common to other RDBMS platforms such as
Microsoft SQL Server, so there's no excuse for not using them just
because they might be an Oracle-only feature.
Not sure why you are asking, as this information is right there on the site: http://www.akadia.com/services/ora_bind_variables.html
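To make the quoted paragraph concrete, here is a minimal JDBC sketch of the bind-variable approach; the emp/deptno names follow the SQL*Plus example above, and the ename column is only illustrative.

import java.sql.*;

// Bind variable via PreparedStatement: the driver sends the value separately,
// so no user input is ever concatenated into the SQL text.
static void printEmployeeNames(Connection conn, int deptno) throws SQLException {
    try (PreparedStatement ps = conn.prepareStatement(
            "select ename from emp where deptno = ?")) {
        ps.setInt(1, deptno);
        try (ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}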
Well, assuming you will perform the queries from an external programming language, prepared statements will do the job.
See http://en.wikipedia.org/wiki/Prepared_statement

Resources