DB2 iSeries doesn't lock on select for update - db2-400

I'm migrating a legacy application that uses DB2 iSeries on AS400, and I have to reproduce a specific behavior of it using .NET and the IBM.Data.DB2.iSeries client for .NET.
What I'm describing works for me on non-AS400 DB2. On AS400 DB2 it works for the legacy application I'm replacing, but not for my application.
The behavior in the original application:

1. Begin a transaction.
2. ExecuteReader() => select col1 from table1 where col1 = 1 for update.
3. The row is now locked; anyone else who tries to run SELECT ... FOR UPDATE should fail.
4. Close the reader opened in step 2.
5. The row is now unlocked; anyone else who tries to run SELECT ... FOR UPDATE should succeed.
6. Close the transaction and live happily ever after.

In my .NET code I have two problems:

1. Step 2 only checks whether the row is already locked, but doesn't actually lock it, so another user can (and does) run SELECT ... FOR UPDATE - WRONG BEHAVIOUR.
2. Once that works, I need the lock to be released when the reader is closed (step 4).
Here's my code:
var cb = new IBM.Data.DB2.iSeries.iDB2ConnectionStringBuilder();
cb.DataSource = "10.0.0.1";
cb.UserID = "User";
cb.Password = "Password";
using (var con = new IBM.Data.DB2.iSeries.iDB2Connection(cb.ToString()))
{
    con.Open();
    var t = con.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted);
    using (var c = con.CreateCommand())
    {
        c.Transaction = t;
        c.CommandText = "select col1 from table1 where col1=1 FOR UPDATE";
        using (var r = c.ExecuteReader())
        {
            while (r.Read())
            {
                MessageBox.Show(con.JobName + "The Row Should Be Locked");
            }
        }
        MessageBox.Show(con.JobName + "The Row Should Be unlocked");
    }
}
When you run this code twice, you'll see both processes reach the "The Row Should Be Locked" message, which is the problem I'm describing.
The desired result would be that the first process reaches "The Row Should Be Locked" and the second process fails with a resource busy error.
Then, when the first process reaches the second message box ("The Row Should Be unlocked"), the second process (after running again) should reach the "The Row Should Be Locked" message.
Any help would be greatly appreciated

The documentation says:
When the UPDATE clause is used, FETCH operations referencing the cursor acquire an exclusive row lock.
This implies a cursor is being used, and the lock occurs when the FETCH statement is executed. I don't see a cursor or a FETCH in your code.
Now, whether .NET handles this as a cursor, I don't know, but the DB2 UDB documentation does not have this notation.

Your isolation level allows this behavior: reading rows that are locked. From the documentation on ReadUncommitted:
A dirty read is possible, meaning that no shared locks are issued and no exclusive locks are honored.
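For comparison, the same flow over JDBC behaves as desired once the isolation level honors locks. This is only a sketch; the jt400 driver, host, and credentials are assumptions, not something from the original post:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SelectForUpdateSketch {
    public static void main(String[] args) throws SQLException {
        // jdbc:as400 URL assumes the jt400 (IBM Toolbox for Java) driver
        try (Connection con = DriverManager.getConnection(
                "jdbc:as400://10.0.0.1", "User", "Password")) {
            con.setAutoCommit(false);
            // Read committed honors exclusive row locks; read uncommitted does not
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            try (PreparedStatement ps = con.prepareStatement(
                    "select col1 from table1 where col1 = 1 FOR UPDATE");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println("The row should be locked here");
                }
            }
            con.commit(); // ends the transaction and releases the row lock
        }
    }
}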

After much investigation we created a workaround in the form of a stored procedure that performs the lock for us.
The stored procedure looks like this:
CREATE PROCEDURE lib.Select_For_Update (IN SQL CHARACTER (5000))
    MODIFIES SQL DATA
    CONCURRENT ACCESS RESOLUTION WAIT FOR OUTCOME
    DYNAMIC RESULT SETS 1
    OLD SAVEPOINT LEVEL
    COMMIT ON RETURN NO
    DISALLOW DEBUG MODE
    SET OPTION COMMIT = *CHG
BEGIN
    DECLARE X CURSOR WITH RETURN TO CLIENT FOR SS;
    PREPARE SS FROM SQL;
    OPEN X;
END
Then we call it using:
var cb = new IBM.Data.DB2.iSeries.iDB2ConnectionStringBuilder();
cb.DataSource = "10.0.0.1";
cb.UserID = "User";
cb.Password = "Password";
using (var con = new IBM.Data.DB2.iSeries.iDB2Connection(cb.ToString()))
{
    con.Open();
    var t = con.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted);
    using (var c = con.CreateCommand())
    {
        c.Transaction = t;
        c.CommandType = CommandType.StoredProcedure;
        c.AddParameter("sql", "select col1 from table1 where col1=1 FOR UPDATE");
        c.CommandText = "lib.Select_For_Update";
        using (var r = c.ExecuteReader())
        {
            while (r.Read())
            {
                MessageBox.Show(con.JobName + "The Row Should Be Locked");
            }
        }
        MessageBox.Show(con.JobName + "The Row Should Be unlocked");
    }
}
We don't like it - but it works.

Related

Is there any way to view the physical SQLs executed by Calcite JDBC?

I have recently been studying Apache Calcite. So far I can use "explain plan for" via JDBC to view the logical plan, and I am wondering how I can view the physical SQL used during plan execution. There may be bugs in the physical SQL generation, so I need to verify its correctness.
val connection = DriverManager.getConnection("jdbc:calcite:")
val calciteConnection = connection.asInstanceOf[CalciteConnection]
val rootSchema = calciteConnection.getRootSchema()
val dsInsightUser = JdbcSchema.dataSource("jdbc:mysql://localhost:13306/insight?useSSL=false&serverTimezone=UTC", "com.mysql.jdbc.Driver", "insight_admin", "xxxxxx")
val dsPerm = JdbcSchema.dataSource("jdbc:mysql://localhost:13307/permission?useSSL=false&serverTimezone=UTC", "com.mysql.jdbc.Driver", "perm_admin", "xxxxxx")
rootSchema.add("insight_user", JdbcSchema.create(rootSchema, "insight_user", dsInsightUser, null, null))
rootSchema.add("perm", JdbcSchema.create(rootSchema, "perm", dsPerm, null, null))
val stmt = connection.createStatement()
val rs = stmt.executeQuery("""explain plan for select "perm"."user_table".* from "perm"."user_table" join "insight_user"."user_tab" on "perm"."user_table"."id"="insight_user"."user_tab"."id" """)
val metaData = rs.getMetaData()
while (rs.next()) {
  for (i <- 1 to metaData.getColumnCount) printf("%s ", rs.getObject(i))
  println()
}
The result is:
EnumerableCalc(expr#0..3=[{inputs}], proj#0..2=[{exprs}])
  EnumerableHashJoin(condition=[=($0, $3)], joinType=[inner])
    JdbcToEnumerableConverter
      JdbcTableScan(table=[[perm, user_table]])
    JdbcToEnumerableConverter
      JdbcProject(id=[$0])
        JdbcTableScan(table=[[insight_user, user_tab]])
There is a Calcite hook, Hook.QUERY_PLAN, that is triggered with the JDBC query strings. From the source:
/** Called with a query that has been generated to send to a back-end system.
* The query might be a SQL string (for the JDBC adapter), a list of Mongo
* pipeline expressions (for the MongoDB adapter), et cetera. */
QUERY_PLAN;
You can register a listener to log any query strings, like this in Java:
Hook.QUERY_PLAN.add((Consumer<String>) s -> LOG.info("Query sent over JDBC:\n" + s));
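For a self-contained variant (assuming a Calcite version where addThread returns a Hook.Closeable), you can scope the listener to a single query and unregister it afterwards:

import org.apache.calcite.runtime.Hook;

import java.util.function.Consumer;

public class QueryPlanLogger {
    // Runs the given query action with a thread-local QUERY_PLAN listener
    // installed, printing every SQL string Calcite sends to the JDBC back end.
    public static void runWithSqlLogging(Runnable queryAction) {
        try (Hook.Closeable ignored = Hook.QUERY_PLAN.addThread(
                (Consumer<String>) sql ->
                        System.out.println("Query sent over JDBC:\n" + sql))) {
            queryAction.run(); // the hook fires while the statement executes
        }
    }
}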
It is possible to see the generated SQL query by setting the calcite.debug=true system property. The exact place where this happens is in JdbcToEnumerableConverter. Since this happens during execution of the query, you will have to remove the "explain plan for" prefix from the query passed to stmt.executeQuery.
Note that by setting debug mode to true you will get a lot of other messages, as well as other information such as the generated code.
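If you prefer to flip the switch programmatically rather than on the JVM command line, something like this should have the same effect as -Dcalcite.debug=true, as long as it runs before the Calcite connection is created:

public class CalciteDebugSwitch {
    public static void main(String[] args) {
        // Must be set before the jdbc:calcite: connection is created
        System.setProperty("calcite.debug", "true");
        // ...then create the connection and run the query without the
        // "explain plan for" prefix so JdbcToEnumerableConverter executes
    }
}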

Can Airflow retain the same database connection?

I'm using Airflow for some ETL tasks, and in some stages I would like to use temporary tables (mostly to keep the code and data objects self-contained and to avoid a lot of metadata tables).
Using the Postgres connection in Airflow and the PostgresOperator, the behaviour I found was: each execution of a PostgresOperator gets a new connection (or session, if you prefer) in the database. In other words, we lose all temporary objects created by the previous task in the DAG.
To emulate this with a simple example, I use this code (don't run it, just look at the objects):
import os
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.postgres_operator import PostgresOperator

default_args = {
    'owner': 'airflow',
    'depends_on_past': False,
    'start_date': datetime(2018, 6, 13),
    'retries': 3,
    'retry_delay': timedelta(minutes=5),
}

dag = DAG(
    'refresh_views',
    default_args=default_args)

# Create database workflow
drop_exist_temporary_view = "DROP TABLE IF EXISTS temporary_table_to_be_used;"

create_temporary_view = """
CREATE TEMPORARY TABLE temporary_table_to_be_used AS
SELECT relname AS views
      ,CASE WHEN relispopulated = 'true' THEN 1 ELSE 0 END AS relispopulated
      ,CAST(reltuples AS INT) AS reltuples
  FROM pg_class
 WHERE relname = 'some_view'
 ORDER BY reltuples ASC;"""

use_temporary_view = """
DO $$
DECLARE
    view_to_refresh text := 'some_view';  -- assumed; left undeclared in the original snippet
    is_materialized integer := (SELECT relispopulated FROM temporary_table_to_be_used WHERE views LIKE '%<<some_name>>%');
BEGIN
    IF is_materialized = 0 THEN
        EXECUTE 'REFRESH MATERIALIZED VIEW ' || view_to_refresh || ' WITH DATA;';
    ELSE
        EXECUTE 'REFRESH MATERIALIZED VIEW CONCURRENTLY ' || view_to_refresh || ' WITH DATA;';
    END IF;
END;
$$ LANGUAGE plpgsql;
"""
# Objects to be executed
drop_exist_temporary_view = PostgresOperator(
    task_id='drop_exist_temporary_view',
    sql=drop_exist_temporary_view,
    postgres_conn_id='dwh_staging',
    dag=dag)

create_temporary_view = PostgresOperator(
    task_id='create_temporary_view',
    sql=create_temporary_view,
    postgres_conn_id='dwh_staging',
    dag=dag)

use_temporary_view = PostgresOperator(
    task_id='use_temporary_view',
    sql=use_temporary_view,
    postgres_conn_id='dwh_staging',
    dag=dag)

# Data workflow
drop_exist_temporary_view >> create_temporary_view >> use_temporary_view
At the end of execution, I receive the following message:
[2018-06-14 15:26:44,807] {base_task_runner.py:95} INFO - Subtask: psycopg2.ProgrammingError: relation "temporary_table_to_be_used" does not exist
Does anyone know if Airflow has some way to retain the same connection to the database? I think it could save a lot of work in creating/maintaining several objects in the database.
You can retain the connection to the database by building a custom operator which leverages the PostgresHook to hold a single connection to the database while you perform a set of SQL operations.
You may find some examples in contrib on incubator-airflow or in Airflow-Plugins.
Another option is to persist this temporary data to XCOMs. This will give you the ability to keep the metadata used with the task in which it was created. This may help troubleshooting down the road.
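The root cause is worth spelling out: Postgres temporary tables live only as long as the session that created them, and each task opens its own session. A minimal JDBC sketch (connection details are hypothetical) that reproduces the error outside Airflow:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class TempTableScope {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:postgresql://localhost:5432/dwh_staging"; // hypothetical
        try (Connection first = DriverManager.getConnection(url, "user", "pw");
             Statement s1 = first.createStatement()) {
            s1.execute("CREATE TEMPORARY TABLE temporary_table_to_be_used AS SELECT 1 AS x");
            s1.executeQuery("SELECT x FROM temporary_table_to_be_used"); // works: same session
        }
        try (Connection second = DriverManager.getConnection(url, "user", "pw");
             Statement s2 = second.createStatement()) {
            // fails with "relation does not exist": the temp table belonged to the first session
            s2.executeQuery("SELECT x FROM temporary_table_to_be_used");
        }
    }
}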

Update function not working in vb.net

Currently I'm developing a system using VB.NET. I have the following UPDATE query, which works when I run it in SQL Developer:
UPDATE CCS2_TBL_INSPECTION_STANDARD SET CCSEQREVITEM = :CCSEQREVITEM,
CCSREVEFFECTIVEDATE = TO_DATE(:CCSREVEFFECTIVEDATE,'DD/MM/YYYY') WHERE
CCSEQID = :CCSEQID
But when I apply this query in VB.NET, it doesn't work. The flow of the update function runs, but the data is not updated. For example, I want to update a name from 'Ali' to 'Abu'; when I click the update button, a popup window says "Update success", but the name does not change to 'Abu' - it is still 'Ali'. There is no error when I execute. Does anyone know why? Below is the VB.NET code:
Protected Sub editInspectionRev(eqid As String)
    Dim xSQL As New System.Text.StringBuilder
    xSQL.AppendLine("UPDATE CCS2_TBL_INSPECTION_STANDARD")
    xSQL.AppendLine("SET")
    xSQL.AppendLine("CCSEQREVITEM = :CCSEQREVITEM, CCSREVEFFECTIVEDATE = TO_DATE(:CCSREVEFFECTIVEDATE,'DD/MM/YYYY')")
    xSQL.AppendLine("WHERE CCSEQID = :CCSEQID")
    Using cn As New OracleConnection(ConString)
        cn.Open()
        Dim cmd As New OracleCommand(xSQL.ToString, cn)
        cmd.Connection = cn
        cmd.Parameters.Add(":CCSEQREVITEM", txtRevContent.Text)
        cmd.Parameters.Add(":CCSREVEFFECTIVEDATE", txtRevEffDate.Text)
        cmd.Parameters.Add(":CCSEQID", eqid)
        cmd.ExecuteNonQuery()
        cn.Close()
    End Using
    success3.Visible = True
    DisplayRevisionDetails()
End Sub
The problem is that you have executed the statement but failed to COMMIT the transaction. There is an example of the correct method here, which I will reproduce in part below for posterity:
Using connection As New OracleConnection(connectionString)
    connection.Open()
    Dim command As OracleCommand = connection.CreateCommand()
    Dim transaction As OracleTransaction
    ' Start a local transaction
    transaction = connection.BeginTransaction(IsolationLevel.ReadCommitted)
    ' Assign transaction object for a pending local transaction
    command.Transaction = transaction
    ...
    command.ExecuteNonQuery()
    transaction.Commit()
Observe that we have begun the transaction, and then committed it after executing.
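The same fix, sketched in JDBC terms for comparison (connection string and bind values are hypothetical); without the explicit commit, the update may be rolled back when the session ends:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CommitUpdate {
    public static void main(String[] args) throws SQLException {
        try (Connection cn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//host:1521/sid", "user", "pw")) { // hypothetical
            cn.setAutoCommit(false);
            try (PreparedStatement cmd = cn.prepareStatement(
                    "UPDATE CCS2_TBL_INSPECTION_STANDARD SET CCSEQREVITEM = ?,"
                    + " CCSREVEFFECTIVEDATE = TO_DATE(?, 'DD/MM/YYYY')"
                    + " WHERE CCSEQID = ?")) {
                cmd.setString(1, "Abu");
                cmd.setString(2, "14/06/2018");
                cmd.setString(3, "some-id");
                cmd.executeUpdate();
            }
            cn.commit(); // the missing step: make the change permanent
        }
    }
}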

How to get Oracle exception in SQLcl script when using util.execute?

I am trying to write a batch file using Oracle's SQLcl. In this file, I want to insert a new table row with util.execute. That call just returns true/false, a boolean indicating success or failure.
My question is: how do I get the error message of the exception that is thrown, so I can find out what the problem is with my insert statement?
What I do:
First of all, I connect to my database server and start my script:
me#pc:/myproject$ /sqlcl/bin/sql schemaname/pw#server.com:1521/sid
SQLcl: Release 17.3.0 Production [...]
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit [...]
SQL>
SQL> #mybatchscript.js path/image.jpg
My mybatchscript.js looks like this:
script
var tabName = "MY_TABLE_NAME";
var HashMap = Java.type("java.util.HashMap");
var bindmap = new HashMap();
var filePath = "&1";
print("\nreading file: " + filePath);

var blob = conn.createBlob();
var bstream = blob.setBinaryStream(1);
java.nio.file.Files.copy(java.nio.file.FileSystems.getDefault().getPath(filePath), bstream);
bstream.flush();
bindmap.put("content", blob); // has content
bindmap.put("size", blob.length()); // is 341989

// the following command fails
var doInsert = util.execute("insert into "
    + tabName
    + " (id, main_id, file_name, file_type,"
    + " file_size, file_content, table_name)"
    + " values("
    + " SEQ_MY_TABLE_NAME.nextval, 1,"
    + " 'testname', 'image/jpeg', :size, :content,"
    + " 'my_table_name')"
    , bindmap);

sqlcl.setStmt("show errors \n");
sqlcl.run();

if (!doInsert) {
    print("insert failed");
    print(doInsert);
    exit;
}
/
The console output is like:
reading file: path/image.jpg
insert failed
false
The script works until the util.execute insert statement. It returns false, so the insert statement failed, but it doesn't tell me why. How do I get access to the error message or the exception that is thrown inside util.execute?
I also tried turning on SERVEROUTPUT and ERRORLOGGING, but the output is the same as above, and the error log table is empty:
SQL> set errorlogging on
SQL> show errorlogging
errorlogging is ON TABLE SPERRORLOG
SQL> set serveroutput on
SQL> show serveroutput
serveroutput ON SIZE UNLIMITED FORMAT WORD_WRAPPED
My knowledge source was these slides, which my script is also based on; I didn't find information about error/exception handling for the util functions in general.
There are basically two ways.
1- When using util.execute (or any of the util.XYZ functions), the last error message is retrieved with the following. I also just updated the scripting README with this: https://github.com/oracle/oracle-db-tools/blob/master/sqlcl/README.md
var msg = util.getLastException()
2- When using sqlcl.run(), there's an example I wrote here:
https://github.com/oracle/oracle-db-tools/blob/master/sqlcl/examples/audio.js
The example is a tad silly in that it makes noises on success/failure, but you'll see the code that gets the error. Check the ctx.getProperty("sqldev.last.err.message") call; that will get the last SQL error message.
if (ctx.getProperty("sqldev.last.err.message")) {
    //
    // FAILED!
    //
    play("chew_roar.wav");
} else {
    //
    // Success!!
    //
    play("R2.wav");
}

Netezza Batch Insert is very slow even in Batch execute mode

I am referring to this documentation: http://www-01.ibm.com/support/docview.wss?uid=swg21981328. According to the article, if we use the executeBatch method then inserts will be faster (the Netezza JDBC driver may detect a batch insert and, under the covers, convert it to an external table load, which is faster). I have to execute millions of insert statements, and I am getting a speed of at most 500 records per minute per connection. Is there any better way to load data faster into Netezza via a JDBC connection? I am using Spark and a JDBC connection to insert the records. Why is the external table load not happening even though I am executing in batches? Below is the Spark code I am using:
// insertQueryDataSet is a Dataset<String> of complete INSERT statements
insertQueryDataSet.foreachPartition(partition -> {
    Connection conn = NetezzaConnector.getSingletonConnection(url, userName, pwd);
    conn.setAutoCommit(false);
    int insertBatchCount = 0;
    Statement statement = conn.createStatement();
    while (partition.hasNext()) {
        insertBatchCount++;
        statement.addBatch(partition.next());
        if (insertBatchCount % 10000 == 0) {
            LOGGER.info("Before executeBatch.");
            int[] execCount = statement.executeBatch();
            LOGGER.info("After executeBatch, count: " + execCount.length);
            LOGGER.info("Before commit.");
            conn.commit();
            LOGGER.info("After commit.");
        }
    }
    // execute remaining statements
    int[] remainingCount = statement.executeBatch();
    LOGGER.info("After executeBatch, count: " + remainingCount.length);
    conn.commit();
    conn.close();
});
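As far as I can tell, the batch-to-external-table conversion the technote describes applies to parameterized inserts sent through a PreparedStatement; Statement.addBatch with complete SQL strings gives the driver nothing it can fold into a single load. A sketch of the parameterized shape (host, table, and column names are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class ParameterizedBatchInsert {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:netezza://host:5480/db", "user", "pw")) { // hypothetical
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO target_table (col1, col2) VALUES (?, ?)")) {
                for (int i = 1; i <= 100000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row " + i);
                    ps.addBatch();
                    if (i % 10000 == 0) {
                        ps.executeBatch(); // one chunk per round trip
                        conn.commit();
                    }
                }
                ps.executeBatch(); // flush the remainder
                conn.commit();
            }
        }
    }
}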
I tried this approach (batch insert) but found it very slow,
so I put all the data in CSV files and do an external table load for each CSV:
String insertReq = "Insert into " + tablename
        + " select * from external '" + filepath + "'"
        + " using (maxerrors 0, delimiter ',' y2base 2000 encoding 'internal' remotesource 'jdbc' escapechar '\\')";
jdbcTemplate.execute(insertReq);
Since I was using Java, the remote source is 'jdbc'; note that the CSV file path is in single quotes.
Hope this helps.
If you find a better approach than this, don't forget to post it. :)
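For completeness, here is the same external-table load issued over plain JDBC instead of Spring's JdbcTemplate; connection details, table name, and file path are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class ExternalTableLoad {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:netezza://host:5480/db", "user", "pw"); // hypothetical
             Statement stmt = conn.createStatement()) {
            // one INSERT ... SELECT ... FROM EXTERNAL per CSV file
            stmt.execute("insert into target_table select * from external '/tmp/part-0001.csv'"
                    + " using (maxerrors 0, delimiter ',' encoding 'internal'"
                    + " remotesource 'jdbc' escapechar '\\')");
        }
    }
}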
