Retrieving autoincrement value when using @JdbcInsert - jdbc

I'm trying to store a row in a DB2 database table where the primary key is an autoincrement. This works fine, but I'm having trouble wrapping my head around how to retrieve the primary key value for further processing after successfully inserting the row. How do you achieve this? @JdbcInsert only returns the number of rows that were inserted ...

Since there does not seem to be a way to do this with SSJS (at least not that I could find), I moved this particular piece of logic from my SSJS controller to a Java helper bean I created for JDBC-related tasks. A Statement can hand back generated keys when executeUpdate() is called with Statement.RETURN_GENERATED_KEYS. So I still create my connection via @JdbcGetConnection, but then pass it into the bean. This is the interesting part of the bean:
/**
 * SQL contains the INSERT statement.
 */
public int executeUpdate(Connection conn, String sql) throws SQLException {
    int returnVal = -1;
    Statement stmt = conn.createStatement();
    try {
        stmt.executeUpdate(sql, Statement.RETURN_GENERATED_KEYS);
        if (!conn.getAutoCommit()) conn.commit();
        ResultSet keys = stmt.getGeneratedKeys();
        if (keys.next()) {
            returnVal = keys.getInt(1);
        }
        keys.close();
    } finally {
        stmt.close();
    }
    return returnVal;
}
If you insert more than one row at a time, you'll need to change the key retrieval handling, of course.

In newer DB2 versions you can transform every INSERT into a SELECT to get back automatically generated key columns. An example is:
select keycol from final table (insert into table (col1, col2) values (?, ?))
keycol is the name of your identity column.
The SELECT can be executed with the same @Function as your usual queries.
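This transformation can be wrapped in a small helper. A minimal sketch (the class, method, and table names here are illustrative, not part of the original answer):

```java
public class FinalTableDemo {

    // Wrap an INSERT so that DB2 returns the generated key column
    // in the same round trip; run the result with executeQuery().
    static String wrapWithFinalTable(String keyCol, String insertSql) {
        return "SELECT " + keyCol + " FROM FINAL TABLE (" + insertSql + ")";
    }

    public static void main(String[] args) {
        String sql = wrapWithFinalTable("keycol",
                "INSERT INTO mytable (col1, col2) VALUES (?, ?)");
        System.out.println(sql);
        // Prepare this with the Connection from @JdbcGetConnection and call
        // executeQuery(); the single-row result set holds the new key.
    }
}
```

The wrapped statement goes through the normal query path, so no special key-retrieval API is needed on the driver.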

Related

How to UPDATE multiple rows, based on multiple conditions?

I'm trying to update a single column in a table for many rows, but each row will have a different updated date value based on a unique where condition of two other columns. I'm reading the data from a csv, and simply updating the date column in the row located from the combination of values in the other two columns.
I've seen this
SQL update multiple rows based on multiple where conditions
but the SET value will not be static, and will need to match each row where the other two column values are true. This is because in my table, the combination of those two other columns are always unique.
Pseudocode
UPDATE mytable SET date = (many different date values)
WHERE col_1 = x and col_2 = y
col_1 and col_2 values will change for every row in the csv, as the combination of these two values is unique. I was looking into using CASE in Postgres, but I understand it cannot be used with multiple columns.
So basically, a csv row has a date value that must be written to the record where col_1 and col_2 equal their respective values in the csv row. If these values don't exist in the database, the row is simply ignored.
Is there an elegant way to do this in a single query? This query is part of a spring batch job, so I might not be able to use native postgres syntax, but I'm struggling to even understand the format of the query so I can worry about the syntax later. Would I need multiple update statements? If so, how can I achieve that in the write step of a spring batch job?
EDIT: Adding some sample data to explain process
CSV rows:
date, col_1, col_2
2021-12-30, 'abc', 'def'
2021-05-30, 'abc', 'zzz'
2021-07-30, 'hfg', 'xxx'
I'll need my query to locate the record where col_1='abc' AND col_2='def', then change the date column to 2021-12-30. I'll need to do this for every row, but I don't know how to format the UPDATE query.
You can insert your CSV data into a (temporary) table (say mycsv) and use UPDATE with a FROM clause. For instance:
CREATE TEMP TABLE mycsv (date DATE, col_1 TEXT, col_2 TEXT);
COPY mycsv FROM '/path/to/csv/csv-file.csv' WITH (FORMAT csv);
UPDATE mytable m SET date = c.date
FROM mycsv c WHERE m.col_1 = c.col_1 AND m.col_2 = c.col_2;
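If loading a temp table is not convenient, the same UPDATE ... FROM pattern also works against an inline VALUES list. A sketch using the sample rows from the question (the ::date cast assumes the dates arrive as text):

```sql
UPDATE mytable m
SET date = v.date::date
FROM (VALUES
    ('2021-12-30', 'abc', 'def'),
    ('2021-05-30', 'abc', 'zzz'),
    ('2021-07-30', 'hfg', 'xxx')
) AS v(date, col_1, col_2)
WHERE m.col_1 = v.col_1
  AND m.col_2 = v.col_2;
```

Rows in the VALUES list that match nothing in mytable are simply ignored, which is the behavior the question asks for.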
Create an ItemWriter implementation and override the write() method. That method accepts a list of objects, each returned from your ItemProcessor implementation.
In the write method, simply loop through the objects, and call update on each one in turn.
For Example:
In the ItemWriter:
@Autowired
private SomeDao dataAccessObject;

@Override
public void write(List<? extends YourDTO> someDTOs) throws Exception {
    for (YourDTO dto : someDTOs) {
        dataAccessObject.update(dto);
    }
}
In your DAO:
private static final String sql = "UPDATE mytable SET dateField = ? WHERE col_1 = ? AND col_2 = ?";

public void update(YourDTO dto) {
    Object[] parameters = { dto.getDate(), dto.getCol1(), dto.getCol2() };
    int[] types = { Types.DATE, Types.VARCHAR, Types.VARCHAR };
    jdbcTemplate.update(sql, parameters, types);
}
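Instead of calling update() once per row, the same DAO could collect the parameters and issue a single JdbcTemplate.batchUpdate(sql, batchArgs) call. A sketch of building that batch from raw CSV lines (the class name and parsing details are illustrative, and assume no quoted commas in the data):

```java
import java.util.ArrayList;
import java.util.List;

public class CsvBatchDemo {

    // Build the parameter batch for:
    // UPDATE mytable SET dateField = ? WHERE col_1 = ? AND col_2 = ?
    static List<Object[]> buildBatch(List<String> csvLines) {
        List<Object[]> batch = new ArrayList<>();
        for (String line : csvLines) {
            String[] f = line.split(",");
            // Order must match the placeholders: date, col_1, col_2
            batch.add(new Object[] { f[0].trim(), f[1].trim(), f[2].trim() });
        }
        return batch;
    }

    public static void main(String[] args) {
        List<Object[]> batch = buildBatch(List.of(
                "2021-12-30, abc, def",
                "2021-05-30, abc, zzz"));
        System.out.println(batch.size());
        // jdbcTemplate.batchUpdate(sql, batch) would then send all
        // updates in one round trip instead of one per row.
    }
}
```

This keeps the ItemWriter loop-free and lets the driver batch the statements.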

Reference Key Error in Transaction Scope Using Entity Framework

I am using TransactionScope to perform inserts into multiple tables, with a try/catch. But when I get an error within the transaction scope, it does not let me save the data in the catch block either.
My Code
using (var transaction = new TransactionScope())
{
    try
    {
        //Insert in Table1
        //Insert in Table2
        //Insert in Table3
        transaction.Complete();
        transaction.Dispose();
    }
    catch (Exception ex)
    {
        transaction.Dispose();
        //Insert in ErrorHandlerTable (Independent Table)
    }
}
Now the problem is that whenever I get an error in the try block for a foreign key constraint, I am unable to insert into the ErrorHandlerTable (an independent table). I always get the following exception:
{"The INSERT statement conflicted with the FOREIGN KEY constraint \"FK_Table1_PkId\". The conflict occurred in database \"MyTransactionDatabase\", table \"dbo.Table2\", column 'PkId'.\r\nThe statement has been terminated."}
Can anyone help in this?
I think this will help you revert the operations in the tables; please try the approach below.
using (var transaction = new TransactionScope())
{
    try
    {
        //Insert in Table1
        //Insert in Table2
        //Insert in Table3
        transaction.Complete();
        transaction.Dispose();
    }
    catch (Exception ex)
    {
        transaction.Dispose();
        // What I have changed: create a new object of the table you want
        // to insert into, i.e. Table1 or Table2 etc.
        var table1Object = new YoSafari.Migration.EntityFramework.Table1();
        using (var context = new ContextClass())
        {
            context.Entry(table1Object).State = EntityState.Unchanged;
            //Insert in ErrorHandlerTable (independent table, i.e. Table1 or Table2 etc.)
            context.SaveChanges();
        }
    }
}
This creates a new object of the table with its state marked Unchanged, which leaves those operations alone and allows you to insert the record into your ErrorHandlerTable.
Please let me know if you are still facing any issue with this.
As answered in INSERT statement conflicted with the FOREIGN KEY constraint:
In your table ysmgr.Table2, there is a foreign key reference to another table. The way an FK works is that the column cannot contain a value that is not also in the primary key column of the referenced table.
If you have SQL Server Management Studio, open it up and run sp_help 'ysmgr.Table2'. See which column that FK is on, and which column of which table it references. You're inserting some bad data.
So the steps are:
1. Run sp_helpconstraint.
2. Pay attention to the constraint_keys column returned for the foreign key.
The problem is that, even though your code has disposed the TransactionScope, the insert into the ErrorHandlerTable still happens inside that TransactionScope. So something goes wrong, and you get a misleading error.
To avoid this, change the code so that the insertion into the ErrorHandlerTable is done outside of the original transaction scope. To do so, you can nest a new using block to provide a new, independent TransactionScope like this:
using (var ts = new TransactionScope(TransactionScopeOption.RequiresNew))
or this:
using (var ts = new TransactionScope(TransactionScopeOption.Suppress))
The first option simply creates a new transaction, independent of the original one. But if your insert is an atomic operation, as it seems, you can also use the second option, which creates a new, independent, transactionless scope.
In this way you can be sure that your insertion in the ErrorHandlerTable happens without any interference with the original transaction scope.
Please, see this docs:
TransactionScope Constructor (TransactionScopeOption)
TransactionScopeOption Enumeration

Get the latest inserted PK after submitting a linq insert stored procedure

I have a stored procedure that inserts into a table using LINQ, e.g. (this is just example code, by the way):
using (DataContext db = new DataContext())
{
    db.sp_Insert_Client(textboxName.Text, textBoxSurname.Text);
}
What I would like to know is how to retrieve (if possible) newly generated primary key of the above inserted row, as I need this primary key as a foreign key to complete another insert.
You have to modify your stored procedure to return that value from the database and then regenerate your LINQ mapping to pick up that change in your ORM files. After that, your sp_Insert_Client method will return an integer.
The other way to do it is to add another parameter to the query and mark it as an output parameter.
To get the last inserted identity value inside your SP, use SCOPE_IDENTITY(): http://msdn.microsoft.com/pl-pl/library/ms190315.aspx
I think you need to retrieve the value by using an output parameter, as shown here: Handling stored procedure output parameters, a Scott Gu post which explains it well.
Procedure for you:
create procedure nameofprocedure
    @id int output
as
begin
    -- your INSERT statement goes here
    -- then retrieve the identity value
    select @id = scope_identity();
end

Sybase JDBC get generated keys

In Postgres, I can write
INSERT .. RETURNING *
To retrieve all values that had been generated during the insert. In Oracle, HSQLDB, I can use
String[] columnNames = ...
PreparedStatement stmt = connection.prepareStatement(sql, columnNames);
// ...
stmt.execute();
stmt.getGeneratedKeys();
To retrieve all values that had been generated. MySQL is a bit limited and only returns columns that are set to AUTO_INCREMENT. But how can this be done with Sybase SQL Anywhere? The JDBC driver does not implement these methods, and there is no INSERT .. RETURNING clause, as in Postgres. Is there a way to do it, other than maybe running
SELECT @@identity
immediately after the insert?
My current implementation executes three consecutive SQL statements:
-- insert the data first
INSERT INTO .. VALUES (..)
-- get the generated identity value immediately afterwards
SELECT @@identity
-- get the remaining values from the record (possibly generated by a trigger)
SELECT * FROM .. WHERE ID = :previous_identity
The third statement can be omitted, if only the ID column is requested

Auto-increment in Oracle without using a trigger

What are the other ways of achieving auto-increment in Oracle, other than the use of triggers?
You can create and use oracle sequences. The syntax and details are at
http://www.techonthenet.com/oracle/sequences.php
Also read the article
http://rnyb2.blogspot.com/2006/02/potential-pitfall-with-oracle-sequence.html
to understand the limitations with respect to AUTONUMBER in other RDBMS
If you don't need sequential numbers but only a unique ID, you can use a DEFAULT of SYS_GUID(), i.e.:
CREATE TABLE xxx ( ID RAW(16) DEFAULT SYS_GUID() )
A trigger to obtain the next value from a sequence is the most common way to achieve an equivalent to AUTOINCREMENT:
create trigger mytable_trg
before insert on mytable
for each row
when (new.id is null)
begin
select myseq.nextval into :new.id from dual;
end;
You don't need the trigger if you control the inserts - just use the sequence in the insert statement:
insert into mytable (id, data) values (myseq.nextval, 'x');
This could be hidden inside an API package, so that the caller doesn't need to reference the sequence:
mytable_pkg.insert_row (p_data => 'x');
But using the trigger is more "transparent".
As far as I can recall from my Oracle days, you can't achieve auto-increment columns without using a TRIGGER. Any solution out there for making an auto-increment column involves a TRIGGER and a SEQUENCE (I'm assuming you already know this, hence the no-trigger remark).
Create a sequence:
create sequence seq;
Then to add a value
insert into table (id, other1, other2)
values (seq.nextval, 'hello', 'world');
Note: Look for oracle docs for more options about sequences (start value, increment, ...)
From 12c you can use an identity column, which makes the link between table and auto-increment explicit; there's no need for a trigger or a sequence. The syntax would be:
create table <table_name> ( <column_name> number generated as identity );
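A minimal sketch of a 12c identity column in use (table and column names are illustrative):

```sql
CREATE TABLE mytable (
    id   NUMBER GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
    data VARCHAR2(100)
);

-- id is assigned automatically; no trigger or explicit sequence call needed
INSERT INTO mytable (data) VALUES ('x');
```

The generated value can still be read back from JDBC via Statement.RETURN_GENERATED_KEYS, as in the snippet further down.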
In addition to, e.g., FerranB's answer:
It is probably worth mentioning that, as opposed to how auto_increment works in MySQL:
sequences work database-wide, so they can be used for multiple tables, and the values are unique for the whole database
therefore: truncating a table does not reset the 'autoincrement' functionality
If you don't really want to use a trigger-based solution, you can achieve the auto-increment functionality with a programmatic approach, obtaining the value of the auto-increment key with the getGeneratedKeys() method.
Here is a code snippet for your consideration (note that the DDL below is MySQL-style, but the getGeneratedKeys() call works the same way from JDBC):
Statement stmt = null;
ResultSet rs = null;
stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                            java.sql.ResultSet.CONCUR_UPDATABLE);
stmt.executeUpdate("DROP TABLE IF EXISTS autoIncTable");
stmt.executeUpdate("CREATE TABLE autoIncTable ("
        + "priKey INT NOT NULL AUTO_INCREMENT, "
        + "dataField VARCHAR(64), PRIMARY KEY (priKey))");
stmt.executeUpdate("INSERT INTO autoIncTable (dataField) "
        + "values ('data field value')",
        Statement.RETURN_GENERATED_KEYS);

int autoIncKeyFromApi = -1;
rs = stmt.getGeneratedKeys();
if (rs.next()) {
    autoIncKeyFromApi = rs.getInt(1);
} else {
    // do stuff here
}
rs.close();
source: http://forums.oracle.com/forums/thread.jspa?messageID=3368856
SELECT MAX(id) + 1
FROM table
