Reference Key Error in Transaction Scope Using Entity Framework - linq

I am using TransactionScope to perform inserts into multiple tables, inside a try/catch. But when an error occurs within the transaction scope, it does not allow me to save data in the catch block either.
My code:
using (var transaction = new TransactionScope())
{
    try
    {
        //Insert in Table1
        //Insert in Table2
        //Insert in Table3
        transaction.Complete();
        transaction.Dispose();
    }
    catch (Exception ex)
    {
        transaction.Dispose();
        //Insert in ErrorHandlerTable (Independent Table)
    }
}
Now the problem is: whenever I get a foreign key constraint error in the try block, I am unable to insert into ErrorHandlerTable (an independent table). I always get the following exception:
{"The INSERT statement conflicted with the FOREIGN KEY constraint \"FK_Table1_PkId\". The conflict occurred in database \"MyTransactionDatabase\", table \"dbo.Table2\", column 'PkId'.\r\nThe statement has been terminated."}
Can anyone help with this?

I think this will help you revert the operations on the tables; please try the following:
using (var transaction = new TransactionScope())
{
    try
    {
        //Insert in Table1
        //Insert in Table2
        //Insert in Table3
        transaction.Complete();
        transaction.Dispose();
    }
    catch (Exception ex)
    {
        transaction.Dispose();
        //What I have changed:
        //Create a new object of the table you want to insert into, i.e. Table1 or Table2 etc.
        var table1Object = new YoSafari.Migration.EntityFramework.Table1();
        using (var context = new ContextClass())
        {
            context.Entry(table1Object).State = EntityState.Unchanged;
            //Insert in ErrorHandlerTable (independent table, i.e. Table1 or Table2 etc.)
            context.SaveChanges();
        }
    }
}
Creating a new object of the table and marking its entry as Unchanged detaches it from the failed operations and allows you to insert the record into your ErrorHandlerTable.
Please let me know if you are still facing any issue with this.

As answered in INSERT statement conflicted with the FOREIGN KEY constraint:
Your table ysmgr.Table2 has a foreign key reference to another table. The way an FK works is that the column cannot contain a value that is not also in the primary key column of the referenced table.
If you have SQL Server Management Studio, open it up and run sp_help 'ysmgr.Table2'. See which column the FK is on, and which column of which table it references. You're inserting some bad data.
So the steps are:
1. Run sp_helpconstraint.
2. Pay attention to the constraint_keys column returned for the foreign key.
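sp_help and sp_helpconstraint are SQL Server procedures, so they cannot be demonstrated standalone here; as an illustration of the same inspection in a self-contained setting, Python's stdlib sqlite3 offers the foreign_key_list pragma, which reports which column an FK is on and which column of which table it references (Table1/Table2 mirror the question's schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table1 (PkId INTEGER PRIMARY KEY);
CREATE TABLE Table2 (Id INTEGER PRIMARY KEY,
                     PkId INTEGER REFERENCES Table1(PkId));
""")

# Each row: (id, seq, table, from, to, on_update, on_delete, match)
for fk in con.execute("PRAGMA foreign_key_list(Table2)"):
    print(f"Table2.{fk[3]} references {fk[2]}.{fk[4]}")
# Table2.PkId references Table1.PkId
```

Once you know the referenced column, check that every value you insert actually exists in it.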

The problem is that, even though your code has disposed the TransactionScope, the insert into the ErrorHandlerTable still happens inside that TransactionScope. So something goes wrong, and you get a misleading error.
To avoid this, change the code so that the insertion into the ErrorHandlerTable is done outside of the original transaction scope. To do so, you can nest a new using block that provides a new, independent TransactionScope, like this:
using (var ts = new TransactionScope(TransactionScopeOption.RequiresNew))
or this
using (var ts = new TransactionScope(TransactionScopeOption.Suppress))
The first option simply creates a new transaction, independent of the original one. But if your insert is an atomic operation, as it seems to be, you can also use the second option, which creates a new, independent, transaction-free scope.
In this way you can be sure that your insertion into the ErrorHandlerTable happens without any interference from the original transaction scope.
Please see these docs:
TransactionScope Constructor (TransactionScopeOption)
TransactionScopeOption Enumeration
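The idea can be sketched in Python's stdlib sqlite3 (table names mirror the question; this is an illustration of the concept, not the poster's C#): the failing FK insert is rolled back together with the rest of its transaction, and the error row is then written in a fresh transaction that the failed one cannot touch, which is the effect TransactionScopeOption.Suppress or RequiresNew achieves in .NET:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when asked
con.executescript("""
CREATE TABLE Table1 (PkId INTEGER PRIMARY KEY);
CREATE TABLE Table2 (Id INTEGER PRIMARY KEY,
                     PkId INTEGER REFERENCES Table1(PkId));
CREATE TABLE ErrorHandlerTable (Message TEXT);
""")

def insert_all(con):
    with con:  # one transaction for all the inserts
        con.execute("INSERT INTO Table1 (PkId) VALUES (1)")
        con.execute("INSERT INTO Table2 (Id, PkId) VALUES (1, 99)")  # FK violation

try:
    insert_all(con)
except sqlite3.IntegrityError as ex:
    # The failed transaction was already rolled back by the context manager;
    # this insert runs in a new, independent transaction.
    with con:
        con.execute("INSERT INTO ErrorHandlerTable (Message) VALUES (?)",
                    (str(ex),))

print(con.execute("SELECT COUNT(*) FROM Table1").fetchone()[0])            # 0
print(con.execute("SELECT COUNT(*) FROM ErrorHandlerTable").fetchone()[0]) # 1
```

The Table1 insert is gone (rolled back with the failed transaction), while the error row commits on its own.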


Oracle DataAdapter Update method inserts only one row when updateBatchSize=0

I am trying to implement batch insert into an Oracle database table using the ADO.NET OracleDataAdapter class. I am following the steps below:
Create the DataTable object, specifying the column names.
Next, populate the DataTable with DataRows:
foreach (var item in product)
{
    //bulkUpdateHelper.AddEntity(item);
    var dataRow = _dataTable.NewRow();
    dataRow[1] = item.Description;
    dataRow[2] = item.Name;
    dataRow[3] = item.Price;
    dataRow[4] = item.Category;
    _dataTable.Rows.Add(dataRow);
}
I create the insert command and select command with the insert and select queries.
I add an insert parameter for each of the columns in the insert SQL.
Then I create the OracleDataAdapter object, specifying the select command.
I call the Update method with UpdateBatchSize = 0.
On calling Update, only one row gets inserted into the database, whereas the DataTable has more than one row. Also, when I try to set UpdateBatchSize > 0 I get an "object reference" error.
Please help if anyone has faced the same issue.
Thanks for the help and time.
I figured out the issue: after adding the DataRows, I was calling the AcceptChanges() method on the DataTable instead of on each DataRow.

drop global temporary table (attempt to create, alter or drop an index on temporary...) [duplicate]

In our project I create a global temporary table like this:
CREATE GLOBAL TEMPORARY TABLE v2dtemp (
    id NUMBER,
    GOOD_TYPE_GROUP VARCHAR2(250 BYTE),
    GOOD_CODE VARCHAR2(50 BYTE),
    GOOD_TITLE VARCHAR2(250 BYTE)
)
ON COMMIT PRESERVE ROWS;
The problem comes when I want to drop this table. Oracle will not let me drop it, and says:
ORA-14452: attempt to create, alter or drop an index on temporary table already in use
I have to use this table in some procedures, but it may change depending on other reports, so I always have to drop the table and then recreate it with the fields I need.
I have to do this for business reasons, so it is not possible for me to use regular tables or anything else; I can use only temp tables.
I tried ON COMMIT DELETE ROWS, but then when I call my procedure to use the data in this table there are no rows left; they have already been deleted.
Any help will be greatly appreciated, thanks in advance.
EDIT:
public void saveJSONBatchOpenJobs(final JSONArray array, MtdReport report) {
    dropAndCreateTable();
    String sql = "INSERT INTO v2d_temp " +
            "(ID, KARPARDAZ, GOOD_TYPE_GROUP, GOOD_CODE, GOOD_TITLE, COUNT, "
            + "FACTOR_COUNT, GHABZ_COUNT, DEAL_NO, DEAL_DATE, REQUEST_NO, REQUEST_DATE, "
            + "REQUEST_CLIENT, STATUS, TYPE, MTDREPORT_ID, GEN_SECURITY_DATA_ID) "
            + "VALUES (MTD_KARPARDAZ_OPEN_JOBS_SEQ.nextval, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
    getJdbcTemplate().batchUpdate(sql, new BatchPreparedStatementSetter() {
        @Override
        public void setValues(PreparedStatement ps, int i) throws SQLException {
            JSONArray values = array.getJSONArray(i);
            if (!values.get(0).equals("null"))
                ps.setString(1, values.get(0).toString());
            else
                ps.setNull(1, Types.VARCHAR);
            if (!values.get(1).equals("null"))
                ps.setString(2, values.get(1).toString());
            else
                ps.setNull(2, Types.VARCHAR);
            if (!values.get(2).equals("null"))
                ps.setString(3, values.get(2).toString());
            else
                ps.setNull(3, Types.VARCHAR);
            if (!values.get(3).equals("null"))
                ps.setString(4, values.get(3).toString());
            else
                ps.setNull(4, Types.VARCHAR);
            if (!values.get(4).equals("null"))
                ps.setBigDecimal(5, new BigDecimal(values.get(4).toString()));
            else
                ps.setNull(5, Types.NUMERIC);
            if (!values.get(5).equals("null"))
                ps.setBigDecimal(6, new BigDecimal(values.get(5).toString()));
            else
                ps.setNull(6, Types.NUMERIC);
            if (!values.get(6).equals("null"))
                ps.setBigDecimal(7, new BigDecimal(values.get(6).toString()));
            else
                ps.setNull(7, Types.NUMERIC);
            if (!values.get(7).equals("null"))
                ps.setString(8, values.get(7).toString());
            else
                ps.setNull(8, Types.VARCHAR);
            if (!values.get(8).equals("null"))
                ps.setDate(9, new Date(new Timestamp(values.getLong(8)).getTime()));
            else
                ps.setNull(9, Types.DATE);
            if (!values.get(9).equals("null"))
                ps.setString(10, values.get(9).toString());
            else
                ps.setNull(10, Types.VARCHAR);
            if (!values.get(10).equals("null"))
                ps.setDate(11, new Date(new Timestamp(values.getLong(10)).getTime()));
            else
                ps.setNull(11, Types.DATE);
            if (!values.get(11).equals("null"))
                ps.setString(12, values.get(11).toString());
            else
                ps.setNull(12, Types.VARCHAR);
            if (!values.get(12).equals("null"))
                ps.setString(13, values.get(12).toString());
            else
                ps.setNull(13, Types.VARCHAR);
            if (!values.get(13).equals("null"))
                ps.setString(14, values.get(13).toString());
            else
                ps.setNull(14, Types.VARCHAR);
            if (!values.get(14).equals("null"))
                ps.setLong(15, Long.parseLong(values.get(14).toString()));
            else
                ps.setNull(15, Types.NUMERIC);
            if (!values.get(15).equals("null"))
                ps.setLong(16, Long.parseLong(values.get(15).toString()));
            else
                ps.setNull(16, Types.NUMERIC);
        }

        @Override
        public int getBatchSize() {
            return array.size();
        }
    });
    String bulkInsert = "declare "
            + "type array is table of v2d_temp%rowtype;"
            + "t1 array;"
            + "begin "
            + "select * bulk collect into t1 from v2d_temp;"
            + "forall i in t1.first..t1.last "
            + "insert into vertical_design values t1(i);"
            + "end;";
    executeSQL(bulkInsert);
}
private void dropAndCreateTable() {
    String dropSql = "declare c int;"
            + "begin "
            + "select count(*) into c from user_tables where table_name = upper('v2d_temp');"
            + "if c = 1 then "
            + "execute immediate 'truncate table v2d_temp';"
            + "execute immediate 'drop table v2d_temp';"
            + "end if;"
            + "end;";
    executeSQL(dropSql);
    String createSql = "CREATE GLOBAL TEMPORARY TABLE v2d_temp (\n"
            + "DEAL_ID NUMBER,\n"
            + "id NUMBER,\n"
            + "karpardaz VARCHAR2(350),\n"
            + "GOOD_TYPE_GROUP VARCHAR2(250 BYTE),\n"
            + "GOOD_CODE VARCHAR2(50 BYTE),\n"
            + "GOOD_TITLE VARCHAR2(250 BYTE),\n"
            + "COUNT NUMBER,\n"
            + "FACTOR_COUNT NUMBER,\n"
            + "GHABZ_COUNT NUMBER,\n"
            + "DEAL_NO VARCHAR2(50 BYTE),\n"
            + "DEAL_DATE DATE,\n"
            + "REQUEST_NO VARCHAR2(50 BYTE),\n"
            + "REQUEST_DATE DATE,\n"
            + "REQUEST_CLIENT VARCHAR2(250 BYTE),\n"
            + "STATUS VARCHAR2(250 BYTE),\n"
            + "TYPE VARCHAR2(250 BYTE),\n"
            + "GEN_SECURITY_DATA_ID NUMBER(10),\n"
            + "MTDREPORT_ID NUMBER\n"
            + ")\n"
            + "ON COMMIT PRESERVE ROWS";
    executeSQL(createSql);
}
private void executeSQL(String sql) {
    Connection con = null;
    try {
        con = getConnection();
        Statement st = con.createStatement();
        st.execute(sql);
    } catch (SQLException e) {
        e.printStackTrace();
    } finally {
        if (con != null) {
            try {
                con.close();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        }
    }
}
Oracle global temporary tables are not transient objects. They are proper heap tables: we create them once, and any session can use them to store data which is visible only to that session.
The temporary aspect is that the data does not persist beyond one transaction or one session. The key implementation detail is that the data is written to a temporary tablespace, not a permanent one. However, the data is still written to, and read from, disk, so there is a notable overhead to using global temporary tables.
The point is that we are not supposed to drop and recreate temporary tables. If you're trying to port SQL Server-style logic into Oracle, you should consider using PL/SQL collections instead, to maintain temporary data in memory.
The specific cause of ORA-14452 is that we cannot drop a global temporary table which has session scope persistence if it has contained data during the session. Even if the table is currently empty...
SQL> create global temporary table gtt23 (col1 number)
2 on commit preserve rows
3 /
Table created.
SQL> insert into gtt23 values (1);
1 row created.
SQL> commit;
Commit complete.
SQL> delete from gtt23;
1 row deleted.
SQL> commit;
Commit complete.
SQL> drop table gtt23;
drop table gtt23
*
ERROR at line 1:
ORA-14452: attempt to create, alter or drop an index on temporary table already in use
SQL>
The solution is to end the session and re-connect, or (somewhat bizarrely) to truncate the table and then drop it.
SQL> truncate table gtt23;
Table truncated.
SQL> drop table gtt23;
Table dropped.
SQL>
If some other session is using the global temporary table, and that is possible (hence the "global" nomenclature), then you won't be able to drop the table until all those sessions disconnect.
So the real solution is to learn to use global temporary tables properly: create specific global temporary tables to match each report. Or, as I say, use PL/SQL collections instead. Or even just learn to write well-tuned SQL: often we use temporary tables as a workaround for a poorly-written query which could be rescued with a better access path.
Having looked at your full code, the flow seems even more bizarre:
Drop and re-create a global temporary table
Populate temporary table
Select from temporary table into PL/SQL array
Insert into actual table using bulk insert from PL/SQL array
There's so much overhead and wasted activity in here. All you need to do is take the data you insert into v2d_temp and directly populate vertical_design, ideally with an INSERT INTO ... SELECT * FROM statement. You will require some pre-processing to convert a JSON array into a query but that is easy to achieve in either Java or PL/SQL.
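The single-statement alternative described above can be sketched in Python's stdlib sqlite3 (columns and data are simplified stand-ins for the originals): populate the staging table from the JSON array, then move everything with one set-based INSERT ... SELECT instead of a select-into-array/forall loop:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TEMP TABLE v2d_temp (good_code TEXT, good_title TEXT);
CREATE TABLE vertical_design (good_code TEXT, good_title TEXT);
""")

# Stand-in for the incoming JSONArray
rows = json.loads('[["G1", "Widget"], ["G2", "Gadget"]]')
con.executemany("INSERT INTO v2d_temp VALUES (?, ?)", rows)

# One set-based statement replaces the whole PL/SQL bulk-collect/forall block
con.execute("INSERT INTO vertical_design SELECT * FROM v2d_temp")

print(con.execute("SELECT COUNT(*) FROM vertical_design").fetchone()[0])  # 2
```

In the real schema the same idea would be a single INSERT INTO vertical_design SELECT ... FROM v2d_temp, or, better still, inserting the JSON data into vertical_design directly.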
It seems certain to me that global temporary tables are not the right solution for your scenario.
"our boss or other persons persist to do something through their way, so you cannot change that"
What you have is a Boss Problem not a Programming Problem. Consequently it is off-topic as far as StackOverflow goes. But here are some suggestions anyway.
The key thing to remember is that we are not talking about a compromise on some sub-optimal architecture: what your boss proposes clearly won't work in a multi-user environment. So your options are:
Ignore the ORA-14452 error, proceed into production and then use the "but you told me to" defence when it all goes horribly wrong. This is the weakest play.
Covertly junk the global tables and implement something which will work in a multi-user scenario. This is high-risk because you have no defence if you botch the implementation.
Speak to your boss. Tell them you're running into the ORA-14452 error; say you have done some investigation and it appears to be a fundamental issue with using global temporary tables in this fashion, but obviously you may have overlooked something. Then ask them how they got around this problem when they implemented it before. This can go several ways: maybe they have a workaround, maybe they'll realise that this is the wrong way to use global temporary tables, maybe they'll tell you to get lost. Either way, this is the best approach: you've raised your concerns at the appropriate level.
Good luck.
Killing sessions is the only way to work around ORA-14452 errors. Use the data dictionary to find the other sessions using the temporary table and kill them
with a statement like alter system kill session 'sid,serial#,@inst_id';.
This is the "official" solution mentioned in the Oracle support document
HOW TO DIAGNOSE AN ORA-14452 DURING DROP OF TEMPORARY TABLE (Doc ID 800506.1). I've successfully used this method in the past, for a slightly different reason.
Killing sessions requires elevated privileges and can be tricky; it may require killing, waiting, and trying again several times.
This solution is almost certainly a bad idea for many reasons. Before you implement it, you should try to use this information as proof that this is the wrong way to do it. For example: "Oracle documentation says this method requires the ALTER SYSTEM privilege, which is dangerous and raises some security issues...".
Another approach worth considering here is to rethink whether you need a temporary table at all.
Overusing temporary tables is a very common practice among programmers who transition to Oracle from other RDBMSs. On other systems it has become natural to write data to a table and then select from it, so they do not realise that features such as common table expressions can implicitly materialise a temporary result set that can be referenced in other parts of the same query.
The failure is usually compounded by not understanding that PL/SQL-based row-by-row processing is inferior in almost every respect to SQL-based set processing: slower, more complex to code, wordier, and more error-prone. And Oracle offers so many other powerful features for SQL processing that even when row-by-row logic is required, it can generally be integrated directly into a SQL SELECT statement anyway.
As a side note: in 20 years of writing Oracle code for reporting and ETL, I have needed row-by-row processing only once, and have never needed a temporary table.
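As an illustration of the CTE point, here is a minimal sketch in Python's stdlib sqlite3 (the sales table and figures are invented): the intermediate result set lives only for the duration of one query, with no temporary table to create, populate, or drop:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("north", 10), ("north", 30), ("south", 5)])

# The CTE materialises per-region totals inline; no staging table needed.
total = con.execute("""
    WITH region_totals AS (
        SELECT region, SUM(amount) AS total
        FROM sales
        GROUP BY region
    )
    SELECT MAX(total) FROM region_totals
""").fetchone()[0]
print(total)  # 40
```

Oracle supports the same WITH-clause syntax, so result sets you would otherwise park in a temp table can usually be expressed this way.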
You can see all running sessions by running:
SELECT * FROM V$SESSION
To kill a session, you have a few options. The following command waits for the current in-flight transactions to complete before disconnecting:
ALTER SYSTEM DISCONNECT SESSION 'sid,serial#' POST_TRANSACTION
The following command, on the other hand, is just like kill -9: it wipes out the O/S process:
ALTER SYSTEM DISCONNECT SESSION 'sid,serial#' IMMEDIATE
The latter is the most effective at killing the session that is preventing you from removing a temp table. However, use it with caution, as it is a pretty brute-force option. Once the session is terminated, you can remove the temp table without getting errors.
You can read more about the different ways of killing a session here (I'm not affiliated with this website, I came across it myself when I had a similar problem to yours):
https://chandlerdba.wordpress.com/2013/07/25/killing-a-session-dead/

Mybatis Spring Transactional multiple delete constraint violated

I am working with Spring 3 and MyBatis 3.
Everything works fine except when I want to do a cascading delete.
I've got two tables with a middle M-M relationship table, something like Table1 ---> MiddleTable ---> Table2.
I want to delete from the middle table and after that delete the related data in Table2.
I am using a transactional method:
@Transactional
public void relacionaReservaLibreBonoLibre(ParametrosRelacionReservaBono params) throws Exception {
    ReservaBean r = rm.buscarReservaPorPK(params.getReserva());
    for (BonoJson b : params.getListaBonosAdd()) {
        HotelBean h = hm.buscaHotelPorCodHotel(b.getHotel());
        EstacionBean e = em.buscaEstacionPorEstacionYHotel(b.getEstacion(), h.getCnHotel());
        DocumentoBean db = new DocumentoBean();
        db.setCnEstacion(e.getCnEstacion());
        db.setCnHotel(h.getCnHotel());
        db.setCnTipDoc(r.getCnTipoDoc());
        db.setFlLibre(true);
        db.setTeDoc(b.getCodBono());
        Integer docId = dm.insertaDocumento(db);
        DocumentoReservaBean drb = new DocumentoReservaBean();
        drb.setCnDoc(docId);
        drb.setCnReserva(r.getCnReserva());
        drm.insertaDocumentoReserva(drb);
    }
    for (BonoJson b : params.getListaBonosQuit()) {
        HotelBean h = hm.buscaHotelPorCodHotel(b.getHotel());
        EstacionBean e = em.buscaEstacionPorEstacionYHotel(b.getEstacion(), h.getCnHotel());
        ReservaDocumentoReservaBean filtro = new ReservaDocumentoReservaBean();
        filtro.setTeDoc(b.getCodBono());
        filtro.setCnReserva(r.getCnReserva());
        filtro.setFlLibre(true);
        List<ReservaDocumentoReservaBean> resPrev = rdm.getReservaDocumentos(filtro);
        for (ReservaDocumentoReservaBean resPart : resPrev) {
            DocumentoReservaBean drb = new DocumentoReservaBean();
            drb.setCnDocReserva(resPart.getCnDocReserva());
            drm.eliminaDocumentoReservaPorPK(drb);
            DocumentoBean db = new DocumentoBean();
            db.setCnDoc(resPart.getCnDoc());
            dm.eliminaDocumentoPorPK(db);
        }
    }
}
It works great, except that when it executes
dm.eliminaDocumentoPorPK(db);
it throws the constraint violation from Table2 to the middle table, which was supposed to have already been deleted by
drm.eliminaDocumentoReservaPorPK(drb);
Any hint?
Thanks in advance.
There are several options:
1. Delete from Table2 first, and then delete from MiddleTable.
2. If it is acceptable (that is, the MiddleTable entity owns the Table2 entity), change the foreign key in the database so that rows in Table2 are deleted by cascade when the row in MiddleTable is deleted: just add ON DELETE CASCADE to the definition of the foreign key from Table2 to MiddleTable.
3. Make the foreign key constraint deferred, if your database supports this.
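The ON DELETE CASCADE option can be demonstrated with Python's stdlib sqlite3 (MiddleTable/Table2 mirror the question; sqlite must have foreign key enforcement switched on per connection):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # required for FK (and cascade) enforcement
con.executescript("""
CREATE TABLE MiddleTable (Id INTEGER PRIMARY KEY);
CREATE TABLE Table2 (
    Id INTEGER PRIMARY KEY,
    MiddleId INTEGER REFERENCES MiddleTable(Id) ON DELETE CASCADE
);
INSERT INTO MiddleTable VALUES (1);
INSERT INTO Table2 VALUES (10, 1);
""")

# Deleting the parent row silently removes the dependent Table2 row too.
con.execute("DELETE FROM MiddleTable WHERE Id = 1")
print(con.execute("SELECT COUNT(*) FROM Table2").fetchone()[0])  # 0
```

With the cascade in place, the application code only needs the one delete against MiddleTable.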

Retrieving autoincrement value when using @JdbcInsert

I'm trying to store a row in a DB2 database table where the primary key is an autoincrement. This works fine, but I'm having trouble wrapping my head around how to retrieve the primary key value for further processing after successfully inserting the row. How do you achieve this? @JdbcInsert only returns the number of rows that were inserted...
Since there does not seem to be a way to do this with SSJS (at least not to me), I moved this particular piece of logic from my SSJS controller to a Java helper bean I created for JDBC-related tasks. A Statement is capable of handing back generated keys (when executeUpdate() is called with Statement.RETURN_GENERATED_KEYS). So I still create my connection via @JdbcGetConnection, but then hand it into the bean. This is the interesting part of the bean:
/**
 * SQL contains the INSERT statement.
 */
public int executeUpdate(Connection conn, String SQL) throws SQLException {
    int returnVal;
    Statement stmt = conn.createStatement();
    stmt.executeUpdate(SQL, Statement.RETURN_GENERATED_KEYS);
    if (!conn.getAutoCommit()) conn.commit();
    ResultSet keys = stmt.getGeneratedKeys();
    if (keys.next()) {
        returnVal = keys.getInt(1);
    } else {
        returnVal = -1;
    }
    return returnVal;
}
If you insert more than one row at a time, you'll need to change the key retrieval handling, of course.
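The JDBC bean above needs a live DB2 connection, so it cannot be run standalone here; as a self-contained sketch of the same per-insert key retrieval, Python's stdlib sqlite3 exposes the generated key of each insert through the cursor (table t and the sample names are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")

keys = []
for name in ("alice", "bob", "carol"):
    cur = con.execute("INSERT INTO t (name) VALUES (?)", (name,))
    keys.append(cur.lastrowid)  # generated key for *this* insert

print(keys)  # [1, 2, 3]
```

When inserting several rows in one statement, the JDBC equivalent is to iterate over the whole getGeneratedKeys() ResultSet rather than reading just the first row.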
In newer DB2 versions you can turn every INSERT into a SELECT to get the automatically generated key columns. An example:
select keycol from FINAL TABLE (insert into table (col1, col2) values (?, ?))
where keycol is the name of your identity column.
The SELECT can be executed with the same @Function as your usual queries.

Get the latest inserted PK after submitting a linq insert stored procedure

I have a stored procedure that updates a table, called via LINQ, e.g. (this is just example code, by the way):
using (DataContext db = new DataContext())
{
    db.sp_Insert_Client(textboxName.Text, textBoxSurname.Text);
}
What I would like to know is how to retrieve (if possible) the newly generated primary key of the row inserted above, as I need it as a foreign key to complete another insert.
You have to modify your stored procedure to return that value from the database and then regenerate your LINQ mapping to pick up that change in your ORM files. After that, your sp_Insert_Client method will return an integer.
The other way to do it is to add another parameter to the query and mark it as an output parameter.
To get the last inserted ID inside your SP, use SCOPE_IDENTITY: http://msdn.microsoft.com/pl-pl/library/ms190315.aspx
I think you need to retrieve the value using an output parameter, as explained here: Handling stored procedure output parameters, a Scott Gu post which explains it clearly.
Procedure
For you:
create procedure nameofprocedure
@id int output
as
begin
--insert into your table here
--retrieve identity value
select @id = scope_identity();
end