Is there a way to make DBUnit "Delete All" for all tables prior to inserting required rows?

(Newbie DBUnit question Alert!)
It appears that, for each table, DBUnit deletes all the records from that table and then performs its insert operation.
This means you can't rely on the order of tables in the XML load file to clear the data down, because any constraining record would be deleted and recreated before the dependent records in the other tables could be removed (I hope that makes sense!).
If the system 'deleted all' from every table in the XML first, in order (reversed or otherwise), this problem would not exist.
So is there a way of making it do this?
I am using MS SQL with InsertIdentityOperation(DatabaseOperation.CLEAN_INSERT).
Cheers.

I inherited from DBTest and added a delete-all step to the setup before running the normal routine.
CLEAN_INSERT then effectively becomes an automatic DELETE_ALL plus INSERT in getSetUpOperation.
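A minimal sketch of that approach, assuming DBUnit's standard DBTestCase base class and stock operation classes (the class name is illustrative and the dataset wiring is left out):

import org.dbunit.DBTestCase;
import org.dbunit.operation.CompositeOperation;
import org.dbunit.operation.DatabaseOperation;
import org.dbunit.ext.mssql.InsertIdentityOperation;

public abstract class CleanAllDbTestCase extends DBTestCase {
    @Override
    protected DatabaseOperation getSetUpOperation() throws Exception {
        // DELETE_ALL clears every table in the dataset in reverse order (children
        // before parents); the identity-aware insert then runs in dataset order.
        return new CompositeOperation(
                DatabaseOperation.DELETE_ALL,
                new InsertIdentityOperation(DatabaseOperation.INSERT));
    }
}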

Related

Oracle Data Integrator (ODI) 12.2.1 - Loadplan record count issue

I came across a scenario in my project. I am loading data from a file into a table using ODI, and I run my interfaces through a loadplan. I have 1,000 records in my source file and I am also getting 1,000 records in the target table, but when I check the ODI loadplan execution log it shows the number of inserts as 2,000. Can anyone please help, or is this an ODI bug?
The number of inserts does not only count the inserts into the target table but also all the inserts happening in temporary tables. Depending on the knowledge modules (KMs) used in an interface, ODI might load data into a C$_ table (LKM) or an I$_ table (IKM/CKM). The rows loaded into these tables are also counted.
You can look at the code generated in the Operator to check whether your KMs are using these temporary tables. You can also simulate an execution to see the code that would be generated.

What's a way I can save a trigger "template" in Oracle?

Let's say I created a table test_table in development just to test a trigger; this trigger would then be reused on many other tables (future and existing).
So I code the trigger, test it, all good! But at the moment, if I want to replicate it, I will have to copy it from test_table's triggers and edit it.
So if someone deletes the table accidentally, the trigger is gone, and I don't have it saved anywhere else. And if I just want to delete random test tables in our database, I can't.
What's a recommended way to save a trigger as a "template" in Oracle, so I can reuse it on other tables and not have it depend on a random test table, or any table?
There are a lot of ways you can keep a copy of your TRIGGER SQLText.
Here's a few examples.
In Version Control:
You can use any of the many version control tools to maintain a versioned history for any code you like, including SQL, PL/SQL, etc. You can rewind time, view differences over time, track changes to the template, even allow concurrent development.
As a Function:
If you want the template to live in the database, you can create a FUNCTION (or PACKAGE) that takes the target USER and TABLE as parameters and replaces the USER and TABLE values in its template to generate the SQL text required to create or replace the template TRIGGER on the target TABLE. You can make it EDITIONABLE as needed.
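A minimal sketch of that, assuming the target tables share a LAST_MODIFIED column; the function name, placeholders, and trigger body are illustrative:

create or replace function build_audit_trigger_ddl (
  p_owner in varchar2,
  p_table in varchar2
) return clob
is
  -- The stored "template"; #OWNER# and #TABLE# are placeholders to substitute.
  l_template clob :=
       'create or replace trigger #OWNER#.trg_#TABLE#_audit' || chr(10)
    || 'before insert or update on #OWNER#.#TABLE#'          || chr(10)
    || 'for each row'                                        || chr(10)
    || 'begin'                                               || chr(10)
    || '  :new.last_modified := systimestamp;'               || chr(10)
    || 'end;';
begin
  return replace(replace(l_template, '#OWNER#', p_owner), '#TABLE#', p_table);
end;
/

A PL/SQL block can then run execute immediate on the returned DDL to (re)create the trigger on any target table.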
In a Table:
You can always just create a TABLE that holds the template TRIGGER SQL text as a CLOB or VARCHAR2. It would need to live somewhere it isn't likely to be "randomly" deleted, though. You can AUDIT changes to the TABLE's data to see the template change over time; Oracle has tons of auditing options.
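A minimal sketch of such a table, with classic object auditing on it (names are illustrative; traditional auditing also needs the AUDIT_TRAIL parameter enabled):

create table trigger_templates (
  template_name varchar2(100) primary key,
  sql_text      clob          not null,
  updated_at    timestamp     default systimestamp not null
);

-- Track who changes the templates and when:
audit insert, update, delete on trigger_templates by access;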
In the logs:
You can just log all DDL. If you set ENABLE_DDL_LOGGING, the DDL log (XML, written under the database's diagnostic directory) will have a copy of every DDL statement, categorized, along with when and where it came from.
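Turning it on is a one-liner (it requires ALTER SYSTEM privileges); after that, DDL is captured in the DDL log under the ADR (diagnostic) directory:

alter system set enable_ddl_logging = true;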

Unique Constraint Violated on empty table

I recently received a case which my client came across the ORA-00001: unique constraint violated error. This happened when a program tried to truncate two tables and then insert data into them.
From the error-log file, the truncate step was completed:
delete from INTERNET_GROUP
delete from INTERNET_ITEM
BUT right after this, the insert into the INTERNET_GROUP table triggered the ORA-00001 error. I am wondering if there are any database settings related to this error? I have never used Oracle and am wondering if Oracle puts a lock on a row with a SELECT statement, in which case the row would be locked and somehow not deleted? Any help is appreciated.
Please know that there is a difference between truncate and delete. You say you truncated the table, but you mention "delete from". That is entirely different.
If you're sure you want to empty the tables, try replacing the deletes with:
truncate table internet_group reuse storage;
Mind you that a commit is not necessary with the truncate statement, as it is considered a DDL (data definition language) statement and not a DML (data manipulation language) statement like updates and deletes.
Also, there is no row locking on selects. But changes are only applied and visible to other sessions in the database when committed.
I guess that is what happened: you deleted the records but did not execute a commit (yet) and subsequently inserted new records.
edit:
I now realize you're probably inserting multiple records....
The other option might be that the data itself causes a violation. Can you please provide the constraints on the table? There must be a primary key or unique constraint; you might want to check your dataset against it.
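A quick way to see what that constraint covers (a sketch against the standard dictionary views, run as the table owner):

select c.constraint_name, c.constraint_type, cc.column_name, cc.position
from   user_constraints  c
join   user_cons_columns cc on cc.constraint_name = c.constraint_name
where  c.table_name      = 'INTERNET_GROUP'
and    c.constraint_type in ('P', 'U')
order  by c.constraint_name, cc.position;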

Oracle: Are grants removed when an object is dropped?

I currently have 2 schemas, A and B.
B has a table, and A executes selects, inserts, and updates on it.
In our sql scripts, we have granted permissions to A so it can complete its tasks.
grant select on B.thetable to A
etc., etc.
Now, table 'thetable' is dropped and another table is renamed to take its place at least once a day:
rename someothertable to thetable
After doing this, we get an error when A executes a select on B.thetable.
ORA-00942: table or view does not exist
Is it possible that after executing the drop + rename operations, grants are lost as well?
Do we have to assign permissions once again ?
update
someothertable has no grants.
update2
The daily process that inserts data into 'thetable' executes a commit every N insertions, so we're not able to execute any rollback. That's why we use 2 tables.
Thanks in advance
Yes, once you drop the table, the grant is also dropped.
You could try to create a VIEW selecting from thetable and granting SELECT on that.
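A minimal sketch of that, assuming the view lives in schema B (the view name is illustrative):

create or replace view b.thetable_v as select * from b.thetable;
grant select on b.thetable_v to a;
-- After the nightly drop/rename the view is invalid; it revalidates on the next
-- access, or can be recompiled explicitly:
alter view b.thetable_v compile;

The grant is attached to the view, so it survives the swap of the underlying table.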
Your strategy of dropping a table regularly does not sound quite right to me though. Why do you have to do this?
EDIT
There are better ways than dropping the table every day.
Add another column to thetable that states if the row is valid.
Put an index on that column (or extend your existing index that you use to select from that table).
Add another condition to your queries to only consider "valid" rows or create a view to handle that.
When importing data, set the new rows to "new". Once the import is done, you can delete all "valid" rows and set the "new" rows to "valid" in a single transaction.
If the import fails, you can just rollback your transaction.
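A minimal sketch of that flow (the column name and status values are illustrative):

alter table b.thetable add (row_status varchar2(5) default 'VALID' not null);
create index thetable_status_ix on b.thetable (row_status);

-- The import loads its rows with row_status = 'NEW'; the swap is one transaction:
delete from b.thetable where row_status = 'VALID';
update b.thetable set row_status = 'VALID' where row_status = 'NEW';
commit;  -- or rollback if the import failed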
Perhaps the process that renames the table should also execute a procedure that does your grants for you? You could even get fancy and query the dictionary for existing grants and apply those to the renamed table.
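For the fancy version, the existing grants can be pulled from the dictionary and replayed after the rename; a sketch, run as B before the drop:

select 'grant ' || privilege || ' on thetable to ' || grantee
       || case when grantable = 'YES' then ' with grant option' else '' end as grant_ddl
from   user_tab_privs_made
where  table_name = 'THETABLE';

-- Spool or loop over the output and run each generated statement (e.g. with execute immediate).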
No:
"Oracle Database automatically transfers integrity constraints, indexes, and grants on the old object to the new object."
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9019.htm#SQLRF01608
You must have another problem.
Another approach would be to use a temporary table for the work you're doing. After all, it sounds like it is just the data that is transitory, at least in that table, and you wouldn't have to keep reapplying the grants each time you have a new set of data or create a new table.
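A minimal sketch of that idea (the staging table name is illustrative): stage the daily load in a global temporary table and refresh the permanent thetable from it, so thetable and its grants are never dropped.

create global temporary table b.thetable_stage
  on commit preserve rows
  as select * from b.thetable where 1 = 0;

-- The load process fills b.thetable_stage, validates it, then refreshes the
-- permanent table:
delete from b.thetable;
insert into b.thetable select * from b.thetable_stage;
commit;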

Maximum number of columns in a LINQ to SQL object?

I have 62 columns in a table under SQL 2005, and LINQ to SQL doesn't handle the updates, though reading works just fine. I tried re-adding the table to the model and created a new data model, but nothing worked. I'm guessing I've hit a maximum number of columns limit on an object. Can anyone explain that?
I suspect there is some issue with an identity or timestamp column (something autogenerated on the SQL server). Make sure that any column that is autogenerated is marked that way in the model. You might also want to look at how it is handling concurrency. If you have triggers that update any values on the row after it is updated (changing values) and it is checking all columns on updates, this would cause the update to fail. Typically I create my tables with a timestamp column -- LINQ2SQL picks this up when I generate the model and uses it alone for concurrency.
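A minimal T-SQL sketch of that habit (table and column names are illustrative): add a timestamp (rowversion) column and refresh the table in the designer, and LINQ to SQL will use that single column for its concurrency check.

alter table dbo.WideTable add LastChanged timestamp not null;
-- SQL Server maintains this value automatically on every insert and update.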
Solved; it was one of the following two:
- I was using a UniqueIdentifier column that was not set as the primary key.
- I set the unique ID as the primary key, checked the properties of the same column in Server Explorer and it was still not showing as the primary key, refreshed the connection, dropped the same table onto the model again, and voila.
So I assume I made a change to my model some time before, deleted the table from the model, and added it again from Server Explorer without refreshing the connection, and it never worked.
Question is, does the VS Server Explorer maintain its own copy of the table schema and require a connection refresh every time a change is made in the database?
There is no limit to the number of columns LINQ to SQL will handle.
Have you got other tables updating successfully?
What else is different about how you are accessing the table content?
