Force Hibernate to issue DELETEs prior to INSERTs to avoid unique constraint violations? - oracle

Background: http://jeffkemponoracle.com/2011/03/11/handling-unique-constraint-violations-by-hibernate
Our table is:
BOND_PAYMENTS (BOND_PAYMENT_ID, BOND_NUMBER, PAYMENT_ID)
There is a Primary key constraint on BOND_PAYMENT_ID, and a Unique constraint on (BOND_NUMBER, PAYMENT_ID).
The application uses Hibernate, and allows a user to view all the Payments linked to a particular Bond; and it allows them to create new links, and delete existing links. Once they’ve made all their desired changes on the page, they hit “Save”, and Hibernate does its magic to run the required SQL on the database. Apparently, Hibernate works out which records need to be deleted, which need to be inserted, and leaves the rest untouched. Unfortunately, it does the INSERTs first, then it does the DELETEs.
If the user deletes a link to a payment, then changes their mind and re-inserts a link to the same payment, Hibernate quite happily tries to insert the new link before deleting the old one. Since these inserts/deletes run as separate SQL statements, Oracle validates the constraint immediately on the insert and raises ORA-00001: unique constraint violated.
We know of only two options:
Make the constraint deferrable
Remove the unique constraint
Option 2 is not very palatable, because the constraint provides excellent protection from nasty application bugs that might allow inconsistent data to be saved. We went with option 1.
ALTER TABLE bond_payments ADD
CONSTRAINT bond_payment_uk UNIQUE (bond_number, payment_id)
DEFERRABLE INITIALLY DEFERRED;
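With the constraint deferred, the transient duplicate inside the transaction is tolerated and the check only happens at commit, by which time the offending row has also been deleted. Roughly like this (the key values below are purely illustrative):

-- Hibernate's INSERT-before-DELETE, inside one transaction
INSERT INTO bond_payments (bond_payment_id, bond_number, payment_id)
VALUES (101, 5000, 42);        -- temporarily duplicates an existing (bond_number, payment_id)
DELETE FROM bond_payments
WHERE bond_payment_id = 57;    -- the link the user removed on the page
COMMIT;                        -- the unique constraint is only checked here, and now passes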
The downside is that the index created to police this constraint is now a non-unique index, so it may be somewhat less efficient for queries. We have decided this is not a significant drawback in this particular case. Another downside (advised by Gary) is that it may suffer from a particular Oracle bug - although I believe we will be immune (at least, mostly) due to the way the application works.
Are there any other options we should consider?

From the problem you described, it's not clear if you have an entity BondPayment or if you have a Bond linked directly to a Payment. For now, I suppose you have the link between Payment and Bond through BondPayment. In this case, Hibernate is doing the right thing, and you'll need to add some logic in your app to retrieve the link and remove it (or change it). Something like this:
bond.getBondPayment().setPayment(newPayment);
You are probably doing something like this:
BondPayment bondPayment = new BondPayment();
bondPayment.setPayment(newPayment);
bondPayment.setBond(bond);
bond.setBondPayment(bondPayment);
In the first case, the BondPayment.id is kept, and you are just changing the payment for it. In the second case, it's a brand new BondPayment, and it will conflict with an existing record in the database.
I said that Hibernate is doing the right thing because it treats BondPayment as a "regular" entity, whose lifecycle is defined by your app. It's the same as having a User with a unique constraint on login, and trying to insert a second record with a duplicate login: Hibernate will accept it (it doesn't know whether the login already exists in the database) and your database will refuse it.

Related

Entity Framework 6 and Oracle: The table/view does not have a primary key defined. The Entity is read-only

I have an ASP.NET Core application that uses EF6 for dealing with a third-party application's database.
Everything is working as expected, but I'm unable to insert rows into a joining table.
I have two tables, Users and Groups, and a joining table GroupUser that identifies which users are members of which groups. Users has a PK of UserId, and Groups has a PK of GroupId.
GroupUser has only 3 columns: GroupId, UserId and another column (which is irrelevant for this post). The two foreign keys in this table identify a unique record.
Every time I try to insert into GroupUser, I get the inner exception
The table/view does not have a primary key defined. The entity is read-only
The error is correct. There is no PK, but both of the FKs are marked as keys in the model. Shouldn't VS be able to use those as a PK somehow?
The inserts used to work at some point, but required some manual modification of the .edmx file as XML in order to work. Unfortunately, our version control records containing this modification have been lost (and I wasn't the one originally working on this).
I've looked at and tried about a dozen articles around this, but they generally have to do with views instead of tables, so don't seem applicable to my case. The ones that did seem applicable didn't solve the issue.
The only other clue I have for a solution is this comment I found in the code:
// Important note: If you have updated the edmx file in the [redacted]
// project and suddenly start having problems, the edmx file may need to be
// edited as an xml file so that you can make changes necessary to make
// VS believe that the GroupUser table has a primary key. See revision #[redacted]
I'm able to insert into User and Group tables just fine, and as I've said, I don't have access to the revision log mentioned.
Edit: The database is for a third-party application, and unfortunately, it's not as simple as just modifying the table to add a PK. I wish it was. Problem would be solved. But I've been advised by the vendor not to make this change, as it may have unexpected consequences, and would void our support.
How can I 'trick' EF into thinking the table has a key? I'm also open to other workarounds. Modifying the DB structure is currently out of the question.

Is it a bad habit not to use foreign keys in Laravel migrations?

I am new to Laravel. In my tutorial video the teacher uses foreign keys in migrations, but I can create my relationships without them, using just belongsTo and hasMany. When I use foreign keys I cannot delete a post easily (the error is that you cannot delete it because the parent foreign key has children ......).
My question is: is my way good or not, and why?
Thank you all
Your way is good but I think foreign keys are better. Had you not had that foreign key, you would have deleted the post but all that post's children (referred to as orphans because they no longer have a parent) would have stuck around. In order to get around the foreign key error, you would need to first delete all the children for that post, and then delete the post.
The good news is foreign keys can also do this for you so you don't need to worry about keeping track of all the children. When you set up the foreign key, if you add the on delete cascade clause, the database will automatically remove all of the post's children for you when the post is deleted, and deleting a post without first deleting the children would no longer result in an error.
If it's your preference to keep the children around even when the post is deleted, you can use on delete set null instead, which would simply set the children's foreign key to null rather than delete the records.
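As a rough illustration at the database level (the posts and comments table and column names here are hypothetical, not from the question), the two behaviours look like this:

-- option A: children are removed automatically when the post is deleted
ALTER TABLE comments
  ADD CONSTRAINT comments_post_fk
  FOREIGN KEY (post_id) REFERENCES posts (id)
  ON DELETE CASCADE;

-- option B: children are kept, but their post_id is set to NULL
-- (post_id must be a nullable column for this to work)
ALTER TABLE comments
  ADD CONSTRAINT comments_post_fk
  FOREIGN KEY (post_id) REFERENCES posts (id)
  ON DELETE SET NULL;

-- with either option in place, this no longer raises a foreign key error
DELETE FROM posts WHERE id = 1;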
This is all useful for enforcing data integrity (databases should contain only accurate and valid data).
The answer really is not 'is this good practice in Laravel' so much as 'is this good practice for database management'.
There are many articles on the topic as to the good and bad side of using foreign keys. Here is a good explanation on the DBA stack exchange
https://dba.stackexchange.com/questions/168590/not-using-foreign-key-constraints-in-real-practice-is-it-ok
My personal preference is to use them to maintain data integrity. The real power comes in adding cascading deletes to the relationship (if applicable to your design).
It really comes down to how robust you want your database to be. The main reasons to use foreign keys in your database are:
To prevent actions that would destroy links between your tables
This prevents invalid data from being inserted into the foreign key column, as it has to point to an existing value (see the example after this list)
Defining foreign keys can also make some queries faster, depending on the database, because the foreign key column typically ends up indexed.
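For example, the "has to point to an existing value" rule in action (a sketch reusing the hypothetical posts and comments tables from the earlier example):

-- rejected by the foreign key: there is no posts row with id 999
INSERT INTO comments (id, post_id, body)
VALUES (1, 999, 'orphan attempt');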
From the Laravel point of view, the way you are doing it is the better way, as this is how one of Laravel's main teachers (Jeffrey Way) teaches it in the Getting Started with Laravel series.
Foreign keys define relationships between tables in your database, whereas Laravel's belongsTo() and hasMany() define relationships between your models at the application level.

Changing Primary Key in Oracle

I'm updating a table that was originally poorly designed. The table currently has a primary key that is the name of the vendor. This serves as a foreign key to many other tables. This has led to issues with the Vendor name initially being entered incorrectly or with typos that need to be fixed. Since it's the foreign key to relationships, this is more complicated than it's worth.
Current Schema:
Vendor_name(pk) Vendor_contact comments
Desired Schema:
id(pk) Vendor_name Vendor_contact comments
I want to update the primary key to be an auto-generated numeric key. The vendor name field needs to persist but no longer be the key. I'll also need to update the value of the foreign key on other tables and on join tables.
Is the best way to do this to create a new numeric id column on my Vendor table, crosswalk the id to vendor names and add a new foreign key with the new id as the foreign key, drop the foreign key of vendor name on those tables (per this post), and then somehow mark the id as the primary key and unmark the vendor name?
Or is there a more streamlined way of doing this that isn't so broken out?
It's important to note that only 5 users can access this table so I can easily shut them out for a period of time while these updates are made - that's not an issue.
I'm working with SQLDeveloper and Python/Django.
The biggest problem you have is all the application code which references VENDOR_NAME in the dependent tables. Not just using it to join to the parent table, but also relying on it to display the name without joining to VENDOR.
So, although having a natural key as a foreign key is a PITN, changing this situation is likely to generate a whole lot of work, with a marginal overall benefit. Be sure to get buy-in from all the stakeholders before starting out.
The way I would approach it is this:
Do a really thorough impact analysis
Ensure you have complete regression tests for all the functions which rely on the Vendor data
Create VENDOR_ID as a unique key on VENDOR
Add VENDOR_ID to all the dependent tables
Create a second foreign key on all the dependent tables, referencing VENDOR_ID
Ensure that the VENDOR_ID is populated whenever the VENDOR_NAME is.
That last point can be tackled either by fixing the insert and update statements on the dependent tables, or with triggers. Which approach you take will depend on your application design and also the number of tables involved. Obviously you want to avoid the performance hit of all those triggers if you can.
At this point you have an infrastructure which will support the new primary key but which still uses the old one. Why would you want to do this? Because you could go into Production like this without changing the application code. It gives you the option to move the application code to use VENDOR_ID across a broader time frame. Obviously, if developers have been keen on coding SELECT * FROM you will have issues that need addressing immediately.
Once you've fixed all the code you can drop VENDOR_NAME from all the dependent tables, and switch VENDOR_NAME to unique key and VENDOR_ID to primary key on the master table.
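A minimal sketch of the supporting DDL, assuming a dependent table called PURCHASE_ORDERS (only the VENDOR table and its columns come from the question; everything else is illustrative):

-- new surrogate key on the master table
ALTER TABLE vendor ADD (vendor_id NUMBER);
CREATE SEQUENCE vendor_seq;
UPDATE vendor SET vendor_id = vendor_seq.NEXTVAL;
ALTER TABLE vendor ADD CONSTRAINT vendor_id_uk UNIQUE (vendor_id);

-- mirror the new key on a dependent table and backfill it
ALTER TABLE purchase_orders ADD (vendor_id NUMBER);
UPDATE purchase_orders po
SET    po.vendor_id = (SELECT v.vendor_id
                       FROM   vendor v
                       WHERE  v.vendor_name = po.vendor_name);
ALTER TABLE purchase_orders ADD CONSTRAINT po_vendor_fk
  FOREIGN KEY (vendor_id) REFERENCES vendor (vendor_id);

-- later, once the application code has been moved over:
-- drop VENDOR_NAME from the dependent tables, make VENDOR_ID the primary key
-- and VENDOR_NAME a unique key on VENDOR.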
If you're on 11g you should check out Edition-Based Redefinition. It's designed to make this sort of exercise an awful lot easier. Find out more.
I would do it this way:
create your new sequence
create table temp as select your_sequence.nextval as id, vendor_name, vendor_contact, comments from vendor
rename the original table to something like vendor_old
add the primary key and other constraints to the new table
rename the new table to the old name
Testing is essential and you must ensure no one is working on the database except you when this is done.
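Spelled out as a sketch (the sequence, constraint and temp table names are illustrative):

create sequence vendor_seq;

create table temp as
  select vendor_seq.nextval as id, vendor_name, vendor_contact, comments
  from   vendor;

alter table vendor rename to vendor_old;
alter table temp add constraint vendor_pk primary key (id);
alter table temp rename to vendor;

-- then re-point the foreign keys on the dependent tables at the new id column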

How to insert rows in phpMyAdmin

I have a database which I've opened in phpMyAdmin. I clicked the "Insert" button, which has an icon showing one row being inserted between two others.
When I actually try to insert a row, I get the following error:
1062 - Duplicate entry '294' for key 'PRIMARY'
How do I get phpMyAdmin to insert a row (presumably by increasing all the higher-numbered rows by 1) as the icon and the term "Insert" implies? It only seems to want to "Add" a row to the end, not "Insert" it.
As I said, the icon specifically shows one row being inserted between two others, and this is what I want to do. How do I get it to do what it claims it will do?
First, "INSERT" is standard SQL terminology for putting something in the database; it doesn't specifically mean "putting it between two existing values". I see how the icon can be a bit confusing, but when "insertting" data there is no difference between putting something at the end or in the middle of the database. For that matter, there's no real inherent order to data stored in a database; you can select many different ways to sort it when you display the data (and phpMyAdmin generally does a good job of guessing what's reasonable), but data just exists. You can select to sort it by the primary key or alphabetically by user name or any means you wish.
Second, your primary key shouldn't change. It's the key that holds your data together; if you start changing that your references from other tables will be messed up (see below). So don't change that.
Third, if you have your primary key set up with auto_increment (the A_I checkbox in phpMyAdmin), then you shouldn't ever need to set it or worry about it yourself. It's all managed by MySQL. If you aren't happy with the order and want to move 294 to 295 so you can insert something else at 294, then your database design needs tweaking because that's not how auto_incrementing primary keys are designed to work. As a simple solution, you may wish to create another field called "sort_value" or something that you can change.
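For example (a MySQL-flavoured sketch; the items table and sort_value column are made up for illustration):

ALTER TABLE items ADD COLUMN sort_value INT;
UPDATE items SET sort_value = id;        -- seed it from the current key order
-- now sort_value can be renumbered freely, and rows are displayed with:
SELECT * FROM items ORDER BY sort_value;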
Which all brings me to the root cause of your trouble: you're trying to create a new row while reusing an existing auto_increment value, and MySQL is smart enough to know this is a bad idea.
So as I said above, changing your primary key (whether or not it's auto generated) is a bad idea, but it may not be obvious why if you only have one table. But relational databases are designed so that you can reference tables from other tables, so for instance a customer database might have a table for "customers", "products", and "purchases" where the purchases table references the primary key ID from both customers and products...imagine the carnage your data would see if you then changed the value of those keys in the customer table. You'd show customers associated with some other customer's purchases. So it might not make sense in your database, but overall that's the best way to handle things.
If you really, really don't want to change your database structure, don't reference that key from any other tables, and don't want to listen to my advice, you should be able to simply turn off the auto_increment function on your primary key and reorder them however you wish.
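If you do go down that road, removing auto_increment is just a column redefinition. A sketch, assuming an INT primary key named id on a hypothetical items table:

ALTER TABLE items MODIFY id INT NOT NULL;   -- drops AUTO_INCREMENT, keeps the primary key
-- then renumber manually, in an order that avoids collisions, e.g.
UPDATE items SET id = 295 WHERE id = 294;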

Maintaining uniqueness of records

There are almost 3 million financial transaction records in my database. These records are loaded from external files containing the following fields, which are mapped to the table's columns.
Account, Date, Amount, Particulars/Description/Details/Narration
Now there is a need to maintain uniqueness across both the already loaded records and future records.
Since there was no uniqueness in the external files that have already been loaded, I think we have to update the existing records by making a unique key from the given fields; but it is quite clear that those fields in the external file may contain duplicates.
How can we maintain uniqueness such that we can identify whether a transaction from a file has already been loaded? All suggestions are welcome.
Edit 1
The currently loaded records are confirmed to be valid; the need to maintain uniqueness has only just come up because some missing records need to be loaded from older or missing files.
Edit 2
Existing records may have duplicates based on the given 4 fields, i.e. the same values for Account, Date, Amount and Particulars for two or more valid transactions, but it is certain that these records are valid even with duplicate values.
Now, for loading the missing records, we need to identify whether a record has already been loaded, so that we don't load a record which is already loaded. So, to me, it looks very hard to know whether a record has already been loaded based on these fields; I see it as beyond the limits of these fields.
Edit 3
The situation has changed now and this is no longer a valid question, but it is worth keeping here for others. It has been agreed to add a unique key to the records and check against this key for duplication.
Note - following some clarification from the OP this answer is not relevant to their scenario. The problem is a political or business problem rather than a technical one. I will leave this answer as a solution to a hypothetical question because it may still be of use to some future seekers.
My other response addresses the OP's actual situation.
It seems like you need a compound unique key:
alter table your_table add constraint your_table_uk
unique (Account, Date, Amount, Particulars)
using index
particulars seems a bit woolly as a source of uniqueness, but presumably an account can have more than one transaction for the same amount on any given day, so you need all four columns to guarantee uniqueness of the row.
Or perhaps, as @ypercube suggests, only (Account, Date, Particulars) are necessary.
I have suggested a unique key rather than a primary key constraint because composite primary keys are bad news when it comes to enforcing foreign keys. In this case I would suggest you add a synthetic primary key, populated with a sequence.
You say the loaded records have a proven validity, but if that is not the case, change the ALTER TABLE statement to use the EXCEPTIONS INTO clause to find the duplicated rows. You will need a special table to capture the constraint violations. Find out more.
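A sketch of that variant, keeping the hypothetical your_table and the column names described in the question (substitute your real column names; DATE, for instance, is a reserved word in Oracle):

-- one-off: create the standard EXCEPTIONS table
@?/rdbms/admin/utlexcpt.sql

-- the same constraint as above, but capturing the offending rows;
-- the statement still fails, but the ROWIDs of the duplicates are recorded
alter table your_table add constraint your_table_uk
  unique (Account, Date, Amount, Particulars)
  exceptions into exceptions;

select t.*
from   your_table t
where  t.rowid in (select row_id from exceptions);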
"Existing records may have duplicate records based on given 4 fields
i.e. same values for Account, Date, Amount and Particulars for two or
more valid transactions, but it is sure that these records are valid
even with duplicate values."
But how can anybody tell, if there is no token of uniqueness in the loaded data or the source files? What does validity even mean?
"Now for loading missing records we need to identify if a record is
already loaded or not so that we don't load a record which is already
loaded."
Without an existing source of uniqueness you cannot do this. Because if you have two rows for a given combination of (Account, Date, Amount, Particulars) and that's okay, what are the rules for determining that a third instance of (Account, Date, Amount, Particulars) is a record which has already been loaded, hence invalid, or a record which has not been loaded, hence valid?
"So, to me, it looks very hard to know if a record is already loaded
based on these fields. I see it as beyond the limits of these fields"
You're right to say that the solution cannot be found in the data as you describe it. But the solution is actually very simple. You go to the people who have asserted the validity of the loaded records and present them with a list of these additional records. They'll be able to use their skill and judgement to tell you which records are valid, and you load those.
" it is my duty to find the solution"
No it is not your duty. Right now the duty lies on the shoulders of the data owner to define their data set accurately, and that includes identifying a business key. They are the ones abrogating their responsibilities.
Under the circumstances you have three choices:
Refuse to load any further records until the data owner does their duty.
Load all the records presented to you for loading, without any validation.
Use the horrible NOVALIDATE syntax.
NOVALIDATE is a way of enforcing validation rules for future rows but ignoring violations in the existing data. Basically it's a technical kludge for a political problem.
SQL> select * from t23
/
      COL1 COL2
---------- --------------------
         1 MR KNOX
         1 MR KNOX
         2 FOX IN SOCKS
         2 FOX IN SOCKS
SQL> create index t23_idx on t23(col1,col2)
/
Index created.
SQL> alter table t23 add constraint t23_uk
unique (col1,col2) novalidate
/
Table altered.
SQL> insert into t23 values (2, 'FOX IN SOCKS')
/
insert into t23 values (2, 'FOX IN SOCKS')
*
ERROR at line 1:
ORA-00001: unique constraint (APC.T23_UK) violated
SQL>
Note that you need to pre-create a non-unique index before adding the constraint. If you don't, the database will build a unique index to police the constraint, and that index would enforce uniqueness regardless of the NOVALIDATE clause.
I describe the NOVALIDATE as horrible because it is. It bakes data corruption into the database. But it is the closest thing you'll get to a solution.
This approach completely ignores the notion of "validity". So it will reject records which perhaps should have been loaded because they represent a "valid" nth occurrence of (Account, Date, Amount, Particulars). This is unavoidable. The good news is, nobody will be able to tell, because there are no defined rules for establishing validity.
Whatever option you choose, it is crucial that you explain it clearly to your boss, the data owner, the data owner's boss and whoever else you think fit, and get their written assent to go ahead. Otherwise, sometime down the line people will discover that the database is full of duplicate rows or somebody will complain that a "valid" record hasn't been loaded, and it will all be your fault ... unless you have a signed piece of paper with authorisation from the appropriate top brass.
Good luck
Haki's suggestion of using MERGE has the same effect as NOVALIDATE, because it would load new records and suppress all duplicates. However, it is even more of a kludge: it doesn't address the notion of uniqueness at all. Anybody who had INSERT or UPDATE access would still be able to add any rows they liked. So this approach would only work if you could completely lock down privileges on that table so that its data can only be manipulated through MERGE and no other DML. It depends on whether ongoing uniqueness matters. Again, a business decision.
Sounds like you need an upsert - or, as Oracle calls it, MERGE.
A MERGE operation between two tables allows you to handle two common situations:
The record already exists in the target table and I need to do something with it - either update it or do nothing.
The record does not exist in the target table - insert it.
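A minimal sketch, assuming the incoming file has been loaded into a staging table called STG_TRANSACTIONS and the target is TRANSACTIONS (both names are made up; the column list follows the question, with TXN_DATE standing in for the Date field because DATE is a reserved word):

MERGE INTO transactions t
USING stg_transactions s
ON (    t.account     = s.account
    AND t.txn_date    = s.txn_date
    AND t.amount      = s.amount
    AND t.particulars = s.particulars)
WHEN NOT MATCHED THEN
  INSERT (account, txn_date, amount, particulars)
  VALUES (s.account, s.txn_date, s.amount, s.particulars);
-- rows already present (judged on the four fields) are simply skipped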
