I'm updating a table that was originally poorly designed. The table currently has a primary key that is the name of the vendor, and that name serves as a foreign key to many other tables. This has led to issues where a vendor name was initially entered incorrectly or with typos that need to be fixed. Because the name is the foreign key in all those relationships, fixing it is more complicated than it should be.
Current Schema:
Vendor_name (PK), Vendor_contact, comments
Desired Schema:
id (PK), Vendor_name, Vendor_contact, comments
I want to update the primary key to be an auto-generated numeric key. The vendor name field needs to persist but no longer be the key. I'll also need to update the value of the foreign key on other tables and on join tables.
Is the best way to do this to create a new numeric id column on my Vendor table, crosswalk the ids to the vendor names, add a new foreign key column holding the new id on the dependent tables, drop the vendor-name foreign keys on those tables (per this post), and then somehow promote the id to primary key and demote the vendor name?
Or is there a more streamlined way of doing this that doesn't involve so many separate steps?
It's important to note that only 5 users can access this table so I can easily shut them out for a period of time while these updates are made - that's not an issue.
I'm working with SQLDeveloper and Python/Django.
The biggest problem you have is all the application code which references VENDOR_NAME in the dependent tables. Not just using it to join to the parent table, but also relying on it to display the name without joining to VENDOR.
So, although having a natural key as a foreign key is a PITN, changing this situation is likely to generate a whole lot of work, with a marginal overall benefit. Be sure to get buy-in from all the stakeholders before starting out.
The way I would approach it is this:
Do a really thorough impact analysis
Ensure you have complete regression tests for all the functions which rely on the Vendor data
Create VENDOR_ID as a unique key on VENDOR
Add VENDOR_ID to all the dependent tables
Create a second foreign key on all the dependent tables referencing VENDOR_ID
Ensure that the VENDOR_ID is populated whenever the VENDOR_NAME is.
That last point can be tackled either by fixing the insert and update statements on the dependent tables, or with triggers. Which approach you take will depend on your application design and also on the number of tables involved. Obviously you want to avoid the performance hit of all those triggers if you can; a sketch of both the new columns and the trigger option follows.
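A rough sketch of what the VENDOR_ID steps and the trigger option could look like, assuming an Oracle VENDOR table, a new sequence VENDOR_SEQ, and a hypothetical dependent table called PURCHASE_ORDERS (substitute your real table, column and constraint names):

CREATE SEQUENCE vendor_seq;

-- New surrogate key column, populated and protected by a unique constraint.
ALTER TABLE vendor ADD (vendor_id NUMBER);
UPDATE vendor SET vendor_id = vendor_seq.NEXTVAL;
ALTER TABLE vendor MODIFY (vendor_id NOT NULL);
ALTER TABLE vendor ADD CONSTRAINT vendor_id_uk UNIQUE (vendor_id);

-- Add the new column to a dependent table, crosswalk it from the name,
-- and create the second foreign key alongside the existing one.
ALTER TABLE purchase_orders ADD (vendor_id NUMBER);
UPDATE purchase_orders po
   SET vendor_id = (SELECT v.vendor_id
                      FROM vendor v
                     WHERE v.vendor_name = po.vendor_name);
ALTER TABLE purchase_orders ADD CONSTRAINT po_vendor_id_fk
  FOREIGN KEY (vendor_id) REFERENCES vendor (vendor_id);

-- Trigger option: keep VENDOR_ID populated whenever VENDOR_NAME is,
-- until the application code has been moved over.
CREATE OR REPLACE TRIGGER po_vendor_id_trg
  BEFORE INSERT OR UPDATE OF vendor_name ON purchase_orders
  FOR EACH ROW
BEGIN
  SELECT v.vendor_id
    INTO :NEW.vendor_id
    FROM vendor v
   WHERE v.vendor_name = :NEW.vendor_name;
END;
/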
At this point you have an infrastructure which will support the new primary key but which still uses the old one. Why would you want to do this? Because you could go into Production like this without changing the application code. It gives you the option to move the application code to use VENDOR_ID across a broader time frame. Obviously, if developers have been keen on coding SELECT * FROM you will have issues that need addressing immediately.
Once you've fixed all the code you can drop VENDOR_NAME from all the dependent tables, and switch VENDOR_NAME to unique key and VENDOR_ID to primary key on the master table.
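Once the code no longer references VENDOR_NAME on the dependent tables, the final swap might look roughly like this (same hypothetical PURCHASE_ORDERS table as above; the interim foreign key has to be dropped and recreated because it references the unique constraint being replaced):

-- Drop the old name-based foreign key and column on each dependent table,
-- and temporarily drop the interim foreign key on VENDOR_ID.
ALTER TABLE purchase_orders DROP CONSTRAINT po_vendor_name_fk;
ALTER TABLE purchase_orders DROP COLUMN vendor_name;
ALTER TABLE purchase_orders DROP CONSTRAINT po_vendor_id_fk;

-- Swap the keys on the master table.
ALTER TABLE vendor DROP PRIMARY KEY;            -- was on VENDOR_NAME
ALTER TABLE vendor DROP CONSTRAINT vendor_id_uk;
ALTER TABLE vendor ADD CONSTRAINT vendor_pk PRIMARY KEY (vendor_id);
ALTER TABLE vendor ADD CONSTRAINT vendor_name_uk UNIQUE (vendor_name);

-- Recreate the surrogate-key foreign key against the new primary key.
ALTER TABLE purchase_orders ADD CONSTRAINT po_vendor_id_fk
  FOREIGN KEY (vendor_id) REFERENCES vendor (vendor_id);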
If you're on 11g you should check out Edition-Based Redefinition. It's designed to make this sort of exercise an awful lot easier. Find out more.
I would do it this way (a sketch of these steps follows the list):
create your new sequence
create table temp as select your_sequence.nextval as id, vendor_name, vendor_contact, comments from vendor
rename the original table to something like vendor_old
add the primary key and other constraints to the new table
rename the new table to the old name
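A sketch of those steps, using the column names from the question (note that any foreign keys on the dependent tables still point at the renamed VENDOR_OLD table and have to be dropped and recreated against the new one):

CREATE SEQUENCE vendor_seq;

CREATE TABLE vendor_temp AS
  SELECT vendor_seq.NEXTVAL AS id, vendor_name, vendor_contact, comments
    FROM vendor;

ALTER TABLE vendor RENAME TO vendor_old;

-- Primary key on the new id, plus a unique constraint so the name
-- can still be referenced while the dependent tables are migrated.
ALTER TABLE vendor_temp ADD CONSTRAINT vendor_pk PRIMARY KEY (id);
ALTER TABLE vendor_temp ADD CONSTRAINT vendor_name_uk UNIQUE (vendor_name);

ALTER TABLE vendor_temp RENAME TO vendor;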
Testing is essential and you must ensure no one is working on the database except you when this is done.
I have an ASP.NET Core application that uses EF6 for dealing with a third-party application's database.
Everything is working as expected, but I'm unable to insert rows into a joining table.
I have two tables, Users and Groups, and a joining table GroupUser that identifies which users are members of which groups. Users has a PK of UserId, and Groups has a PK of GroupId.
GroupUser has only 3 columns: GroupId, UserId and another column (which is irrelevant for this post). The two foreign keys in this table identify a unique record.
Every time I try to insert into GroupUser, I get the inner exception
The table/view does not have a primary key defined. The entity is read-only
The error is correct. There is no PK, but both of the FKs are marked as keys in the model. Shouldn't VS be able to use those as a PK somehow?
The inserts used to work at some point, but required some manual modification of the .edmx file as XML in order to work. Unfortunately, our version control records containing this modification have been lost (and I wasn't the one originally working on this).
I've looked at and tried about a dozen articles around this, but they generally have to do with views instead of tables, so don't seem applicable to my case. The ones that did seem applicable didn't solve the issue.
The only other clue I have for a solution is this comment I found in the code:
// Important note: If you have updated the edmx file in the [redacted]
// project and suddenly start having problems, the edmx file may need to be
// edited as an xml file so that you can make changes necessary to make
// VS believe that the GroupUser table has a primary key. See revision #[redacted]
I'm able to insert into User and Group tables just fine, and as I've said, I don't have access to the revision log mentioned.
Edit: The database is for a third-party application, and unfortunately, it's not as simple as just modifying the table to add a PK. I wish it was. Problem would be solved. But I've been advised by the vendor not to make this change, as it may have unexpected consequences, and would void our support.
How can I 'trick' EF into thinking the table has a key? I'm also open to other workarounds. Modifying the DB structure is currently out of the question.
I am new to Laravel. In my tutorial video the teacher uses foreign() in the migration, but I can create my relationships without it and just use belongsTo() and hasMany(). When I use foreign() I cannot delete a post easily (the error says you cannot delete because the parent foreign key has a child ......).
My question is: is my way good or not? And why?
Thank you all
Your way is good but I think foreign keys are better. Had you not had that foreign key, you would have deleted the post but all that post's children (referred to as orphans because they no longer have a parent) would have stuck around. In order to get around the foreign key error, you would need to first delete all the children for that post, and then delete the post.
The good news is foreign keys can also do this for you so you don't need to worry about keeping track of all the children. When you set up the foreign key, if you add the on delete cascade clause, the database will automatically remove all of the post's children when you delete the post, and deleting a post without first deleting the children will no longer result in an error.
If it's your preference to keep the children around even when the post is deleted, you can use on delete set null instead which would simply set the children's foreign key to null rather than delete the record.
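For reference, a minimal sketch of what this looks like at the database level, assuming hypothetical posts and comments tables (a Laravel migration's onDelete('cascade') ends up generating a constraint along these lines):

CREATE TABLE posts (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  title VARCHAR(255) NOT NULL
);

CREATE TABLE comments (
  id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  post_id INT UNSIGNED NULL,
  body TEXT NOT NULL,
  -- ON DELETE CASCADE removes a post's comments automatically when the
  -- post is deleted; ON DELETE SET NULL would keep them with post_id = NULL.
  CONSTRAINT comments_post_id_fk FOREIGN KEY (post_id)
    REFERENCES posts (id) ON DELETE CASCADE
);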
This is all useful for enforcing data integrity (databases should contain only accurate and valid data).
The answer really is not 'is this good practice in Laravel' so much as 'is this good practice for database management'.
There are many articles on the topic as to the good and bad side of using foreign keys. Here is a good explanation on the DBA stack exchange
https://dba.stackexchange.com/questions/168590/not-using-foreign-key-constraints-in-real-practice-is-it-ok
My personal preference is to use them to maintain data integrity. The real power comes in adding cascading deletes to the relationship (if applicable to your design).
It really comes down to how robust you want your database to be. The main reasons to use foreign keys in your database are:
To prevent actions that would destroy links between your tables
This prevents invalid data from being inserted into the foreign key column, since it has to point to an existing value (see the sketch after this list)
Defining foreign keys can also help query performance on some databases, since the foreign key column is typically indexed (MySQL's InnoDB, for example, requires an index on it), which can speed up joins
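A quick illustration of the first two points, again assuming hypothetical posts and comments tables where comments.post_id is a foreign key referencing posts.id:

-- Rejected with a foreign key constraint error if no post with id 9999
-- exists, so orphaned or invalid references can never be inserted.
INSERT INTO comments (post_id, body) VALUES (9999, 'orphan comment');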
Well, from the Laravel point of view, the way you are doing it is the better way, as this is how one of the main Laravel teachers (Jeffrey Way) teaches it in the Getting Started with Laravel series.
Foreign keys are the way to define relationships between tables in your database, whereas Laravel's belongsTo() and hasMany() define those relationships at the application level in Laravel.
I have a database which I've opened in phpMyAdmin. I clicked the "Insert" button, which has an icon showing one row being inserted between two others.
When I actually try to insert a row, I get the following error:
1062 - Duplicate entry '294' for key 'PRIMARY'
How do I get phpMyAdmin to insert a row (presumably by increasing all the higher-numbered rows by 1) as the icon and the term "Insert" imply? It only seems to want to "Add" a row to the end, not "Insert" it.
As I said, the icon specifically shows one row being inserted between two others, and this is what I want to do. How do I get it to do what it claims it will do?
First, "INSERT" is standard SQL terminology for putting something in the database; it doesn't specifically mean "putting it between two existing values". I see how the icon can be a bit confusing, but when "insertting" data there is no difference between putting something at the end or in the middle of the database. For that matter, there's no real inherent order to data stored in a database; you can select many different ways to sort it when you display the data (and phpMyAdmin generally does a good job of guessing what's reasonable), but data just exists. You can select to sort it by the primary key or alphabetically by user name or any means you wish.
Second, your primary key shouldn't change. It's the key that holds your data together; if you start changing that your references from other tables will be messed up (see below). So don't change that.
Third, if you have your primary key set up with auto_increment (the A_I checkbox in phpMyAdmin), then you shouldn't ever need to set it or worry about it yourself. It's all managed by MySQL. If you aren't happy with the order and want to move 294 to 295 so you can insert something else at 294, then your database design needs tweaking because that's not how auto_incrementing primary keys are designed to work. As a simple solution, you may wish to create another field called "sort_value" or something that you can change.
Which all brings me to the root cause of your trouble: you're trying to create a new row while reusing an existing auto_increment value, and MySQL is smart enough to know this is a bad idea.
So as I said above, changing your primary key (whether or not it's auto generated) is a bad idea, but it may not be obvious why if you only have one table. But relational databases are designed so that you can reference tables from other tables, so for instance a customer database might have a table for "customers", "products", and "purchases" where the purchases table references the primary key ID from both customers and products...imagine the carnage your data would see if you then change the value of those keys in the customer table. You'd show customers associated with some other customer's purchases. So it might not make sense in your database, but overall that's the best way to handle things.
If you really, really don't want to change your database structure, don't reference that key from any other tables, and don't want to listen to my advice, you should be able to simply turn off the auto_increment function on your primary key and reorder them however you wish.
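If the goal is simply to control display order, the extra "sort_value" column suggested above is the safer route; a minimal sketch, assuming a hypothetical table named items:

-- New column used only for ordering; the primary key never changes.
ALTER TABLE items ADD COLUMN sort_value INT NOT NULL DEFAULT 0;

-- Give rows whatever sort_value places them where you want, then display
-- the data in that order.
SELECT * FROM items ORDER BY sort_value, id;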
Background: http://jeffkemponoracle.com/2011/03/11/handling-unique-constraint-violations-by-hibernate
Our table is:
BOND_PAYMENTS (BOND_PAYMENT_ID, BOND_NUMBER, PAYMENT_ID)
There is a Primary key constraint on BOND_PAYMENT_ID, and a Unique constraint on (BOND_NUMBER, PAYMENT_ID).
The application uses Hibernate, and allows a user to view all the Payments linked to a particular Bond; and it allows them to create new links, and delete existing links. Once they’ve made all their desired changes on the page, they hit “Save”, and Hibernate does its magic to run the required SQL on the database. Apparently, Hibernate works out which records need to be deleted, which need to be inserted, and leaves the rest untouched. Unfortunately, it does the INSERTs first, then it does the DELETEs.
If the user deletes a link to a payment, then changes their mind and re-inserts a link to the same payment, Hibernate quite happily tries to insert it then delete it. Since these inserts/deletes are running as separate SQL statements, Oracle validates the constraint immediately on the first insert and issues ORA-00001 unique constraint violated.
We know of only two options:
Make the constraint deferrable
Remove the unique constraint
Option 2 is not very palatable, because the constraint provides excellent protection from nasty application bugs that might allow inconsistent data to be saved. We went with option 1.
ALTER TABLE bond_payments ADD
CONSTRAINT bond_payment_uk UNIQUE (bond_number, payment_id)
DEFERRABLE INITIALLY DEFERRED;
The downside is that the index created to police this constraint is now a non-unique index, so may be somewhat less efficient for queries. We have decided this is not as great a detriment for this particular case. Another downside (advised by Gary) is that it may suffer from a particular Oracle bug - although I believe we will be immune (at least, mostly) due to the way the application works.
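For what it's worth, a small demonstration of why the deferred constraint sidesteps Hibernate's insert-before-delete ordering (hypothetical key values):

-- Within one transaction, the INSERT of the "re-created" link no longer
-- fails immediately, because the unique check is postponed until COMMIT.
INSERT INTO bond_payments (bond_payment_id, bond_number, payment_id)
VALUES (101, 'B-1', 5001);   -- duplicates an existing ('B-1', 5001) row
DELETE FROM bond_payments
 WHERE bond_payment_id = 42; -- the original ('B-1', 5001) row
COMMIT;                      -- constraint checked here, and it passes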
Are there any other options we should consider?
From the problem you described, it's not clear if you have an entity BondPayment or if you have a Bond linked directly to a Payment. For now, I suppose you have the link between Payment and Bond through BondPayment. In this case, Hibernate is doing the right thing, and you'll need to add some logic in your app to retrieve the link and remove it (or change it). Something like this:
bond.getBondPayment().setPayment(newPayment);
You are probably doing something like this:
BondPayment bondPayment = new BondPayment();
bondPayment.setPayment(newPayment);
bondPayment.setBond(bond);
bond.setBondPayment(bondPayment);
In the first case, the BondPayment.id is kept, and you are just changing the payment for it. In the second case, it's a brand new BondPayment, and it will conflict with an existing record in the database.
I said that Hibernate is doing the right thing because it treats BondPayment as a "regular" entity, whose lifecycle is defined by your app. It's the same as having a User with a unique constraint on login and trying to insert a second record with a duplicate login: Hibernate will accept it (it doesn't know whether the login already exists in the database) and your database will refuse it.
I have to add some security for a C#/.NET WinForms/Desktop application. I am using Oracle DB back-end.
The tables are simple: User (ID,Name), Role(ID,Role), UserRole(UserID,RoleID).
I am using the windows account name to populate User table. Role table will for now just be simply 'Admin','SuperUser','BasicUser'...
Since no two people could ever possibly have the same Windows account name... even though I do not control that name management (netops does, hence why I want to use Windows accounts so I don't have to manage it ;)). For the Role table, I should again never have a duplicate value - I control the input, and there will only be 3 (the tactical app is going away within a year). UserRole is a join table representing the many-to-many relationship between users and roles, so no surrogate key is justified.
Simple question - Why bother with 'ID' (int) in the User and Role table? Any point or advantage here? Is this one of those 'I've always done it this way' type things? Or have I just not done this in awhile and forget the reason?
Names change - primary key values must not. Abigail Smith becomes Abigail Jones and the username changes but a surrogate key protects against having to cascade those changes everywhere.
If you are using a surrogate key but there is a column or combination of columns which should be unique, then enforce that using a unique index. There's a good chance you'll want indexes on your user.name and role.role columns anyway, and a unique index is more space efficient and supplies useful metadata to the optimizer. If you have a surrogate key but don't have another combination of columns that uniquely identify a row then think again whether you have your entity definition right.
One caution. Especially for very narrow tables with few access paths, you may use an index-organized table. Oracle will only allow an index organized table on the primary key, but does allow foreign keys against a unique set of columns (if it is enforced by a unique constraint, not simply a unique index).
It is possible that you'll end up with a table where a unique ID is enforced through a unique index and treated as PK by an ORM and used as the parent for foreign key relationships, but the primary key (as defined in the DB) is the rolename/username/whatever because you want that as the driver for an index-organised table.
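A sketch of the two shapes described above, using a hypothetical APP_USERS table (adjust names and types to your schema):

-- Surrogate primary key plus a unique constraint on the natural identifier.
CREATE TABLE app_users (
  id   NUMBER        PRIMARY KEY,
  name VARCHAR2(100) NOT NULL,
  CONSTRAINT app_users_name_uk UNIQUE (name)
);

-- Index-organized alternative: the natural key drives the table
-- organization, while the surrogate id is enforced by a unique constraint
-- that foreign keys can still reference.
CREATE TABLE app_users_iot (
  name VARCHAR2(100) PRIMARY KEY,
  id   NUMBER        NOT NULL,
  CONSTRAINT app_users_iot_id_uk UNIQUE (id)
) ORGANIZATION INDEX;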
A surrogate key is not required on intersection tables, but here are a few reasons to do so:
Consistency: If every table has a single artificial key, you always know the key name when you know the table name.
Ease Of Use: Less typing — one key means ON and WHERE clauses are shorter and thus less error-prone (see the sketch after this list).
Interoperability: Some ORMs only work well with tables with a single primary key column.
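To illustrate the "ease of use" point, assuming a hypothetical USER_ROLE intersection table referenced by a hypothetical AUDIT_LOG child table:

-- With a single surrogate key on the intersection table:
SELECT *
  FROM user_role ur
  JOIN audit_log a ON a.user_role_id = ur.id;

-- With the composite natural key, every join repeats both columns:
SELECT *
  FROM user_role ur
  JOIN audit_log a ON a.user_id = ur.user_id
                  AND a.role_id = ur.role_id;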