I have a database which I've opened in phpMyAdmin. I clicked the "Insert" button, which has an icon showing one row being inserted between two others.
When I actually try to insert a row, I get the following error:
1062 - Duplicate entry '294' for key 'PRIMARY'
How do I get phpMyAdmin to insert a row (presumably by increasing all the higher-numbered rows by 1) as the icon and the term "Insert" implies? It only seems to want to "Add" a row to the end, not "Insert" it.
As I said, the icon specifically shows one row being inserted between two others, and this is what I want to do. How do I get it to do what it claims it will do?
First, "INSERT" is standard SQL terminology for putting something in the database; it doesn't specifically mean "putting it between two existing values". I see how the icon can be a bit confusing, but when "insertting" data there is no difference between putting something at the end or in the middle of the database. For that matter, there's no real inherent order to data stored in a database; you can select many different ways to sort it when you display the data (and phpMyAdmin generally does a good job of guessing what's reasonable), but data just exists. You can select to sort it by the primary key or alphabetically by user name or any means you wish.
Second, your primary key shouldn't change. It's the key that holds your data together; if you start changing that your references from other tables will be messed up (see below). So don't change that.
Third, if you have your primary key set up with auto_increment (the A_I checkbox in phpMyAdmin), then you shouldn't ever need to set it or worry about it yourself. It's all managed by MySQL. If you aren't happy with the order and want to move 294 to 295 so you can insert something else at 294, then your database design needs tweaking because that's not how auto_incrementing primary keys are designed to work. As a simple solution, you may wish to create another field called "sort_value" or something that you can change.
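For instance, a minimal sketch of that idea in MySQL (table and column names here are hypothetical):

-- Hypothetical table "articles": the auto_increment key stays stable;
-- a separate sort_value column carries the display order.
ALTER TABLE articles ADD COLUMN sort_value INT;
UPDATE articles SET sort_value = id;   -- seed the order from the existing keys

-- To "insert between" 293 and 294, open a gap in sort_value only:
UPDATE articles SET sort_value = sort_value + 1 WHERE sort_value >= 294;
INSERT INTO articles (title, sort_value) VALUES ('new row', 294);

-- Display rows in the custom order:
SELECT * FROM articles ORDER BY sort_value;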
Which all brings me to the root cause of your trouble: you're trying to create a new row while reusing an existing auto_increment value, and MySQL is smart enough to know this is a bad idea.
As I said above, changing your primary key (whether or not it's auto-generated) is a bad idea, though it may not be obvious why if you only have one table. Relational databases are designed so that tables can reference other tables: for instance, a customer database might have tables for "customers", "products", and "purchases", where the purchases table references the primary key IDs from both customers and products. Imagine the carnage your data would see if you then changed the values of those keys in the customer table; you'd show customers associated with some other customer's purchases. So it might not make sense in your database yet, but overall that's the best way to handle things.
If you really, really don't want to change your database structure, don't reference that key from any other tables, and don't want to listen to my advice, you should be able to simply turn off the auto_increment function on your primary key and reorder them however you wish.
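If you do go that route, a sketch in MySQL (again with hypothetical names):

-- Redefine the column without AUTO_INCREMENT; it stays the primary key,
-- but every id is now managed by hand.
ALTER TABLE articles MODIFY id INT NOT NULL;

-- Shift ids to open a gap (descending order avoids duplicate-key
-- collisions during the shift), then insert at the freed value:
UPDATE articles SET id = id + 1 WHERE id >= 294 ORDER BY id DESC;
INSERT INTO articles (id, title) VALUES (294, 'the row you wanted at 294');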
I created a view for a client-independent customizing table. The primary key consists of three components - first one being a secondary key on a check table. It is also used to form subsets of the table data. Altogether, it looks something like this:
Column  Key
------  --------------------------------
frmd    Secondary check table and Subset
attr1   KEY
attr2   KEY
url
But every time I try to insert a new key combination, the view dumps with DATA_LENGTH_0 (CX_SY_RANGE_OUT_OF_BOUNDS), because the report tries to access a string. Apparently it is somehow related to the field generictrp being set. What does this flag tell me, and how do I change it? Also, non-key components like url are not being fetched; the column is totally empty.
Modifying the customizing table via transaction SM30 works fine, but I don't want that ugly first column.
I've tried to recreate the view multiple times and I also compared the settings with existing customizing views.
Access is set to read, change, delete and insert
Display/Maintenance is allowed
Selecting everything from the View with SELECT works fine
EDIT
Picture 1: what I have
Picture 2: what I want; without the first key column...
The most likely reason I faced this problem is that I had a foreign key of type i (integer). I changed the type to n (numeric text) and regenerated everything.
Seems to work for now.
I'm updating a table that was originally poorly designed. The table currently has a primary key that is the name of the vendor. This serves as a foreign key to many other tables. This has led to issues with the Vendor name initially being entered incorrectly or with typos that need to be fixed. Since it's the foreign key to relationships, this is more complicated than it's worth.
Current Schema:
Vendor_name(pk) Vendor_contact comments
Desired Schema:
id(pk) Vendor_name Vendor_contact comments
I want to update the primary key to be an auto-generated numeric key. The vendor name field needs to persist but no longer be the key. I'll also need to update the value of the foreign key on other tables and on join tables.
Is the best way to do this to create a new numeric id column on my Vendor table, crosswalk the id to vendor names and add a new foreign key with the new id as the foreign key, drop the foreign key of vendor name on those tables (per this post), and then somehow mark the id as the primary key and unmark the vendor name?
Or is there a more streamlined way of doing this that isn't so broken out?
It's important to note that only 5 users can access this table so I can easily shut them out for a period of time while these updates are made - that's not an issue.
I'm working with SQLDeveloper and Python/Django.
The biggest problem you have is all the application code which references VENDOR_NAME in the dependent tables. Not just using it to join to the parent table, but also relying on it to display the name without joining to VENDOR.
So, although having a natural key as a foreign key is a PITN, changing this situation is likely to generate a whole lot of work, with a marginal overall benefit. Be sure to get buy-in from all the stakeholders before starting out.
The way I would approach it is this:
Do a really thorough impact analysis
Ensure you have complete regression tests for all the functions which rely on the Vendor data
Create VENDOR_ID as a unique key on VENDOR
Add VENDOR_ID to all the dependent tables
Create a second foreign key on all the dependent tables, referencing VENDOR_ID
Ensure that the VENDOR_ID is populated whenever the VENDOR_NAME is.
That last point can be tackled either by fixing the insert and update statements on the dependent tables, or with triggers. Which approach you take will depend on your application design and also on the number of tables involved. Obviously you want to avoid the performance hit of all those triggers if you can.
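A sketch of that infrastructure in Oracle SQL, assuming a single hypothetical dependent table called purchase_orders (your table and constraint names will differ):

-- New surrogate key on the master table.
CREATE SEQUENCE vendor_seq;
ALTER TABLE vendor ADD (vendor_id NUMBER);
UPDATE vendor SET vendor_id = vendor_seq.NEXTVAL;
ALTER TABLE vendor ADD CONSTRAINT vendor_id_uk UNIQUE (vendor_id);

-- Mirror it on the dependent table and backfill via the natural key.
ALTER TABLE purchase_orders ADD (vendor_id NUMBER);
UPDATE purchase_orders po
   SET vendor_id = (SELECT v.vendor_id
                      FROM vendor v
                     WHERE v.vendor_name = po.vendor_name);
ALTER TABLE purchase_orders ADD CONSTRAINT po_vendor_id_fk
  FOREIGN KEY (vendor_id) REFERENCES vendor (vendor_id);

-- Trigger variant of "VENDOR_ID is populated whenever VENDOR_NAME is".
CREATE OR REPLACE TRIGGER po_vendor_id_trg
  BEFORE INSERT OR UPDATE OF vendor_name ON purchase_orders
  FOR EACH ROW
BEGIN
  SELECT v.vendor_id
    INTO :NEW.vendor_id
    FROM vendor v
   WHERE v.vendor_name = :NEW.vendor_name;
END;
/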
At this point you have an infrastructure which will support the new primary key but which still uses the old one. Why would you want to do this? Because you could go into Production like this without changing the application code. It gives you the option to move the application code to use VENDOR_ID across a broader time frame. Obviously, if developers have been keen on coding SELECT * FROM you will have issues that need addressing immediately.
Once you've fixed all the code you can drop VENDOR_NAME from all the dependent tables, and switch VENDOR_NAME to unique key and VENDOR_ID to primary key on the master table.
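The final switch might look something like this (again only a sketch, with the hypothetical names from above; the CASCADE drops the FKs that pointed at the interim unique key, so they are recreated against the new primary key):

-- Drop the natural-key plumbing from the dependent table.
ALTER TABLE purchase_orders DROP CONSTRAINT po_vendor_name_fk;
ALTER TABLE purchase_orders DROP COLUMN vendor_name;

-- Swap the keys on the master table.
ALTER TABLE vendor DROP PRIMARY KEY;                      -- was VENDOR_NAME
ALTER TABLE vendor DROP CONSTRAINT vendor_id_uk CASCADE;
ALTER TABLE vendor ADD CONSTRAINT vendor_name_uk UNIQUE (vendor_name);
ALTER TABLE vendor ADD CONSTRAINT vendor_pk PRIMARY KEY (vendor_id);
ALTER TABLE purchase_orders ADD CONSTRAINT po_vendor_id_fk
  FOREIGN KEY (vendor_id) REFERENCES vendor (vendor_id);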
If you're on 11g you should check out Edition-Based Redefinition, which is designed to make this sort of exercise an awful lot easier.
I would do it this way:
create your new sequence
create table temp as select your_sequence.nextval as id, vendor_name, vendor_contact, comments from vendor;
rename the original table to something like vendor_old
add the primary key and other constraints to the new table
rename the new table to the old name
Testing is essential and you must ensure no one is working on the database except you when this is done.
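A sketch of those steps in Oracle SQL (sequence and constraint names are illustrative):

CREATE SEQUENCE vendor_seq;

-- New table with the surrogate id filled from the sequence.
CREATE TABLE vendor_new AS
  SELECT vendor_seq.NEXTVAL AS id,
         vendor_name, vendor_contact, comments
    FROM vendor;

RENAME vendor TO vendor_old;
ALTER TABLE vendor_new ADD CONSTRAINT vendor_pk PRIMARY KEY (id);
ALTER TABLE vendor_new ADD CONSTRAINT vendor_name_uk UNIQUE (vendor_name);
RENAME vendor_new TO vendor;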
Background: http://jeffkemponoracle.com/2011/03/11/handling-unique-constraint-violations-by-hibernate
Our table is:
BOND_PAYMENTS (BOND_PAYMENT_ID, BOND_NUMBER, PAYMENT_ID)
There is a Primary key constraint on BOND_PAYMENT_ID, and a Unique constraint on (BOND_NUMBER, PAYMENT_ID).
The application uses Hibernate, and allows a user to view all the Payments linked to a particular Bond; and it allows them to create new links, and delete existing links. Once they’ve made all their desired changes on the page, they hit “Save”, and Hibernate does its magic to run the required SQL on the database. Apparently, Hibernate works out which records need to be deleted, which need to be inserted, and leaves the rest untouched. Unfortunately, it does the INSERTs first, then it does the DELETEs.
If the user deletes a link to a payment, then changes their mind and re-inserts a link to the same payment, Hibernate quite happily tries to insert it then delete it. Since these inserts/deletes are running as separate SQL statements, Oracle validates the constraint immediately on the first insert and issues ORA-00001 unique constraint violated.
We know of only two options:
Make the constraint deferrable
Remove the unique constraint
Option 2 is not very palatable, because the constraint provides excellent protection from nasty application bugs that might allow inconsistent data to be saved. We went with option 1.
ALTER TABLE bond_payments ADD
CONSTRAINT bond_payment_uk UNIQUE (bond_number, payment_id)
DEFERRABLE INITIALLY DEFERRED;
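To illustrate the effect (the values here are hypothetical): with the constraint deferred, Oracle checks uniqueness only at commit, so the insert-before-delete ordering no longer matters.

-- Suppose (bond_number 'B1', payment_id 55) already exists as id 100.
INSERT INTO bond_payments (bond_payment_id, bond_number, payment_id)
     VALUES (101, 'B1', 55);        -- temporary duplicate: no error yet
DELETE FROM bond_payments
      WHERE bond_payment_id = 100;  -- duplicate resolved
COMMIT;                             -- constraint checked here, and passes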
The downside is that the index created to police this constraint is now a non-unique index, so may be somewhat less efficient for queries. We have decided this is not as great a detriment for this particular case. Another downside (advised by Gary) is that it may suffer from a particular Oracle bug - although I believe we will be immune (at least, mostly) due to the way the application works.
Are there any other options we should consider?
From the problem you described, it's not clear if you have an entity BondPayment or if you have a Bond linked directly to a Payment. For now, I suppose you have the link between Payment and Bond through BondPayment. In this case, Hibernate is doing the right thing, and you'll need to add some logic in your app to retrieve the link and remove it (or change it). Something like this:
bond.getBondPayment().setPayment(newPayment);
You are probably doing something like this:
BondPayment bondPayment = new BondPayment();
bondPayment.setPayment(newPayment);
bondPayment.setBond(bond);
bond.setBondPayment(bondPayment);
In the first case, the BondPayment.id is kept, and you are just changing the payment for it. In the second case, it's a brand new BondPayment, and it will conflict with an existing record in the database.
I said that Hibernate is doing the right thing because it treats BondPayment as a "regular" entity, whose lifecycle is defined by your app. It's the same as having a User with a unique constraint on login and trying to insert a second record with a duplicate login: Hibernate will accept it (it doesn't know whether the login already exists in the database) and your database will refuse it.
I have to add some security for a C#/.NET WinForms/Desktop application. I am using Oracle DB back-end.
The tables are simple: User (ID,Name), Role(ID,Role), UserRole(UserID,RoleID).
I am using the Windows account name to populate the User table. The Role table will for now simply be 'Admin', 'SuperUser', 'BasicUser'...
No two people could ever possibly have the same Windows account name, even though I do not control name management (netops does, hence why I want to use Windows accounts: I don't have to manage them ;)). For the Role table, I should again never have a duplicate value; I control the input, and there will only be 3 (it's a tactical app going away within a year). UserRole is a join table representing the many-to-many relationship of users and roles, so no surrogate key is justified.
Simple question - Why bother with 'ID' (int) in the User and Role table? Any point or advantage here? Is this one of those 'I've always done it this way' type things? Or have I just not done this in awhile and forget the reason?
Names change - primary key values must not. Abigail Smith becomes Abigail Jones and the username changes but a surrogate key protects against having to cascade those changes everywhere.
If you are using a surrogate key but there is a column or combination of columns which should be unique, then enforce that using a unique index. There's a good chance you'll want indexes on your user.name and role.role columns anyway, and a unique index is more space efficient and supplies useful metadata to the optimizer. If you have a surrogate key but don't have another combination of columns that uniquely identify a row then think again whether you have your entity definition right.
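As a sketch in Oracle DDL (names are illustrative; USERS and ROLES rather than User and Role, since USER is a reserved word):

CREATE TABLE users (
  id   NUMBER        NOT NULL,
  name VARCHAR2(128) NOT NULL,
  CONSTRAINT users_pk PRIMARY KEY (id)
);
CREATE UNIQUE INDEX users_name_uk ON users (name);

CREATE TABLE roles (
  id   NUMBER       NOT NULL,
  role VARCHAR2(30) NOT NULL,
  CONSTRAINT roles_pk PRIMARY KEY (id)
);
CREATE UNIQUE INDEX roles_role_uk ON roles (role);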
One caution: especially for very narrow tables with few access paths, you may want an index-organized table. Oracle will only allow an index-organized table on the primary key, but does allow foreign keys against a unique set of columns (if it is enforced by a unique constraint, not simply a unique index).
It is possible that you'll end up with a table where a unique ID is enforced through a unique index, treated as the PK by an ORM, and used as the parent for foreign key relationships, while the primary key (as defined in the DB) is the rolename/username/whatever, because you want that as the driver for an index-organized table.
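For the intersection table, that might look like this (a sketch, reusing the illustrative tables above):

-- Narrow intersection table with few access paths: a candidate for an
-- index-organized table keyed on the real composite key.
CREATE TABLE user_roles (
  user_id NUMBER NOT NULL REFERENCES users (id),
  role_id NUMBER NOT NULL REFERENCES roles (id),
  CONSTRAINT user_roles_pk PRIMARY KEY (user_id, role_id)
) ORGANIZATION INDEX;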
A surrogate key is not required on intersection tables, but here are a few reasons to do so:
Consistency: If every table has a single artificial key, you always know the key name when you know the table name.
Ease Of Use: Less typing — one key means ON and WHERE clauses are shorter and thus less error-prone.
Interoperability: Some ORMs only work well with tables with a single primary key column.
I created two applications on Heroku that were essentially identical. They started off different because I was testing uploading to Heroku and was having some challenges making the adjustments.
But now things seem to be working, but both have data that I would like to consolidate. Since they run off the same git repository, the code is the same, as are the migrations.
It seems like I need to bring the data down locally and merge it, but it's not exactly clear how to do that. I did some searches on Google and found nothing clear.
I'd like some help in terms of a step-by-step, I don't have a clear process.
1) I have two apps on Heroku, each with its own database. They have the same schemas;
2) I don't need to know where the data came from: I just need it all to reside in a single database
3) I would like to be able to do it with specific sql commands, versus manually opening (not sure how I would do that) and then munging since there are about 10 different interrelated tables.
Thanks!
There is no automatic way to do this, since there is no way to automate it in a generic fashion (a generic tool would make decisions you'd want to make yourself). Therefore it'll take a few steps, but you can leverage tools all along the way.
You can use Heroku's built-in tools to get a dump of the data. First download it and import it into your local database, then dump it out into a text file (SQL format).
Once you have one of the data sets in SQL as text, you need to edit the file a little. You need to make it an import script instead of a "rebuild the database" script that starts by deleting existing rows (or tables). If you're careful, it may already be in the right format, but likely something will be off.
There are a few gotchas you can run into:
If you have generated keys for records -- which you probably do -- then you'll have to renumber them in the data set you are importing. There may be a way to export the data without generated keys, but what I have done is use a quick search-and-replace to renumber them outside the range of the database I'm merging into.
If there are references to these keys in other tables (as foreign keys), you'll have to renumber those as well.
Some tables may be "reference tables", and the same on both systems, so you can skip importing them.
Some tables may not need to be merged.
Once you have the text file in good shape, run it locally and test it. If it messes things up, don't worry -- just download the production data (the one you're importing into) and try again. Iterate until you have everything working well locally. Then upload the file to Heroku.
I know it sounds like a few steps -- and it is. There are no tricky problems to solve, though. You just need to go slowly and carefully. Get someone to pair with you on it to help you think it through.
Assuming you don't need to eliminate duplicates, you can do this for each table:
insert into db1.tablea
select * from db2.tablea ;
Some complications:
if the tables have id columns, you need to make sure they don't clash, by replacing old ids with new ids
but, since the ids are the keys that link the tables, you need to make sure that new ids match in each table.
Here's a quick and dirty way to do it:
Find the highest id in any table in the first database.
Call this max_key_db1.
Then update all keys in the second database to be current_value plus max_key_db1.
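For example, a sketch assuming just two tables:

-- Highest id used anywhere in db1; add one subquery per table.
SELECT GREATEST(
         (SELECT MAX(id) FROM db1.tablea),
         (SELECT MAX(id) FROM db1.tableb)
       ) AS max_key_db1;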
Note that you'll need to update both primary keys and foreign keys for this to work, e.g.:
update db2.tablea set id = id + max_key_db1, foreign_id = foreign_id + max_key_db1;
update db2.tableb set id = id + max_key_db1, a_id = a_id + max_key_db1;
etc.
Now you have a self-consistent db2 with all keys (primary and foreign) having values that don't exist in db1; in other words, your keys are unique across both databases.
Now you can insert the rows from db2 into db1:
insert into db1.tablea
select * from db2.tablea ;
Note this won't work if the tables you're inserting into create their own ids using auto-increment or triggers; in this case you'll have to specify the columns explicitly and turn off any auto-generated ids:
insert into db1.tablea( id, foreign_id, col1, ...)
select id, foreign_id, col1, ... from db2.tablea;
Alternatively, you can leave db2 unaltered by doing this all in one step for each table:
insert into db1.tablea( id, foreign_id, col3, col4)
select id + max_key_db1, foreign_id + max_key_db1, col3, col4 from db2.tablea ;
Of course, do this all inside a transaction, and don't commit until you're sure you've gotten every table and all are correct. And do this on copies of your databases.
Now, since you used the highest key in db1 regardless of table, your ids likely won't be consecutive, but who cares? Keys are keys. What you will need to do is reset any auto_increment or sequence, for each table, so that the next auto-generated key is higher than the highest key in that table. How you do that depends on which RDBMS you're using.
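For example, in MySQL, or with a sequence in PostgreSQL (the sequence name shown is the hypothetical default for a serial column):

-- MySQL: move the counter past the highest id now in the table.
ALTER TABLE db1.tablea AUTO_INCREMENT = 200001;  -- any literal > MAX(id)

-- PostgreSQL: set the backing sequence from the current maximum.
SELECT setval('tablea_id_seq', (SELECT MAX(id) FROM tablea));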
Going to close this -- decided to just manually select the right data and re-enter it so I can do some error checking -- a pain but this approach doesn't seem to have an easy answer. Note to self: keep all production data in production versus test-driving.
If you only need to do this once, you could get it done easily using MS Access.
You can work out any conflicts by creating queries in the visual query designer.
You can connect to a SQLite3 database by using the ODBC driver for SQLite3 and link those tables in Access.