After truncating my database and re-inserting the data with a script, my Spring application no longer sees the latest data. Somehow, when I create a new user, Hibernate still creates the user with id 1, which already exists after committing my script. Refreshing or restarting the project doesn't help. Any ideas? Thanks.
Databases like Oracle maintain a sequence for generating primary keys. You will have to reset that sequence to a value past your re-inserted rows; it will then generate primary keys starting from the next number in the sequence.
You can do the same in Postgres, and this will solve your problem.
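For example (the sequence names below are hypothetical; check your schema for the real ones):

-- Postgres: restart the sequence past the highest existing id
ALTER SEQUENCE users_id_seq RESTART WITH 100;

-- Oracle (older versions have no RESTART clause; drop and recreate instead)
DROP SEQUENCE users_seq;
CREATE SEQUENCE users_seq START WITH 100;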
I'm trying to auto-generate IDs for my entity, but it's not generating them. Instead, it starts from 1 when there already exists an entry with id "1" in my DB. Why is it not generating id "9" for my new entity?
Typically, when creating a table with GenerationType.IDENTITY on postgres, Hibernate will set up the id column plus a database sequence to manage this id.
By convention the sequence name will be "tablename_id_seq". E.g., for the table ad_group_action there will be a corresponding sequence ad_group_action_id_seq. You can connect to the database to double-check the actual sequence name created.
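For instance, assuming the id column is literally named id, you can look it up like this:

-- Ask Postgres which sequence backs the id column
SELECT pg_get_serial_sequence('ad_group_action', 'id');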
The sequence just starts from 1 and increments each time a row is inserted by Hibernate.
But if there are pre-existing rows -- or if rows with existing IDs are inserted "manually" into the table -- those rows can conflict with the sequence.
One solution is to simply reset the sequence (from pgAdmin or another database client) to start at a higher number (say 100), using something like:
ALTER SEQUENCE ad_group_action_id_seq RESTART WITH 100;
Now Hibernate will not conflict with the existing rows (assuming their max id is < 100).
Alternatively, when inserting rows manually, omit the id column and let postgres automatically set them. This way the table and the sequence will always be in sync.
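As a sketch (the name column is hypothetical), a manual insert that stays in sync would look like:

-- Omit the id so Postgres pulls the next value from the sequence
INSERT INTO ad_group_action (name) VALUES ('example');
-- The id just assigned in this session
SELECT currval('ad_group_action_id_seq');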
I have a few tables in my database whose primary keys are auto-generated using Hibernate's seqhilo generator configuration. We need to archive these records and, at a later point, be able to restore them in case of a business scenario. My question is: if I restore these tables with simple insert statements, will that suffice, or should I worry about the sequence generator? I would like to keep the same IDs, not newly generated ones. To be clear, these re-inserts will happen via direct SQL, not via Hibernate.
I am working with an Oracle database (11g Release 2). Imagine multiple connections doing the following simultaneously:
Start transaction
Check if a specific value exists in a table of unique values
If the value does not exist, insert it
Commit transaction
It seems to me that the only way to prevent conflicts is to block connections from performing the above 4-step sequence while any other connection is currently performing the 4-step sequence.
Can transactions achieve this kind of broad locking/blocking in Oracle?
Thanks in advance for your answers and advice on how to best deal with this scenario.
Add a unique constraint, and implement an exception handler to catch the duplicate-key error, get the next sequence value, and try again.
This is assuming you're using PL/SQL.
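A minimal PL/SQL sketch of that idea (table and column names are hypothetical):

BEGIN
    INSERT INTO unique_values (val) VALUES ('some_value');
    COMMIT;
EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
        -- The value already exists: roll back and retry with the next value
        ROLLBACK;
END;
/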
An alternative would be using an Oracle sequence created with NOCACHE, so values are not cached and lost across instance restarts. Note that even then a sequence is not guaranteed gap-free, since a rolled-back transaction still consumes its value.
Another option is SELECT * FROM table_name FOR UPDATE, which locks the rows against other sessions' writes and FOR UPDATE selects (plain reads are never blocked in Oracle).
We are trying to set up master-master replication for our databases, and Magento's DB is giving us issues:
Error 'Duplicate entry 'hle9agote6b43tvnrl3c3n9g76' for key 'PRIMARY'' on query. Default database: 'magento_d'. Query: 'INSERT INTO core_session (session_expires, session_data, session_id) VALUES ('1352860963', 'K6uI7suW8PVkzYh6wxLoKjy_gyxavZpSUfNN2QwDkjw85sRHcGN1EjDFHhOH22uof3qvTOwDUHJISln-f8jYENR6SDGZgSxYtzw_cqZZP0yVB1rY6WwMH-AEEHvJAhGeZWCv6-QEbQR1iA83KE0-nxgqcUR0KGpyFBt5LvWcX9osNXMFcrN5aPII3JXJQw4F2bprP_HiF2qNh3NqWsU4LBq3mLN9GYTaHBprLkeQ4LyOkpI0IL67jWuBnvc8wzg3eHWbbesETSXSgjv59mKJOmN2vqpabhBaqLgyItLDNLo4v8jotbf1evrKvpYTbfpht1bDe89HMgJT-5fRenOkyddTwlHzoKK7uKaDpUN7kdkzcDUOFZNDTlBRKo447R_zTP4jk_6UQlDcAO10QKiW8L9PQkF5qB-GB_7xsJyEoH5e7Ysef27BGtztpjdO-PCLwgUQ4GJ4oftOv4RYj-EtKD5WL6TKDcvxxJzCnE2aSAINVW92bu0oYwhJQn3-cy4JhxQsh48PAJq1xcG9gVpsuzaJ4rbDrQZ45_yN41-MVpHaiM73M24tFsZdGe5LLVnb7zRxMfdTF1ZfTuuaK-8TB4mPsFIVDRuJEGBjHlsx2BXDHFucaLxnfR5ibGjgiGZaDKUS2CmLyAAsHV7rSKGOy0ArSIS4PJrnh4vQbylodN4JK4z19nPRDt1yxbsn8uf0zSYa11G2SLZsPFz0vk7AUVWlCtKsmKdCBtR6F3lNg_9M88JMtVirbpwhNQbCDIQZ-4nm793wrQmfuuT1bloA0ZpMfQi1ouEZEjL
From what I can tell I think the auto_inc needs to be different for magento_d on one of the masters (shop2).
Check page 38, 39:
http://www.percona.com/files/presentations/percona-live/PLMCE2012/PLMCE2012-Diagnosing_Failures_in_MySQL_Replication.pdf
So, all I need to do is, on one of the masters, make magento_d's core_session table auto-increment differently? How would you approach this issue? I just don't want to corrupt anything and cause myself more work/headaches.
Best Regards,
George
This issue has been resolved.
What we did is:
Log in as the root user to MySQL
Switch to the DB with the core_session table: USE magento_d;
Delete the contents of core_session: DELETE FROM core_session;
Change the auto-increment value for core_session to avoid future conflicts: ALTER TABLE core_session AUTO_INCREMENT = 10;
Master-master replication works now with no conflicts.
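For what it's worth, a more systematic way to keep two masters from colliding on auto-increment values is the server-level increment/offset settings (the values below are illustrative; set them on each master, e.g. in my.cnf):

-- On master 1: generates 1, 3, 5, ...
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 1;
-- On master 2: generates 2, 4, 6, ...
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset = 2;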
You could mitigate that by adding appropriate indexes to the core_resource table, saving sessions into Redis or Memcache, or truncating the whole table. That would solve the problem TEMPORARILY, but not definitively: other problems (duplicates, foreign keys) will come up in other tables and stop replication.
In order to fix this properly, you should use MySQL row-based replication, i.e. binary logs in row format; otherwise those errors will keep stopping replication.
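If it helps, switching the binary log to row format looks like this (persist it in my.cnf under [mysqld] as binlog_format = ROW so it survives restarts):

SET GLOBAL binlog_format = 'ROW';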
Give it a try and let us know your result if you're still experiencing this issue.
Cheers.
I have 62 columns in a table under SQL Server 2005, and LINQ to SQL doesn't handle updates, though reads work just fine. I tried re-adding the table to the model and created a new data model, but nothing worked. I'm guessing I've hit a maximum number of columns for an object. Can anyone explain that?
I suspect there is some issue with an identity or timestamp column (something autogenerated on the SQL server). Make sure that any column that is autogenerated is marked that way in the model. You might also want to look at how it is handling concurrency. If you have triggers that update any values on the row after it is updated (changing values) and it is checking all columns on updates, this would cause the update to fail. Typically I create my tables with a timestamp column -- LINQ2SQL picks this up when I generate the model and uses it alone for concurrency.
Solved; it was one of the following two:
- I was using a UniqueIdentifier column that was not set as primary key.
- I set the UniqueIdentifier column as primary key, checked the properties of the same column in Server Explorer (it was still not showing as primary key), refreshed the connection, dropped the same table onto the model again, and voila.
So I assume I had made a change to my model some time before, deleted the table from the model, and re-added it from Server Explorer without refreshing the connection, which is why it never worked.
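For context: LINQ to SQL treats a table without a primary key as read-only, so making the column an actual primary key in the database is what unblocks updates. The table and column names below are placeholders:

-- Mark the UniqueIdentifier column as the primary key
ALTER TABLE dbo.MyTable
    ADD CONSTRAINT PK_MyTable PRIMARY KEY (Id);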
Question is, does VS Server Explorer maintain its own copy of the table schema and require a connection refresh every time a change is made in the database?
There is no limit to the number of columns LINQ to SQL will handle.
Have you got other tables updating successfully?
What else is different about how you are accessing the table content?