MySQL master-master replication error - Duplicate key in Magento on INSERT

We are trying to set up master-master replication for our databases, and Magento's DB is giving us issues:
Error 'Duplicate entry 'hle9agote6b43tvnrl3c3n9g76' for key 'PRIMARY'' on query. Default database: 'magento_d'. Query: 'INSERT INTO core_session (session_expires, session_data, session_id) VALUES ('1352860963', 'K6uI7suW8PVkzYh6wxLoKjy_gyxavZpSUfNN2QwDkjw85sRHcGN1EjDFHhOH22uof3qvTOwDUHJISln-f8jYENR6SDGZgSxYtzw_cqZZP0yVB1rY6WwMH-AEEHvJAhGeZWCv6-QEbQR1iA83KE0-nxgqcUR0KGpyFBt5LvWcX9osNXMFcrN5aPII3JXJQw4F2bprP_HiF2qNh3NqWsU4LBq3mLN9GYTaHBprLkeQ4LyOkpI0IL67jWuBnvc8wzg3eHWbbesETSXSgjv59mKJOmN2vqpabhBaqLgyItLDNLo4v8jotbf1evrKvpYTbfpht1bDe89HMgJT-5fRenOkyddTwlHzoKK7uKaDpUN7kdkzcDUOFZNDTlBRKo447R_zTP4jk_6UQlDcAO10QKiW8L9PQkF5qB-GB_7xsJyEoH5e7Ysef27BGtztpjdO-PCLwgUQ4GJ4oftOv4RYj-EtKD5WL6TKDcvxxJzCnE2aSAINVW92bu0oYwhJQn3-cy4JhxQsh48PAJq1xcG9gVpsuzaJ4rbDrQZ45_yN41-MVpHaiM73M24tFsZdGe5LLVnb7zRxMfdTF1ZfTuuaK-8TB4mPsFIVDRuJEGBjHlsx2BXDHFucaLxnfR5ibGjgiGZaDKUS2CmLyAAsHV7rSKGOy0ArSIS4PJrnh4vQbylodN4JK4z19nPRDt1yxbsn8uf0zSYa11G2SLZsPFz0vk7AUVWlCtKsmKdCBtR6F3lNg_9M88JMtVirbpwhNQbCDIQZ-4nm793wrQmfuuT1bloA0ZpMfQi1ouEZEjL
From what I can tell, I think the auto-increment needs to be different for magento_d on one of the masters (shop2).
See also: MySQL Truncate Table, Auto Increment not working.
Check pages 38-39 of this Percona presentation:
http://www.percona.com/files/presentations/percona-live/PLMCE2012/PLMCE2012-Diagnosing_Failures_in_MySQL_Replication.pdf
So, all I need to do on one of the masters is make magento_d's core_session table auto-increment differently? How would you approach this issue? I just don't want to corrupt anything and cause myself more work/headaches.
Best Regards,
George

This issue has been resolved.
What we did is:
Log in to MySQL as the root user.
Switch to the DB with the core_session table: use magento_d;
Delete the contents of core_session: delete from core_session;
Change the auto-increment value for core_session to avoid conflicts in the future: alter table core_session AUTO_INCREMENT = 10;
Master-master replication works now, with no conflicts.
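For the general master-master case, the standard way to keep auto-generated keys from colliding is to give each master its own offset; a minimal sketch, assuming exactly two masters (the variable names are stock MySQL system variables):
# my.cnf on the first master
[mysqld]
auto_increment_increment = 2
auto_increment_offset = 1
# my.cnf on the second master (shop2)
[mysqld]
auto_increment_increment = 2
auto_increment_offset = 2
Note this only helps tables keyed by an auto-increment column.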

You could work around that by adding appropriate indexes to the core_resource table, by saving sessions into Redis or Memcached, or by truncating the whole table, but that would only solve the problem TEMPORARILY, not definitively... other problems (duplicates, foreign keys) will come up in other tables and stop replication.
In order to fix this, you should use MySQL row-based replication, with binary logs in row format; otherwise those errors will keep stopping replication.
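A minimal sketch of that change (binlog_format is the stock MySQL variable; set it on both masters and restart, or switch it at runtime on a quiet server):
# my.cnf on both masters
[mysqld]
binlog_format = ROW
-- or, at runtime (only affects new sessions):
SET GLOBAL binlog_format = 'ROW';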
Give it a try and let us know your result if you're still experiencing this issue.
Cheers.

Related

Does creating an index in Oracle lock the table for reads?

If we specify ONLINE in the CREATE INDEX statement, the table isn't locked during creation of the index. Without the ONLINE keyword it isn't possible to perform DML operations on the table. But is a SELECT statement possible on the table meanwhile? After reading the description of the CREATE INDEX statement, it still isn't clear to me.
I ask because I wonder if it is similar to PostgreSQL or SQL Server:
In PostgreSQL writes on the table are not possible, but one can still read the table - see the CREATE INDEX doc > CONCURRENTLY parameter.
In SQL Server writes on the table are not possible, and additionally if we create a clustered index reads are also not possible - see the CREATE INDEX doc > ONLINE parameter.
Creating an index does NOT block other users from reading the table. In general, almost no Oracle DDL commands will prevent users from reading tables.
There are some DDL statements that can cause problems for readers. For example, if you TRUNCATE a table, other users who are in the middle of reading that table may get the error ORA-08103: Object No Longer Exists. But that's a very destructive change that we would expect to cause problems. I recently found a specific type of foreign key constraint that blocked reading the table, but that was likely a rare bug. I've caused a lot of production problems while adding objects, but so far I've never seen adding an index prevent users from reading the table.
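For reference, the online variant looks like this (the table and column names are made up for illustration):
-- builds the index without blocking DML on the table
create index emp_last_name_idx on employees (last_name) online;
With or without ONLINE, other sessions can still SELECT from employees while the index builds; ONLINE additionally lets them run DML.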

Unique Constraint Violated on empty table

I recently received a case in which my client came across the ORA-00001: unique constraint violated error. This happened when a program tried to truncate two tables and then insert data into them.
From the error-log file, the truncate step had completed:
delete from INTERNET_GROUP
delete from INTERNET_ITEM
BUT right after this, the insertion into the INTERNET_GROUP table triggered the ORA-00001 error. I am wondering if there are any database settings related to this error. I have never used Oracle; does Oracle put a lock on a row with a SELECT statement, so that the row is locked and somehow not deleted? Any help is appreciated.
Please know that there is a difference between truncate and delete. You say you truncated the table, but you mention "delete from". That is entirely different.
If you're sure you want to empty the tables, try replacing the deletes with
truncate table internet_group reuse storage;
Mind you, a commit is not necessary with the truncate statement, as it is considered a DDL (data definition language) statement and not a DML (data manipulation language) statement like updates and deletes.
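A short illustration of the difference, using the table name from the question:
-- DML: removes rows, needs an explicit commit, can be rolled back
delete from internet_group;
commit;
-- DDL: commits implicitly, cannot be rolled back, deallocates the segment
truncate table internet_group reuse storage;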
Also, there is no row locking on selects. But changes are only applied and visible to other sessions in the database once committed.
I guess that is what happened: you deleted the records but had not yet executed a commit, and subsequently inserted new records.
edit:
I now realize you're probably inserting multiple records...
The other option might be that the data itself causes the violation. Can you please provide the constraints on the table? There must be a primary key or unique constraint. You might want to check your data set against it.
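To see which constraint is firing, you can query the standard Oracle data dictionary (table name taken from your log):
select constraint_name, constraint_type
from user_constraints
where table_name = 'INTERNET_GROUP'
and constraint_type in ('P', 'U');
'P' is a primary key constraint and 'U' a unique constraint; compare the constrained columns (see user_cons_columns) against your data set.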

GoldenGate replication from primary to secondary database, WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rgg1ab.prm: SQL error 1403

I am using GoldenGate to replicate data from primary to secondary. I have inserted records in the primary database, but replication abends with the error message
WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rgg1ab.prm: SQL error 1403 mapping primaryDB_GG1.TB_myTableName to secondaryDB.TB_myTableName OCI Error ORA-01403: no data found, SQL < UPDATE ......
The update statement has all the columns from the table in its where clause, whereas there is no such update statement in the application with so many columns in the where clause.
Can you help with this issue? Why is GoldenGate replication converting an insert into an update during replication?
I know this is very old, but if you haven't figured out a solution, please provide your prm file if you can. You may have a parameter in there that is converting inserts to updates based upon a PK already existing in the target database. It is likely that HANDLECOLLISIONS or CDR is set.
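For illustration, a replicat prm with that behaviour might look like the sketch below; only the HANDLECOLLISIONS line is the suspect, and the rest is reconstructed from your error message:
-- rgg1ab.prm (sketch)
REPLICAT rgg1ab
HANDLECOLLISIONS
MAP primaryDB_GG1.TB_myTableName, TARGET secondaryDB.TB_myTableName;
With HANDLECOLLISIONS set, an INSERT that hits an existing key on the target is turned into an UPDATE, which matches what you are seeing.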
For replication, you might have already enabled the transaction log in the source DB. Now, you need to run from GGSCI:
"ADD TRANDATA schema_name.table_name, COLS(...)"
In the COLS part, you need to mention the column (or columns, comma-separated) that can be used to identify a unique record (you can mention the unique indexed columns if present). If there is no unique index on the table and you are not sure which columns could be used to uniquely identify a row, then just run from GGSCI:
"ADD TRANDATA schema_name.table_name"
Then GoldenGate will start logging all the necessary columns for uniquely identifying a row.
Note: This should be done before you start the replication process.
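A sketch of the full GGSCI sequence on the source (the login credentials and the key column id are hypothetical; the schema and table come from your error message):
GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggpass
GGSCI> ADD TRANDATA primaryDB_GG1.TB_myTableName, COLS (id)
GGSCI> INFO TRANDATA primaryDB_GG1.TB_myTableName
INFO TRANDATA confirms that supplemental logging is enabled for the table.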

What is the use of disable operation in hbase?

I know it is to disallow anyone from performing any operation on a table when a schema change is going to be made.
> disable 'table_name'
But I want more clarification on it. Why should we disallow others from performing any operation on it? Is it just because wrong and unexpected results would be returned when a query is made while a schema change is underway?
HBase is a strictly consistent NoSQL database for reads and writes.
So achieving consistency is very important for HBase during DB operations.
HBase requires disabling a table when altering its schema and when dropping it.
HBase doesn't have a protocol to tell all the regions to apply schema changes online, so we need to disable the table before altering it.
Dropping an HBase table is a two-step procedure:
Closing all the regions, i.e. disable the table.
Dropping them, i.e. drop the table.
So we must disable all operations (except a few like list, is_enabled, is_disabled, etc.) on the table before dropping it.
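A typical offline schema change from the HBase shell looks like this (the table and column-family names are made up):
hbase> disable 'my_table'
hbase> alter 'my_table', {NAME => 'cf', VERSIONS => 3}
hbase> enable 'my_table'
hbase> is_enabled 'my_table'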

MiniProfiler SqlServerStorage becomes quite slow

We use mini profiler in two ways:
On developer machines with the pop-up
In our staging/prod environments with SqlServerStorage storing to MS SQL
After a few weeks we find that writing to the profiling DB takes a long time (seconds) and is causing real issues on the site. Truncating all the profiler tables resolves the issue.
Looking through the SqlServerStorage code, it appears the inserts also do a check to make sure a row with that id doesn't already exist. Is this to ensure DB-agnostic code? It seems this would introduce a massive penalty as the number of rows increases.
How would I go about removing the performance penalty from the performance profiler? Is anyone else experiencing this slowdown? Or is it something we are doing wrong?
Cheers for any help or advice.
Hmm, it looks like I made a huge mistake in how that MiniProfilers table was created when I forgot about the primary key being clustered by default... and the clustered index is a GUID column, a very big no-no.
Because data is physically stored on disk in the same order as the clustered index (indeed, one could say the table is the clustered index), SQL Server has to keep every newly inserted row in that physical order. This becomes a nightmare to keep sorted when we're using essentially a random number.
The fix is to add an auto-increasing int and switch the primary key to that, just like all the other tables (why I overlooked this, I don't remember... we don't use this storage provider here on Stack Overflow or this issue would have been found long ago).
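As a sketch of what that change looks like (the column list is trimmed and the names are made up for illustration; the real script is linked below):
-- cluster on an auto-increasing int so inserts append in order,
-- and keep the guid Id unique but nonclustered for lookups
create table MiniProfilersExample
(
    RowId int identity(1, 1) not null,
    Id uniqueidentifier not null,
    Started datetime not null,
    constraint PK_MiniProfilersExample primary key clustered (RowId),
    constraint UQ_MiniProfilersExample_Id unique nonclustered (Id)
);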
I'll update the table creation scripts and provide you with something to migrate your current table in a bit.
Edit
After looking at this again, the main MiniProfilers table could just be a heap, meaning no clustered index. All access to the rows is by that guid ID column, so no physical ordering would help.
If you don't want to recreate your MiniProfiler SQL tables, you can use this script to make the primary key nonclustered:
-- first remove the clustered index from the primary key
declare @clusteredIndex varchar(50);
select @clusteredIndex = name
from sys.indexes
where type_desc = 'CLUSTERED'
and object_name(object_id) = 'MiniProfilers';
exec ('alter table MiniProfilers drop constraint ' + @clusteredIndex);
-- and then make it non-clustered
alter table MiniProfilers add constraint
PK_MiniProfilers primary key nonclustered (Id);
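To verify the change afterwards (standard catalog views):
select name, type_desc
from sys.indexes
where object_name(object_id) = 'MiniProfilers';
PK_MiniProfilers should now show as NONCLUSTERED.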
Another Edit
Alrighty, I've updated the creation scripts and added indexes for most queries - see the code here in GitHub.
I would highly recommend dropping all your existing tables and rerunning the updated script.
