This is regarding Liquibase insert records. Suppose in version v1 I have an XML changelog file with 50 insert records, and in version v2 I want to add 30 more. Can I use the same file, change the changeset id, and add these records? I actually did add them to the same file and got a unique constraint error when running the update command.
at liquibase.database.AbstractJdbcDatabase.executeStatements(AbstractJdbcDatabase.java:1211)
at liquibase.changelog.ChangeSet.execute(ChangeSet.java:600)
... 7 common frames omitted
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value
violates unique constraint
No, the idea is that you would want to have two changesets. You already have a changeset in v1 that adds the first 50. You should add a second changeset to add the next 30. Changesets (for the most part) should be considered immutable once they have been deployed anywhere besides your own local developer database. The main exception to that is changesets that deploy things like functions or procedures, where the SQL code in the file is always the latest and most correct version.
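For example, a minimal sketch of the two-changeset layout, shown here in Liquibase's formatted SQL changelog syntax (the table, columns, and author name are made up; the same split applies to the XML format you are using):
--liquibase formatted sql

--changeset alice:v1-insert-reference-rows
-- The original 50 inserts stay in this changeset and are never edited again.
INSERT INTO reference_data (id, code) VALUES (1, 'ALPHA');
INSERT INTO reference_data (id, code) VALUES (2, 'BETA');
-- ... the rest of the v1 rows ...

--changeset alice:v2-insert-additional-rows
-- The 30 new inserts go into a second changeset with a new id, so Liquibase
-- runs only this block against databases that already deployed the v1 changeset.
INSERT INTO reference_data (id, code) VALUES (51, 'GAMMA');
-- ... the rest of the v2 rows ...
Editing the deployed changeset instead changes its checksum, and changing its id makes Liquibase treat the whole block as new and re-run all of the inserts, which is what produced your duplicate key error.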
I am failing to understand in which situations an insert will fail with a "no data found" error. Any insights, please?
Oracle GoldenGate Delivery for Oracle process started, group REPA discard file opened: 2020-08-21 18:32:07.326069
Current time: 2020-08-21 18:32:08
Discarded record from action ABEND on error 1403
No data found
Aborting transaction on /zfssa/gg_02/ogg/dirdat/REPA/EX beginning at seqno 473 rba 425209949
error at seqno 473 rba 425214669
Problem replicating SRC.TABLE to TGT.TABLE.
Record not found
Mapping problem with insert record (target format) SCN:3329198919.29.23.78560...
An error such as NO DATA FOUND usually points to an inconsistency problem. The Replicat is basically an application doing data manipulation using SQL statements. If it attempts to perform a DML operation and the database rejects it, that is normally because of an inconsistency between the attempted DML and the records it relates to.
For example, attempting to delete a row that does not exist will fail with a database error. Aside from an Oracle GoldenGate bug, this usually points to target database inconsistencies. In other words, the target database is in a state that the customer did not expect it to be in.
Determine why your target database is not in the state you expect it to be in. Numerous reasons can cause this, and below are some of them. This list is by no means exhaustive.
Possible Causes
The use of parameters that ignore previous errors, such as HANDLECOLLISIONS or REPERROR with the IGNORE or DISCARD options.
Primary keys or key columns that differ between the source and target databases. They might be the same columns, but with a different type and/or size.
The target database is manipulated by an application program.
The Replicat MAP statements use filters and selection, or the Extract DML operations are filtered. This will depend on your business needs and may or may not be intentional.
Updates to tables without a primary key. If all columns are used as the key for replication, there are cases where more than one update can occur, making subsequent DML operations fail.
Updates to tables without a primary key where KEYCOLS are used. These keys may not be unique. To test the uniqueness of the selected keys, run a query on the source database based on these KEYCOLS and sort the results (see the sketch after this list).
The language and character set in use (NLS, double-byte or multibyte character sets) differ and may cause unexpected conversions done automatically by the database. Use the SETENV parameter to change the language, and place it before the USERID parameter.
Your source and target databases are of different types, for example Oracle to MSSQL, and the conversion applied to the primary keys or key columns is not what you expected.
There are other specific configurations, patches, features, default database behaviors and so on. Search the My Oracle Support (MOS) knowledge base for the database error number, for example "ORA-01403", under the Oracle GoldenGate core product, and review those knowledge solutions to see if they relate to your issue.
In rare situations, this may be an Oracle GoldenGate bug in that a particular DML operation was not captured or its values were incorrectly interpreted. Please submit an SR if you think this is the case. In addition to the target Replicat reports and the materials listed above, you will need to provide all Extract reports from the source machine and the GoldenGate trails.
Duplicate mappings in the Replicat parameter file combined with the ALLOWDUPTARGETMAP parameter.
Incorrect use of the Extract parameter THREADOPTIONS PROCESSTHREADS. This can cause the Extract to miss data.
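As a sketch of the KEYCOLS uniqueness test mentioned above (the table and the two key columns are placeholders for whatever your KEYCOLS clause lists):
-- Run on the source database. Any group returned here means the chosen
-- KEYCOLS do not uniquely identify a row, so the Replicat can hit the
-- wrong row, or no row at all, on the target.
SELECT keycol1, keycol2, COUNT(*) AS row_count
FROM   src_schema.src_table
GROUP  BY keycol1, keycol2
HAVING COUNT(*) > 1
ORDER  BY row_count DESC;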
Possible Solutions
What do you need to investigate a replicat database issue?
For a start, you will need the Replicat parameter file, report file, and discard file.
Report file: Contains all warnings, errors, tables that have been mapped, columns mapped or unmapped, and all runtime environment settings.
Discard file: Shows in detail the mapping issue for the table that generated the database error, including the columns, their values, and the position of the record in the GoldenGate trail.
Parameter file: Usually the parameters are included in the report file, but this is useful if the report file has been rolled over (REPORTROLLOVER parameter).
What are the next steps?
Query your target database based on the above data. Depending on the database error, the report file and/or discard file may contain the exact SQL statement used; if not, you should be able to construct the appropriate query yourself. This confirms that the Replicat has indeed reported the correct database error.
For example, if the Oracle database error was ORA-01403 (no data found), your query should select the row using the primary key or key columns as specified, and it should return the same result the Replicat saw.
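A hypothetical verification query (substitute the table, key columns, and the values printed in the discard file):
-- If the Replicat reported ORA-01403 (no data found), this should return
-- zero rows, confirming the row it tried to update or delete is missing
-- from the target table.
SELECT *
FROM   tgt_schema.tgt_table
WHERE  key_col1 = 'value-from-discard-file'
AND    key_col2 = 'value-from-discard-file';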
Fixing the replicat.
The first thing to consider is whether you can ignore this error for now and resolve the situation later. If your business allows it, you may either exclude this table altogether (TABLEEXCLUDE) or simply skip this error (REPERROR with DISCARD). If you skip the error, start the Replicat with REPERROR in place, run it for a short while, stop the Replicat, remove the REPERROR parameter, and then restart the Replicat.
Fixing the database.
No matter what the reason is, you will need to resync the table that caused this issue.
Configuration Issue
If there are duplicate mappings in the Replicat parameter file and the ALLOWDUPTARGETMAP parameter is used, DML will be applied twice. This leads to an ORA-01403 error on delete operations and an ORA-00001 error on insert operations. Remove the duplicate mapping to fix the issue.
In summary, there are many possible causes for a "no data found" error in a GoldenGate Replicat process.
This can also depend on the type of Replicat you are using. Many of them perform queries inside the database before actually executing a DML statement. The error you are seeing can also result from a lack of permissions in the target database, or from using something like Database Vault or another technology that changes how DML is performed.
I have a Visual Studio Database Project, there seems to be little and sketchy documentation on this type of Project.
The issue: I want to rename a column.
Problem: The table I want to rename the column on has data in it, so every time I generate a script I end up with this piece of code that causes the script to bomb out because there is data in the table.
IF EXISTS (select top 1 1 from [dbo].[res_file_submission])
RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT
I have no idea how to get around this, and I really don't believe deleting this line is the answer. I have deselected the 'Block incremental deployment if data loss might occur' option, but this seems to make no difference.
UPDATE: The column has a constraint, which seems to be the cause.
You can simply rename the column in a post-deployment script using a rename command (sp_rename). This changes the column name without harming the data in that table.
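A sketch of that idea, using the table from the generated script and made-up column names, would be a post-deployment script containing:
-- Rename the column in place; the existing row data is left untouched.
EXEC sp_rename 'dbo.res_file_submission.OldColumnName', 'NewColumnName', 'COLUMN';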
You can handle this via pre- and post-deployment scripts.
Create a pre-deployment script to back up the table and delete its data:
if (OBJECT_ID('TempDB..#MyTableBackup') is null)
begin
-- backup data to a temp table
SELECT *
INTO #MyTableBackup
FROM MyTable
-- TODO: If you have foreign key constraints that reference MyTable, you'll need to disable them here.
-- delete the data in your table
DELETE MyTable
end
Create a post-deployment script that restores the data:
-- TODO: Only include the SET IDENTITY_INSERT lines if your table has an identity column
--SET IDENTITY_INSERT MyTable ON
INSERT MyTable
SELECT *
FROM #MyTableBackup
--SET IDENTITY_INSERT MyTable OFF
-- TODO: If you disabled foreign key constraints in the pre-deployment script, enable them here.
DROP TABLE #MyTableBackup
Since the pre-deployment script empties your table, the column rename will occur during the regular part of the deployment without getting the "Block incremental deployment..." warning.
Be sure to remove these scripts from the project after the deployment succeeds so that they are not rerun during your next deployment.
The issue was not my inability to rename the column, but the fact that I needed to rename the column and then deploy the rename, and the compare script kept preventing it.
THE SOLUTION: Rename the column in the database project, including references to it, then add the same name change to the pre-deployment script for my target. Run the pre-deployment script against the database to update it, then run the compare, which will no longer include the column rename because both the target and the source now have the same name.
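A sketch of that pre-deployment step, with placeholder column names and guarded so it only runs while the old column still exists:
-- Rename the column ahead of the schema compare so source and target match.
-- The COL_LENGTH check makes the script safe to rerun after the rename is done.
IF COL_LENGTH('dbo.res_file_submission', 'OldColumnName') IS NOT NULL
BEGIN
    EXEC sp_rename 'dbo.res_file_submission.OldColumnName', 'NewColumnName', 'COLUMN';
END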
We are having issues upgrading our Sonar DB. Along with this generic message we are also seeing Caused by:
java.sql.BatchUpdateException: Cannot insert duplicate key row in object 'dbo.file_sources' with unique index 'file_sources_file_uuid_uniq'. The duplicate key value is etc...
How can we identify what needs to be cleaned up to proceed forward?
The full log is posted on pastebin
Delete the unique index file_sources_file_uuid_uniq on the table file_sources. When SonarQube creates a completely new database, this index is not created; it seems the migration scripts missed deleting the index.
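For example (assuming, as the index name suggests, that the index covers a file_uuid column; run against the SonarQube schema before retrying the upgrade):
-- Optional sanity check: list the duplicate values blocking the migration.
SELECT file_uuid, COUNT(*) AS duplicates
FROM   dbo.file_sources
GROUP  BY file_uuid
HAVING COUNT(*) > 1;

-- Drop the unique index so the migration step can insert its rows.
DROP INDEX file_sources_file_uuid_uniq ON dbo.file_sources;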
I am working on a system to track a project's history. There are three main tables: projects, tasks, and clients, plus a history table for each. I have the following trigger on the projects table.
CREATE OR REPLACE TRIGGER mySchema.trg_projectHistory
BEFORE UPDATE OR DELETE
ON mySchema.projects REFERENCING NEW AS New OLD AS Old
FOR EACH ROW
declare tmpVersion number;
BEGIN
select myPackage.GETPROJECTVERSION( :OLD.project_ID ) into tmpVersion from dual;
INSERT INTO mySchema.projectHistory
( project_ID, ..., version )
VALUES
( :OLD.project_ID,
...
tmpVersion
);
EXCEPTION
WHEN OTHERS THEN
-- Consider logging the error and then re-raise
RAISE;
END ;
/
I have three such triggers, one for each of my tables (projects, tasks, clients).
Here is the challenge: not everything changes at the same time. For example, somebody could update just a certain task's cost. In this case, only one trigger fires and I get one insert. I'd like to insert one record into all three history tables at once, even if nothing changed in the projects and clients tables.
Also, what if somebody changes a project's end_date and cost, and, say, picks another client? Now I have three triggers firing at the same time. Only in this case will I get one record inserted into each of my three history tables (which is what I want).
If I modify the triggers to insert into all three tables for the first example, then I will get nine inserts when the second example happens.
I'm not quite sure how to tackle this. Any help?
To me it sounds as if you want a transaction-level snapshot of the three tables created whenever you make a change to any of those tables.
Have a row level trigger on each of the three tables that calls a single packaged procedure with the project id and optionally client / task id.
The packaged procedure inserts into all three history tables the relevant project, client and task rows where there isn't already a history record for that key and transaction (i.e. you don't want duplicates). You have a couple of choices for the latter: use a unique constraint together with either a bulk select and insert using FORALL ... SAVE EXCEPTIONS, DML error logging (LOG ERRORS INTO), or an INSERT ... SELECT ... WHERE NOT EXISTS.
You do need to keep track of your transactions. I'm guessing this is what you were doing with myPackage.GETPROJECTVERSION. The trick is to only increment the version when you start a new transaction. If, when you get a new version number, you hold it in a package-level variable, you can easily tell whether your session has already got a version number or not.
If your session is going to run multiple transactions, you'll need to clear out the session-level version number when it belonged to a previous transaction. If you also get DBMS_TRANSACTION.LOCAL_TRANSACTION_ID and store it at the package/session level, you can determine whether you are in a new transaction or still part of the same one.
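A minimal sketch of that session-level tracking (the package name and the version_seq sequence are assumptions; plug the returned number into your history inserts):
CREATE OR REPLACE PACKAGE version_pkg AS
  -- Returns one version number per database transaction for the current session.
  FUNCTION current_version RETURN NUMBER;
END version_pkg;
/
CREATE OR REPLACE PACKAGE BODY version_pkg AS
  g_txn_id  VARCHAR2(200);  -- transaction id seen on the previous call
  g_version NUMBER;         -- version number handed out for that transaction

  FUNCTION current_version RETURN NUMBER IS
    l_txn_id VARCHAR2(200);
  BEGIN
    l_txn_id := DBMS_TRANSACTION.LOCAL_TRANSACTION_ID;
    IF g_version IS NULL OR g_txn_id IS NULL OR g_txn_id <> l_txn_id THEN
      -- First call in this transaction: remember the transaction id and
      -- take a fresh version number from an assumed sequence.
      g_txn_id := l_txn_id;
      SELECT version_seq.NEXTVAL INTO g_version FROM dual;
    END IF;
    RETURN g_version;
  END current_version;
END version_pkg;
/
Each trigger would then call the packaged history procedure, which uses version_pkg.current_version and, for example, an INSERT ... SELECT ... WHERE NOT EXISTS per history table, so the second and third triggers firing in the same transaction find the rows already written and add nothing extra.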
From your description, it looks like you would be capturing the effective and end date for each of the history rows once any of the original rows change.
E.g. the project_hist table would have eff_date and exp_date columns holding the start and end dates for a given project version, while the project table would just have an effective date (as it holds the active project).
I don't see why you would want to insert rows into all three history tables when only one of the tables is updated. You can pretty much get the details you need (as of a given date) using your current logic (inserting the old row into the history table only for the table that was actually updated).
Alternative answer.
Have a look at Total Recall / Flashback Archive
You can set the retention to 10 years, and use a simple AS OF TIMESTAMP to get the data as of any particular timestamp.
Not sure on performance though. It may be easier to have a daily or weekly retention and then a separate scheduled job that picks out the older versions using the VERSIONS BETWEEN syntax and stores them in your history table.
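A rough sketch of that route (the archive name, tablespace, and timestamps are placeholders; Flashback Data Archive must be available and licensed on your release):
-- Create the archive and attach the three base tables to it.
CREATE FLASHBACK ARCHIVE project_fla TABLESPACE users RETENTION 10 YEAR;

ALTER TABLE mySchema.projects FLASHBACK ARCHIVE project_fla;
ALTER TABLE mySchema.tasks    FLASHBACK ARCHIVE project_fla;
ALTER TABLE mySchema.clients  FLASHBACK ARCHIVE project_fla;

-- Read the data as of a point in time...
SELECT *
FROM   mySchema.projects AS OF TIMESTAMP TO_TIMESTAMP('2020-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS');

-- ...or pull the intermediate versions for a period, e.g. from a scheduled job
-- that copies them into your own history table.
SELECT versions_starttime, versions_endtime, p.*
FROM   mySchema.projects VERSIONS BETWEEN TIMESTAMP
       SYSTIMESTAMP - INTERVAL '7' DAY AND SYSTIMESTAMP p;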
I have 62 columns in a table under SQL Server 2005, and LINQ to SQL doesn't handle updates on it, though reads work just fine. I tried re-adding the table to the model and creating a new data model, but nothing worked. I'm guessing I've hit a maximum number of columns limit on an object. Can anyone explain that?
I suspect there is some issue with an identity or timestamp column (something autogenerated on the SQL server). Make sure that any column that is autogenerated is marked that way in the model. You might also want to look at how it is handling concurrency. If you have triggers that update any values on the row after it is updated (changing values) and it is checking all columns on updates, this would cause the update to fail. Typically I create my tables with a timestamp column -- LINQ2SQL picks this up when I generate the model and uses it alone for concurrency.
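For example, a sketch of the timestamp-column approach (the table and column names are hypothetical):
-- Add a timestamp (rowversion) column; SQL Server populates it on every
-- insert/update, and LINQ to SQL can then use this one column for its
-- concurrency check instead of comparing all 62 columns.
ALTER TABLE dbo.MyWideTable ADD RowVer timestamp;
After adding it, refresh the connection in Server Explorer and drop the table onto the designer again so the model picks up the new column.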
Solved; it was one of the following two things:
- I was using a UniqueIdentifier column that was not set as primary key.
- I set the UniqueIdentifier column as primary key, checked the properties of the same column in Server Explorer (it was still not showing as primary key), refreshed the connection, dropped the same table onto the model again, and voila.
So I assume I had made the change some time before, deleted the table from the model, and re-added it from Server Explorer without refreshing the connection, which is why it never worked.
The question is: does VS Server Explorer maintain its own copy of the table schema and require a connection refresh every time a change is made in the database?
There is no limit to the number of columns LINQ to SQL will handle.
Have you got other tables updating successfully?
What else is different about how you are accessing the table content?