Greenplum - Update statement failing on timestamp column

We have a source and a target table in a Greenplum database, and we are comparing the two tables with a SQL script.
But the UPDATE is not working: it is not updating the timestamp column of the target table from the source table.
Input - Source/target table structure
CREATE TABLE sysprocompanyb.target_customer_table
(
    "ID" integer,  -- not shown in the original post, but implied by the DISTRIBUTED BY clause below
    "time" timestamp without time zone,
    "Customer" character(20)
)
DISTRIBUTED BY ("ID");
Note: on execution, the UPDATE statement below does not throw any error; it reports that all rows were updated successfully. But when checking after the process completes, the target timestamp column does not equal the source timestamp column.
We tried:
BEGIN;
INSERT INTO schemaname.target_customer_table
SELECT s.*
FROM schemaname.source_customer_table s
LEFT JOIN schemaname.target_customer_table d ON s."Customer" = d."Customer"
WHERE d."Customer" IS NULL;
UPDATE schemaname.target_customer_table d
SET "time" = d."time"
FROM schemaname.source_customer_table s
WHERE s."Customer" = d."Customer";
Output
We want the source and target columns to match after the above SQL transaction completes. Any help would be much appreciated.
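One thing worth checking: in the UPDATE above, the SET clause assigns d."time" to itself instead of copying the source value, which would explain why the statement reports success while nothing changes. A sketch of the presumably intended statement:
-- Copy the source value (s."time") rather than reassigning the target column to itself.
UPDATE schemaname.target_customer_table d
SET "time" = s."time"
FROM schemaname.source_customer_table s
WHERE s."Customer" = d."Customer";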

Related

Oracle Goldengate: Can I configure a specific column always to be captured in Extract even though not a Key Column

I have a GoldenGate configuration where INSERTALLRECORDS is used on the target database to replicate INSERT, UPDATE and DELETE operations on the source database as new records.
On the Target DB, the source DML operation and commit time are captured/recorded to the tables via the use of @GETENV('GGHEADER') calls.
This target DB is then read by an ETL process which applies records to Hadoop in the order they were committed to the target DB.
For example:
CREATE TABLE my_test (id NUMBER PRIMARY KEY,
my_text VARCHAR2(100) NOT NULL,
date_added DATE NOT NULL);
If I run the following SQL on the Source DB:
INSERT INTO my_test VALUES (1,'Inserting',SYSDATE);
UPDATE my_test SET my_text = 'Updating' WHERE id = 1;
DELETE FROM my_test WHERE id = 1;
this results in 0 records for ID=1 on the Source DB and 3 records on the Target DB, i.e.
ID  MY_TEXT    DATE_ADDED           GG_DML_TYPE     GG_COMMIT_TIMESTAMP
--  ---------  -------------------  --------------  -------------------
1   Inserting  12-12-2021 16:00:00  INSERT          12-12-2021 16:00:00
1   Updating   12-12-2021 16:00:00  SQL COMPUPDATE  12-12-2021 16:00:01
1                                   DELETE          12-12-2021 16:00:02
The corresponding tables in Hadoop are partitioned by DAY_ADDED, calculated from the Oracle DATE_ADDED column.
Currently, when applying a DELETE operation from Oracle, the ETL is having to scan all Hadoop partitions to find the matching ID record.
Consequently, to improve performance, I would like the DATE_ADDED column to always have its value captured in the Oracle GG Extract, so that it is present in the GG trail file for all source DML operations, including DELETEs.
The only way I have found to do this is via the use of LOGALLSUPCOLS - however, this logs all columns in the Extract, which I don't want to do for some of our tables which have lots of columns and high volumes.
If anyone knows a way to always capture explicit columns, i.e. DATE_ADDED in this example, in the GG Extract, this would be much appreciated.
There are two things you need to take care of:
1. Make sure that the data for this column is added to the supplemental log. You can do that by running:
add trandata my_test cols(date_added)
or by running this SQL in the database:
alter table my_test add supplemental log group grp1 (date_added) always;
2. Make sure that this column gets captured by the Extract process. To ensure this, use the following Extract parameter:
table my_test, cols(date_added);
This should be enough to include the column in the trail. You can inspect the trail file to verify that it actually contains the date_added column.
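For the verification step, a Logdump session sketch (the trail file path below is illustrative, not from the post):
logdump> OPEN ./dirdat/ea000000000
logdump> GHDR ON
logdump> DETAIL DATA
logdump> NEXT
After stepping to a DELETE record, the column listing should now include date_added if the supplemental logging and COLS settings took effect.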

How to load data into target tables when a column value equals either Insert, Update, Delete or None

I have two target tables: one is the target table and the other is an error table. We have Firm and Indiv source tables to be loaded into the target table and the error table. I am using a union to pass the Indiv and Firm data into the target table and error table separately, which is a straight move.
Now I need to check: if Firm.Action = Insert and the record already exists in the target table, we pass the record to the error table; if Firm.Action = Update and it is present in the target table, we update it, else we pass it to the error table. We also have Firm.Action = Delete and Firm.Action = None, in which case the records can be ignored.
You can check for the presence of a record in the table using a Lookup transformation; after that, you can evaluate your conditions in an Expression transformation.
For example,
IIF(Firm.Action = 'Insert' AND is_record_in_lookup = 1, 'Error', ... )
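As a sketch, the full decision could be a nested IIF along these lines (the port name is_record_in_lookup and the flag values are assumptions, not from the post):
-- Route each row based on Firm.Action and whether the lookup found a match.
IIF(Firm.Action = 'Insert', IIF(is_record_in_lookup = 1, 'Error', 'Insert'),
IIF(Firm.Action = 'Update', IIF(is_record_in_lookup = 1, 'Update', 'Error'),
'Ignore'))
A Router transformation downstream can then send 'Error' rows to the error table, 'Insert'/'Update' rows to the target table, and drop 'Ignore' rows.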

how to pick the latest record from a table and insert into another table

I have a scenario where I have to create a replica of a target table along with a source ID field.
If I have a target table with columns:
Target_ID,
name,
createdate (which defaults to getdate())
then my target replication table will be:
Target_ID,
name,
createdate (which defaults to getdate()),
source_ID
Once a record hits the target table, I want it inserted into the replication table along with the source ID. I can do this and it is working, but I would like to add the condition that the insert into the replication table happens only if a record was actually inserted into the target; otherwise nothing should happen. As of now I am picking up the latest record from the target and inserting it into the replication table, so if no record was inserted, my code would still pick up the latest record and insert it into the replication table.
Any help is appreciated!
Try the OUTPUT clause on the first statement to get inserted.Target_ID; it will return DBNull if no record is inserted.
Then use a SQL transaction to roll back if the first insert fails.
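A minimal T-SQL sketch of that approach (table and column names are assumptions, not from the post):
-- Capture exactly the rows the first INSERT produces; if it inserts nothing,
-- @inserted stays empty and the replication INSERT is a no-op.
BEGIN TRANSACTION;
DECLARE @inserted TABLE (Target_ID int, name varchar(100), createdate datetime);
INSERT INTO dbo.target_table (name, createdate)
OUTPUT inserted.Target_ID, inserted.name, inserted.createdate INTO @inserted
VALUES ('example', GETDATE());
INSERT INTO dbo.replication_table (Target_ID, name, createdate, source_ID)
SELECT Target_ID, name, createdate, 42  -- source_ID value is illustrative
FROM @inserted;
COMMIT TRANSACTION;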

Is there any major performance issues if I use virtual columns in Oracle?

Is there any major performance issues if I use virtual columns in an Oracle table?
We have a scenario where the db has fields stored as strings. Since other production apps run off those fields we can't easily convert them.
I am tasked with generating reports from the same db. Since I need to be able to filter by dates (which are stored as strings) it was brought to my attention that we could create a virtual date field so that I can query against that.
Has anyone run into any roadblocks with this approach?
A virtual column is defined using an expression that is evaluated when you select from the table. There is no performance hit on inserts/updates on the table.
For example:
create table t1 (
  datestr varchar2(100),
  datedt date generated always as (to_date(datestr,'YYYYMMDD'))
);
Table created.
SQL> insert into t1 (datestr) values ('20160815');
1 row created.
SQL> insert into t1 (datestr) values ('xxx');
1 row created.
SQL> commit;
Commit complete.
Note that I was able to insert an invalid date value into datestr. Now we can try to select the data:
SQL> select * from t1 where datedt = date '2016-08-15';
ERROR:
ORA-01841: (full) year must be between -4713 and +9999, and not be 0
This could be a problem for you if you can't guarantee all the strings hold valid dates.
As for performance, when you run the above query what you are really running is:
select * from t1 where to_date(datestr,'YYYYMMDD') = date '2016-08-15';
So the query will not be able to use an index on the datestr column (probably), and you may want to add an index on the virtual column. Again, this won't work if any of the strings don't contain valid dates.
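For example, an index on the virtual column (equivalent to a function-based index on the to_date expression) would look like:
-- Lets predicates on datedt use an index instead of a full scan.
create index t1_datedt_ix on t1 (datedt);
Note that building this index will fail if any existing datestr value is not a valid date, for the same reason as the query error above.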
Another consideration is the potential impact on existing code. Hopefully you won't have any code like insert into t1 values (...); i.e. not specifying the column list. If you do, you will get the error:
ORA-54013: INSERT operation disallowed on virtual columns

Redirect duplicate rows to update while insert

I have an insert statement on table "test", with a PK on column x.
Now, while inserting, if a duplicate row comes in, the same row should be updated instead of inserted.
How can I achieve this?
Is it possible with dup_val_on_index?
Please help.
First create a copy of the table above without any key columns, then follow these steps:
Step 1: Truncate the copy table whenever a batch of insert statements comes in.
Step 2: Insert the batch into the truncated copy table.
Step 3: Execute a MERGE statement like the one below:
MERGE INTO table_main m
USING table_main_copy c
ON (m.id = c.id)
WHEN MATCHED THEN
  UPDATE SET m.somecol = c.somecol
WHEN NOT MATCHED THEN
  INSERT (m.id, m.somecol)
  VALUES (c.id, c.somecol);
You may hit ORA-30926: unable to get a stable set of rows in the source tables during the MERGE when two or more source rows match the same target row on update.
You can avoid that by deduplicating the source on id, for example with a GROUP BY, as sketched below.
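A sketch of that deduplication, folding a GROUP BY into the USING clause (MAX is just one way to pick a single value per id):
MERGE INTO table_main m
USING (
  -- Collapse duplicate source rows so each target row matches at most once.
  SELECT id, MAX(somecol) AS somecol
  FROM table_main_copy
  GROUP BY id
) c
ON (m.id = c.id)
WHEN MATCHED THEN
  UPDATE SET m.somecol = c.somecol
WHEN NOT MATCHED THEN
  INSERT (m.id, m.somecol)
  VALUES (c.id, c.somecol);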
