"Missing Value Keyword" in tDbOutput Talent - etl

I am building a job to insert data into a table from another table. I have added a lookup on the destination table to ensure that the insertion only happens if the record does not already exist.
The approach I am using is:
tDbInput (Main) --> tDbInput (Lookup) --> tmap --> replicate --> tFileOutputDelimited, tDbOutput (both at the same time)
While running the job I get an error saying "Missing Values Keyword".
I have ensured that the NOT NULL columns are being inserted and that the column names in the tmap output match the columns in the destination table.
How do I resolve this?

Check that the schema is replicated from the tmap to the t_DBOutput.
Also check the settings of the t_DBOutput: is it set to insert data?
You can also check the generated code of the t_DBOutput to look for problems.
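As a point of reference when reading the generated code, here is a minimal SQL sketch of the difference to look for (table and column names are hypothetical; Oracle is assumed as the target, since ORA-00926 carries exactly this message):

-- a well-formed insert prepared by the output component
INSERT INTO DEST_TABLE (ID, NAME, CREATED_AT)
VALUES (?, ?, ?);

-- a statement where the VALUES keyword has been lost, for example because
-- the output schema does not line up with the tmap output columns
INSERT INTO DEST_TABLE (ID, NAME, CREATED_AT) (?, ?, ?);
-- Oracle rejects this with ORA-00926: missing VALUES keyword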

Related

Oracle Data Integrator ODI 12.2.1 - Load plan record count issue

I have come across a scenario in my project. I am loading data from a file to a table using ODI, running my interfaces through a load plan. I have 1000 records in my source file and I am also getting 1000 records in the target, but when I check the ODI load plan execution log it shows the number of inserts as 2000. Can anyone please help, or is it an ODI bug?
The number of inserts does not only count the inserts into the target table but also all the inserts happening in temporary tables. Depending on the knowledge modules (KMs) used in an interface, ODI might load data into a C$_ table (LKM) or an I$_ table (IKM/CKM). The rows loaded into these tables are also counted.
You can look at the code generated in the Operator to check whether your KMs are using these temporary tables. You can also simulate an execution to see the generated code.
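As a rough illustration of how the counts can add up (table names are hypothetical, and the actual statements depend on the KMs in use; a file LKM may rely on an external table or a loader utility rather than a plain INSERT):

-- LKM step: stage the file rows into the C$_ loading table
INSERT INTO C$_TARGET_TABLE (COL1, COL2)
SELECT COL1, COL2 FROM EXT_SOURCE_FILE;        -- 1000 inserts logged

-- IKM step: insert the staged rows into the target table
INSERT INTO TARGET_TABLE (COL1, COL2)
SELECT COL1, COL2 FROM C$_TARGET_TABLE;        -- 1000 inserts logged

-- log total: 2000 inserts, although the target itself received only 1000 rows
-- (an IKM/CKM that also populates an I$_ table would add its rows on top)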

Why does Phoenix always add an extra column (named _0) to HBase when I execute an UPSERT command?

When I execute an UPSERT command in Apache Phoenix, I always see that Phoenix adds an extra column (named _0) with an empty value in HBase. This column (_0) is auto-generated by Phoenix, but I don't need it, like this:
ROW COLUMN+CELL
abc column=F:A,timestamp=1451305685300,value=123
abc column=F:_0, timestamp=1451305685300, value=  # I want to avoid generating this row
Could you tell me how to avoid that? Thank you very much!
"At create time, to improve query performance, an empty key value is
added to the first column family of any existing rows or the default
column family if no column families are explicitly defined. Upserts will also add this empty key value. This improves query performance by having a key value column we can guarantee always being there and thus minimizing the amount of data that must be projected and subsequently returned back to the client."
Apache Phoenix Documentation
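As a small illustration of the quoted behaviour, using a hypothetical table with no explicit column family (Phoenix then uses its default family, 0, and the empty key value qualifier _0 seen in the question):

CREATE TABLE EXAMPLE (PK VARCHAR PRIMARY KEY, A VARCHAR);
UPSERT INTO EXAMPLE VALUES ('abc', '123');

-- a scan of the backing HBase table now shows two cells for row 'abc':
--   0:A   = '123'   (the declared column)
--   0:_0  = ''      (the empty key value Phoenix maintains automatically)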
Regarding your question of whether this is avoidable: you could work around the problem by adding the following statements at the end of your SQL:
ALTER TABLE "<your-table>" ADD "<your-cf>"."_0" VARCHAR(1);
ALTER TABLE "<your-table>" DROP COLUMN "<your-cf>"."_0";
You should only do this if you query the table with Phoenix but also access it with another system that is not aware of this Phoenix-specific dummy value.

Golden Gate replication from primary to secondary database, WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rgg1ab.prm: SQL error 1403

I am using GoldenGate to replicate data from primary to secondary. I have inserted records in the primary database, but replication abends with the error message:
WARNING OGG-01154 Oracle GoldenGate Delivery for Oracle, rgg1ab.prm: SQL error 1403 mapping primaryDB_GG1.TB_myTableName to secondaryDB.TB_myTableName OCI Error ORA-01403: no data found, SQL < UPDATE ......
The update statement has all the columns from the table in its WHERE clause, whereas there is no such update statement in the application with so many columns in the WHERE clause.
Can you help with this issue? Why is GoldenGate converting the insert into an update during replication?
I know this is very old, but if you haven't figured out a solution, please provide your prm file if you can. You may have a parameter in there that is converting inserts to updates based upon a PK already existing in the target database. It is likely that HANDLECOLLISIONS or CDR is set.
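A minimal sketch of what such a Replicat parameter file might contain (the replicat and object names are taken from the error message above; the user ID and password are hypothetical):

REPLICAT rgg1ab
USERID gg_admin, PASSWORD gg_admin_pwd
HANDLECOLLISIONS
MAP primaryDB_GG1.TB_myTableName, TARGET secondaryDB.TB_myTableName;

With HANDLECOLLISIONS set, an insert that collides with an existing key on the target is retried as an update, which matches the insert-turned-update behaviour described in the question.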
For replication, you might have already enabled the transaction log in the source DB. Now, you need to run the following from GGSCI:
"ADD TRANDATA schema_name.table_name, COLS(...)"
In the COLS part, you need to mention the column (or comma-separated columns) that can be used to identify a unique record (you can mention the unique indexed columns if present). If there is no unique index on the table and you are not sure which columns could be used to uniquely identify a row, then just run from GGSCI:
"ADD TRANDATA schema_name.table_name"
GoldenGate will then start logging all the columns necessary for uniquely identifying a row.
Note: this should be done before you start the replication process.
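For example, assuming a hypothetical source schema GG_SRC and a table TB_MYTABLENAME whose rows are uniquely identified by an ID column, the GGSCI session would look something like this (credentials are placeholders):

GGSCI> DBLOGIN USERID gg_admin, PASSWORD gg_admin_pwd
GGSCI> ADD TRANDATA GG_SRC.TB_MYTABLENAME, COLS (ID)
GGSCI> INFO TRANDATA GG_SRC.TB_MYTABLENAME

INFO TRANDATA can be used afterwards to confirm that supplemental logging is enabled for the listed columns.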

Number format not matching between Pentaho Kettle and Oracle?

I have a database table in Oracle 11g created and populated with the following code:
CREATE TABLE TEST_TABLE (CODE NUMBER(1,0));
INSERT INTO TEST_TABLE (CODE) VALUES (3);
Now, I want to use this table as a lookup table in a Pentaho Kettle transformation. I want to make sure that the value of a column comes from this table, and abort if it does not. I have the following setup:
The data stream has a single column called Test, of type Integer, and a single row. The lookup step is configured to match Test against the CODE column.
However, the lookup always fails and the transformation is aborted, no matter whether the value of Test is 3 (should be OK) or 4 (should be aborted). Yet if I check the "Load all data from table" box, it works as expected.
So my question is this: Why does it not work unless I cache the whole table?
Two further observations:
When it works and the row is printed in the log, I notice that Test is printed as [ 3] and From DB as [3] (without the extra space). I don't know if this is of any significance, though.
If I change the database table so that CODE is created as INT, it works. This leads me to believe it is somehow related to the number formatting. (In my actual application, I cannot change the database tables.) I guess I should change the format of Test, but to what? Setting it to Number does not help, nor does Number with length 1 and precision 0.
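For comparison, the variant from the second observation that does work, with the same test data and only the column type changed:

CREATE TABLE TEST_TABLE (CODE INT);
INSERT INTO TEST_TABLE (CODE) VALUES (3);

In Oracle, INT is an ANSI alias for NUMBER(38,0), so the difference the lookup sees is only in the precision/scale metadata, not in the stored value, which supports the suspicion that the mismatch is about number formatting.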

Truncate target Table based on rows in a Target Flatfile

I have a workflow which loads data from a flat file to a Stage table after a few basic checks on a few columns. In the mapping, each time my check fails (meaning the column has an invalid value), I make an entry to an ErrorFlatFile with an error text.
Now, I have two targets in my mapping: one is the Stage table and the other is the error flat file.
What I want to achieve is this: even if there is just one entry in the ErrorFlatFile (indicating there is an error in the source file), I want to truncate the target Stage table.
Can someone please help me with how I can do this at the session level?
Thanks,
You would need one more session. Create a dummy session (one that reads no data) and add a Pre- or Post-SQL statement:
TRUNCATE TABLE YourTargetStageTableName
Create a link from your existing session to the dummy one and add a condition like:
$PMTargetName#numAffectedRow > 0
replacing TargetName with the name of your error flat file target. The second session will then only be executed when an entry was made to the error file; if there are no errors, it will not run.
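For example, if the error file target instance in the first session is named tgt_ErrorFlatFile (a hypothetical name), the link condition, keeping the variable syntax from the answer above, would read:

$PMtgt_ErrorFlatFile#numAffectedRow > 0

This evaluates to true only when the first session wrote at least one row to the error file, so the truncating dummy session runs only in that case.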
