I have a flat file data source and an Oracle destination. When I use a Lookup transformation to validate that some data exists in certain tables, it always gives No Match. I converted the data type in a Derived Column so that it exactly matches the table's data type; the column's data type is a Unicode string. Exact matches still lead to the No Match output. I am using the native SSIS OLE DB connection for Oracle. Please help.
Check your case sensitivity. Sreekanth will not match sreekanth or SREEKANTH
Check for trailing spaces, even in variable length columns.
Those are the two biggest stumbling blocks. If those are not it, then edit your question with the source table definition, the lookup table definition and an appropriate insert statement for both so we can help identify the specifics of what is awry.
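If those checks don't immediately explain it, one quick way to rule both out is to normalize both sides of the match. A minimal sketch, assuming the Lookup uses a SQL query against the Oracle reference table (schema, table and column names here are placeholders, not from your post):

SELECT UPPER(TRIM(lookup_key)) AS lookup_key,   -- normalize case and padding on the reference side
       surrogate_id                             -- whatever column(s) the Lookup should return
FROM   ref_schema.ref_table

Then apply the matching normalization, e.g. UPPER(LTRIM(RTRIM(flat_file_column))), in the Derived Column that feeds the Lookup, so both sides are compared in the same case with no stray spaces.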
I am failing to understand in which situation an insert will fail with a "no data found" error. Any insights, please?
Oracle GoldenGate Delivery for Oracle process started, group REPA discard file opened: 2020-08-21 18:32:07.326069
Current time: 2020-08-21 18:32:08
Discarded record from action ABEND on error 1403
No data found
Aborting transaction on /zfssa/gg_02/ogg/dirdat/REPA/EX beginning at seqno 473 rba 425209949
error at seqno 473 rba 425214669
Problem replicating SRC.TABLE to TGT.TABLE.
Record not found
Mapping problem with insert record (target format) SCN:3329198919.29.23.78560...
A NO DATA FOUND error usually points to an inconsistency problem. The Replicat is basically an application doing data manipulation using SQL statements. If it attempts to perform a DML operation and the database rejects it, that is normally because of an inconsistency between the attempted DML and the records related to it.
For example, attempting to delete a row which does not exist will fail with a database error. Aside from an Oracle GoldenGate bug, this usually points to target database inconsistencies. In other words, the target database is in a state that the customer did not expect it to be in.
Determine why your target database is not in the state that you expect it to be in. Numerous reasons can cause this, and below are some of them. This list is by no means exhaustive.
Possible Causes
The use of parameters to ignore previous errors such as HANDLECOLLISIONS, REPERROR with IGNORE or DISCARD options.
Primary keys or key columns that are different between source and target database. They might be the same columns but the type and/or size are different.
The target database is manipulated by an application program.
The target Replicat MAPs tables with filters and selection, or the Extract DML operations have filters. This will be based on your business needs and may or may not be intentional.
Non-primary key table updates. If all the columns are used for replication there are cases whereby more than one update can occur making subsequent DML operations fail.
Non-primary key table updates where KEYCOLS are used. These keys may not be unique. To test the uniqueness of the selected keys, run a query on the source database grouped on these KEYCOLS (see the example query after this list).
The language and character set in use (NLS, double-byte or multibyte character sets) differ, which may cause unexpected conversions done automatically by the database. Use the SETENV parameter to change the language, and set it before the USERID parameter.
Your source database and target database are of different types, for example Oracle to MSSQL, and the conversion done on the primary keys or key columns is not what you expected.
There are other specific configurations, patches, features, default database behaviors and so on. Search the My Oracle Support (MOS) Knowledge base for the database error number, example: "ORA-01403" under the Oracle GoldenGate core product. Review these knowledge solutions to see if they are related to your issue.
In rare situations, this may be an Oracle GoldenGate bug in that a particular DML was not captured or the values were incorrectly interpreted. Please submit an SR if you think this is the case. You will need to provide in addition to the target replicat reports and materials stated above, all extract reports on the source machine and GoldenGate trails.
Duplicate mappings in the Replicat parameter file together with the ALLOWDUPTARGETMAP parameter.
Incorrect use of the Extract parameter THREADOPTIONS PROCESSTHREADS. This can cause it to miss data.
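For example, to test the uniqueness of a chosen set of KEYCOLS on the source, a simple duplicate check can be run (schema, table and column names below are placeholders):

SELECT keycol1, keycol2, COUNT(*)
FROM   src_schema.src_table
GROUP  BY keycol1, keycol2
HAVING COUNT(*) > 1;   -- any rows returned mean the chosen KEYCOLS are not unique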
Possible Solutions
What do you need to investigate a replicat database issue?
For a start you will need the replicat parameter file, report file and discard file.
Report file: Contains all warnings, errors, tables that are already mapped, columns mapped or unmapped and all run time environment settings.
Discard file: Displays in detail the issue with mapping the table that generated the database error, the columns and their values, and the position of the record in the GoldenGate trail.
Parameter file: Usually the parameters are within the report file, but this is useful if the report file has been rolled over (REPORTROLLOVER parameter).
What are the next steps?
Query your target database based on the above data. Depending on the database error, the report file and/or discard file may contain the exact SQL statement used; nevertheless, you should be able to construct an appropriate query. This is to confirm that the Replicat has indeed reported the correct database error.
For example, if the Oracle DB error was ORA-01403, which means no data found, your query should select the row using the primary keys or key columns as specified. Your query should return the same result the Replicat saw, i.e. no row.
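For instance, taking the key values printed in the discard file, a query along these lines (schema, table and column names are placeholders) should return no rows if the ORA-01403 reported by the Replicat is genuine:

SELECT *
FROM   tgt_schema.tgt_table
WHERE  key_col1 = 'value_from_discard_file'
AND    key_col2 = 'value_from_discard_file';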
Fixing the replicat.
The first thing to consider is whether you can ignore this error for now and resolve the situation later on. If your business allows you to do so, then you may either exclude this table altogether (TABLEEXCLUDE) or simply skip the error (REPERROR with DISCARD). If you skip the error, start the Replicat with REPERROR, run it for a short while, stop the Replicat, remove the REPERROR parameter, and then restart the Replicat.
Fixing the database.
No matter what the reason is, you will need to resync the table that caused this issue.
Configuration Issue
If you have duplicate mappings in the Replicat parameter file and the ALLOWDUPTARGETMAP parameter is used, DML will be applied twice. This leads to an ORA-01403 error for delete operations and an ORA-00001 error for insert operations. Remove the duplicate mapping to fix the issue.
In summary, there are a lot of possibilities for a no data found error in a Replicat process in GoldenGate.
This can also depend on the type of Replicat you are using. Many of them perform queries inside the database prior to actually performing a DML statement. The error you have can often result from a lack of permissions in the target database, or from using something like Database Vault or another technology that manipulates how DML is performed.
Problem statement
I need to replace certain dates in my SQL dump file and create another dump file. First I need to parse the CREATE TABLE definition and store information about date-type columns; I may need to skip certain columns. After that I need to parse the INSERT INTO statements, break each statement into rows and then into columns, and then replace the dates.
Solution: I am using Spring Batch, where the reader is a composite reader. I read the entire table definition (DROP statement, CREATE statement and INSERT statements) into memory and then pass it to the processor to replace the dates. During reading I also split the INSERT statements into rows and columns.
Problem: This solution works fine for small dumps, but I am getting an out-of-memory error for large dumps, e.g. one table has long blobs and is 2 GB in size.
Any idea how I can fix this problem, or is Spring Batch not the right solution for this? Any help will be highly appreciated.
I have a column (SALARY) in a source table in a relational DB;
for example, 15000 is a value in the SALARY column,
and I want to format it as $15,000.00 in the target table, which is also a relational DB,
using an Expression transformation.
Thank you,
Ajay
This can be done using the below steps:
Pull the required column from the source into an Expression transformation.
Create a derived column as the concatenation of '$' and the input column, using the expression CONCAT('$', Source_Column) (see the note after these steps).
Load this new column to the target, instead of the column from the source.
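Note that CONCAT('$', Source_Column) on its own produces '$15000' rather than '$15,000.00'. If the exact format is required and the formatting can be pushed down to Oracle instead (for example in a source qualifier override or a view), the equivalent would be along these lines (table and column names are placeholders):

SELECT TO_CHAR(salary, 'FM$999,999,990.00') AS salary_formatted   -- 15000 -> $15,000.00
FROM   employee_salaries;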
I hope this is just to learn the functionality. In real-life scenarios, this is bad practice; we should not keep these symbols in tables. This can be handled directly at the reporting level.
There may be a scenario where the salary column holds salaries in different currencies, say dollars, euros, pounds, etc., in the same table.
Still, I agree with Thomas Cherian that it is not good practice to store the symbol with the data itself.
If this is the scenario, you can do two things:
--1.) Add another column to the table and store the currency type there, or add another column which also stores the conversion rate (see the sketch after this list).
--2.) Convert all the values to one currency and store them without the symbol.
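A minimal sketch of option 1, with hypothetical table and column names:

ALTER TABLE employee_salaries ADD (currency_code VARCHAR2(3));      -- e.g. 'USD', 'EUR', 'GBP'
ALTER TABLE employee_salaries ADD (conversion_rate NUMBER(12,6));   -- optional: rate to a base currency
UPDATE employee_salaries SET currency_code = 'USD' WHERE currency_code IS NULL;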
I'm attempting to recover the data for a specific table that exists in a system table dump I performed earlier. I would like to append the rows existing in the dump to any rows that may exist in the active table. The problem is, it's likely that the name of the table in the dump is not the same as what exists in the database currently (they're dynamically created with a prefix of ARC_TREND_). In addition, I don't know the name of the table as it exists in the dump; I was hoping to use SQL Developer to analyze the dump file, as I can recognize the correct table by its columns and its existing rows.
While I'm going on blind faith that SQL Developer can work with my dump file, when attempting to open it I'm getting a Java heap OutOfMemory exception. I've adjusted the maximum heap size from 640m to 1024m in both sqldeveloper.bat and sqldeveloper.conf, but to no avail.
Can someone recommend a course of action for me to recover the data from a table which exists in an exp-created dump file? A graphical tool would be nice, but I'm no stranger to the command line. I need to analyze the tables that exist in the dump in order to pick the correct one out. Then I assume I can use imp TABLES= to bring it back into the active instance. It likely won't match the existing table name, so I will use SQL Developer to copy the rows from the imported table to the table where I need them to be.
The dump was taken from a Linux server running 10g, and will be imported to (the same server & database instance, upgraded) an 11g instance of the same database.
Thanks
Since you're referring to imp rather than impdp, I assume this wasn't exported with data pump. Either way, I doubt you'll get anything useful through SQL Developer.
Fortunately, most of what you're trying to do is quite easy from the command line; just run imp with the INDEXFILE parameter, which will give you a text file containing all the table creation commands (commented out with REM) and index creation commands. From that you should be able to spot the table from its column names.
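For example (username, password and file names are placeholders):

imp scott/tiger FILE=export.dmp FULL=y INDEXFILE=ddl_from_dump.sql

The generated ddl_from_dump.sql can then be searched for the ARC_TREND_ tables and their column lists.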
You can't really see any row data though, so if there's more than one possible match you might need to import several tables and inspect the data in them in the database to see which one you really want.
We have a table in Oracle 11g with a VARCHAR2 column. We use a proprietary programming language where this column is defined as a string. At most we can store 2000 characters (4000 bytes) in this column. Now the requirement is that the column needs to store more than 2000 characters (in fact, unlimited characters). The DBAs don't like the BLOB or LONG datatypes for maintenance reasons.
The solution that I can think of is to remove this column from the original table, have a separate table for this column, and then store each character in a row in order to get unlimited characters. This table will be joined with the original table for queries.
Is there any better solution to this problem?
UPDATE: The proprietary programming language allows defining variables of type string and blob; there is no option for CLOB. I understand the responses given, but I cannot take on the DBAs. I understand that deviating from BLOB or LONG will be a developers' nightmare, but I still cannot help it.
UPDATE 2: If the maximum I need is 8000 characters, can I just add 3 more columns so that I have 4 columns of 2000 characters each, giving 8000 characters in total? When the first column is full, values would spill over to the next column, and so on. Will this design have any bad side effects? Please suggest.
If a BLOB is what you need, convince your DBA that it's what you need. Those data types are there for a reason, and any roll-your-own implementation will be worse than the built-in type.
Also, you might want to look at the CLOB type, as it will meet your needs quite well.
You could follow the way Oracle stores stored-procedure source in its data dictionary. Define a table that holds the text in lines:
CREATE TABLE MY_TEXT (
  IDENTIFIER INT,
  LINE INT,
  TEXT VARCHAR2(4000),
  PRIMARY KEY (IDENTIFIER, LINE));
The IDENTIFIER column is the foreign key to the original table. LINE is a simple integer (not a sequence) to keep the text fields in order. This allows keeping larger chunks of data.
Yes, this is not as efficient as a BLOB, CLOB, or LONG (I would avoid LONG fields if at all possible). Yes, this requires more maintenance, but if your DBAs are dead set against managing CLOB fields in the database, this is option two.
EDIT:
My_Table below is where you currently have the VARCHAR column you are looking to expand. I would keep it in the table for the short text fields.
CREATE TABLE MY_TABLE (
  IDENTIFIER INT,
  OTHER_FIELD VARCHAR2(10),
  REQUIRED_TEXT VARCHAR2(4000),
  PRIMARY KEY (IDENTIFIER));
Then write the query to pull the data by joining the two tables, ordering by LINE in MY_TEXT. Your application will need to split the string into 2000-character chunks and insert them in LINE order.
I would do this in a PL/SQL procedure, for both insert and select; PL/SQL VARCHAR2 strings can be up to 32K characters, which may or may not be large enough for your needs (a sketch follows).
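A minimal sketch of that approach, assuming the MY_TABLE / MY_TEXT definitions above and 2000-character chunks to match the client-side limit:

CREATE OR REPLACE PROCEDURE save_text (
    p_identifier IN INT,
    p_text       IN VARCHAR2          -- up to 32767 characters inside PL/SQL
) AS
    c_chunk CONSTANT PLS_INTEGER := 2000;
    l_line  PLS_INTEGER := 0;
BEGIN
    DELETE FROM my_text WHERE identifier = p_identifier;   -- replace any existing text
    WHILE l_line * c_chunk < NVL(LENGTH(p_text), 0) LOOP
        INSERT INTO my_text (identifier, line, text)
        VALUES (p_identifier, l_line, SUBSTR(p_text, l_line * c_chunk + 1, c_chunk));
        l_line := l_line + 1;
    END LOOP;
END save_text;
/

CREATE OR REPLACE FUNCTION load_text (p_identifier IN INT) RETURN VARCHAR2 AS
    l_text VARCHAR2(32767);
BEGIN
    FOR r IN (SELECT text
              FROM   my_text
              WHERE  identifier = p_identifier
              ORDER  BY line) LOOP
        l_text := l_text || r.text;   -- reassemble the chunks in LINE order
    END LOOP;
    RETURN l_text;
END load_text;
/

If the reassembled text can ever exceed 32K characters, the select side would instead have to return the ordered rows to the application and let it do the concatenation.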
But like every other person answering this question, I would strongly suggest making a case to the DBA to make the column a CLOB. From the program perspective this will be a BLOB and therefore simple to manage.
You said no BLOB or LONG... but what about CLOB? 4GB character data.
BLOB is the best solution. Anything else will be less convenient and a bigger maintenance annoyance.
Is BFILE a viable alternative datatype for your DBAs?
I don't get it. A CLOB is the appropriate database datatype. If your weird programming language will deal with strings of 8000 (or whatever) characters, what stops it from writing those to a CLOB?
More specifically, what error do you get (from Oracle or your programming language) when you try to insert an 8000-character string into a column defined as a CLOB?
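For what it's worth, a minimal sketch (hypothetical table name) showing an 8000-character string going into a CLOB from PL/SQL with no special handling:

CREATE TABLE my_table_clob (
    identifier    INT PRIMARY KEY,
    required_text CLOB
);

DECLARE
    l_text VARCHAR2(32767) := RPAD('x', 8000, 'x');   -- an 8000-character test string
BEGIN
    INSERT INTO my_table_clob (identifier, required_text)
    VALUES (1, l_text);                                -- implicit VARCHAR2-to-CLOB conversion
    COMMIT;
END;
/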