How to catch or skip error rows in DataStage when inserting rows through an ODBC stage

I'm using DataStage 11.7. I want to insert rows into my ODBC table, but while inserting I want to skip or catch any row that fails, so the flow won't abort and will keep inserting even if there is an error. I also need to catch that failed row so I can retrieve it afterwards and send it to the ETL support team.
Example:
Dataset (10 rows) -> Copy stage -> ODBC
While inserting the 10 rows, one of them may contain an error; in that case I want that row taken out of the insert so the remaining 9 rows are still loaded.
I tried using the reject mode of the ODBC stage, but it isn't working; even with that, the whole flow aborts. In my last test one row already existed in the table, so when I tried to insert the 10 rows the job failed because of that single row. How can I solve this problem, please? Thank you in advance.

Related

What is being deleted?

I'm going through some Visual FoxPro 7.0 code, which is new to me. I'm having a bit of trouble deciphering a DELETE command inside a DO WHILE loop that uses two different work areas.
I'm just not clear on what is actually being deleted. When the DELETE command is issued, work area 1 is selected, but I'm not looping through any of its records. If the DELETE command IS being applied to tblpay (work area 1), then it seems to be deleting the record that was just inserted, which makes no sense. Can someone clue me in, please?
select 1 (tblpay)
USE tblpay
select 2 (tblfac)
USE tblfac
GOTO top
DO WHILE NOT EOF()
    lcfy = fy
    lcindex_no = index_no
    lcpca = pca
    lnpys = padl(alltrim(str(cum_pys)),13,' ')
    select 1 (tblpay)
    LOCATE FOR fy = lcfy AND index_no = lcindex_no AND pca = lcpca
    IF NOT FOUND()
        INSERT INTO tblpay(exp_1,fy,exp_3,exp_4,exp_5,exp_6,index_no,exp_8,pca,cum_pys,reversal) ;
            values('805',lcfy,SPACE(37),lcdoc_date,lccurdoc,'00',lcindex_no,'99802',lcpca,lnpys,'R')
        DELETE
    ENDIF
    select 2 (tblfac)
    SKIP
ENDDO
Admittedly the code you show is not very clear.
Some suggested changes:
select 1 (tblpay)
USE tblpay
should be
USE tblpay IN 0 && Open table tblpay into next available workspace
and
select 2 (tblfac)
USE tblfac
should be
USE tblfac IN 0 && Open table tblfac into next available workspace
Then you would no longer have to remember SELECT 1 or SELECT 2 (now, what did I have in #1 or #2?).
Instead you would choose the table by its alias, such as: SELECT tblfac
The rest of the code doesn't make much sense either.
* You choose table tblfac and loop through its records for values
* Then you go to table tblpay and attempt to LOCATE one or more specific records
* If the tblpay record is NOT FOUND(), you then use the values from tblfac (and other info) and INSERT a new record into tblpay using a SQL command (you could also have used the VFP commands APPEND BLANK followed by REPLACE)
* The DELETE that follows will delete the record the pointer is currently on - however, the way your code is written, that might not be what you want.
The way it looks, if you have NOT FOUND() a matching record in tblpay, your record pointer is still in that table, but it is now at EOF() (End Of File) and not on any actual record, so an attempt to DELETE will not do anything.
In your VFP development environment, you should use the debugger to actually 'see' which table's record pointer is active and which record it is on.
To do that you might want to use SET STEP ON in the following manner:
IF NOT FOUND()
    INSERT INTO tblpay(exp_1,fy,exp_3,exp_4,exp_5,exp_6,index_no,exp_8,pca,cum_pys,reversal) ;
        values('805',lcfy,SPACE(37),lcdoc_date,lccurdoc,'00',lcindex_no,'99802',lcpca,lnpys,'R')
    SET STEP ON && Added here for debug purposes ONLY
    DELETE
ENDIF
Then, when you execute the code in the VFP development environment and execution hits that line, it will suspend execution and open the debug Trace window, allowing you to investigate the record pointer, etc.
Good Luck
What Dhugalmac has said is partially correct, but not entirely. If the searched record is not found, then you are inserting a record and then deleting that newly inserted record. The pointer is NOT at EOF() but on that new record.
As Dhugalmac said, do not use work area numbers; use aliases instead. The code above is not the real code - it wouldn't run without an error.
If you are using this code and the text it appears in for learning, stop reading it immediately and throw it away. The code is terrible and doesn't seem to have a purpose (besides having errors).
If your intent is to learn how to delete from VB.Net, just use VFPOLEDB and a DELETE - SQL command with ExecuteNonQuery (just as you would against SQL Server, PostgreSQL, MySQL ... any ANSI database). From VB.Net, most of the xBase commands have no place (nor do those DO WHILE ... SKIP ... ENDDO loops - you wouldn't even use them from within VFP).
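A minimal VB.Net sketch of that VFPOLEDB approach (the connection string path, table, and column values are hypothetical, and the VFP OLE DB provider must be installed):

```vbnet
' Delete rows from a VFP table with a DELETE - SQL command and
' ExecuteNonQuery, via the VFP OLE DB provider.
Imports System.Data.OleDb

Module DeleteDemo
    Sub Main()
        ' Data Source points at the folder holding the free tables.
        Dim connStr As String = "Provider=VFPOLEDB.1;Data Source=C:\data\"
        Using conn As New OleDbConnection(connStr)
            conn.Open()
            Dim sql As String = "DELETE FROM tblpay WHERE fy = ? AND index_no = ?"
            Using cmd As New OleDbCommand(sql, conn)
                ' OLE DB parameters are positional (?), bound in order.
                cmd.Parameters.AddWithValue("p1", "2016")
                cmd.Parameters.AddWithValue("p2", "12345")
                Dim affected As Integer = cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub
End Module
```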

SQL LOADER ERROR_0102

Below are my table structure, .CTL file, and .CSV file. While loading data I always get an error on the first row; the rest of the data loads fine. If I leave a completely blank line as the first record, all the data gets inserted.
Can you please help me understand why I am getting an error on the first record?
TABLE_STRUCTURE
ING_DATA
(
ING_COMPONENT_ID NUMBER NOT NULL,
PARENT_ING_ID NUMBER NOT NULL,
CHILD_ING_ID NUMBER NOT NULL,
PERCENTAGE NUMBER(7,4) NOT NULL
);
CTL FILE
LOAD DATA
INFILE 'C:\Users\pramod.uthkam\Desktop\Apex\Database\SQL LOADER-PROD\ING_COMPONENT\ingc.csv'
BADFILE 'D:\SQl Loader\bad_orders.txt'
INTO TABLE ING_data
FIELDS
TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
ING_Component_ID ,
Parent_ING_ID ,
Child_ING_ID ,
Percentage
)
CSV FILE
1,3,4,95.0000
2,3,5,5.0000
3,6,7,5.0000
4,6,4,95.0000
5,18,19,19.0000
6,18,20,80.0000
7,18,21,1.0000
8,34,35,85.0000
LOG FILE
Record 1: Rejected - Error on table ING_COMPONENT, column ING_COMPONENT_ID.
ORA-01722: invalid number
Table ING_COMPONENT:
7 Rows successfully loaded.
1 Row not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
Space allocated for bind array: 66048 bytes(64 rows)
Read buffer bytes: 1048576
Total logical records skipped: 0
Total logical records read: 7
Total logical records rejected: 1
Total logical records discarded: 0
BAD FILE
1,3,4,95.0000
I tried loading your file by creating it as is. For me it runs fine: all 8 rows got loaded, no issues. I was trying on Red Hat Linux.
Then I tried two things. First:
dos2unix myctl.ctl
Ran SQLLDR; all rows got inserted.
Then I tried:
unix2dos myctl.ctl
Ran SQLLDR again; all 8 records were rejected. So what I believe is that the line ending of your first record is not in a format SQL*Loader can read. When you enter a blank line manually, your environment (in my case UNIX) writes the correct line ending, and the records get read. I'm not sure, but I assume this based on my own attempt described above.
So let's say you are loading this file on Windows (I think this because of how your path looks). In your CSV file, add a blank line at the beginning and then remove it, and do the same thing after the first record (add a blank line after the first record, then remove it). Then try again. It may work, if the issue is what I suspect.
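One more thing worth ruling out when only the first record fails with ORA-01722: an invisible UTF-8 byte order mark (BOM) at the start of the file gets glued onto the first field and turns it into an invalid number. A quick sketch for inspecting the raw bytes of the file (the function name is illustrative):

```shell
# Dump the first bytes of a data file. In od -c output, a UTF-8 BOM
# appears as the octal bytes 357 273 277, and a Windows line ending
# shows up as \r \n.
inspect_first_bytes() {
    head -c 32 "$1" | od -c
}
```

Run it against the CSV; if the BOM bytes are there, strip them (dos2unix fixes line endings, but depending on the version it may not remove a BOM).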

Jmeter with Oracle 12c: the ";" is unusable

I have JMeter 3.0 connecting to Oracle 12c using the thin driver (I used ojdbc 7 and 7_c), and I cannot end the query with a semicolon ( ; ). It always returns:
Cannot create PoolableConnectionFactory (ORA-00933: SQL command not properly ended
If I remove the ";" from the query, everything works fine. How can I fix this?
I found a workaround that avoids the semicolon problem:
The JDBC Request Query Type needs to be: Update Statement
The query needs to be processed as a block:
BEGIN
SQL Statement
END;
There are specific SQL constructs that can't be executed inside a block, but this still lets you keep legitimate SQL code in the request, and it even allows several statements in the same request.
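As a concrete sketch of that block form (table and column names hypothetical): each inner statement keeps its own semicolon because it sits inside the PL/SQL block, and only the final END; closes the block.

```sql
BEGIN
  UPDATE employees SET salary = salary * 1.05 WHERE dept_id = 10;
END;
```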
If you are using the JDBC Request sampler, you should NOT keep a trailing semicolon at the end of your SQL query. The manual says:
Do not enter a trailing semi-colon.
So, without the semicolon, it should work properly; there is no need to include it.
Reference:
http://jmeter.apache.org/usermanual/component_reference.html#JDBC_Request

Oracle update million records from XML file

Gurus ,
I have a reporting shell script on the Linux platform with an Oracle 12c database, which does the following:
Read the error XML files created in the last 24 hrs (mtime) from the unix directory path
Strip unwanted text with sed
Fetch each row and column using cut -d ";" -f $X
Prepare an update statement
Execute the update statement after processing each file, to set the error code.
In UAT I received 400 files, each with 20,000 rows. That means the update statement is prepared 400 x 20,000 times, and each statement is executed individually.
The issues I see are:
Unable to log/handle update errors in order to debug or rerun them.
It takes a lot of time, even though we have indexes.
What is the best way to handle such a situation?
I have the following thought in mind: instead of creating update statements, use sqlldr to load into a temp table, then update/merge the two tables. I'm not sure about the performance of executing 400 sqlldr runs - any idea?
Is there a better way to handle this, in terms of error handling and process?
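The temp-table idea in the question would replace millions of single-row updates with one set-based statement: load the parsed rows into a staging table with SQL*Loader (or an external table over the files), then merge once. A hedged sketch, with all table and column names hypothetical:

```sql
-- stage_err is the staging table loaded by SQL*Loader from the parsed XML rows.
MERGE INTO target_tab t
USING stage_err s
   ON (t.record_id = s.record_id)
 WHEN MATCHED THEN
   UPDATE SET t.error_code = s.error_code
 LOG ERRORS INTO err$_target_tab ('batch1') REJECT LIMIT UNLIMITED;
```

The LOG ERRORS clause (the error table is created beforehand with DBMS_ERRLOG.CREATE_ERROR_LOG) captures failing rows without aborting the statement, which also addresses the error-logging requirement.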

How to see the actual query executed by an Oracle report containing an error

Like most Oracle Reports in O*Financials, the query is made up of dynamic parts depending on the parameters entered.
When I run the concurrent request, the log file contains an obscure error:
ORA-00933: SQL command not properly ended
d.acctd_amount_dr, d.amount_dr) ) C_AMOUNT , trx . trx_number C_TRX_NUMBER FROM ar_cash_receipt_history crh , ar_distributions d , gl_code_combinations gc , ar_batches b , ar_cash_receipts cr , hz_cust_accounts cust_acct , h
(I don't know why it puts spaces around trx.trx_number, which is one of my changes.)
I have no experience with Oracle Reports itself, but what always works for seeing which SQL statements a client sends to an Oracle DB is a SQL*Net trace. For instructions on how to configure SQL*Net to create a trace file, please consult the Oracle SQL*Net documentation or take a look at the OraFAQ.
And please don't forget to deactivate the tracing feature again after you are done with that SQL statement.
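For reference, client-side SQL*Net tracing is typically switched on with entries like these in the client's sqlnet.ora (the level and paths are examples; higher levels produce very verbose output):

```
TRACE_LEVEL_CLIENT = 16
TRACE_DIRECTORY_CLIENT = /tmp/sqlnet_trace
TRACE_FILE_CLIENT = client
```

Set TRACE_LEVEL_CLIENT back to OFF when you are done.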
Thanks guys.
I copied the query text, concatenated in all the parameters like &LP_ORDER_BY, and then displayed it with SRW.MESSAGE in the BEFORE_REPORT trigger.
A bit tedious; they should have some field available that holds the query.
