sqlldr style error logging in Oracle external tables

I'm currently trying to get the error messages from our Oracle external table loading process to match the level of detail we get when loading via sqlldr.
Currently, if I load a file with sqlldr and a record fails, I get this error message, which is pretty useful: it gives me the record number, the actual column name that failed, and the rejected record in a bad file.
Record 4: Rejected - Error on table ERROR_TEST, column COL1.
ORA-01722: invalid number
I've got an external table along with an INSERT statement to a target table that logs errors to a DBMS_ERRLOG table, but this is the equivalent error message from that process:
ORA-01722: invalid number
Whilst this process has the benefit of recording the actual record in a table, so you can see the column name mappings, it doesn't actually list which column has the issue. This is a big problem when looking at tables with many columns...
It looks as though I can put a REJECT LIMIT on the external table itself, which will give the same error as above, but then I lose the ERR table logging. I'm guessing it's a case of one or the other?
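For context, that process looks roughly like this; a minimal sketch, assuming a target table ERROR_TEST with a single NUMBER column, a directory object DATA_DIR and a flat file err_test.dat (all names here are hypothetical):

-- External table over the flat file, everything read as text
CREATE TABLE error_test_ext (col1 VARCHAR2(20))
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('err_test.dat')
);

-- One-off: create the ERR$_ERROR_TEST logging table
EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('ERROR_TEST');

-- The load: failing rows go to the error log instead of aborting the insert
INSERT INTO error_test (col1)
SELECT col1 FROM error_test_ext
LOG ERRORS INTO err$_error_test ('daily_load') REJECT LIMIT UNLIMITED;

The ORA_ERR_MESG$ column of ERR$_ERROR_TEST then holds only the bare "ORA-01722: invalid number" text, without the column name.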

Related

Can DBMS_ERRLOG be used in SQL*Loader functionality to catch the error while inserting tables?

I am trying to capture the Oracle validation errors from a SQL*Loader run in a table. Right now the errors are written to the log file, and I am trying to get those errors into a table. I have tried DBMS_ERRLOG, but it works with an INSERT statement and not with SQL*Loader. Is there any way to use DBMS_ERRLOG to capture the error messages from SQL*Loader?

issue in create table using another table in hive

In Hive there is a test table. The table's data is spread over many small files, so I want to create another table from that test table; the new table will have fewer partitions and queries will be faster. But when I create the new table it gives me an error.
CREATE TABLE IF NOT EXISTS test_merge
STORED AS parquet
AS
SELECT * FROM test;
Error
ERROR : Status: Failed
ERROR : FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
INFO : Completed executing command(queryId=hive_20180108060101_7bca2cc8-e19b-4e6d-aa00-362039526523); Time taken: 366.845 seconds
Error: Error while processing statement: FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask (state=08S01,code=3)
It works fine with less data, for example:
CREATE TABLE IF NOT EXISTS test_merge
STORED AS parquet
AS
SELECT * FROM test limit 100000;
It may be a memory issue, I don't know. Please help.
When you write files in Parquet format, Spark buffers a batch of rows into a data block called a "row group" before flushing it to disk, so it usually requires more memory than row-oriented formats. Try increasing "spark.executor.memory" or decreasing "parquet.block.size"; this may help.
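For example, from the Hive session itself; the values below are illustrative starting points, not tuned recommendations:

SET spark.executor.memory=4g;        -- more heap per executor for the row-group buffers
SET parquet.block.size=67108864;     -- 64 MB row groups instead of the 128 MB default

CREATE TABLE IF NOT EXISTS test_merge
STORED AS parquet
AS
SELECT * FROM test;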

Add Column to Hive External Table Error

Trying to add a column to an external table in Hive, but I get the error below. This table currently has a thousand partitions registered, and I want to avoid re-creating the table and then running MSCK REPAIR, which would take a very long time to complete. Also, the table uses the OpenCSVSerde format. How can I add a column?
hive> ALTER TABLE schema.Table123 ADD COLUMNS (Column1000 STRING);
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. java.lang.IllegalArgumentException: Error: type expected at the position 0 of '<derived from deserializer>' but '<' is found.

How to rollback or not commit with sql loader [duplicate]

If while loading this file
$ cat employee.txt
100,Thomas,Sales,5000
200,Jason,Technology,5500
300,Mayla,Technology,7000
400,Nisha,Marketing,9500
500,Randy,Technology,6000
501,Ritu,Accounting,5400
using the control file (say) sqlldr-add-new.ctl, I come to know that all the records are faulty, then I want the previously loaded records in that table (those that were loaded yesterday) to be retained. How do I handle this exception?
This is my sample ctl file
$ cat sqlldr-add-new.ctl
load data
infile '/home/ramesh/employee.txt'
into table employee
fields terminated by ","
( id, name, dept, salary )
You can't roll back from SQL*Loader; it commits automatically. This is mentioned in the errors parameter description:
On a single-table load, SQL*Loader terminates the load when errors exceed this error limit. Any data inserted up to that point, however, is committed.
And there's a section on interrupted loads.
You could attempt to load the data to a staging table, and if it is successful move the data into the real table (with delete/insert into .. select .., or with a partition swap if you have a large amount of data). Or you could use an external table and do the same thing, but you'd need a way to determine if the table had any discarded or rejected records.
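A minimal sketch of the staging-table variant, assuming a staging table EMPLOYEE_STAGE with the same columns as EMPLOYEE (the name is hypothetical); SQL*Loader targets the staging table, and the move into the real table only runs when the load finishes without errors:

-- sqlldr loads into EMPLOYEE_STAGE; then, only if the load succeeded:
INSERT INTO employee (id, name, dept, salary)
SELECT id, name, dept, salary FROM employee_stage;
COMMIT;
TRUNCATE TABLE employee_stage;

-- if the load failed, EMPLOYEE is untouched; just clear the staging table:
TRUNCATE TABLE employee_stage;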
Try with ERRORS=0.
You can find the full explanation here:
http://docs.oracle.com/cd/F49540_01/DOC/server.815/a67792/ch06.htm
ERRORS (errors to allow)
ERRORS specifies the maximum number of insert errors to allow. If the number of errors exceeds the value of the ERRORS parameter, SQL*Loader terminates the load. The default is 50. To permit no errors at all, set ERRORS=0. To specify that all errors be allowed, use a very high number.
On a single-table load, SQL*Loader terminates the load when errors exceed this error limit. Any data inserted up to that point, however, is committed.
SQL*Loader maintains the consistency of records across all tables. Therefore, multi-table loads do not terminate immediately if errors exceed the error limit. When SQL*Loader encounters the maximum number of errors for a multi-table load, it continues to load rows to ensure that valid rows previously loaded into tables are loaded into all tables and/or rejected rows are filtered out of all tables.
In all cases, SQL*Loader writes erroneous records to the bad file.
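For example, the invocation would look something like this (credentials and paths are placeholders):

$ sqlldr userid=scott/tiger control=sqlldr-add-new.ctl errors=0

Note that, per the passage quoted above, any rows committed before the terminating error stay committed, so ERRORS=0 caps the damage rather than providing a true rollback.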

Table importation fails with error code ORA-31693

I've been receiving exports from another Oracle database into my Oracle database for two years now. My company is running version 10.2.0.1.0 and we are receiving the exports from version 12.1.0.2.0. They are using expdp and I'm using impdp. I added a new column to the table using this script:
ALTER TABLE CONTAINERS
ADD ("SHELL" NUMBER(14, 6) DEFAULT 0 );
After running the above on both databases, the table in question will no longer import when they send me an export. I receive the following error:
ORA-31693: Table data object "PAS"."CONTAINERS" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-02373: Error parsing insert statement for table "PAS"."CONTAINERS".
ORA-00904: "SYS_NC00067$": invalid identifier
This error has been going on for about two weeks. I have tried to resolve the problem multiple ways; this is my last resort, as it were.
Any help is greatly appreciated.
Did you try to track down SYS_NC00067$? It looks like a system-assigned column name. These sometimes appear when you add a function-based index. Did you create a function-based index on SHELL?
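One way to check; a sketch that lists system-generated hidden columns on the table and the index expressions that could have created them, assuming the table is visible to you through the ALL_* dictionary views:

-- hidden columns such as SYS_NC00067$ show up here
SELECT column_name, data_type, hidden_column
FROM   all_tab_cols
WHERE  owner = 'PAS'
AND    table_name = 'CONTAINERS'
AND    column_name LIKE 'SYS_NC%';

-- function-based index expressions on the same table
SELECT index_name, column_expression
FROM   all_ind_expressions
WHERE  table_owner = 'PAS'
AND    table_name = 'CONTAINERS';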
