running sqlldr control files in UNIX - oracle

I have a control file that works fine when I run it from a Windows client. However, when I run it directly on Linux, it reports "load complete", but when I look at my Oracle data there is NO DATA, and there are even bad records. Below is the control file that works well on Windows but fails on Linux.
NOTE: The control file works if I remove the fields with the string/date conversion expressions.
Control file
load data
infile 'HOME/INPUT/FILEA.dat'
badfile 'HOME/BAD/FILEA.bad'
discardfile 'HOME/DIS/FILEA.dsc'
truncate
into table TEST
fields terminated by '|'
trailing nullcols
( ABCcode CHAR(11),
ABCID CHAR(6),
ABC_SEQNO "to_number(:ABC_SEQNO,'999999')",
PSNO "to_number(:PSNO,'99999999999.999')",
ABDF CHAR(1),
ABCFI CHAR(1),
ABC_DATE NULLIF ABC_DATE="00000000" "to_date(:ABC_DATE, 'YYYYMMDD')",
XZY_date NULLIF XZY_date="00000000" "to_date(:XZY_date, 'YYYYMMDD')",
DESC CHAR(1))
Any help or ideas to get this to run on Linux would be appreciated.
Notes about the logfile: it contained the following:
ORA-00604: error occurred at recursive SQL level 1
ORA-12899: value too large for column "ABCschema"."TEST"."ABC_DATE" (actual: 9, maximum: 8)
Also, for the date conversion it showed:
NULL if ABC_DATE = 0X3030303030303030 (character '00000000')
SQL string for column : "to_date(:ABC_DATE, 'YYYYMMDD')"

Your TEST table has the ABC_DATE column defined as VARCHAR2(8), not as a DATE.
If I create a table as:
create table test (
ABCcode VARCHAR2(11),
ABCID VARCHAR2(6),
ABC_SEQNO NUMBER,
PSNO NUMBER,
ABDF VARCHAR2(1),
ABCFI VARCHAR2(1),
ABC_DATE DATE,
XZY_date DATE,
"DESC" VARCHAR2(1)
);
and have a data file with:
A|B|1|2.3|C|D|20140217|20140218|E
then it loads fine. If I recreate the table as:
create table test (
ABCcode VARCHAR2(11),
ABCID VARCHAR2(6),
ABC_SEQNO NUMBER,
PSNO NUMBER,
ABDF VARCHAR2(1),
ABCFI VARCHAR2(1),
ABC_DATE VARCHAR2(8),
XZY_date DATE,
"DESC" VARCHAR2(1)
);
... then the same control file and data file now give me:
Record 1: Rejected - Error on table TEST, column ABC_DATE.
ORA-12899: value too large for column "<schema>"."TEST"."ABC_DATE" (actual: 9, maximum: 8)
You are converting the string value to a date, but then an implicit conversion back to a string happens when the data is actually inserted into the VARCHAR2 column. That implicit conversion uses your NLS_DATE_FORMAT settings; the error I got came from having that set to DD-MON-RR.
You really have three options: modify your table to have an actual DATE column; change the control file so it just inserts the plain text value and doesn't do the date conversion at all; or massage your environment so the conversion back to a string produces the format the column expects.
Only the first one is really sensible - if it's a date value, always store it as a DATE, never as a string.
The 0X30... thing isn't a problem, that's just showing the internal representation it's using.
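If you are stuck with the VARCHAR2(8) column for now, the third option can even live inside the control file: convert the parsed date explicitly back to text, so no implicit NLS-dependent conversion happens on insert. A sketch only, reusing the names from the control file above:
ABC_DATE NULLIF ABC_DATE="00000000" "to_char(to_date(:ABC_DATE, 'YYYYMMDD'), 'YYYYMMDD')",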

Related

Passing shell variable in hive script

I am running a Hive script from a shell job, passing in some parameters.
CREATE EXTERNAL TABLE IF NOT EXISTS db_${hivevar:myEnv}_travail.tble91_formation_eligible(
ID_FORM_ELIG BIGINT,
LIBELLE_COURT_FORM VARCHAR(50),
DEBUT_VALID_FORM TIMESTAMP,
FIN_VALID_FORM TIMESTAMP,
CODE_BRANCHE_FORM VARCHAR(4),
CODE_REGION_FORM VARCHAR(3),
CODE_PUBLIC_FORM VARCHAR(1),
ETAT_FORM VARCHAR(3),
DATE_CREAT_FORM TIMESTAMP,
ORIGINE_CREAT_FORM VARCHAR(20),
DATE_MAJ_FORM TIMESTAMP,
ORIGINE_MAJ_FORM VARCHAR(20),
ID_LISTE_ELIG BIGINT,
LIBELLE_FORMATION VARCHAR(255),
CODE_DIPLOME INT,
NUMERO_FORMATION VARCHAR(50),
CODE_REFERENTIEL VARCHAR(50) COMMENT 'Code associated with the reference framework type (e.g. RNCP, RS, ELU, CPF, ...)',
CODE_RECONNAISSANCE VARCHAR(50) COMMENT 'Code of the recognition (e.g. RNCP1234, ELU134444, ...)',
NIVEAU_CONTROLE_HABILITATION VARCHAR(50) COMMENT 'Level of accreditation control for a CPF-eligible recognition (informational, blocking, absent)'
)
PARTITIONED BY TO_DATE('${hivevar:date_archive}')
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/${hivevar:myEnv}/ep/traitement/le5/app_db/db_${hivevar:myEnv}_le5_travail.db/tble91_formation_eligible/';
The problem is passing date_archive as a partition column: it is sourced from the shell script as a string, but I want to convert it to a date column, which is why I used the TO_DATE function.
For information, date_archive takes date values like 2022-02-03.
But this causes a problem:
Error: Error while compiling statement: FAILED: ParseException line 22:15 extraneous input 'TO_DATE' expecting ( near '(' in create table partition specification (state=42000,code=40000)
I tried :
BY (TO_DATE('${hivevar:date_archive}'))
BY (TO_DATE("${hivevar:date_archive}"))
BY (date_archive=TO_DATE("${hivevar:date_archive}"))
But I always get errors. Any help, please? Thank you.
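For what it's worth, Hive's PARTITIONED BY clause only accepts plain column definitions (a name and a type), never function calls, so the usual pattern is to declare the partition column as DATE and supply the shell value when the partition is added. A sketch, reusing the question's names:
CREATE EXTERNAL TABLE IF NOT EXISTS db_${hivevar:myEnv}_travail.tble91_formation_eligible(
ID_FORM_ELIG BIGINT
-- ... remaining columns as above ...
)
PARTITIONED BY (date_archive DATE)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/${hivevar:myEnv}/ep/traitement/le5/app_db/db_${hivevar:myEnv}_le5_travail.db/tble91_formation_eligible/';
-- then, when loading a given day's data:
ALTER TABLE db_${hivevar:myEnv}_travail.tble91_formation_eligible
ADD IF NOT EXISTS PARTITION (date_archive='${hivevar:date_archive}');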

Error converting varchar to numeric (but there's no number)

I have a table with several columns, like this:
CREATE TABLE CRM.INFO_ADICIONAL
(
ID_INFO_ADICIONAL NUMBER(10) NOT NULL,
NOMBRE VARCHAR2(100 BYTE) NOT NULL,
OBLIGATORIO NUMBER(1) NOT NULL,
TIPO_DATO VARCHAR2(2 BYTE) NOT NULL,
ACTIVO NUMBER(1) NOT NULL,
ID_TIPO_REQUERIMIENTO NUMBER(10) NOT NULL,
ID_USUARIO_AUDIT NUMBER(10) NOT NULL,
ORDEN NUMBER(3) DEFAULT 1,
RECHAZO_POR_NO NUMBER(1),
ID_TIPO_ARCHIVO_ADJUNTO NUMBER(10),
SOLICITAR_EN VARCHAR2(30 BYTE),
ID_CONSULTA NUMBER(10),
COMBO_ID VARCHAR2(40 BYTE),
APLICAR_COMO_VENC NUMBER(1),
MODIFICABLE NUMBER(1) DEFAULT 0,
ID_AREA_GESTION NUMBER(10),
ID_TAREA NUMBER(10)
)
The "COMBO_ID" column is the target. It is defined as VARCHAR, but when I'm trying to insert a row, TOAD displays
"ORA-06502: PL/SQL: error : error de conversión de carácter a número
numérico o de valor"
Or a 'numeric conversion error', in english.
This table has some pre-existing data, and I even found some rows with values in the COMBO_ID column, all of them VARCHAR, e.g.:
NACION (Nation), SEXO (Sex), etc.
I tried a few simple SELECT statements:
SELECT
ID_INFO_ADICIONAL,
NOMBRE,
OBLIGATORIO,
TIPO_DATO,
ACTIVO,
ID_TIPO_REQUERIMIENTO,
ID_USUARIO_AUDIT,
ORDEN,
RECHAZO_POR_NO,
ID_TIPO_ARCHIVO_ADJUNTO,
SOLICITAR_EN,
COMBO_ID,
APLICAR_COMO_VENC,
ID_CONSULTA,
MODIFICABLE,
ID_AREA_GESTION,
ID_TAREA
INTO
pRegistro
FROM
crm.info_adicional
where pRegistro is declared as
pRegistro INFO_ADICIONAL%ROWTYPE;
Again, I'm still getting this 'numeric conversion error'.
But wait: if I hardcode the SELECT value for the COMBO_ID column with a NUMBER:
SELECT
--other columns
123456 COMBO_ID,
--other columns
INTO
pRegistro
FROM
crm.info_adicional
It works. What the heck? It's defined as VARCHAR.
If I do the same but hardcoding a string, it fails again.
I already tried this in my DEV environment, and there it works fine.
I'm not a pro in Oracle, but I feel pretty lost.
Could it be that tables get "confused"?
Any clues?
That error can also be raised if you try to push a character string that is longer than your VARCHAR2's capacity (40 in your case).
Try to check if all the data you are trying to insert is correct:
SELECT
COMBO_ID
FROM
crm.info_adicional
ORDER BY length(COMBO_ID) desc;
That would also explain why it works fine on your DEV environment which, I suppose, has different data.
Okay, I already found the answer.
Quoting Oracle Documentation:
The %ROWTYPE attribute provides a record type that represents a row in a table or view. Columns in a row and corresponding fields in a record have the same names and datatypes.
So, basically, the SELECT list needed to be in the same order as the table's column definitions.
In my case, I had a few columns (including COMBO_ID) in a different order.
I tried re-ordering, and it works like a charm.
Thank you all for the support.
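To see the positional mapping in action, here is a minimal sketch with a hypothetical two-column table (the names are illustrative):
CREATE TABLE t (n NUMBER, v VARCHAR2(10));
INSERT INTO t VALUES (1, 'ABC');
DECLARE
rec t%ROWTYPE;
BEGIN
-- The columns are listed in reverse order, so 'ABC' is assigned
-- positionally to rec.n (a NUMBER), raising ORA-06502.
SELECT v, n INTO rec FROM t WHERE ROWNUM = 1;
END;
/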

Oracle converted Varchar field to number how to convert it back to Varchar

Until today we only ever entered numeric values in the VARCHAR column, so Oracle converted that VARCHAR field to a numeric field.
Now, when we try to insert a character value, it throws ORA-01722 (invalid number).
Could anyone help me convert it back to a VARCHAR field?
The commenters are correct: the database does not change column types on its own. In general, you must create a new column, copy the old data over to the new column, drop the original column, and rename the new column.
drop table deleteme_table;
-- zippy should be varchar2(30), not integer
CREATE TABLE deleteme_table
(
adate DATE
, zippy INTEGER
);
-- Add the correct type to the table as a new column
ALTER TABLE deleteme_table
ADD (tempcol VARCHAR2 (30));
-- Copy the old values to the new column
UPDATE deleteme_table
SET tempcol = zippy;
-- Get rid of the original column
ALTER TABLE deleteme_table
DROP COLUMN zippy;
-- Rename to original column name
alter table deleteme_table rename column tempcol to zippy;
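For the record, the reason for the copy-and-rename dance: Oracle only lets you change a column's datatype in place when the column is empty. With data present, the direct route fails (assuming the demo table above still holds rows):
-- Fails with ORA-01439: column to be modified must be empty to change datatype
ALTER TABLE deleteme_table MODIFY (zippy VARCHAR2(30));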

ORA-01465: invalid hex number in oracle while using BLOB

I am designing a database in Oracle 11g. I have designed a table with these fields:
CUST_ID, NUMBER(5) //this is a foreign key
Review, BLOB //to store big strings
Date, SYSDATE
Now, when I try to insert data into the table like
insert into "ReviewTable" values ( 3, 'hello, this is the first review',SYSDATE)
it gives [Err] ORA-01465: invalid hex number.
Can someone help me with this error?
You need to cast your string to a BLOB. You can do this via the package function utl_raw.cast_to_raw, or convert the VARCHAR to a CLOB via to_clob('mystring') and then use the procedure DBMS_LOB.convertToBlob in your code.
But if you are going to use the field for strings, why not store them as a CLOB?
Here are two examples below, with BLOB and CLOB fields.
BLOB
create table ReviewTable( CUST_ID NUMBER(5)
,Review BLOB
,Dt Date);
insert into ReviewTable values ( 3, utl_raw.cast_to_raw('hello, this is the first review'),SYSDATE);
CLOB
create table ReviewTable2( CUST_ID NUMBER(5)
,Review CLOB
,Dt Date);
insert into ReviewTable2 values ( 3, 'hello, this is the first review',SYSDATE);
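For completeness, the DBMS_LOB.convertToBlob route mentioned above might look like this (a sketch; offsets and error handling kept minimal):
DECLARE
l_blob BLOB;
l_clob CLOB := to_clob('hello, this is the first review');
l_dest_offset INTEGER := 1;
l_src_offset INTEGER := 1;
l_lang_ctx INTEGER := DBMS_LOB.default_lang_ctx;
l_warning INTEGER;
BEGIN
-- Create a temporary BLOB, convert the CLOB's character data into it,
-- then insert it as the Review value.
DBMS_LOB.createtemporary(l_blob, TRUE);
DBMS_LOB.converttoblob(l_blob, l_clob, DBMS_LOB.lobmaxsize,
l_dest_offset, l_src_offset,
DBMS_LOB.default_csid, l_lang_ctx, l_warning);
INSERT INTO ReviewTable VALUES (3, l_blob, SYSDATE);
COMMIT;
END;
/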

Why did SQL*Loader load 808594481 when using the INTEGER data-type?

I was loading data using SQL*Loader and when making the control file I used the table definition and accidentally left the INTEGER data type on the "version" line.
And in the "version" field (data type integer) it inserted the value 808594481.
I'm having a hard time understanding how it processed this value -- I'm assuming it took it as a literal ... but is that the sum of the ASCII representations of each letter?
NOPE!
SELECT ASCII('I')+ascii('N')+ASCII('T')+ASCII('E')+ASCII('G')+ASCII('E')+ASCII('G')+ASCII('E')+ASCII('R')
FROM SYS.DUAL
returns 666 (which, btw is hilarious).
Concatenate the ASCII values?
SELECT ASCII('I')||ascii('N')||ASCII('T')||ASCII('E')||ASCII('G')||ASCII('E')||ASCII('G')||ASCII('E')||ASCII('R')
FROM SYS.DUAL
returns 737884697169716982
I'm hoping someone out there knows the answer.
This is the actual control file:
OPTIONS (SKIP=1)
LOAD DATA
APPEND into table THETABLE
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(id ,
parent_id ,
record_id ,
version INTEGER,
created_at ,
updated_at ,
created_by ,
updated_by ,
species_and_cohort ,
species_and_cohort_count)
Table DDL:
create table THETABLE
(
id VARCHAR2(36),
parent_id VARCHAR2(36),
record_id VARCHAR2(36),
version INTEGER,
created_at VARCHAR2(25),
updated_at VARCHAR2(25),
created_by VARCHAR2(50),
updated_by VARCHAR2(50),
species_and_cohort VARCHAR2(150),
species_and_cohort_other VARCHAR2(150),
species_and_cohort_count NUMBER
)
Data:
id,parent_id,record_id,version,created_at,updated_at,created_by,updated_by,species_and_cohort,species_and_cohort_other,species_and_cohort_count
60D90F54-C5F2-47AF-951B-27A424EAE8E3,f9fe8a3b-3470-4caf-b0ba-3682a1c79731,f9fe8a3b-3470-4caf-b0ba-3682a1c79731,1,2014-09-23 21:02:54 UTC,2014-09-23 21:02:54 UTC,x#gmail.com,x#gmail.com,"PRCA Cherrylaurel,Sapling","",5
FC6A2120-AA0B-4238-A2F6-A6AEDD9B8202,f9fe8a3b-3470-4caf-b0ba-3682a1c79731,f9fe8a3b-3470-4caf-b0ba-3682a1c79731,1,2014-09-23 21:03:02 UTC,2014-09-23 21:03:02 UTC,x7#gmail.com,x7#gmail.com,"JUVI Eastern Redcedar,Sapling","",45
If you split 808594481 into bytes as it would be encoded as a 32-bit integer, and treat each byte as an ASCII-encoded character, you get "02,1" or "1,20" depending on byte order. That is exactly what happened here: in a SQL*Loader control file, INTEGER (unlike INTEGER EXTERNAL) means a binary, word-sized integer, so the loader took the four raw bytes at that position in the record, "1,20" (the version value plus the start of the created_at date), and read them as a little-endian binary number.
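You can verify the decoding in SQL (a quick sketch):
-- 808594481 rendered as hex is 30322C31 ...
SELECT TO_CHAR(808594481, 'XXXXXXXX') AS hex FROM SYS.DUAL;
-- ... and those four bytes decoded as ASCII give '02,1', i.e. "1,20" read little-endian
SELECT CHR(TO_NUMBER('30','XX')) || CHR(TO_NUMBER('32','XX')) ||
CHR(TO_NUMBER('2C','XX')) || CHR(TO_NUMBER('31','XX')) AS text
FROM SYS.DUAL;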
