Unable to query external table as a script from SQL Developer - oracle

I have an external table defined as:
CREATE TABLE EXAM_BDE_ventes (
customerNumber varchar(255),
clerkId varchar(255),
productId varchar(255),
saleDate varchar(255),
factoryId varchar(255)
)
ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER
DEFAULT DIRECTORY mydirectory
ACCESS PARAMETERS
(
RECORDS DELIMITED BY newline
SKIP 0
CHARACTERSET UTF8
BADFILE logs:'ventes.txt.bad'
LOGFILE logs:'ventes.txt.log'
FIELDS TERMINATED BY ';'
OPTIONALLY ENCLOSED BY '"'
)
LOCATION ('LightSaberInc.txt'))
REJECT LIMIT UNLIMITED;
The LightSaberInc.txt file is here, and has nearly 75K lines.
If I query that table as a statement (Ctrl+Enter) I can see the data from the table.
But when I run it as a script (F5) I don't see anything in the script output window.
The log doesn't show any error.
I think this weird behaviour is hiding an error from when I imported the csv. That error is causing other problems later in my code, such as numbers not being recognized properly when I use to_number().
Why can't I query the external table from a script?

OK, so actually in the script I needed to specify '\r\n' instead of newline.
I guess the file was created on an OS that doesn't use the value newline to mark a new line, but '\r\n' instead.
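For reference, a sketch of what the corrected ACCESS PARAMETERS would look like with that delimiter (everything else in the CREATE TABLE from the question stays the same):
ACCESS PARAMETERS
(
RECORDS DELIMITED BY '\r\n'
SKIP 0
CHARACTERSET UTF8
BADFILE logs:'ventes.txt.bad'
LOGFILE logs:'ventes.txt.log'
FIELDS TERMINATED BY ';'
OPTIONALLY ENCLOSED BY '"'
)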

Related

External Table Help - Problem with FIELDS TERMINATED BY ',' if additional comma exists in a value in the field

I have a comma-delimited txt file and I am using the external table below to load the data:
create table test_ext_table (
CUSTOMER_ID NUMBER,
CUSTOMER_NAME VARCHAR2(255),
CUSTOMER_NUMBER NUMBER)
ORGANIZATION EXTERNAL
( type oracle_loader
default directory TXT_DIR
access parameters
(RECORDS delimited by newline SKIP 1
FIELDS TERMINATED BY ','
LRTRIM
MISSING FIELD VALUES ARE NULL
)
LOCATION ('TEST.txt')
)
REJECT LIMIT UNLIMITED;
I know the external table recognizes each field terminated by a comma, but let's say in the text file I have the following
TEST.txt
customer_id, customer_name, customer_number
1,a,10
2,b,11
3,c,12
4,Hello, Inc,13
For the 4th row in the txt file, because there is an additional ',' in the customer_name field, the external table isn't able to read the customer name properly. Is there a way for me to adjust the external table so that it ignores the additional ',' or any special characters?
As far as I know, there are 2 ways to do that:
optionally enclose strings in e.g. double quotes (see the sketch below)
change delimiter from comma to something else, e.g. semi-colon
Otherwise, there's no way to leave it "as is" and make Oracle recognize which comma represents what.
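For the first option, a sketch of what the access parameters could look like, assuming the file can be regenerated with the name field quoted:
access parameters
(RECORDS delimited by newline SKIP 1
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
LRTRIM
MISSING FIELD VALUES ARE NULL
)
The problematic row in TEST.txt would then need to be written as:
4,"Hello, Inc",13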

Using SQL reserved words in Hive when creating external temporary table

I need to create an external hive table from an hdfs location where one column in the files has a reserved name (end).
When running the script I get the error:
"cannot recognize input near 'end' 'STRUCT' '<' in column specification"
I found 2 solutions.
The first one is to set hive.support.sql11.reserved.keywords=false, but this option has been removed.
https://issues.apache.org/jira/browse/HIVE-14872
The second solution is to use quoted identifiers (hive.support.quoted.identifiers=column).
But in this case I get the error:
"org.apache.hadoop.hive.serde2.SerDeException: org.codehaus.jackson.JsonParseException: Unexpected character ('c' (code 99)): was expecting comma to separate OBJECT entries"
This is my code for table creation:
CREATE TEMPORARY EXTERNAL TABLE ${tmp_db}.${tmp_table}
(
id STRING,
email STRUCT<string:STRING>,
start STRUCT<long:BIGINT>,
end STRUCT<long:BIGINT>
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION '${input_dir}';
It's not possible to rename the column.
Does anybody know the solution for this problem? Or maybe any ideas?
Thanks a lot in advance!
Can you try the below?
hive> set hive.support.quoted.identifiers=column;
hive> create temporary table sp_char ( `#` int, `end` string);
OK
Time taken: 0.123 seconds
OK
Time taken: 0.362 seconds
hive>
When you set the hive property hive.support.quoted.identifiers=column, all the values within backticks are treated as literals.
The other value for this property is none; when it is set to none, identifiers in backticks are interpreted as a regex to match column names.
Hope this helps
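Applied to the table from the question, that would look roughly like the sketch below (untested; both start and end are backticked here just to be safe, though only end was reported as a problem, and the JsonParseException from the SerDe looks like a separate issue with the JSON data itself rather than with the quoting):
set hive.support.quoted.identifiers=column;
CREATE TEMPORARY EXTERNAL TABLE ${tmp_db}.${tmp_table}
(
id STRING,
email STRUCT<string:STRING>,
`start` STRUCT<long:BIGINT>,
`end` STRUCT<long:BIGINT>
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION '${input_dir}';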

Loading Unicode Characters with Oracle SQL Loader (sqlldr) results in question marks

I'm trying to load localized strings from a unicode (UTF-8 encoded) csv into an Oracle database using SQL Loader. I've tried all sorts of combinations but nothing seems to give me the result I'm looking for, which is to have special Greek characters like Δ not get converted into mojibake or upside-down question marks (¿).
My table definition looks like this:
CREATE TABLE "GLOBALIZATIONRESOURCE"
(
"RESOURCETYPE" VARCHAR2(255 CHAR) NOT NULL ENABLE,
"CULTURE" VARCHAR2(20 CHAR) NOT NULL ENABLE,
"KEY" VARCHAR2(128 CHAR) NOT NULL ENABLE,
"VALUE" VARCHAR2(2048 CHAR),
"DESCRIPTION" VARCHAR2(512 CHAR),
CONSTRAINT "PK_GLOBALIZATIONRESOURCE" PRIMARY KEY ("RESOURCETYPE","CULTURE","KEY") USING INDEX TABLESPACE REPSPACE_IX ENABLE
)
TABLESPACE REPSPACE;
I have tried the following configurations in my control file (and actually every permutation I could think of)
load data
TRUNCATE
INTO TABLE "GLOBALIZATIONRESOURCE"
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
"RESOURCETYPE" CHAR(255),
"CULTURE" CHAR(20),
"KEY" CHAR(128),
"VALUE" CHAR(2048),
"DESCRIPTION" CHAR(512)
)
load data
CHARACTERSET UTF8
TRUNCATE
INTO TABLE "GLOBALIZATIONRESOURCE"
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(
"RESOURCETYPE" CHAR(255),
"CULTURE" CHAR(20),
"KEY" CHAR(128),
"VALUE" CHAR(2048),
"DESCRIPTION" CHAR(512)
)
load data
CHARACTERSET UTF16
TRUNCATE
INTO TABLE "GLOBALIZATIONRESOURCE"
FIELDS TERMINATED BY X'002c' OPTIONALLY ENCLOSED BY X'0022'
TRAILING NULLCOLS
(
"RESOURCETYPE" CHAR(255),
"CULTURE" CHAR(20),
"KEY" CHAR(128),
"VALUE" CHAR(2048),
"DESCRIPTION" CHAR(512)
)
With the first two options, the unicode characters don't get encoded and just show up as upside down question marks.
If I choose the last option, UTF16, then I get the following error even though all the data in my fields is much shorter than the specified length:
Field in data file exceeds maximum length
It seems as though every possible combination of ctl file configurations (even setting the byte order to little and big) doesn't work correctly. Can someone please give an example of a configuration (table structure and CTL file) that correctly loads unicode data from a csv? Any help would be greatly appreciated.
Note: I've already been to http://docs.oracle.com/cd/B19306_01/server.102/b14215/ldr_concepts.htm, http://docs.oracle.com/cd/B10501_01/server.920/a96652/ch10.htm and http://docs.oracle.com/cd/B10501_01/server.920/a96652/ch10.htm.
I had the same issue and resolved it with the steps below:
Open the data file in Notepad++, go to the "Encoding" dropdown, select UTF-8 encoding, and save the file.
Use CHARACTERSET UTF8 in the CTL file and then load the data.
You have two problems:
Character set.
Answer: You can solve this by finding your text file's character set (most of the time Notepad++ can do this). After finding the character set, you have to find the corresponding sqlldr character set name. You can find this info at https://docs.oracle.com/cd/B10501_01/server.920/a96529/appa.htm#975313
After all of this, the character set problem should be solved.
Even though your actual data is shorter, sqlldr says Field in data file exceeds maximum length.
Answer: You can solve this by adding CHAR(4000) (or whatever the actual length is) to the problematic column. In my case the problematic column was the "e" column; the example is below. This is how I solved my problem, hope it helps.
LOAD DATA
CHARACTERSET UTF8
-- This line is comment
-- Turkish charset (for ÜĞİŞ etc.)
-- CHARACTERSET WE8ISO8859P9
-- Character list is here.
-- https://docs.oracle.com/cd/B10501_01/server.920/a96529/appa.htm#975313
INFILE 'data.txt' "STR '~|~\n'"
TRUNCATE
INTO TABLE SILTAB
FIELDS TERMINATED BY '#'
TRAILING NULLCOLS
(
a,
b,
c,
d,
e CHAR(4000)
)
You must ensure that the following charactersets are the same:
db characterset
dump file characterset
the client from which you are doing the import (NLS_LANG)
If the client-side characterset is different, Oracle will attempt to perform character conversion to the native db characterset, and this might not always give the desired result.
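A quick way to check the database side is the query below (the client side is controlled by the NLS_LANG environment variable on the machine running the import):
SELECT parameter, value
FROM nls_database_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');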
Don't use MS Office to save the spreadsheet as a unicode .csv.
Instead, use OpenOffice to save it as a unicode UTF-8 .csv file.
Then, in the loader control file, add "CHARACTERSET UTF8"
and run Oracle SQL*Loader. This gives me correct results.
There is a range of character set encodings that you can use in the control file when loading data with SQL*Loader.
For Greek characters, I believe a Western European character set should do the trick.
LOAD DATA
CHARACTERSET WE8ISO8859P1
or, in the case of MS Word input files with smart characters, try this in the control file:
LOAD DATA
CHARACTERSET WE8MSWIN1252

Oracle: Import CSV file

I've been searching for a while now but can't seem to find answers so here goes...
I've got a CSV file that I want to import into a table in Oracle (9i/10g).
Later on I plan to use this table as a lookup for another use.
This is actually a workaround I'm working on, since querying using the IN clause with more than 1000 values is not possible.
How is this done using SQLPLUS?
Thanks for your time! :)
SQL Loader helps load csv files into tables: SQL*Loader
If you want sqlplus only, then it gets a bit complicated. You need to locate your SQL*Loader control file and csv file, then run the sqlldr command.
Another solution you can use is SQL Developer.
With it, you have the ability to import from a csv file (other delimited files are available).
Just open the table view, then:
choose actions
import data
find your file
choose your options.
You have the option to have SQL Developer do the inserts for you, create an sql insert script, or create the data for a SQL Loader script (have not tried this option myself).
Of course all that is moot if you can only use the command line, but if you are able to test it with SQL Developer locally, you can always deploy the generated insert scripts (for example).
Just adding another option to the 2 already very good answers.
An alternative solution is using an external table: http://www.orafaq.com/node/848
Use this when you have to do this import very often and very fast.
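A minimal sketch of that approach for the lookup use case in the question (directory, file, table, and column names here are made up for illustration):
CREATE TABLE lookup_ext (
lookup_value VARCHAR2(100)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY csv_dir
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
)
LOCATION ('lookup.csv')
)
REJECT LIMIT UNLIMITED;
-- an IN with a subquery is not subject to the 1000-literal limit
SELECT t.*
FROM some_big_table t
WHERE t.some_column IN (SELECT lookup_value FROM lookup_ext);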
SQL Loader is the way to go.
I recently loaded a table from a csv file; I'm new to this concept and would like to share an example.
LOAD DATA
infile '/ipoapplication/utl_file/LBR_HE_Mar16.csv'
REPLACE
INTO TABLE LOAN_BALANCE_MASTER_INT
fields terminated by ',' optionally enclosed by '"'
(
ACCOUNT_NO,
CUSTOMER_NAME,
LIMIT,
REGION
)
Place the control file and csv at the same location on the server.
Locate the sqlldr exe and invoke it.
sqlldr userid/passwd@DBname control=
Ex: sqlldr abc/xyz@ora control=load.ctl
Hope it helps.
Somebody asked me to post a link to the framework that I presented at Open World 2012. This is the full blog post that demonstrates how to architect a solution with external tables.
I would like to share 2 tips: (tip 1) create a csv file (tip 2) Load rows from a csv file into a table.
====[ (tip 1) SQLPLUS to create a csv file from an Oracle table ]====
I use SQLPLUS with the following commands:
set markup csv on
set lines 1000
set pagesize 100000 linesize 1000
set feedback off
set trimspool on
spool /MyFolderAndFilename.csv
Select * from MYschema.MYTABLE where MyWhereConditions ;
spool off
exit
====[ (tip 2) SQLLDR to load a csv file into a table ]====
I use SQLLDR and a csv (comma separated) file to add (APPEND) rows from the csv file to a table.
The file has , between fields; text fields have " before and after the text.
CRITICAL: if the last column is null, there is a , at the end of the line.
Example of data lines in the csv file:
11,"aa",1001
22,"bb',2002
33,"cc",
44,"dd",4004
55,"ee',
This is the control file:
LOAD DATA
APPEND
INTO TABLE MYSCHEMA.MYTABLE
fields terminated by ',' optionally enclosed by '"'
TRAILING NULLCOLS
(
ColumnName1,
ColumnName2,
ColumnName3
)
This is the command to execute sqlldr on Linux. If you run it on Windows, use \ instead of / in the paths:
sqlldr userid=MyOracleUser/MyOraclePassword@MyOracleServerIPaddress:port/MyOracleSIDorService DATA=datafile.csv CONTROL=controlfile.ctl LOG=logfile.log BAD=notloadedrows.bad
Good luck !
From Oracle 18c you could use Inline External Tables:
Inline external tables enable the runtime definition of an external table as part of a SQL statement, without creating the external table as persistent object in the data dictionary.
With inline external tables, the same syntax that is used to create an external table with a CREATE TABLE statement can be used in a SELECT statement at runtime. Specify inline external tables in the FROM clause of a query block. Queries that include inline external tables can also include regular tables for joins, aggregation, and so on.
INSERT INTO target_table(time_id, prod_id, quantity_sold, amount_sold)
SELECT time_id, prod_id, quantity_sold, amount_sold
FROM EXTERNAL (
(time_id DATE NOT NULL,
prod_id INTEGER NOT NULL,
quantity_sold NUMBER(10,2),
amount_sold NUMBER(10,2))
TYPE ORACLE_LOADER
DEFAULT DIRECTORY data_dir1
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY '|')
LOCATION ('sales_9.csv') REJECT LIMIT UNLIMITED) sales_external;

Handling UTF-8 characters in Oracle external tables

I have an external table that reads from a fixed-length file. The file is expected to contain special characters. In my case the word containing a special character is "Göteborg". Because "ö" is a special character, it looks like Oracle is counting it as 2 bytes. That causes the trouble: the subsequent fields in the file get shifted by 1 byte, thereby messing up the data. Has anyone faced this issue before? So far we have tried the following solutions:
Changed the value of NLS_LANG to AMERICAN_AMERICA.WE8ISO8859P1
Tried Setting the Database Character set to UTF-8
Tried changing NLS_LENGTH_SEMANTICS to CHAR instead of BYTE using ALTER SYSTEM
Tried changing the External table characterset to: AL32UTF8
Tried changing the External table characterset to: UTF-8
Nothing works.
Other details include:
File is UTF-8 encoded
Operating System : RHEL
Database: Oracle 11g
Anything else that I might be missing? Any help will be appreciated. Thanks!
The nls_length_semantics only pertains to the creation of new tables.
Below is what I did to fix this very problem.
records delimited by newline
CHARACTERSET AL32UTF8
STRING SIZES ARE IN CHARACTERS
i.e.
ALTER SESSION SET nls_length_semantics = CHAR
/
CREATE TABLE TDW_OWNER.SDP_TST_EXT
(
COST_CENTER_CODE VARCHAR2(10) NULL,
COST_CENTER_DESC VARCHAR2(40) NULL,
SOURCE_CLIENT VARCHAR2(3) NULL,
NAME1 VARCHAR2(35) NULL
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY DBA_DATA_DIR
ACCESS PARAMETERS
( records delimited by newline
CHARACTERSET AL32UTF8
STRING SIZES ARE IN CHARACTERS
logfile DBA_DATA_DIR:'sdp_tst_ext_%p.log'
badfile DBA_DATA_DIR:'sdp_tst_ext_%p.bad'
discardfile DBA_DATA_DIR:'sdp_tst_ext_%p.dsc'
fields
notrim
(
COST_CENTER_CODE CHAR(10)
,COST_CENTER_DESC CHAR(40)
,SOURCE_CLIENT CHAR(3)
,NAME1 CHAR(35)
)
)
LOCATION (DBA_DATA_DIR:'sdp_tst.dat')
)
REJECT LIMIT UNLIMITED
NOPARALLEL
NOROWDEPENDENCIES
/
