I'm trying to read a file from a PL/SQL procedure but I am getting an ORA-00942 "table or view does not exist" error.
Caused by: Error : 942, Position : 21, Sql =
SELECT UBIC_ID FROM LIST_UBICS
, Error Msg = ORA-00942: table or view does not exist
I have a file with one id per line, called list_ubics.csv. I have a File model and a datastore pointing to the file, called LIST_UBIC, with the UBIC_ID field.
I created a Task in a new Procedure with this SQL:
SELECT UBIC_ID From LIST_UBICS
LIST_UBICS is my datastore; I don't have any table with that name.
I want to read this file and do some processing for each line, but I don't see any way in the docs to read a text file that works for me.
How can I read this file?
Thanks in advance for any help.
An ODI Procedure written in PL/SQL (Oracle technology) will be pushed down to the database. The database executing it doesn't know about the File datastore and cannot execute SQL statements against it.
If the goal is to load the file with ODI it can be done using an interface (11g) or a mapping (12c) with LKM File to SQL. That will copy the content of the file into a table in the database and any SQL statement can then be executed against it.
Alternatively, it is possible to create a directory in the database, land the file there and create an external table on top of it. Queries can be run against an external table, but not DML operations. More information here: https://oracle-base.com/articles/9i/external-tables-9i
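For illustration, here is a minimal sketch of that approach; the directory object, path and access parameters are assumptions, with the file and column names taken from the question:

-- Assumed directory object and path; adjust to where the file actually lands.
CREATE OR REPLACE DIRECTORY data_dir AS '/path/to/files';

CREATE TABLE list_ubics_ext (
  ubic_id VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('list_ubics.csv')
)
REJECT LIMIT UNLIMITED;

-- The file can then be queried like a regular table (no DML, though):
SELECT ubic_id FROM list_ubics_ext;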
I just found out last week that there is one more solution, and probably the best one!
The solution is written here (part 1) and here (part 2), and if you follow the steps exactly it will work (I implemented it).
Anyway, I will summarize the main idea and steps.
We can use a variable in a package. There is some code (see the view code at the end) that reads a column from the given file. A for loop in the package lets us read every row, by changing the value of the "CRFILE_FIRST_ROW" variable in the code below to a sequential number starting at 1 (incremented on each iteration).
So, everything is as easy as described above. Besides "CRFILE_FIRST_ROW", there are more variables that can be changed, such as CRFILE_FORMAT=D (decimal format), CRFILE_SEP_FIELD=0x0009 (field separator in hexadecimal) and so on.
Also, as you can see in the original posts (above links), you can generate your view code; you don't need to copy paste from below.
View code:
select TES.C1 C1
from location_of_file/objects_to_import.txt TES
/*$$SNPS_START_KEYSNP$CRDWG_TABLESNP$CRTABLE_NAME=TESTSNP$CRLOAD_FILE=location_of_file/objects_to_import.txtSNP$CRFILE_FORMAT=DSNP$CRFILE_SEP_FIELD=0x0009SNP$CRFILE_SEP_LINE=0x000ASNP$CRFILE_FIRST_ROW=#UTILS.IMPORT_OBJ_READ_INCRSNP$CRFILE_ENC_FIELD=SNP$CRFILE_DEC_SEP=SNP$CRSNP$CRDWG_COLSNP$CRCOL_NAME=C1SNP$CRTYPE_NAME=STRINGSNP$CRORDER=1SNP$CRLENGTH=50SNP$CRPRECISION=50SNP$CRACTION_ON_ERROR=NULLSNP$CR$$SNPS_END_KEY*/
I did search for a similar question, but if I overlooked an existing answer I would be glad to be redirected there.
I am working to untangle an Oracle stored procedure in a legacy system written by a long-departed developer.
The focus of the procedure is to upload user data into the existing table structure as a bulk collection and save the keystroke time of adding records one by one.
The procedure appears to work without error and the user group would like to expand it to allow additional data to load to separate but related tables.
The author is using the NEXTVAL and CURRVAL commands to add primary key information as new records are added using the CSV data.
But I am confused because my understanding of NEXTVAL/CURRVAL was that they required context and declaration to be used correctly.
For example, the procedure has the following:
SELECT seq_site.nextval INTO v_curr
FROM DUAL;
UPDATE temp_table
SET site_id = seq_site.currval
However [SEQ_SITE] is not declared anywhere in the preceding lines of the Procedure.
Am I inferring correctly that the clause [SELECT seq_site.nextval INTO v_curr] is the declaration for [SEQ_Record_count]?
(...v_curr is declared as an integer early in the procedure declarations, btw...)
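For reference, a sequence like seq_site is normally a standalone schema object created once outside the procedure, which is why it needs no declaration in the PL/SQL itself. A rough sketch of the pattern, using the names from the question (the CREATE SEQUENCE options are assumptions):

-- A sequence is a schema object, created once, not declared in the procedure.
CREATE SEQUENCE seq_site START WITH 1 INCREMENT BY 1;

DECLARE
  v_curr INTEGER;
BEGIN
  -- NEXTVAL draws a new value from the sequence...
  SELECT seq_site.NEXTVAL INTO v_curr FROM dual;
  -- ...and CURRVAL returns that same value again within the same session.
  UPDATE temp_table
     SET site_id = seq_site.CURRVAL;
END;
/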
I want to use an external table to load a CSV file as it's very convenient, but the problem is: how do I make sure I don't load the same file twice in a row? I can't validate the data loaded because it can be the same information as before; I need to find a way to make sure the user doesn't load the same file as 2h ago, for example.
I thought about uploading the file with a different name each time and issuing an alter table command to change the name of the file in the definition of the external table, but it sounds kinda risky.
I also thought about marking each row in the file with a sequence to help differentiate files, but I doubt the client would accept it as they would need to do this manually (the file is exported from somewhere).
Is there any better way to make sure I don't load the same file into the external table, other than changing the file's name and executing an ALTER on the table?
Thank you
When you bring the data from the external table into your database you can use the MERGE command instead of INSERT. It lets you avoid worrying about duplicate data.
See the blog post about the Oracle MERGE command:
What's more, we can wrap up the whole transformation process into this
one Oracle MERGE command, referencing the external table and the table
function in the one command as the source for the MERGED Oracle data.
alter session enable parallel dml;
merge /*+ parallel(contract_dim,10) append */
into contract_dim d
using TABLE(trx.go(
CURSOR(select /*+ parallel(contracts_file,10) full (contracts_file) */ *
from contracts_file ))) f
on d.contract_id = f.contract_id
when matched then
update set desc = f.desc,
init_val_loc_curr = f.init_val_loc_curr,
init_val_adj_amt = f.init_val_adj_amt
when not matched then
insert values ( f.contract_id,
f.desc,
f.init_val_loc_curr,
f.init_val_adj_amt);
So there we have it - our complex ETL function all contained within a
single Oracle MERGE statement. No separate SQL*Loader phase, no
staging tables, and all piped through and loaded in parallel
I can only think of a solution somewhat like this (a SQL sketch of the bookkeeping table follows the list):
Have a timestamp encoded in the datafile name (like: YYYYMMDDHHMISS-file.csv), where YYYYMMDDHHMISS is the timestamp.
Create a table with a timestamp field (in the format above).
Create a shell script that:
extracts the timestamp from the data file name.
calls a SQL script with the timestamp as the parameter; it returns 0 if that timestamp does not exist and <>0 if the timestamp already exists, in which case the script exits with the error: File YYYYMMDDHHMISS-file.csv already loaded.
copies the YYYYMMDDHHMISS-file.csv to input-file.csv.
runs the SQL*Loader script that loads the input-file.csv file.
on success: runs a second SQL script with the timestamp as parameter that inserts a record into the database to indicate that the file is loaded, and moves the original file to a backup folder.
on failure: reports the failure of the load script.
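A minimal sketch of the SQL side of this approach; the table and column names are assumptions, and &1 stands for the timestamp parameter passed in by the shell script:

-- Illustrative bookkeeping table; load_ts holds the YYYYMMDDHHMISS string.
CREATE TABLE loaded_files (
  load_ts   VARCHAR2(14) PRIMARY KEY,
  loaded_on DATE DEFAULT SYSDATE NOT NULL
);

-- Check query run with the timestamp as parameter: 0 means not loaded yet.
SELECT COUNT(*) FROM loaded_files WHERE load_ts = '&1';

-- After a successful load, record the timestamp:
INSERT INTO loaded_files (load_ts) VALUES ('&1');
COMMIT;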
I have a requirement to insert bulk data into an Oracle database from a CSV file. The table's column specs match those of the CSV file's header, with the exception of three additional fields in the database:
A Primary Key field (for which a simple SEQUENCE.NEXTVAL is called)
A field for the name of the CSV file
A field for the last modified date+time of the file
The following Stack Overflow question addresses an extra-column issue, but the solution there is easy because it uses Oracle sysdate, which is internally available. I need to pass a parameter from a batch script or shell script.
Insert actual date time in a row with SQL*loader
Can PARFILE help here somehow?
My other alternative would be to do the whole task in two steps by writing a small Java program:
Use SQL*Loader for the bulk upload, leaving out data for the filename and modified time
Then run a separate update statement to populate the newly created rows
But I'm looking for something that will get the job done in one shot. Any advice?
I'm afraid it's not possible with sqlldr alone; there are no tools for this in sqlldr.
You'd need some sort of script or program to dynamically create a .ctl file for each load.
Here is a bash script to help you get started:
#!/bin/bash -xv
# Usage: ./load.sh <csv-file> <target-table>
readonly MY_FILENAME=$1
readonly DB_BUF_TABLE=$2

# Build the control file on the fly so the file name is loaded as a constant
# into the filename column alongside the data columns.
readonly SQLLDR_CTL="LOAD DATA
CHARACTERSET UTF8
APPEND INTO TABLE $DB_BUF_TABLE
FIELDS TERMINATED BY ';'(
filename CONSTANT '$MY_FILENAME',
col_foo,
col_bar
)"
echo "$SQLLDR_CTL" > "loader.ctl"

# Credentials and other fixed options are expected in loader.par
sqlldr control=loader.ctl parfile=loader.par data="$MY_FILENAME"
sqlldrReturnValue=$?
You'd need some locking with this, or path separation for concurrent loads, to be sure sqlldr starts with the proper ctl file.
I am trying to copy a table's data from the dev box DB to the UAT DB, which are two different databases. I am trying this in Toad. All the connection details are correct, but it's not working and throws the following error.
[Error] Execution (12: 1): ORA-00900: invalid SQL statement
This is what I am trying:
copy from abc/cde#//abc.abc.com:1521/devbox to abc/cde#//abc.abc.com/uatbox
INSERT TOOL_SERVICE_MAPPING (*)
USING (SELECT * FROM TOOL_SERVICE_MAPPING)
If your table doesn't have a huge number of rows you can use Toad's Export function: it creates an insert statement for each row. You can then run these statements in the destination DB to re-create your table's data.
Here are the steps:
A. Create a copy of the table in destination DB
in the source DB, in a Schema Browser window, click on the table you want to copy and select the "Script" tab in the right part of the window: there you will find the script to re-create your table; copy this script
paste the script into a new SQL Editor window in the destination DB and run it; this should create the new table
B. Copy data in new table
in a Schema Browser window, right-click on the table name in the source DB
select "Export Data" from the context menu
write the "where" clause of your export query (leave it blank if you want to copy the entire table)
select destination: clipboard
click "OK" (the insert statements are now stored in your clipboard)
paste the insert statements into a new SQL Editor window in the destination DB
run the statements as a script (shortcut F5)
copy is a SQL*Plus command, not a SQL statement. I would be surprised if Toad had implemented that particular SQL*Plus command (it does implement many of the simpler commands). If you want to use the copy command, you would need to use SQL*Plus, not Toad.
If you want to use Toad, you would need to use a SQL statement to copy the data. You could create a database link in the destination database that points to the source database and then
INSERT INTO tool_service_mapping
SELECT *
FROM tool_service_mapping@<<db link to source database>>
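A minimal sketch of that setup, run in the destination database; the link name is an assumption, and the credentials/host are simply taken from the connect string in the question:

-- Assumed link name; connection details copied from the question's connect string.
CREATE DATABASE LINK devbox_link
  CONNECT TO abc IDENTIFIED BY cde
  USING '//abc.abc.com:1521/devbox';

INSERT INTO tool_service_mapping
SELECT * FROM tool_service_mapping@devbox_link;
COMMIT;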
The easiest and most error-free way I have experienced so far is: Database -> Compare -> Schemas
It's not as complicated as it looks (lots of checkboxes): you tick boxes for the objects you need to be created in an empty database, and at the end of the comparison you end up with a SQL script including all the objects (triggers, views, sequences, packages) that you selected.
I clearly see all tables, triggers, data, etc. in the generated SQL script and can even deselect those I don't wish to create (if any)... Before executing the script, Toad asks you to confirm which database you are running the script against - that saved me a few times... As awkward as it looks, it works perfectly.
I have around 200 tables; I don't know if this is suitable for huge databases.
I have a requirement to insert 12,600 rows into one table. The data is in a doc file and I need to upload all of it into a particular table in one shot.
Please give me a suggestion on how to upload the data.
Thanks in advance.
Check out SQL*Loader, which allows relatively easy and fast imports into the database from text files.
More information would be useful, but the following steps are a very general description of how to accomplish your task (a PL/SQL sketch follows the steps):
Open file
LOOP
Read line from file
If file at EOF, break out of LOOP
Parse line into variables
Insert data into table using the variables from the parse step
END LOOP
Close file.
COMMIT
If errors occur, ROLLBACK and exit
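A rough PL/SQL sketch of those steps using UTL_FILE; the directory object, file name, delimiter and target table are all assumptions:

-- Illustrative only: assumes a DATA_DIR directory object, a file my_data.txt
-- with two comma-separated values per line, and a table my_table(col1, col2).
DECLARE
  v_file UTL_FILE.FILE_TYPE;
  v_line VARCHAR2(4000);
  v_col1 VARCHAR2(100);
  v_col2 VARCHAR2(100);
BEGIN
  v_file := UTL_FILE.FOPEN('DATA_DIR', 'my_data.txt', 'r');  -- open file
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(v_file, v_line);                     -- read one line
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT;                          -- EOF: leave loop
    END;
    -- Parse the line into variables
    v_col1 := SUBSTR(v_line, 1, INSTR(v_line, ',') - 1);
    v_col2 := SUBSTR(v_line, INSTR(v_line, ',') + 1);
    -- Insert data into the table using the parsed variables
    INSERT INTO my_table (col1, col2) VALUES (v_col1, v_col2);
  END LOOP;
  UTL_FILE.FCLOSE(v_file);                                   -- close file
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;                                                -- undo on error
    IF UTL_FILE.IS_OPEN(v_file) THEN
      UTL_FILE.FCLOSE(v_file);
    END IF;
    RAISE;
END;
/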
Share and enjoy.