Loading multiple concatenated CSV files into Oracle with SQLLDR

I have a dump of several PostgreSQL tables in a self-contained CSV file which I want to import into an Oracle database with a matching schema. I found several posts on how to distribute data from one CSV "table" to multiple Oracle tables, but my problem is several DIFFERENT CSV "tables" in the same file.
Is it possible to specify table separators or somehow mark new tables in an SQLLDR control file, or do I have to split up the file manually before feeding it to SQLLDR?

That depends on your data. How do you determine which table a row is destined for? If you can determine the target table based on data in the row, then it is fairly easy to do with a WHEN clause.
LOAD DATA
INFILE 'bunchotables.dat'
INTO TABLE foo WHEN somecol = 'pick me, pick me' (
  ...column defs...
)
INTO TABLE bar WHEN somecol = 'leave me alone' (
  ...column defs...
)
If you've got some sort of header row that determines the target table then you are going to have to split it beforehand with another utility.
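For illustration, a fuller control file for the WHEN approach might look like the sketch below. The rec_type discriminator field, its 'FOO'/'BAR' values, and the id/name columns are all made up; POSITION(1) on the first field of each INTO TABLE clause makes SQL*Loader rescan the record from the start instead of continuing where the previous clause stopped:
LOAD DATA
INFILE 'bunchotables.dat'
APPEND
INTO TABLE foo WHEN rec_type = 'FOO' (
  rec_type  POSITION(1) CHAR TERMINATED BY ',',
  foo_id    CHAR TERMINATED BY ',',
  foo_name  CHAR TERMINATED BY ','
)
INTO TABLE bar WHEN rec_type = 'BAR' (
  rec_type  POSITION(1) CHAR TERMINATED BY ',',
  bar_id    CHAR TERMINATED BY ',',
  bar_name  CHAR TERMINATED BY ','
)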

Related

Creating txt file using Pentaho

I'm currently trying to create txt files from all tables in the dbo schema.
I have around 200-300 tables there, so it would take too much time to create them manually.
I was thinking of creating a loop,
so as an example (using AdventureWorks2019):
select t.name as table_name
from sys.tables t
where schema_name(t.schema_id) = 'Person'
order by table_name;
This would get all the table names within the Person schema.
So I would loop:
Table input: select * from ${table_name}
But then I realized that for txt files I need to declare all the fields and their data types in Pentaho, so that becomes a problem.
Any ideas how to create these "backup" txt files?
Use Metadata Injection and additional queries against the schema catalog tables in SQL Server. You not only need to retrieve the table name; you also need to retrieve the columns in that table and their data types, and inject that information (metadata) into the text output step.
In the samples directory of your Spoon installation there is an example of how to use Metadata Injection. Use it, along with the documentation, to build a simple example (the checkbox to generate a transformation with the metadata you have injected is of great use for debugging).
I have something similar to copy data from one database to another, both in Oracle, but SQL Server has catalog tables similar to Oracle's for retrieving the information you need. I created a simple, almost empty transformation that reads one table and writes to another. This transformation has almost no information, only the source database in the Table Input step and the target database in the Table Output step.
Then I have a second transformation where I fill in all the information (metadata) to inject: the query to perform in the Table Input step, and everything I need in the Table Output step: the target table, whether to truncate it before inserting, and the column mappings from stream field to table field.
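As a rough sketch of the kind of catalog query that could feed the injection, INFORMATION_SCHEMA.COLUMNS (standard in SQL Server) returns the column names and types per table; the 'Person' schema filter is just the schema from the example above:
select c.TABLE_NAME,
       c.COLUMN_NAME,
       c.DATA_TYPE,
       c.CHARACTER_MAXIMUM_LENGTH,
       c.ORDINAL_POSITION
from INFORMATION_SCHEMA.COLUMNS c
where c.TABLE_SCHEMA = 'Person'
order by c.TABLE_NAME, c.ORDINAL_POSITION;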

how to create table definition from csv file and also copy data at the same time

I want to load data from a CSV file into Vertica. I don't want to create the table and then copy the data in two separate steps. Instead, I want to create the table, specify the CSV file, and then let Vertica figure out the column definitions (names, data types) itself and load the data.
Something like create table titanic_train () as COPY FROM '/data/train.csv' PARSER fcsvparser() rejected data as table titanic_train_rejected abort on error no commit;
Is it possible?
I guess that if a table has hundreds of columns, then automating the create table, column definitions and data copy would be much easier/faster than doing these steps separately.
It's always several steps, no matter what.
Use the built-in bits of Vertica:
CREATE FLEX TABLE foo();
COPY foo FROM '/data/mycsvs/foo.csv' PARSER fCsvParser();
SELECT COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW('foo');
-- THEN, either:
SELECT * FROM foo_view;
-- OR: create a ROS Table:
CREATE TABLE foo_ros AS SELECT * FROM foo_view;
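If you want to check what the flex parser guessed before (or instead of) materialising the ROS table, you can query the <table>_keys table that COMPUTE_FLEXTABLE_KEYS_AND_BUILD_VIEW creates; the column names below reflect current Vertica versions, so treat this as a sketch:
SELECT key_name, frequency, data_type_guess
FROM foo_keys
ORDER BY key_name;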
Get a CSV-to-DDL parser from the net, like https://github.com/marco-the-sane/d2l, install it, and then:
$ d2l -coldelcomma -chardelquote -drp -copy /data/mycsvs/foo.csv | vsql
So, in the second instance, it's one step, but it calls both d2l and vsql.

Load multiple files content to table using SQL loader

How to insert data from multiple files having different columns into a table in an Oracle database using SQL*Loader with a single control file?
Basically,
we have 3 CSV files:
file 1 having columns a,b,c
file 2 having columns d,e,f
file 3 having columns g,h,i
We need to insert the above attributes into a table named "TableTest"
having columns a,b,c,d,e,f,g,h,i
using a single control file.
Thanks in advance
You really can't. You can either splice the .csv files together (a lot of nasty work) or create 3 tables to load and then use PL/SQL or SQL to join them together into your target table.
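A minimal sketch of the second (staging) approach, assuming each file also carries a common key to line the rows up; the row_id column and the stage_* table names are hypothetical:
-- after loading each CSV into its own staging table with SQL*Loader:
INSERT INTO TableTest (a, b, c, d, e, f, g, h, i)
SELECT s1.a, s1.b, s1.c,
       s2.d, s2.e, s2.f,
       s3.g, s3.h, s3.i
FROM   stage_file1 s1
JOIN   stage_file2 s2 ON s2.row_id = s1.row_id
JOIN   stage_file3 s3 ON s3.row_id = s1.row_id;
Without such a key there is nothing reliable to join on, which is why splitting the load into separate tables (or splicing the files) is usually the answer.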

How to point one Hive Table to Multiple External Files?

I would like to be able to append multiple HDFS files to one Hive table while leaving the HDFS files in their original directory. These files are located in different directories.
The LOAD DATA INPATH moves the HDFS file to the hive warehouse directory.
As far as I can tell, an External Table must be pointed to one file, or to one directory within which multiple files with the same schema can be placed. However, my files would not be underneath a single directory.
Is it possible to point a single Hive table to multiple external files in separate directories, or to otherwise copy multiple files into a single hive table without moving the files from their original HDFS location?
Expanded solution based on Pradeep's answer:
For example, my files look like this:
/root_directory/<job_id>/input/<dt>
Pretend the schema of each is (foo STRING, bar STRING, job_id STRING, dt STRING)
I first create an external table. However, note that my DDL does not contain an initial location, and it does not include the job_id and dt fields:
CREATE EXTERNAL TABLE hivetest (
  foo STRING,
  bar STRING
)
PARTITIONED BY (job_id STRING, dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
;
Let's say I have two files I wish to insert located at:
/root_directory/b1/input/2014-01-01
/root_directory/b2/input/2014-01-02
I can load these two external files into the same Hive table like so:
ALTER TABLE hivetest
ADD PARTITION(job_id = 'b1', dt='2014-01-01')
LOCATION '/root_directory/b1/input/2014-01-01';
ALTER TABLE hivetest
ADD PARTITION(job_id = 'b2', dt='2014-01-02')
LOCATION '/root_directory/b2/input/2014-01-02';
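Once the partitions are added, both files are reachable through the one table, and the partition columns can be used in queries like ordinary columns, for example:
SELECT foo, bar, job_id, dt
FROM hivetest
WHERE dt = '2014-01-01';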
If anyone happens to require the use of Talend to perform this, they can use the tHiveLoad component [edit: this doesn't work; see below].
The code Talend produces for tHiveLoad is actually LOAD DATA INPATH ..., which will remove the file from its original location in HDFS.
You will have to issue the earlier ALTER TABLE statements in a tHiveLoad step instead.
The short answer is yes. A Hive External Table can be pointed to multiple files/directories. The long answer will depend on the directory structure of your data. The typical way you do this is to create a partitioned table with the partition columns mapping to some part of your directory path.
E.g. We have a use case where an external table points to thousands of directories on HDFS. Our paths conform to this pattern /prod/${customer-id}/${date}/. In each of these directories we have approx 100 files. In mapping this into a Hive Table, we created two partition columns, customer_id and date. So every day, we're able to load the data into Hive, by doing
ALTER TABLE x ADD PARTITION (customer_id = "blah", dt = "blah_date") LOCATION '/prod/blah/blah_date';
Try this:
LOAD DATA LOCAL INPATH '/path/local/file_1' INTO TABLE tablename;
LOAD DATA LOCAL INPATH '/path/local/file_2' INTO TABLE tablename;

How to create table dynamically based on the uploaded csv file column header using oracle apex

Based on the CSV file column header it should create a table dynamically and also insert the records of that CSV file into the newly created table.
Ex:
1) If I upload a file TEST.csv with 3 columns, it should create a table dynamically with three columns.
2) Again, if I upload a new file called TEST2.csv with 5 columns, it should create a table dynamically with five columns.
Every time it should create a table based on the uploaded CSV file header.
How to achieve this in Oracle APEX?
Thanks in advance.
Without creating new tables you can treat the CSVs as tables using a TABLE function you can SELECT from. If you download the packages from the Alexandria Project you will find a function that does just that inside CSV_UTIL_PKG (clob_to_csv is that function, but you will find other goodies in there too).
You would just upload the CSV, store it in a CLOB column, and then you can build reports on it using the CSV_UTIL_PKG code.
If you must create a new table for the upload you could still use this parser. Upload the file and then select just the first row (e.g. SELECT * FROM csv_util_pkg.clob_to_csv(your_clob) WHERE ROWNUM = 1). You could insert this row into an APEX collection using APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY to make it easy to iterate over each column.
You would need to determine the datatype for each column but could just use VARCHAR2 for everything.
But if you are just using generic columns, you could just as easily store one additional column holding the name of this collection of records and keep all of the uploads in the same table. Just build another table to store the column names.
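If you do go the dynamic-DDL route, here is a minimal PL/SQL sketch. It assumes the header is the first line of the CLOB and short enough to fit in a VARCHAR2, that the header values are already valid column names, and it simply declares every column as VARCHAR2(4000) as suggested above; the procedure, parameter and table names are made up:
CREATE OR REPLACE PROCEDURE create_table_from_csv (
  p_table_name IN VARCHAR2,   -- hypothetical: name of the table to create
  p_csv        IN CLOB        -- hypothetical: the uploaded CSV stored as a CLOB
) AS
  l_header VARCHAR2(4000);
  l_cols   VARCHAR2(4000);
BEGIN
  -- the first line of the CLOB is assumed to be the header row
  l_header := RTRIM(SUBSTR(p_csv, 1, INSTR(p_csv, CHR(10)) - 1), CHR(13));

  -- turn "col1,col2,col3" into "col1 VARCHAR2(4000), col2 VARCHAR2(4000), col3 VARCHAR2(4000)"
  l_cols := REPLACE(l_header, ',', ' VARCHAR2(4000), ') || ' VARCHAR2(4000)';

  EXECUTE IMMEDIATE 'CREATE TABLE ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table_name)
                    || ' (' || l_cols || ')';
END;
/
Inserting the data rows would then be a second step, e.g. looping over the clob_to_csv output from row 2 onward and building an INSERT per row.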
Simply store this file as a BLOB if the structure is "dynamic".
You can use the XML data type for this use case too, but it won't be very different from a BLOB column.
There is a SecureFiles feature since 11g. It is a new BLOB implementation that performs better than regular BLOBs and is a good fit for unstructured or semi-structured data.
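For reference, opting into SecureFiles is just a storage clause on the LOB column; the table and column names below are made up:
CREATE TABLE csv_uploads (
  id        NUMBER PRIMARY KEY,
  file_name VARCHAR2(255),
  contents  BLOB
)
LOB (contents) STORE AS SECUREFILE;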
