What can I do to enhance the performance of bulk data loading using Derby?

I am using the Derby in-memory DB. I need to perform some data loading from CSV files at startup. Currently it takes about 25 seconds to load all the CSV files into their tables, and I hope the time can be reduced, since the data files are actually not very large.
What I have done so far is use Derby's built-in import procedure:
{CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE (?,?,?,',','"','UTF-8',1 )} or
{CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE (?,?,?,',','"','UTF-8',0 )}
The only special thing is that sometimes the data for one table is split into many small CSV files, so I have to load them one by one. I have tested that if I combine them into a single file, the load takes only 16 seconds. However, I cannot remove this feature because the user needs it.
Is there anything I can do to reduce the loading time? Should I disable logging, write some user-defined function/procedure, or is there any other tuning that can be done? Any advice would be appreciated.
Thanks!

Use H2 instead of Derby, and use the CSVREAD feature. If that's still too slow, see the fast import optimization, or use the CSV tool directly (without using a database). Disclaimer: I wrote the CSV support for H2.
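For reference, a minimal sketch of what a CSVREAD-based load could look like (the table and file names here are placeholders, not taken from the question):
-- H2: create and populate a table directly from a CSV file
CREATE TABLE my_table AS
  SELECT * FROM CSVREAD('data/part1.csv', NULL, 'charset=UTF-8');
-- append further CSV parts into the same table
INSERT INTO my_table
  SELECT * FROM CSVREAD('data/part2.csv', NULL, 'charset=UTF-8');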

Related

Google Cloud SQL: Alternative to LOAD DATA INFILE

I am working in Google App Engine and we have a Python script that dumps data into Google Cloud SQL. One of the data sets we have to dump is huge: we dump around 150K rows of data once a day.
I know Google Cloud SQL does not support LOAD DATA INFILE, which I would have normally used. My question is, whether there is an alternative to LOAD DATA INFILE that I can use to speed up the process of data dumping.
Inserting the data normally, without LOAD DATA INFILE, takes about 5 minutes.
As stated in this comment of another question, LOAD DATA LOCAL INFILE is supported by App Engine.
The MySQL Manual explains how to use this statement.
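As a rough sketch (the table name, file path, and column layout are placeholders), the statement looks like this:
LOAD DATA LOCAL INFILE '/path/to/dump.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;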
Things you can do to get better bulk import performance:
Create a .sql file and do an import
Make sure that the insert statements do more than one row at a time. A good rule of thumb is one megabyte per INSERT (a sketch follows after this list).
Switch to async replication
Do the import from an App Engine app. The app will be colocated with your Cloud SQL instance, greatly reducing the network latency.
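For illustration, a hypothetical multi-row INSERT (the table and columns are made up); in practice you keep appending value tuples until the statement approaches roughly one megabyte:
INSERT INTO my_table (id, name, value) VALUES
  (1, 'alpha', 10.0),
  (2, 'beta',  20.5),
  (3, 'gamma', 30.1);
-- ...continue adding rows until the statement is about 1 MB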

How to implement ORACLE to VERTICA replication?

I am in the process of creating an Oracle to Vertica process!
We are looking to create a Vertica DB that will run heavy reports. So far everything is fine: Vertica is fast, space usage is great, and all is well until we get to the main part, getting the data from Oracle to Vertica.
The initial load is OK: dump to CSV from Oracle, load into Vertica, and the load times are so good that everybody thinks it's a bad joke or that some magic is going on. It is simply fast.
Now the bad part: both databases (ORACLE/VERTICA) are up and running, and data keeps getting altered in ORACLE, so I need to replicate that data into VERTICA. What now:
From my tests and from what I understand about Vertica, inserts and updates should not be used for this (maybe 20 per second at most), so real-time replication is out of the question.
So I was thinking of reading the archive log from Oracle and ETL-ing it to produce CSV files containing the new rows, altered rows, and deleted/changed values, and then applying them to Vertica. But I cannot simply apply a change list like this, because explicit data changes in Vertica lead to slow performance.
So I am looking for some ideas about how to solve this issue, knowing that I cannot:
Alter my ORACLE production structure.
Use ORACLE env resources for filtering the data.
Use INSERT, UPDATE or DELETE statements in my VERTICA load process.
Things I depend on:
The use of the COPY command
Data consistency
A maximum 60-minute window (every 60 minutes, new/altered data needs to go to VERTICA).
I have looked at Continuent data replication, but it seems that nobody wants to sell their product; I cannot get in touch with them.
Would loading the whole data set into a new table and then swapping the tables be acceptable?
copy new() ...
-- you can swap tables in one command:
alter table old,new,swap rename to swap,old,new;
truncate new;
Extract the data from Oracle (in .csv format) and load it using the Vertica COPY command. Write a simple shell script to automate this process.
I used to use Talend (ETL), but it was very slow, so I moved to this conventional process and it has really worked for me. Currently processing 18M records, my entire process takes less than 2 minutes.
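A rough sketch of the COPY statement such a script could run (the table name, file path, and options are placeholders):
-- bulk load a CSV extract into Vertica
-- DIRECT writes straight to disk storage, which suits large batch loads
COPY my_table
FROM '/data/extract/my_table.csv'
DELIMITER ',' ENCLOSED BY '"'
DIRECT;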

Exporting 8 million records from Oracle to MongoDB

Now I have an Oracle database with 8 million records and I need to move them to MongoDB.
I know how to import data into MongoDB from a JSON file using the import command, but I want to know whether there is a better way to achieve this with regard to these issues:
Given the limit on execution time, how should I handle it?
The database is growing every second, so what is the plan to make sure that every record has been moved?
Given the limit on execution time, how should I handle it?
Don't do it with a JSON export/import. Instead, write a script that reads the data, transforms it into the correct format for MongoDB, and then inserts it there.
There are a few reasons for this:
Your tables / collections will not be organized the same way. (If they are, then why are you using MongoDB?)
This will allow you to monitor progress of the operation. In particular you can output to log files every 1000th entry or so to get some progress and be able to recover from failures.
This will test your new MongoDB code.
The database is growing every second, so what is the plan to make sure that every record has been moved?
There are two strategies here.
Track the entries that are updated and re-run your script on newly updated records until you are caught up.
Write to both databases while you run the script to copy data. Then, once the script is done and everything is up to date, you can cut over to just using MongoDB.
I personally suggest #2, this is the easiest method to manage and test while maintaining up-time. It's still going to be a lot of work, but this will allow the transition to happen.

How can I speed up loading data in Oracle tables?

I have some very large tables (to me anyway), as in millions of rows. I am loading them from a legacy system and it is taking forever. Assume the hardware is OK and fast. How can I speed this up? I have tried exporting from one system into CSV and using SQL*Loader - slow. I have also tried a direct link from one system to the other, so there is no intermediate CSV file, just unload from one and load into the other.
One person said something about pre-staging tables and how that could somehow make things faster. I don't know what that is or whether it could help. I was hoping for input. Thank you.
Oracle 11g is what is being used.
update: my database is clustered so I don't know if I can do anything to speed things up.
What you can try:
disabling all constraints and only enabling them after the load process (a sketch follows after this list)
CTAS (create table as select)
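A sketch of the first suggestion (the table and constraint names are hypothetical):
-- disable constraints before the bulk load
ALTER TABLE target_table DISABLE CONSTRAINT target_fk;
-- ... run the load ...
-- re-enable (and validate) afterwards
ALTER TABLE target_table ENABLE CONSTRAINT target_fk;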
What you really should do: understand what your bottleneck is. Is it network, file I/O, checking constraints? Then fix that problem. For me, looking at the explain plan is usually the first step.
As Jens Schauder suggested, if you can connect to your source legacy system via DB link, CTAS would be the best compromise between performance and simplicity, as long as you don't need any joins on the source side.
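For example, a CTAS over a DB link might look like this (the link and table names are placeholders):
-- copy the legacy table in one statement; NOLOGGING reduces redo generation
CREATE TABLE staging_orders NOLOGGING AS
  SELECT * FROM orders@legacy_link;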
Otherwise, you should consider using SQL*Loader and tweaking some settings. Using direct path I was able to load 100M records (~10 GB) in 12 minutes on a 6-year-old ProLiant.
EDIT: I used the data format defined for the Datamation sort benchmark. The generator for it is available in the Apache Hadoop distribution. It generates records with fixed width fields with 99 bytes of data plus a newline character per line of file. The SQL*Loader control file I used for the numbers quoted above was:
OPTIONS (SILENT=FEEDBACK, DIRECT=TRUE, ROWS=1000)
LOAD DATA
INFILE 'rec100M.txt' "FIX 99"
INTO TABLE BENCH (
BENCH_KEY POSITION(1:10),
BENCH_REC_NBR POSITION(13:44),
BENCH_FILLER POSITION(47:98))
What configuration are you using?
Does the database where the data is imported have something like a standby database coupled to it? If so, it very likely has a configuration with force_logging enabled.
You can check this using
SELECT force_logging FROM v$database;
It can also be enabled at tablespace level:
SELECT tablespace_name, force_logging FROM dba_tablespaces;
If your database is running with force_logging, or your tablespace has force_logging, this will have an impact on the import speed.
If this is not the case, check whether archivelog mode is enabled:
SELECT log_mode FROM v$database;
If so, it could be that the archives are not being written fast enough. In that case, increase the size of the online redo log files.
If the database is not running in archivelog mode, it still has to write to the redo log files unless you use direct path inserts. In that case, check how quickly the redo can be written. Normally 200 GB/h is quite achievable when indexes are not playing a role.
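As a small illustration of a direct path insert (the table names are placeholders):
-- the APPEND hint requests a direct path insert, bypassing the buffer cache
INSERT /*+ APPEND */ INTO target_table
  SELECT * FROM staging_table;
COMMIT;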
The important thing is to find which link is causing the lack of performance. It could be the input or the output; here I focused on the output.

External Tables vs SQLLoader

So, I often have to load data into holding tables to run some data validation checks and then return the results.
Normally, I create the holding table, then a sqlldr control file and load the data into the table, then I run my queries.
Is there any reason I should be using external tables for this instead?
In what way will they make my life easier?
The big advantage of external tables is that we can query them from inside the database using SQL. So we can just run the validation checks as SELECT statements without the need for a holding table. Similarly if we need to do some manipulation of the loaded data it is almost always easier to do this with SQL rather than SQLLDR commands. We can also manage data loads with DBMS_JOB/DBMS_SCHEDULER routines, which further cuts down the need for shell scripts and cron jobs.
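For instance, a validation check can run directly against the external table as a plain query (the table and column names here are made up):
-- duplicate-key check run straight against the flat file
SELECT id, COUNT(*)
FROM ext_staging
GROUP BY id
HAVING COUNT(*) > 1;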
However, if you already have a mature and stable process using SQLLDR then I concede it is unlikely you would realise tremendous benefits from porting to external tables.
There are also some cases - especially if you are loading millions of rows - where the SQLLDR approach may be considerably faster. However, the difference will not be as marked with more recent versions of the database. I fully expect that SQLLDR will eventually be deprecated in favour of external tables.
If you look at the External Table syntax, it looks suspiciously like SQL*Loader control file syntax :-)
If your external table is going to be repeatedly used in multiple queries, it might be faster to load a table (as you're doing now) rather than rescan your external table for each query. As @APC notes, Oracle is making improvements in them, so depending on your DB version, YMMV.
I would use external tables for their flexibility.
It's easier to modify the data source on them to be a different file: alter table ... location ('my_file.txt1','myfile.txt2')
You can do multitable inserts, merges, run it through a pipelined function etc...
Parallel query is easier ...
It also establishes dependencies better ...
The code is stored in the database so it's automatically backed up ...
Another thing that you can do with external tables is read compressed files. If your files are gzip compressed for example, then you can use the PREPROCESSOR directive within your external table definition, to decompress the files as they are read.
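A sketch of such a definition, assuming data_dir and exec_dir are existing Oracle directory objects and the column list is made up (exec_dir must point at the location of zcat):
CREATE TABLE ext_staging (
  id   NUMBER,
  name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR exec_dir:'zcat'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('my_file.txt.gz')
)
REJECT LIMIT UNLIMITED;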
