How to combine an Oracle and a PostgreSQL query in one SELECT

In my Java program, I take values from a PostgreSQL database and use that data in a SELECT query against an Oracle database.
The problem is that this takes too much time. First I fetch data from a Postgres table and load it into a variable.
Then I use this variable in a SELECT query against an Oracle table.
I want to make this process faster. Is it possible to perform this task in one query that takes data from the PostgreSQL table and fetches the matching data from the Oracle table?
Postgres statement:
select filial_name
into f_name
from branch
where id=1;
Oracle statement:
select sum(credit)
from balance
where filial_n = f_name;
The above process runs in a loop.

If you have to run a massive join between an Oracle table and a PostgreSQL table, that is never going to be very fast.
But you can do much better than performing the join in your application: define an oracle_fdw foreign table in PostgreSQL and perform the join in PostgreSQL, as sketched below.
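A minimal sketch of that setup, assuming oracle_fdw is installed; the connection string, credentials, Oracle schema, and column types are placeholders, and only the table and column names come from the question:

CREATE EXTENSION oracle_fdw;

-- point PostgreSQL at the Oracle instance
CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
    OPTIONS (dbserver '//oracle-host:1521/ORCL');

CREATE USER MAPPING FOR CURRENT_USER SERVER oradb
    OPTIONS (user 'oracle_user', password 'secret');

-- expose the Oracle table "balance" as a foreign table
CREATE FOREIGN TABLE balance (
    filial_n text,
    credit   numeric
) SERVER oradb OPTIONS (schema 'ORACLE_USER', table 'BALANCE');

-- the whole loop then collapses into one join that runs in PostgreSQL
SELECT b.filial_name, sum(bal.credit) AS total_credit
FROM branch b
JOIN balance bal ON bal.filial_n = b.filial_name
WHERE b.id = 1
GROUP BY b.filial_name;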

Related

Data copy from Oracle to Postgres using Hibernate

I am new to Hibernate JPA. I am working on an Oracle to Postgres migration, and we are not using the AWS DMS service for the data migration. We would like to use Java to copy tables that have more than 1 million records. I have a problem with the scenario below.
Table A - Oracle
Table B - Postgres
I am extracting records from Oracle using ScrollableResults. Once I have the data from Oracle, I need to look up a value in the Postgres database for each Oracle row before performing the insert into the Postgres database.
I first thought @ColumnTransformer would help, but it does not, as I don't know how to reference the Oracle data in a ColumnTransformer expression.
So I finally went ahead with writing a normal insert query with values and a subquery for the lookup, as sketched below. I also set hibernate.jdbc.batch_size to 100.
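A minimal sketch of that insert-with-lookup pattern; the table and column names here are hypothetical placeholders, not from the question:

INSERT INTO target_table (id, name, region_id)
VALUES (?, ?, (SELECT r.id FROM region_lookup r WHERE r.code = ?));

The lookup subquery runs once per inserted row, which is what makes this approach slow.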
I executed the program this way and it took 5 minutes for 10k records, which I feel is slow.
Is there any other solution to this problem that would improve the performance?
Thanks for all your help.
I found the solution. I solved it by loading the Postgres lookup table into a list object and searching that list before performing each insert, instead of issuing a lookup subquery per row. Now the speed is good.

How to convert CONNECT BY in Greenplum

Can anyone suggest how to convert this Oracle CONNECT BY query to Greenplum? Greenplum doesn't support recursive queries, so we cannot use WITH RECURSIVE. Is there an alternate way to rewrite the query below?
SELECT child_id, Parnet_id, LEVEL, SYS_CONNECT_BY_PATH(child_id, '/') AS hierarchy
FROM pathnode
START WITH Parnet_id = child_id
CONNECT BY NOCYCLE PRIOR child_id = Parnet_id;
There are ways to do this, but it will be a one-off per query. You will need to create a function that loops through your pathnode table and uses RETURN NEXT to return each row. You can search this site for examples of doing this with PostgreSQL 8.2.
Work is happening to rebase Greenplum onto PostgreSQL 8.3, 8.4, and so on. Those later PostgreSQL versions support WITH RECURSIVE, which is the ANSI SQL way to write your query, but Greenplum doesn't support it yet. Even when Greenplum does support it, I don't think it will perform all that well: the query forces looping and individual row lookups, which works great in an OLTP database but not so well in an MPP database.
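For reference, a sketch of that ANSI recursive form, written against the same pathnode columns; NOCYCLE has no direct equivalent, so a path-membership check stands in for it here:

WITH RECURSIVE hier (child_id, parnet_id, lvl, hierarchy) AS (
    SELECT child_id, parnet_id, 1, '/' || child_id
    FROM pathnode
    WHERE parnet_id = child_id                       -- START WITH
  UNION ALL
    SELECT p.child_id, p.parnet_id, h.lvl + 1,
           h.hierarchy || '/' || p.child_id
    FROM pathnode p
    JOIN hier h ON p.parnet_id = h.child_id          -- CONNECT BY PRIOR
    WHERE position('/' || p.child_id || '/' IN h.hierarchy || '/') = 0  -- crude NOCYCLE guard
)
SELECT * FROM hier;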
I suggest you transform your data in Oracle with a view and then just dump the view to a file to load into Greenplum. A DDL design with a self-referencing, N-level table will never be a good idea in an MPP database.
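In Oracle, that view is just the original query wrapped up (same columns as above):

CREATE OR REPLACE VIEW pathnode_expanded AS
SELECT child_id, Parnet_id, LEVEL AS lvl,
       SYS_CONNECT_BY_PATH(child_id, '/') AS hierarchy
FROM pathnode
START WITH Parnet_id = child_id
CONNECT BY NOCYCLE PRIOR child_id = Parnet_id;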

How to force oracle to use index or ordered hints for remote joins

I'm using Oracle 11g. I have a query that joins a local table with remote tables over a database link. I want the driving table to be the remote table, as I primarily filter on the remote table to get a few rows, and I then want to join them with the local table.
The problem is that the optimizer ignores the ORDERED and INDEX hints and does a full table scan of the local table. I am using the right indexes and have generated statistics. When I run the queries individually against each table, they use the correct indexes, but with the join, the local table always gets a full table scan and acts as the driving table.
SELECT /*+ INDEX_RS_ASC(l) */
*
FROM remote_table@mylink r
JOIN local_table l USING (cont_id)
WHERE r.PRIME_VENDOR_ID = '12345'
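One combination worth trying, as a sketch: name the remote table as the leading row source explicitly. LEADING, USE_NL, and INDEX are standard Oracle hints; the index name here is a placeholder, not from the question.

SELECT /*+ LEADING(r) USE_NL(l) INDEX(l local_table_idx) */
       *
FROM remote_table@mylink r
JOIN local_table l USING (cont_id)
WHERE r.PRIME_VENDOR_ID = '12345'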

Alter table on a running database

I am using Postgres 8.4.
I need to execute an ALTER statement on a running database with ~4M rows in the relevant table. My SQL is:
ALTER TABLE some_table ALTER COLUMN a_row TYPE bigint;
The relevant column is currently of type int.
What I wonder about is data consistency: roughly 3-4 records per second are written to that table, and some more are read.
What do I need to do to avoid data consistency and other such problems?
When you execute an ALTER TABLE statement, the table will be locked, and you shouldn't have any problems beyond some possible performance impact on concurrent INSERTs in your case. If you are only going to do this once, there is no reason to hesitate.
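A sketch of the operation, with one caveat the answer doesn't spell out (this is standard Postgres behavior): changing int to bigint takes an ACCESS EXCLUSIVE lock and rewrites the table, so concurrent reads and writes block until it finishes.

BEGIN;
-- rewrites all ~4M rows while holding an ACCESS EXCLUSIVE lock
ALTER TABLE some_table ALTER COLUMN a_row TYPE bigint;
COMMIT;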

Best way to bulk insert data into Oracle database

I am going to create a lot of data scripts, such as INSERT INTO and UPDATE statements.
There will be 100,000-plus records, if not 1,000,000.
What is the best way to get this data into Oracle quickly? I have already found that SQL*Loader is not good for this, as it does not update individual rows.
Thanks
UPDATE: I will be writing an application to do this in C#
Load the records into a staging table via SQL*Loader. Then use bulk operations:
INSERT INTO ... SELECT (for example, "Bulk Insert into Oracle database"),
a mass UPDATE ("Oracle - Update statement with inner join"),
or a single MERGE statement (sketched below).
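A minimal MERGE sketch; the target table, staging table, and column names are placeholders:

MERGE INTO target_table t
USING stage_table s
    ON (t.id = s.id)
WHEN MATCHED THEN
    UPDATE SET t.amount = s.amount
WHEN NOT MATCHED THEN
    INSERT (id, amount) VALUES (s.id, s.amount);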
To keep it as fast as possible, I would keep it all in the database:
use external tables (to allow Oracle to read the file contents; DDL sketched after this answer),
and create a stored procedure to do the processing.
The update could be slow. If possible, it may be a good idea to consider creating a new table based on all the records in the old one (with the updates applied), then switching the new and old tables around.
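A sketch of the external-table piece; the directory object, file name, and columns are placeholders:

-- requires a directory object the database can read, e.g.:
-- CREATE DIRECTORY load_dir AS '/data/loads';
CREATE TABLE ext_records (
    id     NUMBER,
    amount NUMBER
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY load_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
    )
    LOCATION ('records.csv')
);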
How about using a spreadsheet program like MS Excel or LibreOffice Calc? This is how I perform bulk inserts.
Prepare your data in a tabular format.
Let's say you have three columns: A (text), B (number), and C (date). In the D column, enter the following formula; adjust accordingly.
="INSERT INTO YOUR_TABLE (COL_A, COL_B, COL_C) VALUES ('"&A1&"', "&B1&", to_date ('"&C1&"', 'mm/dd/yy'));"
