I need to migrate the Oracle database to PostgreSQL.
The tables in Oracle are partitioned, and I need to migrate data from only a subset of the partitions of specific tables.
Is this supported by Ora2Pg?
Thanks.
I am asking because I will have a sink connector running in "upsert" mode, and the target Oracle table is partitioned. I wonder whether the update performance will be acceptable given the millions of records in the target table.
As long as you've set your table's partition key on the column that you're using for the incremental predicate (ID / timestamp), I don't see why Oracle wouldn't be able to take advantage of partition pruning to improve the fetch performance. But this is on the Oracle and data-model side, not something that's implemented by the connector.
The connector does not support anything like partition-exchange loading etc.
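As an illustration, here is a minimal sketch (table and column names are made up) of a target table whose partition key matches the incremental column, so Oracle can prune partitions on the connector's queries:

```sql
-- Hypothetical table, range-partitioned on the same timestamp column
-- that the connector uses for its incremental predicate.
CREATE TABLE events (
    id         NUMBER         NOT NULL,
    updated_at TIMESTAMP      NOT NULL,
    payload    VARCHAR2(4000)
)
PARTITION BY RANGE (updated_at)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(PARTITION p_initial VALUES LESS THAN (TIMESTAMP '2024-01-01 00:00:00'));

-- A fetch like the connector's only touches the matching partitions
-- (partition pruning) instead of scanning the whole table:
SELECT * FROM events WHERE updated_at > TIMESTAMP '2024-06-01 00:00:00';
```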
Is there a way to replicate data (e.g. with triggers or jobs) from Oracle tables to Postgres tables and vice versa (for a different set of tables) without using external tools? Just one-way replication in both scenarios.
Just a hint:
You can think of creating a DB link from Oracle to Postgres, which is called heterogeneous connectivity and makes it possible to select data from Postgres with a SELECT statement in Oracle.
Then use materialized views to schedule and store the results of those selects.
This is because you don't want to use any external tool; otherwise the solution would have been much simpler.
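A rough sketch of what that could look like on the Oracle side (the gateway DSN, link name, and table are made up, and the ODBC gateway has to be configured first):

```sql
-- Assumes an Oracle Database Gateway for ODBC entry named PGDSN
-- is already configured to reach the Postgres server.
CREATE DATABASE LINK pg_link
  CONNECT TO "pg_user" IDENTIFIED BY "pg_password"
  USING 'PGDSN';

-- With heterogeneous connectivity, Postgres identifiers usually have
-- to be double-quoted on the Oracle side.
CREATE MATERIALIZED VIEW mv_pg_orders
  REFRESH COMPLETE
  START WITH SYSDATE
  NEXT SYSDATE + 1/24   -- refresh every hour
AS
  SELECT "id", "amount", "created_at"
  FROM "orders"@pg_link;
```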
For 20 tables I need to replicate data from Oracle to Postgres. For 40 different tables, I need to replicate from Postgres to Oracle.
I could imagine the following setup:
For the Oracle tables that need to be accessible from Postgres, simply create foreign tables inside the Postgres server. They appear to be "local" tables in the Postgres server, but the FDW ("foreign data wrapper") will forward any request to the Oracle server, so no replication is required. Whether or not this will be fast enough depends on how you access the tables: some operations (WHERE clauses, ORDER BY, etc.) can be pushed down to the Oracle server, while others will be done by the Postgres server after all the rows have been fetched.
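A minimal sketch with oracle_fdw (server, credentials, and table names are examples):

```sql
-- On the Postgres side: make an Oracle table queryable locally.
CREATE EXTENSION oracle_fdw;

CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
    OPTIONS (dbserver '//oracle-host:1521/ORCLPDB');

CREATE USER MAPPING FOR CURRENT_USER SERVER oradb
    OPTIONS (user 'scott', password 'tiger');

-- Looks like a local table, but every query is forwarded to Oracle.
CREATE FOREIGN TABLE invoices (
    id     integer,
    total  numeric
) SERVER oradb OPTIONS (schema 'SCOTT', table 'INVOICES');
```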
For the Postgres tables that need to be replicated to Oracle, you could have a similar setup: create a foreign table that points to the target table in Oracle. Then create triggers on the Postgres table that will update the foreign table, thus sending the changes to Oracle.
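Sketched out, again with made-up names (note that oracle_fdw needs a key column option on the foreign table for UPDATE and DELETE to work):

```sql
-- Foreign table pointing at the Oracle target table, using the
-- server "oradb" created above.
CREATE FOREIGN TABLE ora_customers (
    id   integer OPTIONS (key 'true'),
    name text
) SERVER oradb OPTIONS (schema 'SCOTT', table 'CUSTOMERS');

-- Trigger function that forwards every change on the local table.
CREATE FUNCTION replicate_customers() RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO ora_customers (id, name) VALUES (NEW.id, NEW.name);
    ELSIF TG_OP = 'UPDATE' THEN
        UPDATE ora_customers SET name = NEW.name WHERE id = NEW.id;
    ELSE  -- DELETE
        DELETE FROM ora_customers WHERE id = OLD.id;
    END IF;
    RETURN NULL;
END;
$$;

CREATE TRIGGER customers_to_oracle
    AFTER INSERT OR UPDATE OR DELETE ON customers
    FOR EACH ROW EXECUTE FUNCTION replicate_customers();
```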
This could all be managed on the Postgres side.
As part of my current project, we deployed 100+ Hive tables. I am trying to find the list of all Hive tables in a particular database that are missing compute stats. For an individual table, I used SHOW PARTITIONS table_name. Is there any way I can find all Hive tables that are missing stats?
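For a single table, this is how I can see whether stats exist (database and table names are placeholders):

```sql
-- Per-table check: the "Table Parameters" section of the output shows
-- numRows / COLUMN_STATS_ACCURATE when stats have been computed.
DESCRIBE FORMATTED mydb.mytable;

-- For a partitioned table, stats are tracked per partition:
DESCRIBE FORMATTED mydb.mytable PARTITION (ds='2024-01-01');
```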
I want to know how Oracle created it, because I want to be able to have all of a month's partitions in a single tablespace (TBS) for backup purposes. An example of a partition name Oracle generated automatically is 'SYS_P321847'.
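Names like SYS_P321847 are typically produced by interval partitioning, where Oracle creates partitions on demand. A sketch of that, plus two ways to regain control over placement and naming (table and tablespace names are made up):

```sql
-- Interval partitioning: Oracle generates SYS_Pnnnnnn partitions
-- automatically as data for new months arrives.
CREATE TABLE sales (
    sale_date DATE,
    amount    NUMBER
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p0 VALUES LESS THAN (DATE '2024-01-01'));

-- Direct all automatically created partitions into one tablespace:
ALTER TABLE sales SET STORE IN (backup_tbs);

-- Give a system-generated partition a meaningful name:
ALTER TABLE sales RENAME PARTITION SYS_P321847 TO p_2024_05;
```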
I have a problem writing a query using HiveQL.
Is it possible to join a Hive table with an Oracle table?
If yes, how?
If no, why not?
To access data stored in your Hive tables, including joining on them, you will need an Oracle Big Data connector.
From the documentation:
Using Oracle SQL Connector for HDFS, you can use Oracle Database to access and analyze data residing in HDFS files or a Hive table. You can also query and join data in HDFS or a Hive table with other database-resident data. If required, you can also load data into the database using SQL.
You first access Hive tables from Oracle Database via external tables. The external table definition is generated automatically from the Hive table definition, and the Hive table's data can be accessed by querying this external table. The data can be queried with Oracle SQL and joined with other tables in the database.
In other words, the Hive table keeps serving its data as usual in Hive, while the same table also becomes accessible from Oracle Database.
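Once the external table has been generated, the join itself is plain Oracle SQL. A hypothetical example, with hive_sales_ext standing in for the generated external table:

```sql
-- hive_sales_ext: external table generated from the Hive table;
-- customers: a regular Oracle table.
SELECT c.customer_name,
       SUM(h.amount) AS total_amount
FROM   hive_sales_ext h
JOIN   customers c ON c.customer_id = h.customer_id
GROUP  BY c.customer_name;
```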