How to compare data between two tables where one table is in Oracle and the other is in Postgres

I have two tables with the same structure: one in Oracle and the other in Postgres. I would like to compare the data between the two tables. I cannot use a DB link because of some connectivity issues.
I have copied the contents of both tables to an Excel sheet, but I am still having trouble comparing the data.
Please suggest a suitable option to compare the data between the two tables.
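Not from the original question, but one possible sketch: compute an ordered per-row fingerprint on each side, export the two result sets to text files, and diff them. The table and column names (source_table, id, col_a, col_b) are placeholders, and NULL handling, date/number formatting, and character sets may need explicit normalization (TO_CHAR, COALESCE) to avoid false mismatches.
-- Oracle (12c+ provides STANDARD_HASH); spool this output to a file
SELECT id,
       LOWER(RAWTOHEX(STANDARD_HASH(NVL(col_a, '~') || '|' || NVL(col_b, '~'), 'MD5'))) AS row_hash
FROM   source_table
ORDER  BY id;

-- Postgres: produce the same fingerprint, \copy the output to a file, then diff the two files
SELECT id,
       MD5(COALESCE(col_a, '~') || '|' || COALESCE(col_b, '~')) AS row_hash
FROM   source_table
ORDER  BY id;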

Related

How to find a list of all Hive tables in a database that are missing compute stats?

As part of my current project, we deployed 100+ Hive tables. I am trying to find a list of all Hive tables in a particular database that are missing compute stats. For an individual table, I used SHOW PARTITIONS table_name. Is there any way I can find all Hive tables that are missing stats?
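Not an answer from the thread, but one hedged sketch: if you can query the Hive metastore database directly, a query like the one below lists tables that have never had stats recorded. The table names assume the standard metastore schema, the database name is a placeholder, and partitioned tables keep their stats in PARTITION_PARAMS instead, so they would need a similar check there.
SELECT d.NAME AS db_name, t.TBL_NAME
FROM   TBLS t
JOIN   DBS d ON d.DB_ID = t.DB_ID
WHERE  d.NAME = 'your_database'
AND    t.TBL_ID NOT IN (SELECT TBL_ID
                        FROM   TABLE_PARAMS
                        WHERE  PARAM_KEY = 'COLUMN_STATS_ACCURATE');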

How can I export/import subset data from/to Oracle Database?

I was wondering what the approach would be to get rid of a lot of records from an Oracle database in order to create a lighter database for developers' laptops.
We aim to reduce the exports from different production environments NOT by excluding entities, but by reducing the number of records in each table while maintaining referential integrity.
Is there a tool/script around?
I was also wondering if transforming all the FKs on a replica DB to "on delete cascade" and deleting a subset of records from the entities at the top of the relational hierarchy would do the job.
Any suggestions?
With Jailer you can export data to an SQL script which can traverse foreign key constraints to include all data needed to maintain referential integrity.
http://jailer.sourceforge.net
If you want to export/import a limited set of objects from/to the database, you can EXCLUDE the objects that you don't want to be part of your dump.
You can exclude any specific table from the export/import by specifying the object type and object name.
EXCLUDE=TABLE:"='<TABLE_NAME>'"
==Update==
AFAIK, I don't see that Oracle provides such flexibility to export a subset of the data, but Oracle does have an option to export partitioned data via the TABLES parameter:
TABLES=[schema_name.]table_name[:partition_name] [, ...]
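For illustration only, a hedged sketch of how these parameters might sit in Data Pump parameter files (the directory object, schema, table, and partition names are made up):
# exclude_table.par: schema-mode export that skips one table
SCHEMAS=scott
DIRECTORY=dump_dir
DUMPFILE=scott_no_audit.dmp
EXCLUDE=TABLE:"='AUDIT_LOG'"

# partition_only.par: export a single partition of one table
TABLES=scott.sales:sales_2023_q1
DIRECTORY=dump_dir
DUMPFILE=sales_q1.dmp
Each would then be run with something like expdp system PARFILE=exclude_table.par.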

oracle synchronize 2 tables

I have the following scenario and need to solve it in ORACLE:
Table A is on a DB-server
Table B is on a different server
Table A will be populated with data.
Whenever something is inserted into Table A, I want to copy it to Table B.
Table B has nearly the same columns, but sometimes I just want to take the content of 2 columns from Table A, concatenate it, and save it to Table B.
I am not very familiar with Oracle, but after researching on Google, some say that you can do it with triggers or views. How would you do it?
So in general, there is a table which will be populated and its content should be copied to a different table.
This is the solution I have come up with so far:
create public database link other_db
  connect to user identified by pw
  using 'tns-entry';

CREATE TRIGGER modify_remote_my_table
AFTER INSERT ON my_table
BEGIN
  INSERT INTO ....?
END;
/
How can I select the latest row that was inserted?
If these two tables live on two different servers, then you will need a database link (db-link) created in the Table A schema so that it can access (read/write) the Table B data over the db-link.
Step 1: Create a database link in the Table A server DB pointing to the Table B server DB.
Step 2: Create a trigger on Table A which inserts data into Table B using the database link. You can customize (concatenate the values) inside the trigger before inserting into Table B.
This link should help you
http://searchoracle.techtarget.com/tip/How-to-create-a-database-link-in-Oracle
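A minimal sketch of what steps 1 and 2 could look like (other_db, my_table, remote_table, and the column names are placeholders). Note that inside a row-level trigger the :NEW record already holds the row being inserted, so there is no need to select the "latest" row separately:
CREATE DATABASE LINK other_db
  CONNECT TO remote_user IDENTIFIED BY pw
  USING 'tns-entry';

CREATE OR REPLACE TRIGGER modify_remote_my_table
AFTER INSERT ON my_table
FOR EACH ROW
BEGIN
  -- concatenate two columns of the newly inserted row and push them to the remote table
  INSERT INTO remote_table@other_db (combined_col)
  VALUES (:NEW.col_a || ' ' || :NEW.col_b);
END;
/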
Yes, you can do this with triggers. But there may be a few disadvantages:
What if database B is not available? -> Exception handling in your trigger.
What if database B was not available for 2 hours? You inserted data into database A which is now missing in database B. -> Do crazy things like temporarily inserting it into a cache table in database A.
Performance. The performance of inserting a lot of data will be ugly: each time you insert data, Oracle starts the PL/SQL engine to insert the data into the remote database.
Maybe you could think about using MViews (materialized views) to replicate the data via the database link. Later you can build your queries so that they access tables from database B and add the required data from database A by joining the MViews.
You can also use fast refresh to replicate the data (almost) in real time.
From the perspective of an Oracle database admin, this would make a lot more sense than the trigger approach.
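A hedged sketch of the materialized-view idea (the link name, table names, and refresh interval are placeholders; fast refresh over a database link needs a materialized view log on the source table):
-- On database A (the source of the data)
CREATE MATERIALIZED VIEW LOG ON my_table;

-- On database B, pulling the data over the database link
CREATE MATERIALIZED VIEW my_table_mv
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 5/1440   -- refresh roughly every 5 minutes
  AS SELECT * FROM my_table@link_to_a;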
Try this code. Database links are considered rather insecure, Oracle's own options tend to have licences associated with them these days, and some of the other options are deprecated as well.
https://gist.github.com/anonymous/e3051239ba401e416565cdd912e0de8c
It uses ORA_ROWSCN to sync tables across two different Oracle databases.
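The gist is not reproduced here, but the general idea behind ORA_ROWSCN-based syncing looks roughly like this (my_table and the bind variable are placeholders, and ORA_ROWSCN is only row-accurate when the table was created with ROWDEPENDENCIES):
-- pull only the rows committed since the SCN remembered from the previous sync run
SELECT ORA_ROWSCN, t.*
FROM   my_table t
WHERE  ORA_ROWSCN > :last_synced_scn;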

Hive table or view? Which is the right approach?

I am new to HDFS/Hive and need some advice. I have a background in RDBMS data modelling.
I have a requirement for a daily report. The report requires fetching data from two staging tables (Hive).
What if I create a table in Hive, write a view to fetch records from the staging tables to populate the Hive table, and then create a Hive view pointing to the Hive table with a WHERE clause selecting one day's data?
1. HIVE staging tables ---> 2. View to populate HIVE table ---> 3. HIVE table ---> 4. View to fetch data from HIVE table created in 3.
What if I create a view on top of the two staging HIVE tables (joining the two tables with a WHERE clause to fetch one day's data)?
1. HIVE staging tables ---> 2. View to fetch data from HIVE staging tables
I want to know HIVE best practice and solution strategies.
View or no view, you need an ETL process to load the tables. The ETL process can join, aggregate, etc., so you will be able to use the finally joined and aggregated data in the form of a star/snowflake schema or a report table. Why do you need views here? To reuse some common queries, to reduce the complexity of some long complex queries, to make interfaces to data, to create logical entities, etc. You do not necessarily need a view simply to join tables and load data into another table. It all depends on your requirements. If reports should query data fast, then the data should be precalculated by the ETL process. A view is just a wrapper over a query; it will be calculated each time you query the data.
I think it's best if you have zero views and one single table, and make the date field your partition (but you can't partition on a date type, so you have to store it as a string). This makes it easier for the end user to have only one table, and fewer tables overall.
This gives your users the ability to work with only the latest date they want, or leverage the full table.
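A hedged HiveQL sketch of the two shapes being discussed (staging_a, staging_b, the columns, and the dates are made up):
-- A plain view: no data is copied and the join runs every time the view is read;
-- readers filter it with e.g. WHERE dt = '2024-01-15'
CREATE VIEW daily_report_v AS
SELECT a.id, a.metric + b.metric AS combined_metric, a.dt
FROM   staging_a a
JOIN   staging_b b ON a.id = b.id AND a.dt = b.dt;

-- A precomputed report table, partitioned by the date stored as a string,
-- loaded once per day by the ETL run so that report queries stay fast
CREATE TABLE daily_report (id STRING, combined_metric BIGINT)
PARTITIONED BY (dt STRING);

INSERT OVERWRITE TABLE daily_report PARTITION (dt = '2024-01-15')
SELECT a.id, a.metric + b.metric
FROM   staging_a a
JOIN   staging_b b ON a.id = b.id
WHERE  a.dt = '2024-01-15' AND b.dt = '2024-01-15';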

Why do we need to move an external table to a managed Hive table?

I am new to Hadoop and learning Hive.
In Hadoop: The Definitive Guide, 3rd edition, page 428, last paragraph,
I don't understand the paragraph below regarding external tables in Hive.
"A common pattern is to use an external table to access an initial dataset stored in HDFS (created by another process), then use a Hive transform to move the data into a managed Hive table."
Can anybody briefly explain what the above phrase means?
Usually the data in the initial dataset is not constructed in the optimal way for queries.
You may want to modify the data (like modifying some columns, adding columns, making aggregations, etc.) and store it in a specific way (partitions / buckets / sorted, etc.) so that queries benefit from these optimizations.
The key difference between external and managed table in Hive is that data in the external table is not managed by Hive.
When you create an external table you define the HDFS directory for that table, and Hive simply "looks" into it and can read data from it, but Hive can't delete or change the data in that folder. When you drop an external table, Hive only deletes the metadata from its metastore and the data in HDFS remains unchanged.
A managed table is basically a directory in HDFS that is created and managed by Hive. Even more: all operations that remove or change partitions, raw data, or the table itself MUST be done through Hive, otherwise the metadata in the Hive metastore may become incorrect (e.g. you manually delete a partition from HDFS but the Hive metastore still contains info that the partition exists).
In Hadoop: The Definitive Guide I think the author meant that it is a common practice to write an MR job that produces some raw data and keeps it in some folder. Then you create a Hive external table which looks into that folder, and you can safely run queries without the risk of dropping the data, etc.
In other words, you can write an MR job that produces some generic data and then use a Hive external table as a source of data for inserts into managed tables. This helps you avoid writing boring, similar MR jobs and delegate this task to Hive queries: you create a query that takes data from the external table, aggregates/processes it how you want, and puts the result into managed tables.
Another use of external tables is as a source for data from remote servers, e.g. in CSV format.
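A hedged HiveQL sketch of that pattern (the paths, table names, and columns are invented for illustration):
-- 1. External table over files some other process dropped into HDFS;
--    dropping this table removes only the metadata, not the files
CREATE EXTERNAL TABLE raw_events (id STRING, payload STRING, event_ts STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/incoming/events';

-- 2. Managed table laid out for queries (partitioned, columnar)
CREATE TABLE events (id STRING, payload STRING)
PARTITIONED BY (event_date STRING)
STORED AS ORC;

-- 3. The "Hive transform" step: reshape the raw data into the managed table
INSERT INTO TABLE events PARTITION (event_date = '2024-01-15')
SELECT id, payload
FROM   raw_events
WHERE  substr(event_ts, 1, 10) = '2024-01-15';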
There is no reason to move a table to managed unless you are going to enable ACID or other features supported only for managed tables.
The list of differences in features supported by managed/external tables may change in the future, so better to check the current documentation. Currently these features are:
ARCHIVE/UNARCHIVE/TRUNCATE/MERGE/CONCATENATE only work for managed tables
DROP deletes data for managed tables while it only deletes metadata for external ones
ACID/Transactional only works for managed tables
Query Results Caching only works for managed tables
Only the RELY constraint is allowed on external tables
Some Materialized View features only work on managed tables
You can create both EXTERNAL and MANAGED tables on top of the same location, see this answer with more details and tests: https://stackoverflow.com/a/54038932/2700344
Data structure has nothing to do with the external/managed table type. If you want to change the structure, you do not necessarily need to change the table's managed/external type.
It is also mentioned in the book.
When your table is an external table:
you can use other technologies like Pig, Cascading, or MapReduce to process it;
you can also use multiple schemas for that dataset;
and you can also create the data lazily.
When you decide that the dataset should be used by only Hive, make it a Hive managed table.
