Oracle - Exchange partitions between two List partitioned tables

Need some help here.
I have an ETL process which loads data into a target table A. We have created another table B, identical in structure to the target table, which is accessed by the reporting team to generate reports. This is done to minimize the downtime for report generation; this way the reports are always accessing the latest data.
Table B is a List-partitioned table, partitioned on Client ID. The ETL team loads data for each client into the respective partition in table A.
I tried doing it with exchange partition: using the exchange partition mechanism to swap the segments of table A and partitioned table B. But I couldn't, as both tables are List partitioned and Oracle doesn't allow exchanging a partition with another partitioned table.
I created partitions so as to avoid creating multiple tables (TableA_ClientId) for each Client.
My other option: whenever data is loaded into target table A, rename table B to a temporary name, rename table A to table B, and rename the temporary table back to table A.
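A minimal sketch of that rename swap (the table names and the intermediate name are only illustrative); note the three renames are separate DDL statements, so they are not atomic and reports hitting table B in the middle of the swap could briefly fail:
alter table table_b   rename to table_tmp;
alter table table_a   rename to table_b;   -- reports now see the freshly loaded data
alter table table_tmp rename to table_a;   -- old reporting copy becomes the next load target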
Can you please suggest a better approach?

Related

Move Range Interval partition data from one table to history table in other database

We have a primary table that is Range partitioned by date with a 1-month interval. It is also List sub-partitioned on 4 distinct values, so essentially each monthly partition has 4 sub-partitions.
Database: Oracle 19c
I need advice on how to effectively move the partition/sub-partition data from active schema to historical schema in another database.
Also, there are about 30 tables that are reference-partitioned on the primary table whose data needs to be moved as well. Overall I'm looking to move about 2500 sub-partitions.
I'm not sure if an exchange partition would be the right approach in this scenario?
TIA
You could use exchange to get the data rapidly out of your active table, but you would still then need to send that table over the wire to the remote history database to load it in.
In which case, using "exchange" probably is just adding more steps to the process for little gain. (There are still potential uses here depending on how you want to handle indexing etc).
But simplest is perhaps just transferring the data over, assuming a common structure between the two tables, ie
insert /*+ APPEND */ into history_table@remote_db
select * from active_table partition ( myparname );
I can't remember if partition naming syntax is supported over a db link, but if not, then the appropriate date predicates will do the same trick, and then just follow up with:
alter table active_table truncate partition myparname;
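If the partition-name syntax turns out not to be accepted in that statement, the date-predicate equivalent for one monthly partition would look something like this (the column name and date range are only illustrative):
insert /*+ APPEND */ into history_table@remote_db
select *
from   active_table
where  created_date >= date '2023-01-01'
and    created_date <  date '2023-02-01';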

Generate Alter statements of partition of all existing tables from Oracle views in 12c

I want to dynamically generate the ALTER statement below (this is just an example; it will differ from table to table) for all the partitioned tables in a 12c DB.
Some tables may be partitioned on RANGE, LIST etc.
The column name, partition type will also change as per the table.
ALTER TABLE EMP
MODIFY PARTITION BY RANGE (START_DATE)
( PARTITION P1 VALUES LESS THAN (date'2021-1-1') ) ONLINE;
I have already created the tables without partitions in another DB and now want to partition those tables that were partitioned in the source DB. So I want a simple script which can generate the code to partition the tables in the target DB. Note: every table has different partitioning, and my goal is to keep them in sync with the source. Only the data differs between the two DBs.
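One possible starting point, run in the source DB, is to build the statements from the data dictionary views. This is only a sketch: it assumes simple single-level RANGE/LIST partitioning (no interval, composite, or hash partitioning), and the generated MODIFY PARTITION BY ... ONLINE syntax itself needs 12.2 or later. HIGH_VALUE is a LONG column, so it is handled in PL/SQL rather than plain SQL:
set serveroutput on
declare
  l_cols  varchar2(4000);
  l_parts varchar2(32767);
  l_hv    varchar2(32767);
begin
  for t in (select table_name, partitioning_type
            from   user_part_tables
            where  partitioning_type in ('RANGE','LIST')) loop
    -- partition key column(s) for this table
    select listagg(column_name, ', ') within group (order by column_position)
    into   l_cols
    from   user_part_key_columns
    where  name = t.table_name
    and    object_type = 'TABLE';

    -- build the partition list from the existing partitions
    l_parts := null;
    for p in (select partition_name, high_value
              from   user_tab_partitions
              where  table_name = t.table_name
              order  by partition_position) loop
      l_hv    := p.high_value;   -- LONG fetched into VARCHAR2 works in PL/SQL
      l_parts := l_parts ||
                 case when l_parts is not null then ', ' end ||
                 'PARTITION ' || p.partition_name ||
                 case t.partitioning_type
                   when 'RANGE' then ' VALUES LESS THAN (' || l_hv || ')'
                   else              ' VALUES (' || l_hv || ')'
                 end;
    end loop;

    dbms_output.put_line('ALTER TABLE ' || t.table_name ||
                         ' MODIFY PARTITION BY ' || t.partitioning_type ||
                         ' (' || l_cols || ') ( ' || l_parts || ' ) ONLINE;');
  end loop;
end;
/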

oracle synchronize 2 tables

I have the following scenario and need to solve it in ORACLE:
Table A is on a DB-server
Table B is on a different server
Table A will be populated with data.
Whenever something is inserted into Table A, I want to copy it to Table B.
Table B has nearly the same columns, but sometimes I just want to take the content of 2 columns from Table A, concatenate it, and save it to Table B.
I am not very familiar with Oracle, but after researching on Google, some say that you can do it with triggers or views. How would you do it?
So in general, there is a table which will be populated and its content should be copied to a different table.
This is the solution I came up with so far:
create public database link other_db
  connect to user
  identified by pw
  using 'tns-entry';
CREATE TRIGGER modify_remote_my_table
AFTER INSERT ON my_table
BEGIN INSERT INTO ....?
END;
/
How can I select the latest row that was inserted?
If the databases of these two tables are on two different servers, then you will need a database link (db-link) created in the Table A schema so that it can access (read/write) the Table B data over the db-link.
Step 1: Create a database link in Table A server db pointing to Table B server DB
Step 2: Create a trigger on Table A which inserts the data into Table B using the database link. You can customize (e.g. concatenate the values) inside the trigger before inserting into Table B.
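A rough sketch of both steps (the table, column, and link names are made up for illustration; adapt them to your real schema). The FOR EACH ROW / :NEW syntax is also what gives you "the latest row that was inserted":
-- Step 1: database link from the Table A database to the Table B database.
create database link other_db
  connect to remote_user identified by pw
  using 'tns-entry';

-- Step 2: row-level trigger; :NEW exposes the row just inserted into table_a.
create or replace trigger copy_to_table_b
after insert on table_a
for each row
begin
  insert into table_b@other_db (id, combined_col)
  values (:new.id, :new.col1 || ' ' || :new.col2);  -- concatenate the two columns
end;
/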
This link should help you
http://searchoracle.techtarget.com/tip/How-to-create-a-database-link-in-Oracle
Yes you can do this with triggers. But there may be a few disadvantages.
What if database B is not available? -> Exception handling in your trigger.
What if database B was not available for 2 hours? You inserted data into database A which is now missing in database B. -> You'd have to do crazy things like temporarily inserting it into a cache table in database A.
Performance. Well, the performance for inserting a lot of data will be ugly. Each time you insert data, Oracle will start the PL/SQL engine to insert the data into the remote database.
Maybe you could think about using MViews (Materialized Views) to replicate the data via database link. Later you can build your queries so that they access tables from database B and add the required data from database A by joining the MViews.
You can also use fast refresh to replicate the data in (almost) real time.
From the perspective of an Oracle database admin, this would make a lot more sense than the trigger approach.
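A minimal sketch of that approach, assuming a database link named other_db created in database B that points back at database A (the opposite direction from the trigger approach) and a primary key on my_table; all names are illustrative:
-- On database A (the source): a materialized view log enables fast refresh.
create materialized view log on my_table with primary key;

-- On database B (the replica): pull the data over the link, refreshing every minute.
create materialized view my_table_mv
  refresh fast
  start with sysdate next sysdate + 1/(24*60)
as
  select * from my_table@other_db;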
Try this code. Database links are considered rather insecure, Oracle's own replication options come with licences attached these days, and some of the other options are deprecated as well.
https://gist.github.com/anonymous/e3051239ba401e416565cdd912e0de8c
It uses ORA_ROWSCN to sync tables across two different Oracle databases.

What happens to data when a Hbase table is dropped?

I am new to HBase and learning it day by day.
What happens to data when an HBase table is dropped? Are the data and schema lost, or is there a scenario like a Hive external table, where the schema is lost and the data is preserved?
Does HBase have the same concept as Hive's external and managed tables?
To take a simple view of it, HBase table data consists of two parts:
physical data blocks
meta information (how data is spread across nodes)
HBase tables can share the same physical blocks; for example, you can take a snapshot of table A and restore it into table B, so both tables will refer to the same data. If you delete a row in table A, it only 'deletes' the meta info for table A, but does not delete the physical data for this row, because it is still referenced by table B.
So, answering your question: when you drop a table, you first delete the meta info. If the physical data is not referenced by any other table or snapshot, it will be deleted too.

How Hive Partition works

I want to know how Hive partitioning works. I know the concept, but I am trying to understand how it actually works and stores the data in the exact partition.
Let's say I have a table and I have created a dynamic partition on year, and I ingested data from 2013. How does Hive create the partition and store the exact data in the exact partition?
If the table is not partitioned, all the data is stored in one directory without order. If the table is partitioned (e.g. by year), the data is stored separately in different directories. Each directory corresponds to one year.
For a non-partitioned table, when you want to fetch the data for year=2010, Hive has to scan the whole table to find the 2010 records. If the table is partitioned, Hive just goes to the year=2010 directory. Much faster and more IO efficient.
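A small HiveQL sketch of the dynamic-partitioning case from the question (the table and column names are made up); each distinct year value ends up as its own directory under the table's location:
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

CREATE TABLE sales_by_year (id INT, amount DOUBLE)
PARTITIONED BY (year INT);

-- The partition column goes last in the SELECT; Hive routes each row to
-- .../sales_by_year/year=2013/, .../year=2014/, and so on.
INSERT INTO TABLE sales_by_year PARTITION (year)
SELECT id, amount, year FROM sales_staging;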
Hive organizes tables into partitions. It is a way of dividing a table into related parts based on the values of partitioned columns such as date.
Partitions - apart from being storage units - also allow the user to efficiently identify the rows that satisfy a certain criteria.
Using partition, it is easy to query a portion of the data.
Tables or partitions are sub-divided into buckets, to provide extra structure to the data that may be used for more efficient querying. Bucketing works based on the value of a hash function of some column of the table.
Suppose you need to retrieve the details of all employees who joined in 2012. A query searches the whole table for the required information. However, if you partition the employee data with the year and store it in a separate file, it reduces the query processing time.
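For example (the names here are illustrative), bucketing is declared at table creation time, and a query that filters on the partition column only reads the matching directory:
-- Declare year as a partition column and bucket rows by a hash of emp_id.
CREATE TABLE employees (emp_id INT, name STRING, join_date STRING)
PARTITIONED BY (year INT)
CLUSTERED BY (emp_id) INTO 8 BUCKETS;

-- Only the year=2012 directory is scanned, not the whole table.
SELECT * FROM employees WHERE year = 2012;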
