How to move a whole partition to another table on another database? - Oracle

Database: Oracle 12c
I want to take a single partition, or a set of partitions, detach it from a table (or set of tables) on DB1 and attach it to a table on another database. I would like to avoid DML for performance reasons (it needs to be fast).
Each Partition will contain between three and four hundred million records.
Each Partition will be broken up into approximately 300 Sub-Partitions.
The task will need to be automated.
Some thoughts I had:
Somehow put each partition in its own datafile upon creation, then detach it from the source and attach it to the destination?
Extract the whole partition (not record-by-record)
Any other non-DML solutions are also welcome.
Example (move Part#33 from both tables to DB#2, preferably as a single operation):
 __________________          __________________
|       DB#1       |        |       DB#2       |
|------------------|        |------------------|
|Table1            |        |Table1            |
|  Part#1          |        |  Part#1          |
|  ...             |        |  ...             |
|  Part#33         |  ----> |  Part#32         |
|    Subpart#1     |        |                  |
|    ...           |        |                  |
|    Subpart#300   |        |                  |
|------------------|        |------------------|
|Table2            |        |Table2            |
|  Part#1          |        |  Part#1          |
|  ...             |        |  ...             |
|  Part#33         |  ----> |  Part#32         |
|    Subpart#1     |        |                  |
|    ...           |        |                  |
|    Subpart#300   |        |                  |
|__________________|        |__________________|

Please read the article below; it has examples of exchanging partitions of a table.
https://oracle-base.com/articles/misc/partitioning-an-existing-table-using-exchange-partition
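The linked article covers the EXCHANGE PARTITION mechanics; below is a rough sketch of how that could be combined with transportable tablespaces for a cross-database move. All object names (stage_part33, tbs_part33, sub_key) are hypothetical, and the sketch assumes Table1 is range-hash composite partitioned with 300 hash subpartitions:

-- On DB#1: swap the partition's segments into a staging table (no row movement).
-- stage_part33, tbs_part33 and sub_key are hypothetical names.
CREATE TABLE stage_part33
  PARTITION BY HASH (sub_key) PARTITIONS 300 STORE IN (tbs_part33)
  AS SELECT * FROM table1 WHERE 1 = 0;

ALTER TABLE table1
  EXCHANGE PARTITION part#33 WITH TABLE stage_part33
  INCLUDING INDEXES WITHOUT VALIDATION;

-- Ship the staging table to DB#2, e.g. with transportable tablespaces:
--   ALTER TABLESPACE tbs_part33 READ ONLY;
--   expdp ... DUMPFILE=part33.dmp TRANSPORT_TABLESPACES=tbs_part33
--   copy part33.dmp plus the tbs_part33 datafile(s) to the DB#2 host
--   impdp ... DUMPFILE=part33.dmp TRANSPORT_DATAFILES=<datafile path>

-- On DB#2: swap the staging table into the (empty, pre-created) target partition.
ALTER TABLE table1
  EXCHANGE PARTITION part#33 WITH TABLE stage_part33
  INCLUDING INDEXES WITHOUT VALIDATION;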

Efficient way to join by levenshtein in Hive or Impala

I have two tables: one (NLIST) has about 17K records, the other (FNAMES) about 57K.
I would like to join the two by comparing the records with the Levenshtein distance.
Here is the example for the content of tables:
Table NLIST:
+------+-------------+
| ID | S_NAME |
+------+-------------+
| 1 | Avi |
| 2 | Moshe |
| 3 | David |
....
Table FNAMES:
+------+-------------+
| ID | NICKNAMES |
+------+-------------+
| 1 | Avile |
| 2 | Dudi |
| 3 | Moshiko |
| 4 | Avi |
| 5 | DAVE |
....
The above tables are just examples. In the real case the names column can include more than one word.
The required result should be:
+------+-------------+--------+
| ID | NICKNAMES | S_NAME |
+------+-------------+--------+
| 1 | Avile | Avi |
| 2 | Dudi | David |
| 3 | Moshiko | Moshe |
| 4 | Avi | Avi |
| 5 | DAVE | David |
...
Here is the code I use:
select FNAMES.NICKNAMES, NLIST.S_NAME
from FNAMES
LEFT OUTER JOIN NLIST
ON (true)
WHERE levenshtein(FNAMES.NICKNAMES, NLIST.S_NAME) <= 4
The above query runs for a very long time, so I stopped it.
How can I make it run in a reasonable time?
In addition, I think the Levenshtein distance depends on the length of the words. How can I find the optimal value for the threshold (here I chose 4 arbitrarily)?
Hive query performance depends on several things:
Query engine
File format
Vectorization: SET hive.vectorized.execution.enabled = true; SET hive.vectorized.execution.reduce.enabled = true;
If you have a good server you can try Impala, which is definitely faster than Hive for this.
You can also fine-tune Impala, which will give you an edge in executing this query faster: see Tuning Impala for Performance.
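For reference, a minimal sketch of the question's query rewritten as an explicit cross join, with the vectorization settings above applied (table and column names are the asker's; it assumes Hive 1.2+ for the built-in levenshtein() function):

-- Sketch only: assumes the built-in levenshtein() UDF (Hive 1.2+).
SET hive.vectorized.execution.enabled = true;
SET hive.vectorized.execution.reduce.enabled = true;

SELECT f.ID, f.NICKNAMES, n.S_NAME
FROM FNAMES f
CROSS JOIN NLIST n
WHERE levenshtein(f.NICKNAMES, n.S_NAME) <= 4;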

Materialized View having UNKNOWN staleness - Oracle 11G

I am working on Oracle 11G.
One of my materialized views (MY_MAT_VW1) has a STALENESS of UNKNOWN. You can check the relevant ALL_MVIEWS output below.
OWNER | MVIEW_NAME | CONTAINER_NAME | QUERY | QUERY_LEN | UPDATABLE | UPDATE_LOG | MASTER_ROLLBACK_SEG | MASTER_LINK | REWRITE_ENABLED | REWRITE_CAPABILITY | REFRESH_MODE | REFRESH_METHOD | BUILD_MODE | FAST_REFRESHABLE | LAST_REFRESH_TYPE | LAST_REFRESH_DATE | STALENESS | AFTER_FAST_REFRESH | UNKNOWN_PREBUILT | UNKNOWN_PLSQL_FUNC | UNKNOWN_EXTERNAL_TABLE | UNKNOWN_CONSIDER_FRESH | UNKNOWN_IMPORT | UNKNOWN_TRUSTED_FD | COMPILE_STATE | USE_NO_INDEX | STALE_SINCE | NUM_PCT_TABLES | NUM_FRESH_PCT_REGIONS | NUM_STALE_PCT_REGIONS
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
MY_DB | MY_MAT_VW1 | MY_MAT_VW1 | select.. | 6728 | N | | | | N | GENERAL | DEMAND | COMPLETE | IMMEDIATE | NO | COMPLETE | 14-Nov-16 | UNKNOWN | NA | N | Y | N | N | N | N | VALID | N | 0 | | |
MY_DB | MY_MAT_VW2 | MY_MAT_VW2 | select.. | 7074 | N | | | | N | TEXTMATCH | DEMAND | COMPLETE | IMMEDIATE | NO | COMPLETE | 13-Nov-16 | FRESH | NA | N | N | N | N | N | N | FRESH | N | 0 | 0 | |
The queries for the materialized view contain complex joins between multiple tables, inline views and unions.
Based on the UNKNOWN_PLSQL_FUNC column, my guess is that a PL/SQL function is causing the staleness to become UNKNOWN; however, I am not sure which one.
I tried re-compiling and refreshing the view, but no luck.
Can anyone give me some pointers on how to find the root cause and make sure it does not become UNKNOWN again?
Also, does this have any implications for the data stored within the view?
Below is just a sample I've created to replicate the scenario.
SELECT * FROM ENTITY_T;
ID | ENTITY_TYPE | FIRST_NAME | LAST_NAME | LEGAL_NAME
--------------------------------------------------
1 | INDIVIDUAL | JOHN | LESSEN |
2 | INDIVIDUAL | ROSAN | MEL |
3 | CORP | SIGMA | | SIGMA CORPORATION
--Function to get the name based on the entity type
CREATE OR REPLACE FUNCTION GET_NAME (P_ID IN NUMBER)
  RETURN VARCHAR2
  DETERMINISTIC
AS
  LV_NAME VARCHAR2(200);
BEGIN
  SELECT CASE ENTITY_TYPE WHEN 'INDIVIDUAL' THEN FIRST_NAME ||' '|| LAST_NAME
                          WHEN 'CORP'       THEN LEGAL_NAME
                          ELSE 'NONE'
         END
    INTO LV_NAME
    FROM ENTITY_T
   WHERE ID = P_ID;
  RETURN LV_NAME;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RETURN 'NO ID FOUND';
  WHEN OTHERS THEN
    RETURN 'OTHER ERROR';
END;
--Materialized view creation
CREATE MATERIALIZED VIEW TEST_MV
AS
SELECT ID,ENTITY_TYPE,GET_NAME(ID) NAME
FROM ENTITY_T;
SELECT MVIEW_NAME,STALENESS,AFTER_FAST_REFRESH,UNKNOWN_PLSQL_FUNC,COMPILE_STATE,STALE_SINCE
FROM ALL_MVIEWS WHERE MVIEW_NAME='TEST_MV';
MVIEW_NAME | STALENESS | AFTER_FAST_REFRESH | UNKNOWN_PLSQL_FUNC | COMPILE_STATE | STALE_SINCE
----------------------------------------------------------------------------------------------
TEST_MV | UNKNOWN | NA | Y | VALID |
The Oracle note Doc ID 757537.1 mentioned by JSapkota states clearly that this is not a bug but correct/expected behaviour:
STALENESS of the mview, refering to PL/SQL function is set to UNKOWN
as one cannot determine PL/SQL function changes. Current behaviour is
correct as per the design & code.
I guess declaring the functions DETERMINISTIC, rather than leaving the default, could prevent it.
According to My Oracle Support this could be a bug (7582462).
As there is no solution to this bug, you either have to live with the staleness showing UNKNOWN, or avoid using functions in the materialized view definition.
Reference: DBA_MVIEWS Shows STALENESS Value of UNKNOWN After Refresh (Doc ID 757537.1)
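For the sample above, the second workaround (not using a function in the view definition) could look like this sketch, which simply inlines GET_NAME's CASE logic into the materialized view:

--Materialized view without the PL/SQL function call: the CASE logic from
--GET_NAME is inlined, so staleness tracking is not affected by the function.
CREATE MATERIALIZED VIEW TEST_MV
AS
SELECT ID,
       ENTITY_TYPE,
       CASE ENTITY_TYPE
         WHEN 'INDIVIDUAL' THEN FIRST_NAME || ' ' || LAST_NAME
         WHEN 'CORP'       THEN LEGAL_NAME
         ELSE 'NONE'
       END AS NAME
FROM ENTITY_T;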

In RobotFramework, is it possible to run test cases in For-Loop?

My issue might be syntactic in nature, maybe not, but I am clueless about how to proceed. I am writing a test case in Robot Framework, and my end goal is to be able to run multiple tests back to back in a loop.
In the case below, the Log To Console call works fine and outputs the different values passed as parameters. The next call, "Query Database And Analyse Data", works as well.
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
But when I try to make test cases with documentation and tags out of "Query Database And Analyse Data", I get the error "Keyword name cannot be empty", which leads me to think that when the parser reaches the [Documentation] setting it does not understand that it is part of a test case. This is how I usually write test cases.
Please note that the indentation tries to match the inside of the loop:
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
| | | | [Documentation] | Query DB.
| | | | [Tags] | query | voltagevariation
| | | Duplicates Test
| | | | [Documentation] | Packets should be unique.
| | | | [Tags] | packet_duplicates | system
| | | | Duplicates
| | | Chroma Output ON
| | | | [Documentation] | Setting output terminal status to ON
| | | | [Tags] | set_output_on | voltagevariation
| | | | ${chroma-status} = | Chroma Output On | ${HOST} | ${PORT}
Now is this a syntax problem, indentation issue, or is it just plain impossible to do what I'm trying to do? If you have written similar cases, but in a different manner, please let me know!
Any help or input would be highly appreciated!
You are trying to use Keywords as Test Cases. This approach is not supported by Robot Framework.
What you could do is make one Test Case with a lot of Keywords:
*** Test Cases ***
| For-Loop-Elements
| | @{Items} = | Create List | ${120} | ${240} | ${240}
| | :FOR | ${ELEMENT} | IN | @{ITEMS}
| | | Log To Console | Running tests at Voltage: ${ELEMENT}
| | | Query Database And Analyse Data
| | | Duplicates
| | | ${chroma-status} = | Chroma Output On | ${HOST} | ${PORT}
*** Keywords ***
| Query Database And Analyse Data
| | Do something
| | Do something else
...
You can't really fit [Tags] anywhere useful. You can, however, produce meaningful failure messages (substituting for the [Documentation]) if, instead of calling a keyword directly, you wrap it in Run Keyword And Return Status.
Furthermore, have a look at data-driven tests to get rid of the :FOR loop completely.
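A minimal sketch of that data-driven style, assuming a hypothetical wrapper keyword "Run Voltage Test" that takes the voltage as an argument:

*** Settings ***
| Test Template | Run Voltage Test

*** Test Cases ***
| Voltage 120 | ${120}
| Voltage 240 | ${240}

*** Keywords ***
| Run Voltage Test
| | [Arguments] | ${ELEMENT}
| | Log To Console | Running tests at Voltage: ${ELEMENT}
| | Query Database And Analyse Data
| | Duplicates
| | ${chroma-status} = | Chroma Output On | ${HOST} | ${PORT}

Each voltage then becomes its own test case, so per-test documentation and tags fit naturally again.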

Display record count in listbox using multiple tables and fields

I need help with a query; I can't get it to work correctly. What I'm trying to achieve is a select box displaying the number of records associated with a particular theme. For some themes it works well, but for others it displays (0) when in fact there are 2 records. I'd appreciate any help with this. Please see my actual query and table structure below:
SELECT theme.id_theme, theme.theme, calender.start_date,
calender.id_theme1,calender.id_theme2, calender.id_theme3, COUNT(*) AS total
FROM theme, calender
WHERE (YEAR(calender.start_date) = YEAR(CURDATE())
AND MONTH(calender.start_date) > MONTH(CURDATE()) )
AND (theme.id_theme=calender.id_theme1)
OR (theme.id_theme=calender.id_theme2)
OR (theme.id_theme=calender.id_theme3)
GROUP BY theme.id_theme
ORDER BY theme.theme ASC
THEME table
|---------------------|
| id_theme | theme |
|----------|----------|
| 1 | Yoga |
| 2 | Music |
| 3 | Taichi |
| 4 | Dance |
| 5 | Coaching |
|---------------------|
CALENDAR table
|---------------------------------------------------------------------------|
| id_calender | id_theme1 | id_theme2 | id_theme3 | start_date | end_date |
|-------------|-----------|-----------|-----------|------------|------------|
| 1 | 2 | 4 | | 2015-07-24 | 2015-08-02 |
| 2 | 4 | 1 | 5 | 2015-08-06 | 2015-08-22 |
| 3 | 1 | 3 | 2 | 2014-10-11 | 2015-10-28 |
|---------------------------------------------------------------------------|
LISTBOX
|----------------|
| |
| Yoga (1) |
| Music (1) |
| Taichi (0) |
| Dance (2) |
| Coaching (1) |
|----------------|
Thanking you in advance
I think the theme conditions should be wrapped in parentheses:
((theme.id_theme=calender.id_theme1)
OR (theme.id_theme=calender.id_theme2)
OR (theme.id_theme=calender.id_theme3))
Hope this helps.
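Applied to the original query, that grouping would look like the sketch below (the asker's own table names; the non-aggregated calender columns are left out of the SELECT list):

SELECT theme.id_theme, theme.theme, COUNT(*) AS total
FROM theme, calender
WHERE YEAR(calender.start_date) = YEAR(CURDATE())
  AND MONTH(calender.start_date) > MONTH(CURDATE())
  AND ( theme.id_theme = calender.id_theme1
     OR theme.id_theme = calender.id_theme2
     OR theme.id_theme = calender.id_theme3 )
GROUP BY theme.id_theme, theme.theme
ORDER BY theme.theme ASC

Note that with this inner-style join a theme with no matching calendar rows (Taichi in the example) will not appear at all; listing it with (0) would need an outer join, which goes beyond the bracket fix above.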

Linux - Postgres psql retrieving undesired table

I've got the following problem:
There is a Postgres database I need to get data from, via a Nagios Linux distribution.
My intention is to save the result of a SELECT to a .txt file, which would then be emailed to me using mutt.
Until now, I've done:
#!/bin/sh
psql -d roaming -U thdroaming -o saida.txt << EOF
\d
\pset border 2
SELECT central, imsi, mapver, camel, nrrg, plmn, inoper, natms, cba, cbaz, stall, ownms, imsi_translation, forbrat FROM vw_erros_mgisp_totalizador
EOF
My problem is:
The .txt file "saida.txt" also contains info about the database, as follows:
                List of relations
 Schema  |               Name               |   Type    |   Owner
---------+----------------------------------+-----------+------------
 public  | apns                             | table     | jmsilva
 public  | config_imsis_centrais            | table     | thdroaming
 public  | config_imsis_sgsn                | table     | postgres
(3 rows)
+---------+---------+----------+---------+---------+--------+------------+-------+---------+----------+-------+-------+------------------+-----------+
| central | imsi | mapver | camel | nrrg | plmn | inoper | natms | cba | cbaz | stall | ownms | imsi_translation | forbrat |
+---------+---------+----------+---------+---------+--------+------------+-------+---------+----------+-------+-------+------------------+-----------+
| MCTA02 | 20210 | | | | | INOPER-127 | | | | | | | |
| MCTA02 | 20404 | | | | | INOPER-127 | | | | | | | |
| MCTA02 | 20408 | | | | | INOPER-127 | | | | | | | |
| MCTA02 | 20412 | | | | | INOPER-127 | | | | | | | |
.
.
.
How can I keep that first table listing out of the .txt?
Remove the '\d' line from the script; it is what lists the tables in the DB that you see at the top of your output. Your script then becomes:
#!/bin/sh
psql -d roaming -U thdroaming -o saida.txt << EOF
\pset border 2
SELECT central, imsi, mapver, camel, nrrg, plmn, inoper, natms, cba, cbaz, stall, ownms, imsi_translation, forbrat FROM vw_erros_mgisp_totalizador
EOF
To get the output CSV-formatted in a file named /tmp/output.csv, you can do the following:
#!/bin/sh
psql -d roaming -U thdroaming -o saida.txt << EOF
\pset border 2
COPY (SELECT central, imsi, mapver, camel, nrrg, plmn, inoper, natms, cba, cbaz, stall, ownms, imsi_translation, forbrat FROM vw_erros_mgisp_totalizador) TO '/tmp/output.csv' WITH (FORMAT CSV)
EOF
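If you cannot (or would rather not) write to the database server's filesystem, a client-side variant using psql's \copy meta-command might look like the following sketch; it is not part of the original answer:

#!/bin/sh
# \copy runs the COPY on the client, so saida.csv is created on the machine
# running psql rather than on the database server.
psql -d roaming -U thdroaming << EOF
\copy (SELECT central, imsi, mapver, camel, nrrg, plmn, inoper, natms, cba, cbaz, stall, ownms, imsi_translation, forbrat FROM vw_erros_mgisp_totalizador) TO 'saida.csv' WITH (FORMAT CSV, HEADER)
EOF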
