Oracle command to create a table from another schema, including triggers?

Using this command, I am able to create a table from another schema, but it does not include triggers. Is it possible to create a table from another schema, including triggers?
create table B.tablename unrecoverable as select * from A.tablename where 1 = 0;

The first option is to run the CREATE script for those objects, if you have a code repository. I suppose you don't.
If you use a GUI tool, things get simpler, as most such tools contain a SCRIPT tab that lets you copy code from the source and pastete it into the target user.
If you're on SQL*Plus, you'll have to do the work yourself. Here's a short demo.
SQL> connect hr/hr@xe
Connected.
SQL> create table detail (id number);
Table created.
SQL> create or replace trigger trg_det
2 before insert on detail
3 for each row
4 begin
5 :new.id := 1000;
6 end;
7 /
Trigger created.
SQL>
SQL> -- you'll have to grant privileges on table to another user
SQL> grant all on detail to scott;
Grant succeeded.
Connect as SCOTT and check what we've got:
SQL> connect scott/tiger@xe
Connected.
SQL> -- now, query ALL_SOURCE and you'll get trigger code
SQL> set pagesize 0
SQL> col text format a50
SQL> select text from all_source where name = 'TRG_DET' order by line;
trigger trg_det
before insert on detail
for each row
begin
:new.id := 1000;
end;
6 rows selected.
SQL>
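Alternatively, DBMS_METADATA can return the trigger's complete DDL in one call; a minimal sketch (assuming SCOTT can see HR's metadata, e.g. via SELECT_CATALOG_ROLE; output omitted here):
SQL> -- GET_DDL(object_type, object_name, owner) returns a CLOB
SQL> set long 10000
SQL> select dbms_metadata.get_ddl('TRIGGER', 'TRG_DET', 'HR') from dual;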
Yet another option is to export & import the table, which will bring the trigger along as well (I've removed parts of the output that aren't relevant, such as the Oracle database version):
C:\>exp hr/hr@xe tables=detail file=detail.dmp
About to export specified tables via Conventional Path ...
. . exporting table DETAIL 0 rows exported
Export terminated successfully without warnings.
C:\>imp scott/tiger@xe file=detail.dmp full=y
. importing HR's objects into SCOTT
. importing HR's objects into SCOTT
. . importing table "DETAIL" 0 rows imported
Import terminated successfully without warnings.
C:\>
Check what's imported (should be both table and trigger):
SQL> desc detail
 Name                                      Null?    Type
 ----------------------------------------- -------- ---------------
 ID                                                 NUMBER
SQL> select * From detail;
no rows selected
SQL> insert into detail (id) values (-1);
1 row created.
SQL> select * From detail;
ID
----------
1000
SQL>
Cool; even the trigger works.
There might be some other options, but these should be enough to get you started.

Related

Automatically get notified when a database record is updated

I have an Oracle client database with 500k client records. Every month I run a batch process to produce some monthly analytics using the client data. But sometimes the database owner tells me that they have updated the data, and I need to run the batch again.
I would like to build a monitoring/notification service that will immediately tell me when a particular client record gets updated and what the update was. That way I know whether the update can be ignored or not.
I could of course run an hourly SQL query that compares each client record with its previous snapshot, but is there a better solution?
Does something like Kafka work in this scenario? How exactly?
You can use Continuous Query Notification (previously known as Database Change Notification).
Read more about it:
DBMS_CQ_NOTIFICATION with examples
Continuous Query Notification for JDBC
@JustinCave also suggests a pretty good option: create a simple trigger and enable it only when you really need it, as in the sketch below. But it would probably be easier just to create a materialized view log and check it periodically for new changes; you can get the changed rows from it.
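A minimal sketch of that trigger option; the CLIENTS table and all column names here are made up for illustration:
SQL> create table client_audit (
  2    client_id   number,
  3    changed_at  date default sysdate,
  4    change_type varchar2(1)
  5  );
SQL> create or replace trigger trg_client_audit
  2  after insert or update or delete on clients
  3  for each row
  4  begin
  5    -- the INSERTING/UPDATING/DELETING predicates tell us which DML fired
  6    insert into client_audit (client_id, change_type)
  7    values (nvl(:new.id, :old.id),
  8            case when inserting then 'I' when updating then 'U' else 'D' end);
  9  end;
 10  /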
A simple example of the materialized view log approach:
SQL> create table t(id int primary key, a int, b int);
Table created.
SQL> create materialized view log on t with primary key, rowid;
Materialized view log created.
SQL> select log_table from user_mview_logs where master='T';
LOG_TABLE
--------------------------------
MLOG$_T
SQL> desc mlog$_t
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                                 NUMBER
 M_ROW$$                                            VARCHAR2(255)
 SNAPTIME$$                                         DATE
 DMLTYPE$$                                          VARCHAR2(1)
 OLD_NEW$$                                          VARCHAR2(1)
 CHANGE_VECTOR$$                                    RAW(255)
 XID$$                                              NUMBER
SQL> column M_ROW$$ format a20;
SQL> column CHANGE_VECTOR$$ format a10;
SQL> select * from mlog$_t;
no rows selected
SQL> insert into t(id, a, b) values(1,1,1);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from mlog$_t;
        ID M_ROW$$              SNAPTIME$$          D O CHANGE_VEC      XID$$
---------- -------------------- ------------------- - - ---------- ----------
         1 AAASWNAAMAAAAEXAAA   4000-01-01 00:00:00 I N FE         2.8148E+15
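A monitoring job could then read the changed keys from the log and purge the entries it has already processed; a sketch using DBMS_MVIEW.PURGE_LOG (check its arguments against the documentation for your version):
SQL> -- keys and DML type of rows changed since the last check
SQL> select id, dmltype$$ from mlog$_t;
SQL> -- purge log entries that have already been consumed
SQL> exec dbms_mview.purge_log('T', 1, 'delete');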

Truncate local table only when remote table is accessible or has complete data in Oracle

I have a problem for which I'm struggling to find a solution; I hope someone in this community can help.
On a daily basis I copy a table from one database (T_TAGS_REMOTE) to a table on another database (T_TAGS_LOCAL) through a DB link. For this I truncate the T_TAGS_LOCAL table first and then perform the insert.
The above task is done through a Linux job.
The problem comes when:
1. T_TAGS_REMOTE on the remote database is sometimes not accessible, giving an ORA error
2. T_TAGS_REMOTE sometimes does not have complete data rows (i.e. SYSDATE COUNT < SYSDATE-1 COUNT)
Requirement:
Stop truncating and stop inserting when either of the above problems (1) or (2) is encountered.
My code:
DECLARE
  OLD_RECORDS_COUNT NUMBER;
BEGIN
  SELECT COUNT(1) INTO OLD_RECORDS_COUNT FROM T_TAGS_LOCAL;
  EXECUTE IMMEDIATE 'TRUNCATE TABLE T_TAGS_LOCAL';
  INSERT /*+ APPEND */ INTO T_TAGS_LOCAL SELECT * FROM AK.T_TAGS_REMOTE@NETCOOL;
END;
/
Please suggest a better option for the table copy, or code to handle this problem.
I would not use the technique you are using; it will always generate issues. Instead, I think your use case fits replication using materialized views: a materialized view log in the source, and a materialized view using the dblink in the target.
You only need to decide the refresh method, which could be FAST ON DEMAND, as I guess your table is not very big given that you are copying the whole table each and every day.
Example
In Source
SQL> create table t ( c1 number primary key, c2 number ) ;
Table created.
SQL> declare
  2  begin
  3    for i in 1 .. 100000
  4    loop
  5      insert into t values ( i , dbms_random.value ) ;
  6    end loop;
  7    commit ;
  8  end;
  9  /
PL/SQL procedure successfully completed.
SQL> create materialized view log on t with primary key ;
Materialized view log created.
SQL> select count(*) from t ;
COUNT(*)
----------
100000
In Target
SQL> create materialized view my_copy_of_t build immediate refresh fast on demand as
  2  select * from t@your_db_link ;
Materialized view created.
SQL> -- check the initial copy in the target
SQL> select count(*) from my_copy_of_t ;
COUNT(*)
----------
100000
Now, we change source
SQL> insert into t values ( 100001 , dbms_random.value );
1 row created.
SQL> commit ;
Commit complete.
In the target, refresh:
SQL> exec dbms_mview.refresh('MY_COPY_OF_T');
The only requirement for FAST REFRESH ON DEMAND is that you must have a materialized view log for each of the tables that are part of the Materialized View. In your case, as you are replicating a table, you only need a materialized view log on the source table.
A better option might be using a materialized view; rather than the truncate/insert job you run now, you'd refresh it on demand using a database job scheduled via DBMS_JOB or DBMS_SCHEDULER, as sketched below.
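A sketch of such a scheduled refresh; the job name and calendar expression are made up, so adjust them to your needs:
SQL> begin
  2    dbms_scheduler.create_job(
  3      job_name        => 'REFRESH_MY_COPY_OF_T',
  4      job_type        => 'PLSQL_BLOCK',
  5      job_action      => 'begin dbms_mview.refresh(''MY_COPY_OF_T'', ''F''); end;',
  6      repeat_interval => 'FREQ=DAILY;BYHOUR=2',
  7      enabled         => true);
  8  end;
  9  /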

Replace all USER Objects in a Database

My question is related to the meaning of USER in Oracle.
We have a database with many users, but R1S contains almost all of the tables, sequences, etc. We want to load new table data, but we also need to update the sequence values so they stay in phase with the table data.
ORA-31684: Object type USER:"R1S" already exists
ORA-31684: Object type SEQUENCE:"R1S"."RS2QNUNI" already exists
. . imported "R1S"."RSCIN" 13.16 MB 150346 rows
In the impdp log I noticed that the sequences hadn't been updated because they already exist. We want to force the load of this kind of data.
I've thought of doing a DROP USER R1S CASCADE;
The USER in that drop command is a schema, right? With the DROP USER command we would be deleting the schema called R1S.
I say that because in the impdp documentation I see I can force a schema import:
SCHEMAS=R1S
Or will the basic command do the same job?
impdp xxxxxx/******** FULL=Y CONTENT=ALL directory=EXPLOIT_DUMP_DIR dumpfile=expdp_X.exp LOGFILE=impdp_X.log
Simply put, schema = user + its objects (tables, views, procedures, sequences, ...), so when you drop a user, all of its objects are dropped as well.
If you are happy with the rest of the import results (i.e. the tables are correctly imported), and if there are not that many sequences, perhaps it would be simpler to either recreate those sequences (drop + create), or alter them. The first option is easy, while the second requires a few commands: increment the sequence so that it reaches the desired value, fetch from it, then reset the increment to its previous value (1, by default). Here's an example:
SQL> select s.nextval from dual;
NEXTVAL
----------
15028
SQL> alter sequence s increment by 100000;
Sequence altered.
SQL> select s.nextval from dual;
NEXTVAL
----------
115028
SQL> alter sequence s increment by 1;
Sequence altered.
SQL> select s.nextval from dual;
NEXTVAL
----------
115029
SQL> select s.nextval from dual;
NEXTVAL
----------
115030
SQL>
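If there are many sequences, the drop/create statements can be generated from the data dictionary; a sketch (LAST_NUMBER is only a starting point here, so adjust the START WITH values to match the imported data):
SQL> select 'drop sequence ' || sequence_name || ';' as ddl from user_sequences;
SQL> select 'create sequence ' || sequence_name ||
  2         ' start with ' || (last_number + 1) || ';' as ddl
  3  from user_sequences;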

access sequence in another schema using a synonym

I am trying to access a sequence that is:
A. Located in another schema
B. Actually a synonym for a sequence in another database, through a dblink.
What works:
select schema.sequence@dblink.nextval from dual;
What doesn't work:
select schema.synonym.sequence.nextval from dual;
The above returns an 'invalid identifier' error (ORA-00904).
Is it possible to access the remote sequence without using the dblink notation?
Yes, it is possible to use a synonym for a remote sequence object.
Database 1
SQL> conn jay
SQL> create sequence myseq increment by 1;
Sequence created.
Database 2
SQL> conn jay
SQL> create database link dbl_db1 connect to jay identified by jay using 'DB1';
Database link created.
SQL> create synonym myseq_syno for jay.myseq@dbl_db1;
Synonym created.
SQL> select myseq_syno.nextval from dual;
NEXTVAL
----------
1
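Note that if the synonym itself belongs to another schema, you qualify it with just the schema that owns the synonym, not with the chain of names from the question; something like (a sketch, assuming the required privileges are in place):
SQL> select jay.myseq_syno.nextval from dual;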

Concatenate bind variable in Oracle

I have a requirement where I have to select the name of a DB link (there are many DB links) from a table into a bind variable, and then fetch data from a table that exists in all the linked databases, although the data differs depending on the DB link used. I cannot find a way to use the bind variable's value as a DB link.
This is my code:
Select statement for fetching the DB link into a bind variable:
SELECT DB_LINK into :v_db_link from reagions_db_links;
Then I have to use it for fetching data from the table:
SELECT reagion_id, region_name from Table_details@:v_db_link
I have tried to concatenate it like below, however it's not working:
SELECT reagion_id, region_name from Table_details@||:v_db_link
Please suggest a solution; since I could have many DB links depending upon the region selected by the user, I am putting the link name into a bind variable and then want to use it for fetching data from the table.
Substitution variables can be used for that. Here is a quick example of how it can be done (SQL*Plus environment).
-- set-up table that stores db_links
SQL> create table db_links(
2 dblink_name varchar2(31)
3 );
Table created.
--add a test dblink
SQL> insert into db_links(dblink_name) values ('TEST_DB_LINK');
1 row created.
SQL> commit;
Commit complete.
-- defining of a substitution variable dblink
SQL> column dblink_name new_value dblink noprint;
-- the value of the dblink_name column will be placed into the dblink
-- substitution variable declared previously
SQL> select dblink_name from db_links;
-- now we query a table using db link name stored
-- in the dblink substitution variable
-- prefacing it with ampersand.
SQL> select count(*) from dbusers@&dblink;
old 1: select count(*) from dbusers@&dblink
new 1: select count(*) from dbusers@TEST_DB_LINK
COUNT(*)
----------
351
SQL> spool off;
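If this has to run inside PL/SQL rather than interactively in SQL*Plus, the same idea works with dynamic SQL, since object names (including DB link names) can never be supplied as bind variables; a sketch reusing the question's table and column names:
SQL> declare
  2    v_db_link varchar2(128);
  3    v_id      number;
  4    v_name    varchar2(100);
  5  begin
  6    -- pick the link for the region chosen by the user (one row assumed here)
  7    select db_link into v_db_link from reagions_db_links where rownum = 1;
  8    -- the link name is concatenated into the statement text, not bound
  9    execute immediate
 10      'select reagion_id, region_name from table_details@' || v_db_link ||
 11      ' where rownum = 1'
 12    into v_id, v_name;
 13    dbms_output.put_line(v_id || ' ' || v_name);
 14  end;
 15  /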
