I am trying to DROP and CREATE a table from a scheduled job. How can I do this?
DROP TABLE REGISTRATION;
CREATE TABLE REGISTRATION AS
SELECT *
FROM REGISTRATION_MV@D3PROD_KMDW;
Perhaps you are looking for a materialized view instead of the table?
create materialized view registration as
select *
from registration_mv@d3prod_kmdw;
and then refresh it in a job:
dbms_mview.refresh('REGISTRATION');
This is much better than dropping and recreating the table, because dependent PL/SQL objects become invalid after the table is dropped. With the materialized view the refresh is silent and does no harm.
You should use DBMS_SCHEDULER to schedule jobs in Oracle.
Refer to the documentation for how to create a sample scheduler job.
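For example, a minimal sketch of a scheduler job that runs the materialized view refresh nightly (the job name and repeat interval here are just illustrative):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'REFRESH_REGISTRATION_JOB',   -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''REGISTRATION''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',        -- every day at 02:00
    enabled         => TRUE);
END;
/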
Don't drop and recreate the table. If you do this on a regular basis you will end up with a huge number of items in your recycle bin, which can lead to problems. Also, as noted above, dropping objects can invalidate PL/SQL, and can break your app if it fails to recompile.
Instead you should truncate the table to delete all rows, and then repopulate it:
truncate table registration;
insert into registration (select * from registration_mv@D3PROD_KMDW);
I want to ask if there is a way to auto-synchronize a table, e.g., every minute, based on a view created in Oracle.
This view uses data from another table. I've created a trigger, but I noticed the database slows down considerably whenever a user updates a column or inserts a row.
Furthermore, I've tried creating a scheduled job on the specified table (which I wanted to be synchronized with the view), but we don't have the privilege to do this.
Is there any other way to keep the data in the table and the view in sync?
PS: I'm using Toad for Oracle v12.9.0.71.
A materialized view in Oracle is a database object that contains the results of a query. They are local copies of data located remotely, or are used to create summary tables based on aggregations of a table's data. Materialized views, which store data based on remote tables, are also known as snapshots.
Example:
SQL> CREATE MATERIALIZED VIEW mv_emp_pk
REFRESH FAST START WITH SYSDATE
NEXT SYSDATE + 1/48
WITH PRIMARY KEY
AS SELECT * FROM emp@remote_db;
You can use a cron job or DBMS_JOB to schedule the snapshot refresh.
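For instance, a rough DBMS_JOB sketch (the older job API; the interval here simply mirrors the NEXT clause above):

DECLARE
  l_job BINARY_INTEGER;
BEGIN
  DBMS_JOB.SUBMIT(
    job       => l_job,
    what      => 'DBMS_MVIEW.REFRESH(''MV_EMP_PK'');',
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/48');   -- every 30 minutes
  COMMIT;
END;
/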
I have queries that take existing large tables and build reporting tables off of them. The problem is that the source tables have 60-80MM+ records and take a long time to recreate. I'd like to be able to identify which records are new so I can just add those to the reporting tables.
To me, the best way to identify this is to have an identity column. Is there any significant cost to creating this and adding it to the table?
Separately, is it possible to create a materialized view that takes data from one of these tables but adds a sequence as part of the materialized view? That is, something like
create materialized view some_materialized_view as
select somesequence.nextval, source_table.*
from source_table?
You can add a sequence-based column to your table, but as Gary suggests I wouldn't do that.
The task you are trying to solve is so common that solutions have already been implemented.
The first built-in option that comes to mind is the system change number (SCN), a kind of Oracle-internal clock. By default, tables record the SCN for each whole (usually 8K) block, which typically contains many rows, but you can set up a table to keep a record of the SCN that last changed each individual row. Then you can track the rows that are new or changed and have not yet been copied to your reporting tables.
CREATE TABLE t (c1 NUMBER) ROWDEPENDENCIES;
INSERT INTO t VALUES (1);
COMMIT;
SELECT c1, ora_rowscn FROM t;
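With row-level SCNs recorded, a sketch of the incremental copy could filter on ORA_ROWSCN; reporting_t and :last_scn are placeholders for your reporting table and the SCN you stored after the previous load:

-- copy only rows changed since the last load (placeholder names)
INSERT INTO reporting_t (c1)
SELECT c1 FROM t WHERE ora_rowscn > :last_scn;

-- remember the current SCN for the next run
SELECT dbms_flashback.get_system_change_number AS current_scn FROM dual;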
Secondly, I would think of adding a date column. With 60-80 million rows I wouldn't do this with ALTER TABLE xxx ADD (d DATE DEFAULT SYSDATE), but with rename, create as select, drop:
CREATE TABLE t AS SELECT * FROM all_objects;
RENAME t TO told;
CREATE TABLE t AS SELECT sysdate AS d, told.* FROM told;
ALTER TABLE t MODIFY d DATE DEFAULT SYSDATE;
DROP TABLE told;
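With the date column in place, the incremental load itself becomes a simple filter; reporting_t and :last_load_date are placeholders here:

-- copy only rows added since the last load
INSERT INTO reporting_t
SELECT * FROM t WHERE d > :last_load_date;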
Thirdly, I would read up on materialized views. I never had the chance to use this at work, but in theory, you should be able to set up a materialized view log on your 80M-row table that records changes and updates dependent materialized views.
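A minimal sketch of that setup (assuming source_table has a primary key; names are illustrative):

CREATE MATERIALIZED VIEW LOG ON source_table WITH PRIMARY KEY;

CREATE MATERIALIZED VIEW source_table_mv
  REFRESH FAST ON DEMAND
  AS SELECT * FROM source_table;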
And fourthly, I'd look into partitioning your large table on the (newly introduced) date column, so that identifying the new rows will become faster. That sadly depends on your version and Oracle license, though.
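An illustrative interval-partitioned layout on such a date column (11g and later; names and dates are made up):

CREATE TABLE t_part (
  d  DATE,
  c1 NUMBER
)
PARTITION BY RANGE (d)
INTERVAL (NUMTODSINTERVAL(1, 'DAY'))
(PARTITION p0 VALUES LESS THAN (DATE '2012-01-01'));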
I want to write a trigger that inserts a record into tbl_partition_info whenever a new partition of table tbl_payload_info gets created.
You can do it with DDL triggers.
Check out this link
One important note from the author that you should consider:
adding a partition is not DDL if Oracle decides to do it internally,
it’s only DDL if it’s an end-user statement
Implicit partition creation is a new feature in 11g, and it refers to the interval partitioning option.
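With that caveat in mind, a rough sketch of such a DDL trigger follows; the columns of tbl_partition_info are assumptions, it fires on any ALTER of the table (you would inspect ora_sql_txt to narrow it to ADD PARTITION), and interval partitions created implicitly by Oracle will not fire it:

CREATE OR REPLACE TRIGGER trg_payload_partition_ddl
AFTER ALTER ON SCHEMA
BEGIN
  -- react only to ALTER statements against the payload table
  IF ora_dict_obj_type = 'TABLE'
     AND ora_dict_obj_name = 'TBL_PAYLOAD_INFO' THEN
    -- assumed columns: table_name, created_at
    INSERT INTO tbl_partition_info (table_name, created_at)
    VALUES (ora_dict_obj_name, SYSDATE);
  END IF;
END;
/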
Say that you have two Oracle databases, DB_A and DB_B. There is a table named TAB1 in DB_A with a materialized view log, and a materialized view named SNAP_TAB1 in DB_B created with
CREATE SNAPSHOT SNAP_TAB1
REFRESH FAST
AS SELECT * FROM TAB1@DB_A;
Is there a way to query, in DB_B, the changes made to SNAP_TAB1 after each call to fast-refresh the materialized view?
DBMS_SNAPSHOT.REFRESH( 'SNAP_TAB1', 'F' );
In DB_A, prior to the refresh, you can query the materialized view log table, MLOG$_TAB1, to see which rows have been changed in TAB1. I'm looking for a way to query in DB_B, after each refresh, which rows have been refreshed in SNAP_TAB1.
Thanks!
I think the approach below works with a prebuilt table:
You can add a column to the table SNAP_TAB1.
For inserts, you can give it a default of SYSDATE, so every inserted row will carry the timestamp of the insert.
For updates you can use a trigger. Because the column is not involved in the materialized view, updating the column with the trigger won't be a problem.
Probably better: with the trigger you can store a unique id in that column, incremented before every new refresh. (There are different approaches to obtaining the unique id.)
Obviously, you can't track deletes with this idea.
I currently have 2 schemas, A and B.
B has a table, and A executes selects, inserts, and updates on it.
In our sql scripts, we have granted permissions to A so it can complete its tasks.
grant select on B.thetable to A
etc,etc
Now, table 'thetable' is dropped and another table is renamed to 'thetable' at least once a day.
rename someothertable to thetable
After doing this, we get an error when A executes a select on B.thetable.
ORA-00942: table or view does not exist
Is it possible that after executing the drop + rename operations, grants are lost as well?
Do we have to assign permissions once again?
update
someothertable has no grants.
update2
The daily process that inserts data into 'thetable' executes a commit every N insertions, so we're not able to execute any rollback. That's why we use 2 tables.
Thanks in advance
Yes, once you drop the table, the grant is also dropped.
You could try to create a VIEW selecting from thetable and granting SELECT on that.
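A minimal sketch of that (the view name is made up):

CREATE OR REPLACE VIEW B.thetable_v AS SELECT * FROM B.thetable;
GRANT SELECT ON B.thetable_v TO A;

The grant on the view survives the drop/rename underneath; the view simply goes invalid and recompiles on the next access.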
Your strategy of dropping a table regularly does not sound quite right to me though. Why do you have to do this?
EDIT
There are better ways than dropping the table every day.
Add another column to thetable that states if the row is valid.
Put an index on that column (or extend your existing index that you use to select from that table).
Add another condition to your queries to only consider "valid" rows or create a view to handle that.
When importing data, set the new rows to "new". Once the import is done, you can delete all "valid" rows and set the "new" rows to "valid" in a single transaction.
If the import fails, you can just rollback your transaction.
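A sketch of that final swap (assuming a status column with values 'new' and 'valid'):

-- assumed column: status ('new' = rows from the current import, 'valid' = rows from the last one)
DELETE FROM b.thetable WHERE status = 'valid';
UPDATE b.thetable SET status = 'valid' WHERE status = 'new';
COMMIT;   -- or ROLLBACK both statements if the import went wrong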
Perhaps the process that renames the table should also execute a procedure that does your grants for you? You could even get fancy and query the dictionary for existing grants and apply those to the renamed table.
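A rough sketch of that dictionary-driven re-grant (simplified; it needs access to DBA_TAB_PRIVS, and the RENAME has to run in B's schema):

DECLARE
  TYPE t_grants IS TABLE OF dba_tab_privs%ROWTYPE;
  l_grants t_grants;
BEGIN
  -- capture the grants on THETABLE before it is dropped
  SELECT * BULK COLLECT INTO l_grants
  FROM dba_tab_privs
  WHERE owner = 'B' AND table_name = 'THETABLE';

  EXECUTE IMMEDIATE 'DROP TABLE b.thetable';
  EXECUTE IMMEDIATE 'RENAME someothertable TO thetable';

  -- re-apply the captured grants to the renamed table
  FOR i IN 1 .. l_grants.COUNT LOOP
    EXECUTE IMMEDIATE 'GRANT ' || l_grants(i).privilege ||
                      ' ON b.thetable TO ' || l_grants(i).grantee;
  END LOOP;
END;
/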
No:
"Oracle Database automatically transfers integrity constraints, indexes, and grants on the old object to the new object."
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_9019.htm#SQLRF01608
You must have another problem.
Another approach would be to use a temporary table for the work you're doing. After all, it sounds like the data is just transitory, at least in that table, and you wouldn't have to keep reapplying the grants each time you have a new set of data or create the new table.
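If a global temporary table fits the workflow, a sketch might look like this (the column list is made up; note that rows in a GTT are only visible to the session that inserted them):

CREATE GLOBAL TEMPORARY TABLE b.thetable_tmp (
  id   NUMBER,
  data VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- the definition (and its grants) persist; only the rows are per-session
GRANT SELECT, INSERT, UPDATE ON b.thetable_tmp TO a;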