I have a series of materialized views in Oracle. The content of each mview depends on the content of the previous one, so I'd like to create a schedule for these mviews where the next rebuild starts only when the previous mview has finished its own rebuild. I cannot predict how long each rebuild will take, so I'd like to put them in a kind of rebuild queue.
How to do that?
Use Oracle Scheduler to build a job chain that refreshes them in the desired order. You can even have steps in the chain execute in parallel if the dependencies allow it, then have later steps wait for all of the parallel steps to complete before proceeding.
Create stored procedures to refresh each materialized view.
Create Scheduler Programs for each stored procedure.
Create a Scheduler Job Chain with Steps for each Scheduler program. Include Rules to control the order of Step execution.
Create a Scheduler Job to start the Chain execution at the desired time/interval.
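For example, a minimal sketch of those four steps for two mviews, MV_A feeding MV_B (all object names here are placeholders, and the schedule is arbitrary), might look like this:

CREATE OR REPLACE PROCEDURE refresh_mv_a AS
BEGIN
  DBMS_MVIEW.REFRESH(list => 'MV_A', method => 'C');  -- complete refresh
END;
/

CREATE OR REPLACE PROCEDURE refresh_mv_b AS
BEGIN
  DBMS_MVIEW.REFRESH(list => 'MV_B', method => 'C');
END;
/

BEGIN
  -- Scheduler programs wrapping the refresh procedures
  DBMS_SCHEDULER.CREATE_PROGRAM(program_name   => 'PROG_MV_A',
                                program_type   => 'STORED_PROCEDURE',
                                program_action => 'REFRESH_MV_A',
                                enabled        => TRUE);
  DBMS_SCHEDULER.CREATE_PROGRAM(program_name   => 'PROG_MV_B',
                                program_type   => 'STORED_PROCEDURE',
                                program_action => 'REFRESH_MV_B',
                                enabled        => TRUE);

  -- Chain, steps and rules: STEP_B starts only after STEP_A succeeds
  DBMS_SCHEDULER.CREATE_CHAIN(chain_name => 'MV_REFRESH_CHAIN');
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP('MV_REFRESH_CHAIN', 'STEP_A', 'PROG_MV_A');
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP('MV_REFRESH_CHAIN', 'STEP_B', 'PROG_MV_B');
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('MV_REFRESH_CHAIN', 'TRUE', 'START STEP_A');
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('MV_REFRESH_CHAIN', 'STEP_A SUCCEEDED', 'START STEP_B');
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE('MV_REFRESH_CHAIN', 'STEP_B COMPLETED', 'END');
  DBMS_SCHEDULER.ENABLE('MV_REFRESH_CHAIN');

  -- Job that kicks the chain off every night at 02:00
  DBMS_SCHEDULER.CREATE_JOB(job_name        => 'MV_REFRESH_JOB',
                            job_type        => 'CHAIN',
                            job_action      => 'MV_REFRESH_CHAIN',
                            repeat_interval => 'FREQ=DAILY;BYHOUR=2',
                            enabled         => TRUE);
END;
/

The 'STEP_A SUCCEEDED' rule is what makes the second refresh wait for the first one to finish, however long it takes.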
See here for documentation and examples:
Scheduling Jobs with Oracle Scheduler
How To Do It In Parallel
Scheduler in Oracle Database 10g Onward
I'm working on redesigning a DWH solution previously based on Teradata, with lots of BTEQ scripts performing transformations on mirror tables loaded from source DBs. The new solution will be based on Snowflake, and a set of SQL (Snowflake) scripts is being prepared as the transformation tool.
Is it the right approach for ETL scripts to use DDL statements that, for example, create a temporary table which is then dropped at the end of the script?
In my opinion such a table should be created before the script runs instead of being created on the fly inside it. One argument is that DDL statements in Snowflake commit the open transaction, which is why I want to avoid DDL statements in transformation scripts. Please help me find the pros and cons of using DDL statements in the ETL process, and either back me up that I'm right or convince me that I'm wrong.
If you want transactions to cover all the SELECT/INSERT/MERGE steps of the transformation stage of your ELT, you must not create or drop any tables, as those statements will commit your open transaction.
We get around this by having pre-existing worker tables per task/deployment that are created/truncated prior to the transaction section of our ELT process, and our tooling does not allow the same task to run twice at the same time.
Thus we load into a landing table, transform into the worker tables, and then multi-table merge into the final tables, with only those last steps needing to be inside transactions.
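As an illustration, a minimal sketch of that pattern in Snowflake SQL might look like the following; the table and column names (landing_orders, wrk_orders, dim_orders and so on) are placeholders:

-- DDL/TRUNCATE would commit an open transaction, so it runs before one is opened
TRUNCATE TABLE wrk_orders;

BEGIN TRANSACTION;

-- transform from the landing table into the pre-existing worker table
INSERT INTO wrk_orders (order_id, customer_id, amount)
SELECT order_id, customer_id, amount
FROM   landing_orders;

-- merge the transformed rows into the final table
MERGE INTO dim_orders t
USING wrk_orders s
   ON t.order_id = s.order_id
WHEN MATCHED THEN UPDATE SET t.customer_id = s.customer_id, t.amount = s.amount
WHEN NOT MATCHED THEN INSERT (order_id, customer_id, amount)
                      VALUES (s.order_id, s.customer_id, s.amount);

COMMIT;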
I'm looking for a simple way to communicate between two databases; a database link currently exists between them.
I want to process a job on database 1 for a batch of records (with a batch code for each batch of records). Once the process has finished on database 1 and all the batches of records have been processed, I want database 2 to see that database 1 has processed a number of batches (batch codes), either by querying an Oracle table or an Oracle Advanced Queue which sits on either database 1 or database 2.
Database 2 will then process the batches of records that are on database 1 through a database-link view, using each batch code, and update the status of that batch to complete.
I want to be able to update the Oracle Advanced Queue or database table with the batch no, progress status ('S' started, 'C' completed), and status date.
Table name: batch_records
Table columns: batch_no, status, status_date
Questions:
Can this be done by a simple database table rather than a complex Oracle Advanced Queue?
Can a table be updated over a database link?
Are there any examples of this?
To answer your questions first:
yes, I believe so
yes, it can, but if there are many rows involved it can be pretty slow
probably
A database link is the way to communicate between two databases. If those jobs run on database 1 (DB1), I'd suggest you keep the processing there, in DB1. Doing work over a database link invites problems of various kinds: it might be slow, and you can't do everything over a database link (LOBs, for example). One option is to schedule a job (using DBMS_SCHEDULER, or DBMS_JOB, which is quite OK for simple things) and let the procedure maintain its status in some table (that would be the "simple table" from your first question) in DB1, which will then be read by DB2.
How? Read it directly, or create a materialized view which is refreshed on a schedule (e.g. every morning at 07:00), on demand (not that good an idea), or on commit (once the DB1 procedure does the job and commits its changes, the materialized view will be refreshed).
If there aren't that many rows involved, I'd probably read the DB1 status table directly, and think of other options later (if necessary).
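A minimal sketch of that setup, with hypothetical names throughout (batch_records, process_batch, db1_link, and the batch number 42), might look like this:

-- On DB1: the "simple table" holding batch progress
CREATE TABLE batch_records (
  batch_no    NUMBER PRIMARY KEY,
  status      VARCHAR2(1),       -- 'S' started, 'C' completed
  status_date DATE
);

-- On DB1: the batch job records its own progress in that table
CREATE OR REPLACE PROCEDURE process_batch (p_batch_no IN NUMBER) AS
BEGIN
  INSERT INTO batch_records (batch_no, status, status_date)
  VALUES (p_batch_no, 'S', SYSDATE);
  COMMIT;

  -- ... DB1's own processing of the batch goes here ...
END;
/

-- On DB2: poll for started batches over the link, process them through the
-- database-link view, then flag each batch complete (yes, a remote table can
-- be updated over a database link)
SELECT batch_no, status, status_date
FROM   batch_records@db1_link
WHERE  status = 'S';

UPDATE batch_records@db1_link
   SET status = 'C', status_date = SYSDATE
 WHERE batch_no = 42;
COMMIT;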
When I was trying TiDB, I ran into a situation where adding indexes was slow. I want to view the progress, but how do I view the DDL jobs in TiDB?
You can use admin show ddl to view the currently running DDL job, such as an index that is being added.
As for viewing DDL jobs, you can also use two other commands as follows:
admin show ddl jobs: to view all the results in the current DDL job queue (including tasks that are running and waiting to run) and the last ten results in the completed DDL job queue
admin show ddl job queries 'job_id' [, 'job_id'] ...: to view the original SQL statement of the DDL task corresponding to the job_id; this only searches the running DDL jobs and the last ten results in the DDL history job queue
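For reference, here is how those statements are typically run from a SQL client; the job_id 51 below is made up:

ADMIN SHOW DDL;                   -- the DDL job that is currently running
ADMIN SHOW DDL JOBS;              -- running/queued jobs plus the last ten completed ones
ADMIN SHOW DDL JOB QUERIES 51;    -- the original SQL text behind job_id 51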
I'm new to Oracle, and I saw that Oracle triggers can fire some action after an update or insert is done on an Oracle table.
Is it possible to trigger a SAS program after every update or insert on an Oracle table?
There are a few different ways to do this, but a problem like this is an example of the saying "Just because you can, doesn't mean you should".
So sure, your trigger can be fired on update or insert, and that can call a stored procedure in a package, which can use an Oracle host command to call an operating system command, which can call SAS.
Here are some questions:
do you really want to install SAS on the same machine as your Oracle database?
do you really want every transaction that inserts or updates to have to wait until the host command completes? What if SAS is down? Do you want the transaction to complete or.....?
do you really want the account that runs the database to have privileges to start up or send information to other executables? Think security risks.
if an insert does one record the action is clear. What if an update affects a thousand records? What message do you want to send to SAS? One thousand update statements? One update statement?
There are better ways to do this but a complete answer needs more details from you as to the end goal and business logic involved. Some ways I have used include:
have the trigger insert data into an Oracle Advanced Queue. At predetermined intervals, take the changes off the queue and write them to a flat file. Write a file watcher to look for the files and send the info to SAS (see the sketch after this list).
write a Java program to take the changes and ship them
use the APEX web service and expose the changes as a series of JSON or REST packets.
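To illustrate the decoupling idea in the first option, here is a minimal sketch that records changes for later pickup. It uses a plain staging table rather than Advanced Queuing to keep it short, and every name in it (sas_change_log, orders, order_id) is a placeholder:

-- staging table a batch job reads and exports to SAS on its own schedule
CREATE TABLE sas_change_log (
  change_id   NUMBER GENERATED ALWAYS AS IDENTITY,
  table_name  VARCHAR2(30),
  row_key     VARCHAR2(100),
  change_type VARCHAR2(1),       -- 'I' insert, 'U' update
  changed_at  TIMESTAMP DEFAULT SYSTIMESTAMP
);

CREATE OR REPLACE TRIGGER trg_orders_changes
AFTER INSERT OR UPDATE ON orders
FOR EACH ROW
BEGIN
  -- record only the key; the downstream job, not the trigger, talks to SAS
  INSERT INTO sas_change_log (table_name, row_key, change_type)
  VALUES ('ORDERS', TO_CHAR(:NEW.order_id),
          CASE WHEN INSERTING THEN 'I' ELSE 'U' END);
END;
/

The transaction only pays for one extra insert, and if SAS is down nothing on the Oracle side waits or fails.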
If a trigger is already in place, what will be the effect if I replace a small part of it? Many records are continuously being inserted into table A, the table on which the trigger is applied.
Trigger X is running on table A,
about 1,000 records are being inserted per minute.
If I replace the trigger, what will be the impact on the services that are accessing the table?
Thanks in advance
A CREATE TRIGGER DDL should have no impact on your running DML transactions. Exclusive locks are not required on the table in order to add the trigger.
A CREATE OR REPLACE DDL is slightly different. It has to change an existing object. If the trigger is actively firing, the new trigger will try to lock the trigger object in the library cache before altering it. No impact on the table.
In my experience, a new or replaced trigger takes effect immediately.
If you have tested your trigger, you should have no issues. If the trigger is correct, it will go into effect, and the impact will be whatever the logic you've written in the trigger does. The act of creating the trigger is not the concern; the correctness of the trigger code is. So test it well.
Any transactions underway at the time you create the trigger will finish without firing the trigger.
Any future transactions will fire the trigger.
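For completeness, the replacement itself is a single DDL statement that supplies the full new trigger body; the sketch below assumes a hypothetical trigger X on table A with a made-up column:

CREATE OR REPLACE TRIGGER x
BEFORE INSERT ON a
FOR EACH ROW
BEGIN
  -- amended logic goes here; only the trigger object in the library cache is
  -- locked briefly, not table A, so the ongoing inserts keep running
  :NEW.created_at := SYSDATE;
END;
/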