How are FORCE views significant if their base tables don't exist? - oracle

In the context of an Oracle 9i database, a view normally requires its underlying base table to exist before the view can be created; however, this requirement can be bypassed by using the FORCE clause when creating the view.
What is the significance of these views if their base tables do not exist?
Under which conditions will these views be of help?

You would use the FORCE keyword if you are creating several views that reference each other and you don't want to spend time determining the order in which they should be created.
See also:
streamlining creating of packages and views

On a large project this can be very useful: I create a table, someone else creates a packaged function, and you create a view that accesses the table and the packaged function. We then all put our DDL into the source control / release system and the DBA installs all the code on the test / production system. The FORCE keyword ensures that the view gets created, though possibly in an invalid state, even if its DDL is run before the table and/or package DDL.
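A minimal sketch of how that plays out (the table and view names are invented for illustration):

-- The view is created even though demo_orders does not exist yet;
-- it is simply left in an INVALID state until the table appears.
CREATE OR REPLACE FORCE VIEW demo_orders_v AS
  SELECT order_id, order_total
  FROM   demo_orders;

-- Once the base table is created, the view can be recompiled
-- explicitly, or Oracle will revalidate it on first use.
CREATE TABLE demo_orders (order_id NUMBER, order_total NUMBER);
ALTER VIEW demo_orders_v COMPILE;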

Related

Creating Hive View - Turn off metadata lookup from Hive Metastore

Is it possible to create a Hive view on top of a nonexistent Hive table or view? This ability would help us deploy the Hive DDL without any ordering at refresh time (migrating tables or views from one environment to another). In our environment, we have views built on top of other views. If we deploy them in an arbitrary order, with the default setup some of the views may fail saying the underlying table/view doesn't exist. We are looking to turn off the metadata lookup from the Hive metastore so that type checks are not done at the time of view creation. They can be enforced after the deployment, or at the time the view is queried for data retrieval, because by then all the views/tables will be completely deployed and there won't be any type-checking errors.
I checked on the internet for pointers but couldn't find any. Any suggestions in this regard would be helpful to us.
Thanks in advance.
Add IF NOT EXISTS to all create statements and run all several times until errors disappear.
If executed twice in the wrong order like this, the second run will succeed without any error:
drop view if exists my_view;
create view if not exists my_view as select * from table1; --fails first time, succeeds on second run
drop table if exists table1;
create table if not exists table1(id int);

In Oracle 19c database, when we drop a table what happens to the procedures, triggers, index that uses this table?

Will the triggers, procedures, and indexes be dropped automatically, or will they just become INVALID?
I want to know the correct process to follow when dropping a table that you know already has triggers, procedures, and indexes associated with it.
Please help me out.
Indexes and triggers on the table will be dropped (as will grants).
Synonyms and views will become invalid.
Hard-coded references to the table in procedures, packages, functions and triggers will make them invalid. References via dynamic SQL won't result in invalidation, but will fail when executed.
Query the DBA_DEPENDENCIES view to see which objects have dependencies and will get invalidated. There can be knock-on impacts (dropping a table invalidates a procedure, and a package that calls that procedure will also be invalidated even if it doesn't reference the table directly).
If all usages are within the same user/schema, you can query USER_DEPENDENCIES instead. Don't bother with the ALL_DEPENDENCIES view: if another user has created objects referencing the victim table, you might not have privileges to see those objects anyway.
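For example, a quick sketch of the dependency check (SCOTT.EMP is just a stand-in for the table you plan to drop):

-- Everything that references the table you are about to drop.
SELECT owner, name, type
FROM   dba_dependencies
WHERE  referenced_owner = 'SCOTT'
AND    referenced_name  = 'EMP'
AND    referenced_type  = 'TABLE';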

What's a way I can save a trigger "template" in Oracle?

Let's say I created a table test_table in development just to test a trigger; this trigger would then be reused on many other tables (future and existing).
So I code the trigger, test it, all good! But at the moment, if I want to replicate it, I have to copy it from test_table's triggers and edit it.
So if someone deletes the table accidentally, the trigger is gone, and I don't have it saved anywhere else. Or if I just want to delete random test tables in our database, I can't.
What's a recommended way to save a trigger as a "template" in Oracle, so I can reuse it on other tables and have it not be dependent on a random test table, or any table?
There are a lot of ways you can keep a copy of your TRIGGER SQLText.
Here's a few examples.
In Version Control:
You can use any of the many version control tools to maintain a versioned history for any code you like, including SQL, PL/SQL, etc. You can rewind time, view differences over time, track changes to the template, even allow concurrent development.
As a Function:
If you want the template to live in the database, you can create a FUNCTION (or PACKAGE) that takes the target USER and TABLE as parameters and replaces the USER and TABLE values in its template to generate the SQLTEXT required to create or replace the template TRIGGER on the target TABLE. You can make it EDITIONABLE as needed.
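A rough sketch of that idea (the function name, placeholders and the template body, including the updated_at column, are all invented for illustration):

CREATE OR REPLACE FUNCTION trigger_ddl_from_template (
  p_owner IN VARCHAR2,
  p_table IN VARCHAR2
) RETURN CLOB IS
  -- Template trigger text; #OWNER# and #TABLE# are placeholders.
  -- The body assumes the target table has an updated_at column (illustrative).
  c_template CONSTANT CLOB :=
       'CREATE OR REPLACE TRIGGER #OWNER#.#TABLE#_biu' || CHR(10)
    || 'BEFORE INSERT OR UPDATE ON #OWNER#.#TABLE#'    || CHR(10)
    || 'FOR EACH ROW'                                  || CHR(10)
    || 'BEGIN'                                         || CHR(10)
    || '  :new.updated_at := SYSTIMESTAMP;'            || CHR(10)
    || 'END;';
BEGIN
  RETURN REPLACE(REPLACE(c_template, '#OWNER#', p_owner), '#TABLE#', p_table);
END trigger_ddl_from_template;
/

The returned text can then be spooled into a script or passed to EXECUTE IMMEDIATE to (re)create the trigger on the target table.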
In a Table:
You can always just create a TABLE that holds template TRIGGER SQLText as a CLOB or VARCHAR2. It would need to be somewhere where it isn't likely to be "randomly" deleted, though. You can AUDIT changes to the TABLE's data, to see the template change over time. Oracle has tons of auditing options.
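A minimal sketch of such a table (the name and columns are invented):

CREATE TABLE trigger_templates (
  template_name VARCHAR2(128) PRIMARY KEY,
  template_text CLOB          NOT NULL,
  updated_at    TIMESTAMP     DEFAULT SYSTIMESTAMP
);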
In the logs:
You can just log (all) DDL out. If you set ENABLE_DDL_LOGGING, the DDL log (XML) will have a copy of every DDL statement, categorized, along with when and where it came from.
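For example (a sketch; requires the appropriate privileges):

-- After this, every DDL statement is also written to the DDL log
-- under the diagnostic destination, with a timestamp and origin.
ALTER SYSTEM SET enable_ddl_logging = TRUE;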

Oracle Package erroring when a table doesn't exist

I'm using Oracle 9g at the moment and writing a package, which I'm fairly new to. I have some procedures in the package that load data into tables from external tables. They drop those tables first, recreate them with some transformations from the external tables, and then create the indexes. I can't just reference the external tables directly as I need indexes and a few changes to the data.
I then have some other procedures later on in the package that reference these temporary tables to do their work. It all works fine, except when the procedure that creates a table gets interrupted after it has dropped the table but before it has recreated it.
Now if I make a change to the package body it compiles with errors, as the later procedures that reference that table inform me that the table does not exist. Nor can I now run any of the procedures due to this validation error.
Can anyone advise on any best practices, or how best to do this without getting these validation errors? Or is there a way to turn off this validation somehow?
Many thanks,
Dan
Make two packages: one that drops and creates the tables, and another with the data manipulation code. The second package will become invalid once you drop the tables, but the first one will still be usable.
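A hedged sketch of that split (the package, staging table and external table names, and the empno column, are all invented):

-- Package 1: DDL via dynamic SQL only, so it has no static dependency
-- on the staging table and never goes invalid when the table is missing.
CREATE OR REPLACE PACKAGE stage_ddl AS
  PROCEDURE rebuild_stage_emp;
END stage_ddl;
/
CREATE OR REPLACE PACKAGE BODY stage_ddl AS
  PROCEDURE rebuild_stage_emp IS
  BEGIN
    BEGIN
      EXECUTE IMMEDIATE 'DROP TABLE stage_emp';
    EXCEPTION
      WHEN OTHERS THEN
        IF SQLCODE != -942 THEN  -- ignore "table or view does not exist"
          RAISE;
        END IF;
    END;
    EXECUTE IMMEDIATE 'CREATE TABLE stage_emp AS SELECT * FROM emp_ext';
    EXECUTE IMMEDIATE 'CREATE INDEX stage_emp_i1 ON stage_emp (empno)';
  END rebuild_stage_emp;
END stage_ddl;
/
-- Package 2 (not shown) references STAGE_EMP with ordinary static SQL;
-- it may be marked invalid while the table is being rebuilt, but
-- STAGE_DDL above remains callable throughout.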

Creating re-runnable Oracle DDL SQL Scripts

Our development team does all of their development on their local machines, databases included. When we make changes to schemas we save the SQL to a file that is then sent to the version control system (if there is a better practice for this I'd be open to hearing about that as well).
When working on SQL Server we'd wrap our updates in "if exists" statements to make them re-runnable. I am now working on an Oracle 10g project and I can't find any Oracle functions that do the same thing. I was able to find this thread on dbaforums.org but the answer there seems a bit kludgy.
I am assuming this is for some sort of automated build process, redoing the build from scratch if something fails.
As Shannon pointed out, PL/SQL objects such as procedures, functions and packages have the "create or replace" option, so a second recompile/re-run would be OK. Grants should be fine too.
As for table creations and other DDL, you could take one of the following approaches.
1) Do not add any drop commands to the scripts, and ask your development team to come up with the revert script for the individual modules.
So for each CREATE TABLE that they add to the build, they will have an equivalent "DROP TABLE .." added to a script, say "build_rollback.sql". If your build fails, you can run this script before running the build from scratch.
2) The second (and most frequently used) approach I have seen is to include the DROP TABLE just before the CREATE TABLE statement and then ignore the "table or view does not exist" errors in the build log. Something like:
DROP TABLE EMP;
CREATE TABLE EMP (
.......
.......
);
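A common variant (not from the original answer, just a sketch) is to wrap the drop in a small PL/SQL block that swallows only ORA-00942, so the build log stays clean:

-- Drop EMP only if it exists; any other error is re-raised.
BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE EMP';
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE != -942 THEN
      RAISE;
    END IF;
END;
/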
The thread you posted has a major flaw. The most important issue is that in practice you create tables incrementally: e.g. your database already has 100 tables and you are adding 5 more as part of this release. The script spools the DROP/CREATE for all 100 tables and then executes it, which does not make a lot of sense (unless you are building your database for the first time).
An SQL*Plus script will continue past errors unless configured otherwise (e.g. with WHENEVER SQLERROR EXIT).
So you could have all of your scripts use:
DROP TABLE TABLE_1;
CREATE TABLE TABLE_1 (...
This is an option in PowerDesigner, I know.
Another choice would be to write a PL/SQL script which scrubs a schema, iterating over all existing tables, views, packages, procedures, functions, sequences, and synonyms in the schema, issuing the proper DDL statement to drop them.
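A rough sketch of such a scrub script (destructive by design: it drops everything of the listed types owned by the current user, so run it only against a private development schema):

BEGIN
  FOR obj IN (SELECT object_name, object_type
              FROM   user_objects
              WHERE  object_type IN ('TABLE', 'VIEW', 'SEQUENCE', 'SYNONYM',
                                     'PACKAGE', 'PROCEDURE', 'FUNCTION')
              AND    object_name NOT LIKE 'BIN$%')  -- skip recycle-bin objects
  LOOP
    IF obj.object_type = 'TABLE' THEN
      EXECUTE IMMEDIATE 'DROP TABLE "' || obj.object_name || '" CASCADE CONSTRAINTS';
    ELSE
      EXECUTE IMMEDIATE 'DROP ' || obj.object_type || ' "' || obj.object_name || '"';
    END IF;
  END LOOP;
END;
/

Indexes and triggers go away with their tables, so they don't need to be dropped separately.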
I'd consider decomposing the SQL to create the database; one giant script containing everything for the schema sounds murderous to maintain in a shared environment. Dividing at a Schema / Object Type / Name level might be prudent, keeping fully dependent object types (like Tables and Indexes) together.
