DBA_OBJECTS is a view. I was trying to see what the underlying tables of that view are. Can those base tables be updated manually, and would that be reflected in the view?
Can obj$ be updated manually?
No, obj$ cannot be updated manually; it changes only indirectly, through execution of DDL statements that create, alter, or drop database objects.
You can see the underlying objects by querying dba_views:
SELECT text
FROM dba_views
WHERE view_name = 'DBA_OBJECTS'
You will find that it is quite complex, undocumented, and absolutely off limits for direct modification. Messing with dictionary objects can easily corrupt your dictionary and will earn you a dead end in any SR you open with Oracle. But there's no harm in exploring the view definitions to understand the dictionary better, and in some cases, such as with advanced monitoring tools like the ones I write, there is benefit in querying base dictionary and fixed X$ objects directly; just never modify anything.
The only exception to that rule I have found is that a DBA occasionally has a legitimate need to modify the ptime column in sys.user$ in order to spread out password expirations caused by creating too many users at the same time (at database creation, for example). But this is a rare situation, justified only by the lack of any other means provided by Oracle to do this.
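For illustration only, here is the shape of such a statement; it is a sketch, not a recommendation. The user names and the 30-day spread are placeholders, the statement is completely unsupported, and it should only ever be attempted after testing on a disposable database:
-- Unsupported: stagger password-change dates so the expirations
-- (ptime + the profile's password life time) don't all land on one day.
-- User names and the 30-day spread are placeholders.
UPDATE sys.user$
   SET ptime = ptime + MOD(user#, 30)
 WHERE name IN ('APP_USER1', 'APP_USER2', 'APP_USER3');
COMMIT;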
Related
I am new to this schema and not sure how the table is being populated (how the data is being inserted into the table). How can we find out?
This should work.
select *
from dba_source
where upper(text) like '%TABLE_NAME%'
But as I do not have DBA rights, I cannot execute this query. What is another way to find this out?
To see dependencies between objects you have access to you can query the all_dependencies data dictionary view. In this case:
select * from all_dependencies where referenced_name = 'YOUR_TABLE_NAME';
If the objects are in your own schema then you can use the user_dependencies view. If you want to see objects you don't have privileges against, you can use dba_dependencies, but it sounds like you are unlikely to have the privileges required to query that view, since you can't see dba_source.
Of course, that will only identify references within your stored PL/SQL code; it won't tell you about any external application code that is performing inserts directly against the database (as opposed to via CRUD procedures) or manual inserts.
And it will only tell you which objects have dependencies; you'll still need to dig through the object source code, by querying all_source (or user_source if you're the owner) for the relevant type and name. Unlike a plain text search, this avoids false positives from, say, comments that happen to mention the table name in code which doesn't access it. You could also do that outside the database - hopefully your code is under source control (right!?).
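If you do end up searching source, you can combine the two views so the search is limited to objects with a real dependency; a sketch, with the table name as a placeholder:
select s.owner, s.name, s.type, s.line, s.text
from all_source s
join all_dependencies d
  on d.owner = s.owner
 and d.name = s.name
 and d.type = s.type
where d.referenced_name = 'YOUR_TABLE_NAME'
order by s.owner, s.name, s.line;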
If you know the query you need to run but do not have the necessary privileges, you can write it using the USER_ or ALL_ views to validate the syntax, then change the prefix to DBA_ and ask the DBA to run it for you.
It occurred to me that I have a fundamental issue with respect to privileges. Anyone who is granted access to my data warehouse will be given privileges to objects in the reporting schema. However, whenever we drop objects, those privileges are lost.
The fundamental requirements that should be met with the approach are:
Indexes not populated during the data load (dropped? disabled?) to avoid maintaining them while inserting
Retain existing privileges.
What do you guys think is the best approach based on the requirements above?
For requirement 1: depending on the version of Oracle you're running (11g or later), you may be able to alter the indexes invisible. Making an index invisible causes the optimizer to ignore it, and it is handy because you can simply make it visible again after whatever operation you're performing. Note, though, that an invisible index is still maintained during DML, so it won't make the load itself cheaper; to skip index maintenance during the load, alter the indexes unusable instead. More info here: https://oracle-base.com/articles/11g/invisible-indexes-11gr1
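A minimal sketch of the invisible/visible toggle (11g+; the index name is a placeholder):
ALTER INDEX my_index1 INVISIBLE; -- optimizer ignores it, but DML still maintains it
-- ... perform your operation ...
ALTER INDEX my_index1 VISIBLE;   -- back in play, no rebuild needed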
For requirement 2: once an object is dropped, its privileges are dropped along with it. There's no straightforward way to retain the grants as they were when an object is dropped; however, you could use a number of different methods to "save" the privileges when a table is dropped. These are just some ideas to get you going, not a guaranteed recipe.
Method 1: Using triggers and DBMS_SCHEDULER to issue the grants. Triggers can be very powerful, and if you create a trigger that is set to run when a table of a specific name is created under a specific schema, you can use DBMS_SCHEDULER to run a job that will issue the missing grants (see the sketch at the end of this answer).
Method 2: Per Littlefoot's suggestion, you can save the grant statements in a SQL script and run it manually every time the table is created (or create a trigger for it!)
Method 3: Work with the business and implement a process wherein the table does not need to be dropped, and instead is altered to fit business needs. To use this method, you'll have to understand why the object is being dropped in the first place. Is a drop really necessary to accomplish the desired outcome? I've seen teams request that tables be dropped when they really just wanted the tables to be truncated. If this is one of those scenarios, truncating instead of dropping will let you keep the object and its grants intact.
In any scenario, you'll also want to make sure that you are managing permissions via roles whenever possible, rather than issuing grants to individual users/schemas. Utilizing roles will make managing permissions a lot easier in just about any scenario.
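To illustrate Method 1, a minimal sketch; the trigger name, the REPORTING schema, and the saved_grants table (which you would populate with the GRANT statements yourself) are all placeholders, and since DDL such as GRANT cannot be issued directly inside a DDL trigger, each grant is handed off to a DBMS_SCHEDULER job:
CREATE OR REPLACE TRIGGER regrant_after_create
AFTER CREATE ON reporting.SCHEMA
BEGIN
  IF ora_dict_obj_type = 'TABLE' THEN
    FOR g IN (SELECT grant_stmt
                FROM saved_grants
               WHERE table_name = ora_dict_obj_name) LOOP
      -- run each saved GRANT asynchronously in its own one-off job
      DBMS_SCHEDULER.create_job(
        job_name   => DBMS_SCHEDULER.generate_job_name('REGRANT_'),
        job_type   => 'PLSQL_BLOCK',
        job_action => 'BEGIN EXECUTE IMMEDIATE ''' ||
                      REPLACE(g.grant_stmt, '''', '''''') || '''; END;',
        enabled    => TRUE);
    END LOOP;
  END IF;
END;
/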
If you DROP an object, the grants are gone. However:
Indexes not populated during the data load (dropped? disabled?) to avoid maintaining them while inserting
Retain existing privileges.
Here is one common approach. There are others, and if you have partitioning there are better ways. (This relies on skip_unusable_indexes being TRUE, the default since 10g; note that an unusable unique index will still block inserts, so unique indexes have to be dropped rather than marked unusable.)
ALTER INDEX my_index1 UNUSABLE;
ALTER INDEX my_index2 UNUSABLE;
...
ALTER INDEX my_indexn UNUSABLE;
TRUNCATE TABLE my_table_with_n_indexes; -- OPTIONAL (depends on whether you need to start empty)
INSERT /*+ APPEND */ INTO my_table_with_n_indexes SELECT ...; -- Do your load here. APPEND hint optional, depending on what you are doing
ALTER INDEX my_index1 REBUILD;
ALTER INDEX my_index2 REBUILD;
...
ALTER INDEX my_indexn REBUILD;
I'd like to know if there is some performance advantage of using DBA_ catalog views over ALL_ views for querying database metadata and in which situations (if any) this advantage would manifest.
The Oracle dictionary is slow, so the general rule of thumb is not to query it in performance-critical code. If you really need metadata in your application, you can create a materialized view based on the dictionary views and refresh it manually after executing DDL. Of course, not every change in the dictionary is made by user DDL, but I don't know what kind of metadata exactly you need.
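A minimal sketch of that idea, assuming your privileges allow materializing a dictionary view and that column metadata is what you need (the view name and column selection are placeholders):
CREATE MATERIALIZED VIEW my_dict_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT owner, table_name, column_name, data_type
  FROM all_tab_columns;

-- after executing your DDL, refresh manually (SQL*Plus syntax):
EXEC DBMS_MVIEW.REFRESH('MY_DICT_MV', 'C');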
As for the difference between ALL_ and DBA_ views, I believe it is marginal. For obvious reasons, DBA_ views return more data than ALL_ views. In either case you can obtain the DDL for them using the DBMS_METADATA package.
select dbms_metadata.get_ddl('VIEW', 'ALL_TABLES', 'SYS')
from dual
I will not attach the output since it's pretty verbose and not very useful here. In the case of ALL_TABLES, the list of tables in the FROM clause is the same as for DBA_TABLES. Since both access much the same data, perform the same joins, and just filter rows a bit differently, performance should be about the same.
Will moving views from one schema to another have any adverse effect on performance?
I have about 40 views in one schema. I want to create a new schema which will have all the correct permissions. Suppose TableA resides in schema A. So my view will be in schema A. So I would do simply select * from TableA. Now I move this view to schema B. Since the table is in schema A, I would need to do select * from A.TableA. Will this cross-schema query cause any performance issues?
This is not where I would start in a performance review.
The SQL of the actual view is probably far more important than which schema you place it in.
Edit:
Where the view resides should not affect performance (aside from how the schema is laid out across blocks and datafiles).
If it's not a materialized view, it should have very little effect on performance.
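For reference, the move itself is just a grant plus a re-create, using the names from the question (the view name is a placeholder); note that schema B needs its own direct grant on the table for the view to compile, and WITH GRANT OPTION if B will in turn grant the view to others:
GRANT SELECT ON a.TableA TO b WITH GRANT OPTION;

CREATE OR REPLACE VIEW b.my_view AS
SELECT * FROM a.TableA;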
In Informix, I can do a select from the systables table, and can investigate its version column to see what numeric version a given table has. This column is incremented with every DDL statement that affects the given table. This means I have the ability to see whether a table's structure has changed since the last time I connected.
Is there a similar way to do this in Oracle?
Not really. The Oracle DBA/ALL/USER_OBJECTS views have a LAST_DDL_TIME column, but it is affected by operations other than structure changes (a grant against the table, for example, updates it).
You can do that (and more) with a DDL trigger that keeps track of changes to tables; a sketch follows.
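A minimal sketch of that approach; the table and trigger names are placeholders, and the ora_* event attributes are the standard ones available inside DDL triggers:
CREATE TABLE ddl_log (
  ddl_time  DATE,
  username  VARCHAR2(128),
  obj_owner VARCHAR2(128),
  obj_name  VARCHAR2(128),
  obj_type  VARCHAR2(30),
  ddl_event VARCHAR2(30)
);

CREATE OR REPLACE TRIGGER trg_ddl_log
AFTER CREATE OR ALTER OR DROP ON DATABASE
BEGIN
  -- DML (unlike DDL) is allowed inside a DDL trigger
  INSERT INTO ddl_log
  VALUES (SYSDATE, ora_login_user, ora_dict_obj_owner,
          ora_dict_obj_name, ora_dict_obj_type, ora_sysevent);
END;
/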
If you really want to do so, you'd have to use Oracle's auditing features to audit the changes. It could be as simple as:
AUDIT ALTER TABLE BY [schema I care about] WHENEVER SUCCESSFUL;
That would at least capture the successful changes, ignoring drops and creates. Unfortunately, unwinding the history of the table's structure by mining the audit trail is left as an exercise to the reader in Oracle, or to licensing the Change Management Pack.
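A sketch of pulling those records back out, assuming traditional auditing with AUDIT_TRAIL=DB and the AUDIT statement above:
SELECT timestamp, username, owner, obj_name, action_name
  FROM dba_audit_trail
 WHERE action_name = 'ALTER TABLE'
 ORDER BY timestamp;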
You could also roll your own auditing by writing system-event triggers which are invoked on DDL statements. You'd end up having to write your own SQL parser if you really wanted to see what was changing.