Does creating an index in Oracle lock the table for reads?

If we specify ONLINE in the CREATE INDEX statement, the table isn't locked during creation of the index. Without the ONLINE keyword it isn't possible to perform DML operations on the table while the index is being built. But is a SELECT statement still possible on the table meanwhile? After reading the description of the CREATE INDEX statement, it still isn't clear to me.
I ask because I wonder whether Oracle behaves like PostgreSQL or SQL Server:
In PostgreSQL writes on the table are not possible, but one can still read the table - see the CREATE INDEX doc > CONCURRENTLY parameter.
In SQL Server writes on the table are not possible, and additionally if we create a clustered index reads are also not possible - see the CREATE INDEX doc > ONLINE parameter.

Creating an index does NOT block other users from reading the table. In general, almost no Oracle DDL commands will prevent users from reading tables.
There are some DDL statements that can cause problems for readers. For example, if you TRUNCATE a table, other users who are in the middle of reading that table may get the error ORA-08103: Object No Longer Exists. But that's a very destructive change that we would expect to cause problems. I recently found a specific type of foreign key constraint that blocked reading the table, but that was likely a rare bug. I've caused a lot of production problems while adding objects, but so far I've never seen adding an index prevent users from reading the table.
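To illustrate the point, a sketch (table, column and index names are made up, and ONLINE requires Enterprise Edition):

-- Session 1: build the index. ONLINE additionally allows concurrent DML.
CREATE INDEX emp_name_idx ON employees (last_name) ONLINE;

-- Session 2, while the index is building: reads are not blocked,
-- and with ONLINE writes proceed as well.
SELECT COUNT(*) FROM employees;
INSERT INTO employees (employee_id, last_name) VALUES (1001, 'Smith');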

Retain privileges when dropping objects in Oracle

It occurred to me that I have a fundamental issue with respect to privileges. Anyone who is granted access to my data warehouse will be given privileges to objects in the reporting schema. However, whenever we drop objects, those privileges are lost.
The fundamental requirements that should be met with the approach are:
Indexes are not populated during the data load (dropped, disabled?), to avoid populating them while inserting
Retain existing privileges.
What do you guys think is the best approach based on the requirements above?
For requirement 1: depending on the version of Oracle you're running, you may be able to alter the indexes invisible. Making indexes invisible causes the optimizer to ignore them, and it comes in handy because you can simply make them visible again after whatever operation you're performing. Note, however, that invisible indexes are still maintained during DML, so this keeps the optimizer away from them but does not avoid the insert overhead; if that is what matters, alter them unusable instead. More info here: https://oracle-base.com/articles/11g/invisible-indexes-11gr1
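For example (a sketch; the index name is hypothetical):

ALTER INDEX my_index1 INVISIBLE;
-- ... perform the load or other operation ...
ALTER INDEX my_index1 VISIBLE;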
For requirement 2: once an object is dropped, the privileges are dropped along with it. There's no straightforward way to retain the grants as they are when an object is dropped; however, you could use a number of different methods to "save" the privileges before a table is dropped. These are just some ideas to get you going, not a guaranteed method of success.
Method 1: Use triggers and DBMS_SCHEDULER to issue the grants. Triggers can be very powerful, and if you create a DDL trigger that fires when a table of a specific name is created under a specific schema, you can use DBMS_SCHEDULER to run a job that issues the missing grants, as sketched below.
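A minimal sketch of that approach, with heavy assumptions: REAPPLY_GRANTS is a hypothetical helper that re-issues the saved grants for a table, and because DBMS_SCHEDULER.CREATE_JOB commits (which a trigger may not do), the job submission runs in an autonomous transaction:

CREATE OR REPLACE PROCEDURE submit_regrant_job (p_table IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- CREATE_JOB commits; isolate it from the trigger
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => DBMS_SCHEDULER.GENERATE_JOB_NAME('REGRANT_'),
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN reapply_grants(''' || p_table || '''); END;',
    start_date => SYSTIMESTAMP,
    enabled    => TRUE);
END;
/

CREATE OR REPLACE TRIGGER regrant_after_create
AFTER CREATE ON SCHEMA  -- fires for DDL in the owning schema
BEGIN
  IF ora_dict_obj_type = 'TABLE' AND ora_dict_obj_name = 'MY_REPORT_TABLE' THEN
    submit_regrant_job(ora_dict_obj_name);
  END IF;
END;
/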
Method 2: Per Littlefoot's suggestion, save the grant statements in a SQL script and run it manually every time the table is created (or create a trigger to do it!). The script itself can be generated from the data dictionary, as sketched below.
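A sketch of generating that script before the drop (the schema and table names are placeholders):

SELECT 'GRANT ' || privilege || ' ON ' || owner || '.' || table_name
    || ' TO ' || grantee
    || CASE WHEN grantable = 'YES' THEN ' WITH GRANT OPTION' END || ';'
FROM   dba_tab_privs
WHERE  owner      = 'REPORTING'
AND    table_name = 'MY_REPORT_TABLE';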
Method 3: Work with the business and implement a process wherein the table does not need to be dropped, and instead is altered to fit business needs. To use this method, you'll have to understand why the object is being dropped in the first place. Is a drop really necessary to accomplish the desired outcome? I've seen teams request that tables be dropped when they really just wanted the tables to be truncated. If this is one of those scenarios, truncating instead of dropping will let you keep the object and its grants intact.
In any scenario, you'll also want to make sure that you are managing permissions via roles whenever possible, rather than issuing grants to individual users/schemas. Utilizing roles will make managing permissions a lot easier in just about any scenario.
If you DROP an object, the grants are gone. However:
Indexes are not populated during the data load (dropped, disabled?), to avoid populating them while inserting
Retain existing privileges.
Here is one common approach. There are others. If you have partitioning there are better ways.
ALTER INDEX my_index1 UNUSABLE;
ALTER INDEX my_index2 UNUSABLE;
...
ALTER INDEX my_indexn UNUSABLE;
TRUNCATE TABLE my_table_with_n_indexes; -- OPTIONAL (depends if you need to start empty)
-- Do your load here. The APPEND hint (direct-path insert) is optional, but it
-- lets the load skip maintenance of the unusable indexes.
-- (my_staging_table is a placeholder for whatever your source is.)
INSERT /*+ APPEND */ INTO my_table_with_n_indexes
SELECT * FROM my_staging_table;
ALTER INDEX my_index1 REBUILD;
ALTER INDEX my_index2 REBUILD;
...
ALTER INDEX my_indexn REBUILD;

Dynamic Audit Trigger

I want to keep logs of all tables in one single log table. If any DML operation runs against any table inside the DB, it should be logged to that single table.
But it should be a dynamic trigger which does not hard-code the column names for every table.
Is there any solution for this?
"Is there any solution for this"
No. This is not how databases work. Strongly enforced data structures are what they do, and that applies to audit tables just as much as to transactional tables.
The reason is quite clear: the time you save by not writing audit code specific to each transactional table is time you will spend writing queries to retrieve the audit records. The difference is that when you're trying to get the audit records out, you will have your boss standing over your shoulder demanding to know what happened to the payroll records last month. Or asking how long it will take you to produce that report for the regulators, and whether you are trying to make the company look like a bunch of clowns. You get the picture. This is not where you want to be.
Also, the performance of a single table to store all the changes to all the tables in the database? That is going to be so slow, you have no idea.
The point is, we can generate the auditing code. It is easy to write some SQL which interrogates the data dictionary and produces DDL for the target tables and the triggers to populate them, as sketched below.
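A minimal sketch of such a generator. It assumes a per-table audit table named <table>_AUDIT containing the source columns plus audit metadata; all names are hypothetical, and a real generator would also handle :OLD values for DELETE:

SELECT 'CREATE OR REPLACE TRIGGER ' || table_name || '_AUD' || CHR(10)
    || 'AFTER INSERT OR UPDATE OR DELETE ON ' || table_name || CHR(10)
    || 'FOR EACH ROW' || CHR(10)
    || 'BEGIN' || CHR(10)
    || '  INSERT INTO ' || table_name || '_AUDIT (audit_ts, audit_user, '
    || LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id) || ')' || CHR(10)
    || '  VALUES (SYSTIMESTAMP, USER, '
    || LISTAGG(':NEW.' || column_name, ', ') WITHIN GROUP (ORDER BY column_id) || ');' || CHR(10)
    || 'END;' AS trigger_ddl
FROM   user_tab_columns
WHERE  table_name = 'EMPLOYEES'
GROUP  BY table_name;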
In fact it gets even easier in 11.2.0.4 and later, because we can use FLASHBACK DATA ARCHIVE (formerly Oracle Total Recall) to build and maintain such journalling functionality automatically, and query it with the AS OF syntax.
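A minimal sketch of that feature, with example archive, tablespace and table names:

-- Create an archive and attach a table to it.
CREATE FLASHBACK ARCHIVE audit_fda
  TABLESPACE users
  RETENTION 1 YEAR;

ALTER TABLE employees FLASHBACK ARCHIVE audit_fda;

-- Query the table as it looked an hour ago.
SELECT * FROM employees
AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '1' HOUR;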
Okay, so technically there is a solution. You could have a trigger on each table which executes some dynamic PL/SQL to interrogate the data dictionary and assemble a piece of JSON to stuff into your single table. The single table could be partitioned by day range and sub-partitioned by table name (assuming you have licensed the Partitioning option) to mitigate the cost of querying it.
But that is extremely complex. Running dynamic PL/SQL for every DML statement will have a bad effect on performance, which the users will notice. And this still doesn't solve the fundamental problem of retrieving the audit trail when you need it.
To audit DML actions on any table, just enable such auditing with the following command:
audit insert table, update table, delete table;
All such actions on tables will then be logged to the audit trail, which you can query through the DBA_AUDIT_OBJECT view.
The audit will only log the timestamp, user, host and other parameters, not exact copies of the new or old rows.
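For example, assuming the AUDIT_TRAIL initialization parameter is set (e.g. to DB) and using a hypothetical table name:

SELECT username, userhost, action_name, timestamp
FROM   dba_audit_object
WHERE  obj_name = 'EMPLOYEES'
ORDER  BY timestamp;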

How to safely update hive external table

I have an external Hive table and I would like to refresh the data files on a daily basis. What is the recommended way to do this?
If I just overwrite the files, and we are unlucky enough to have some other Hive queries executing in parallel against this table, what will happen to those queries? Will they just fail? Or will my HDFS operations fail? Or will they block until the queries complete?
If availability is a concern and space isn't an issue, you can do the following (sketched in HiveQL after this list):
Make a synonym for the external table. Make sure all queries use this synonym when accessing the table.
When loading new data, load it to a new table with a different name.
When the load is complete, point the synonym to the newly loaded table.
After an appropriate length of time (long enough for any running queries to finish), drop the previous table.
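Hive itself has no synonyms, so a view can play that role. A rough sketch, with hypothetical table names, dates and paths (the sales_current view is assumed to have been created once up front):

-- Load each day's data into its own table.
CREATE EXTERNAL TABLE sales_20240102 (id INT, amount DOUBLE)
LOCATION '/data/sales/20240102';

-- Repoint the "synonym" once the load is complete.
ALTER VIEW sales_current AS SELECT * FROM sales_20240102;

-- After running queries have drained, drop the previous day's table.
DROP TABLE sales_20240101;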
First of all: when you access any table, it may be under one of two types of locks:
exclusive (when data is being written) and shared (when data is being read).
So if you INSERT OVERWRITE and add data to the table, any other queries that access the table at that time won't execute, because there will be an exclusive lock on it; once the INSERT OVERWRITE query completes, you may access the table again.
Please refer to the following link:
https://cwiki.apache.org/confluence/display/Hive/Locking
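You can check the locks currently held on a table from Hive itself, for example (table name is hypothetical):

SHOW LOCKS sales_current;
SHOW LOCKS sales_current EXTENDED;  -- per-lock detail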

Operations on certain tables won't finish

We have a table TRANSMISSIONS(ID, NAME) which behaves funny in the following ways:
The statement to add a foreign key in another table referencing TRANSMISSIONS.ID won't finish
The statement to add a column to TRANSMISSIONS won't finish
The statement to disable/drop a unique constraint won't finish
The statement to disable/drop a trigger won't finish
TRANSMISSIONS' primary key is ID, and there is also a unique constraint on NAME; therefore there are indexes on ID and NAME. We also have a trigger which creates values for column ID using a sequence, so that INSERT statements do not need to provide a value for ID.
Besides TRANSMISSIONS, there are two more tables behaving like this. For other tables, the above-mentioned statements work fine.
The database is used in an application with Hibernate, and due to an incorrect JPA configuration we produced high values for ID for a time. Note that we use the trigger only for "manual" INSERT statements and that Hibernate produces ID values itself, also using the sequence.
The first thought was that the problems were due to the high IDs but we have the problems also with tables that never had such high IDs.
Anyway, we suspected that the indexes might be fragmented somehow and ran ALTER INDEX TRANSMISSIONS_PK SHRINK SPACE COMPACT, which completed but had no effect.
We also wanted to run ALTER TABLE TRANSMISSIONS SHRINK SPACE COMPACT, which didn't work because we first had to run ALTER TABLE TRANSMISSIONS ENABLE ROW MOVEMENT, which never finished.
We have another instance of the database which does not behave in such a funny way. So we think that in the course of running the application the database somehow got into an inconsistent state.
Does someone have any suggestions as to what might have gone out of control or into an inconsistent state?
More hints:
There are no locks present on any objects in the database (according to the information in v$lock and v$locked_object; see the check sketched after this list)
We tried all these statements in SQL Developer and also in SQL*Plus (the command-line tool).
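For reference, the lock check mentioned in the first hint can look like this, a sketch joining v$session to v$lock (the owner name is a placeholder):

SELECT s.sid, s.serial#, s.username, s.status, l.type, l.lmode
FROM   v$session s
JOIN   v$lock l ON l.sid = s.sid
WHERE  l.type = 'TM'  -- table-level (DML) locks
AND    l.id1 = (SELECT object_id
                FROM   dba_objects
                WHERE  owner       = 'MY_SCHEMA'
                AND    object_name = 'TRANSMISSIONS'
                AND    object_type = 'TABLE');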

Is it okay to create indexes on a table while inserting?

Is it okay to create an index on a table while, let's say, there are some tasks creating new rows in the table at the same time? Would there be any locking issues?
Example: FEEDBACK table - creating an index on (Name, FeedbackRule) while inserts are happening simultaneously. Is this bad? If so, why?
I'm assuming Oracle will just not use this index while the inserts are happening, and that it will be used later.
Normally, creating an index requires locking the table, so all DML operations would block; and if there are active transactions on the table when you initiate the index creation, you'd likely get the error "ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired".
If the table is small, this may not be much of an issue - transactions would just be blocked for a few moments. But if it is very large it would be a bad idea to try creating an index while the table is in use.
However, if you are using Enterprise Edition, you can add the ONLINE keyword to your CREATE INDEX statement, which allows transactions to proceed against the table while the index is building. It may still cause slower performance.
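A sketch of both options together (the index name is hypothetical; DDL_LOCK_TIMEOUT, available from 11g on, makes the DDL wait for busy locks instead of failing immediately with ORA-00054):

-- Wait up to 30 seconds for DML locks instead of erroring out.
ALTER SESSION SET ddl_lock_timeout = 30;

-- Enterprise Edition: build the index without blocking concurrent DML.
CREATE INDEX feedback_name_rule_idx ON feedback (name, feedbackrule) ONLINE;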
