Rename index influence - Oracle

When I execute an ALTER INDEX <owner>.<name> RENAME TO <new_name> to rename an index,
does that prevent any use of that index/table in queries?
Does it cause any locks?
I'm using Oracle 11g.

No, the name of an index has no bearing on how it is used by the Cost Based Optimizer. Once renamed, the index will show up under its new name in explain plans and will be used in the same way it was before.
Use the ONLINE parameter at the end of your ALTER INDEX in order to greatly reduce the chances of locks.
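For reference, a minimal sketch of such a rename (the owner and index names are placeholders):
alter index scott.emp_name_idx rename to emp_name_idx2;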

Related

Can Oracle database transactions help in this scenario?

I am working with an Oracle database (11g Release 2). Imagine multiple connections doing the following simultaneously:
Start transaction
Check if a specific value exists in a table of unique values
If the value does not exist, insert it
Commit transaction
It seems to me that the only way to prevent conflicts is to block connections from performing the above 4-step sequence while any other connection is currently performing the 4-step sequence.
Can transactions achieve this kind of broad locking/blocking in Oracle?
Thanks in advance for your answers and advice on how to best deal with this scenario.
Add a unique constraint, and implement an exception handler to get the next value and try again.
This is assuming you're using PL/SQL.
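A minimal PL/SQL sketch of that approach, assuming a hypothetical table unique_values with a unique constraint on its val column:
declare
    l_val number := 42;  -- candidate value; 42 and the table/column names are hypothetical
begin
    insert into unique_values (val) values (l_val);
exception
    when dup_val_on_index then
        -- another session already inserted this value;
        -- get the next candidate value and retry as needed
        null;
end;
/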
An alternative would be to use an Oracle sequence with a cache size of 1. This will also ensure there are no gaps in the sequence.
Another option is SELECT * FROM table_name FOR UPDATE, which blocks other sessions from locking or modifying those rows until you commit...

In Oracle, what dictionary table tells me the "store in" value for partitioned indexes?

We are running Oracle 11g and have some partitioned tables. I am trying to write an automated process to script out the indexes on these tables. (Basically when we do bulk loads, we want to drop all the indexes beforehand and recreate them afterward.)
The problem I have is knowing how to script out the partitioned indexes. Some are created with "LOCAL STORE IN (tablespacename)" and others just with "LOCAL" (which stores index extents in the same partition as the data). In either case, dba_indexes.tablespace_name is null, and I am having a heck of a time scripting out the two different cases correctly.
I know I can simply re-run the original DDL to recreate the indexes, but multiple parts of the organization can make changes, and there would be less risk if the loader tool could be self-contained and simply rebuild whatever was there to begin with.
I can query dba_ind_subpartitions, and if the tablespace_name values for every subpartition all match, then I can assume/infer that I should STORE IN that tablespace name. But, if the table is in a small single-partition state (e.g. newly created or just after archival), then the ones created with just LOCAL also match this test, so this is also not a perfect way of telling them apart.
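A sketch of that check (the schema name 'MYSCHEMA' is a placeholder): if distinct_ts is 1 for an index, every subpartition sits in the same tablespace.
select index_name,
       count(distinct tablespace_name) as distinct_ts,
       min(tablespace_name)            as the_tablespace
from   dba_ind_subpartitions
where  index_owner = 'MYSCHEMA'
group by index_name;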
I can compare the names of the index subpartition tablespaces to the data table partition tablespaces, and if they match, then I can assume/infer that those should be created with just LOCAL. But, that drags a bunch of extra tables into my query and makes it really hard to read, so I am worried about maintainability going forward. Plus, it just seems like a kludge.
It seems like there should be someplace in Oracle's data dictionaries where it is simply keeping track of this, and where I can just directly look it up instead of having to do a bunch of math and rely on assumptions. But, I have done a good deal of digging and haven't yet found it. So, any help would be much appreciated.
Although an insert alone is faster without the presence of indexes, have you benchmarked a load into tables with indexes enabled and established that it is slower than disabling (more robust than dropping!) and rebuilding them?
When you direct path insert into a table with indexes, Oracle optimises the index maintenance process by creating temporary segments to hold just the data required for the index builds. This generally allows the index maintenance to scan much smaller segments than otherwise required -- the temp segments plus the existing indexes.
Well, as jonearles describes, the dbms_metadata package is the way to generate DDL for existing objects.
But, it seems to me, this is more work than is required for what you're trying to achieve. If this is all for loading data, I recommend you simply alter the indexes to be unusable, set skip_unusable_indexes=true, do the data load, and then rebuild the indexes.
This should achieve what you want, without having to drop and re-create the indexes.
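A minimal sketch of that flow (the index name is a placeholder; note that skip_unusable_indexes already defaults to TRUE in 11g):
alter index my_load_table_idx unusable;
alter session set skip_unusable_indexes = true;
-- ... run the bulk load here ...
alter index my_load_table_idx rebuild;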
DBMS_METADATA.GET_DDL is easier than querying the data dictionary:
--Sample table and index.
create table test1(a number);
create index test1_idx on test1(a);

--Store the DDL, drop the index, then re-create it.
declare
    ddl_before clob;
begin
    ddl_before := dbms_metadata.get_ddl('INDEX', 'TEST1_IDX');
    execute immediate 'drop index test1_idx';
    --Do some processing here.
    execute immediate ddl_before;
end;
/

Oracle flashbacks, query for past data

Does anyone know how exactly querying for past data works?
The Oracle version is 10g.
With this query I can recover some data, but sometimes this query
select *
from table as of timestamp systimestamp - 1
returns an error (ORA-01555: snapshot too old).
Is it possible to increase the window this works over and retrieve data from about 24 hours ago? Thanks!
The key issue here is the sizing of the undo segments, and the undo retention and guarantee.
The long and short of it is that you need your undo tablespace sized to hold all of the changes that can be made within the maximum period that you want to flash back over, and you'd want to set the undo retention parameter to that value. If it is really critical to your application that the undo is preserved, then set the undo guarantee on the undo tablespace.
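A sketch of those settings (the 24-hour value and the tablespace name UNDOTBS1 are examples, not recommendations):
alter system set undo_retention = 86400;        -- 24 hours, in seconds
alter tablespace undotbs1 retention guarantee;  -- assumes the undo tablespace is named UNDOTBS1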
Useful docs: http://docs.oracle.com/cd/B12037_01/server.101/b10739/undo.htm#i1008577
Be aware that performance of flashback is rather poor for bulk data, as the required undo blocks need to be found in the tablespace. 11g has better options for high performance flashback.
What the error means is that the undo (rollback) data needed by the query was no longer available, usually because the query took too long. There are other causes, such as rollback segment sizing.
How many rows are in the table? You can get an idea from this:
select num_rows
from all_tables
where table_name='MYTABLE_NAME_GOES_HERE';
If there are LOTS of rows, you may need to look at adding some kind of index to support your query, because a full table scan takes too long. If not, then it is a DBA issue. Maybe adding an index is a DBA issue in your shop as well.
If this worked well a few days ago, and started happening lately, you probably just passed the threshold for the rollback.

Oracle drop and create index

I would like to know if dropping Oracle indexes and recreating them will pose any data issues, assuming this is done during scheduled downtime.
We recently discovered that some indexes were placed in the incorrect tablespace, and we would like to correct this by dropping each index and recreating it in the correct tablespace.
Please kindly advise.
I don't see a problem with that, but instead of drop/create you could also use the syntax below:
alter index <INDEX_NAME> rebuild tablespace <TABLESPACE_NAME>
To address what you asked in the comment below, the ALTER INDEX ... REBUILD should be faster. The reason is that when you drop the index and create it again, the index tree is built by scanning the table itself. But with ALTER INDEX ... REBUILD, Oracle can read the existing index instead, resulting in a smaller amount of I/O.

What is the fastest way to insert data into an Oracle table?

I am writing a data conversion in PL/SQL that processes data and loads it into a table. According to the PL/SQL Profiler, one of the slowest parts of the conversion is the actual insert into the target table. The table has a single index.
To prepare the data for load, I populate a variable using the rowtype of the table, then insert it into the table like this:
insert into mytable values r_myRow;
It seems that I could gain performance by doing the following:
Turn logging off during the insert
Insert multiple records at once
Are these methods advisable? If so, what is the syntax?
It's much better to insert a few hundred rows at a time, using PL/SQL tables and FORALL to bind into the insert statement. For details on this see here.
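A minimal sketch of that pattern, assuming the mytable target from the question (the population step is omitted because it depends on your conversion logic):
declare
    type t_rows is table of mytable%rowtype index by pls_integer;
    l_rows t_rows;
begin
    -- fill l_rows from the source data, a few hundred entries per batch
    forall i in 1 .. l_rows.count
        insert into mytable values l_rows(i);
end;
/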
Also be careful with how you construct the PL/SQL tables. If at all possible, prefer to instead do all your transforms directly in SQL using "INSERT INTO t1 SELECT ..." as doing row-by-row operations in PL/SQL will still be slower than SQL.
In either case, you can also use direct-path inserts by using INSERT /*+APPEND*/, which basically bypasses the DB cache and directly allocates and writes new blocks to data files. This can also reduce the amount of logging, depending on how you use it. This also has some implications, so please read the fine manual first.
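A sketch of a direct-path insert (target_table and staging_table are placeholders):
insert /*+ append */ into target_table
select * from staging_table;
commit;  -- after a direct-path insert the table can't be queried in the same session until you commit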
Finally, if you are truncating and rebuilding the table it may be worthwhile to first drop (or mark unusable) and later rebuild indexes.
Regular insert statements are the slowest way to get data into a table and are not meant for bulk inserts. The following article references a lot of different techniques for improving performance: http://www.dba-oracle.com/oracle_tips_data_load.htm
Drop the index, then insert the rows, then re-create the index.
If dropping the index doesn't speed things up enough, you need the Oracle SQL*Loader:
http://www.oracle.com/technology/products/database/utilities/htdocs/sql_loader_overview.html
Suppose you have the columns eid, ename, sal, and job. First create the table:
SQL> create table tablename(eid number, ename varchar2(20), sal number, job char(10));
Now insert the data:
SQL> insert into tablename values(&eid, '&ename', &sal, '&job');
Check this link
http://www.dba-oracle.com/t_optimize_insert_sql_performance.htm
The main point to consider for your case is to use the APPEND hint, as this will append directly into the table instead of using the freelists; if you can afford to turn off logging, then use APPEND together with NOLOGGING to do it.
Use a bulk insert instead of iterating in PL/SQL.
Use SQL*Loader to load the data directly into the table if you are getting the data from a file feed.
Here are my recommendations for fast inserts.
Triggers - Disable any triggers associated with the table (see the sketch below). Re-enable them after the inserts are complete.
Indexes - Drop the indexes and re-create them after your inserts are complete.
Stale stats - Re-analyze table and index stats afterwards.
Index de-fragmentation - Rebuild the indexes if needed.
Use No Logging - Insert using INSERT /*+ APPEND */ (Oracle only). This is a very risky approach: minimal redo is generated, so you can't recover the newly loaded data from the logs - make a backup of the table before you start and don't try it on live tables. Check if your DB has a similar option.
Parallel insert - Running the insert in parallel will get the job done faster.
Use bulk inserts.
Constraints - Not much overhead during inserts, but still a good idea to check if it is still slow even after step 1.
You can learn more on http://www.dbarepublic.com/2014/04/slow-insert.html
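A hedged sketch covering the trigger and parallel-insert points above (target_table, staging_table and the degree of parallelism 4 are placeholders):
alter table target_table disable all triggers;

alter session enable parallel dml;
insert /*+ append parallel(t, 4) */ into target_table t
select * from staging_table;
commit;

alter table target_table enable all triggers;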
Maybe one of your best options is actually to avoid Oracle's own utilities as much as possible.
I've been baffled by this myself, but very often a Java process can outperform many of Oracle's utilities, which either use OCI (read: SQL*Plus) or take up so much of your time to get right (read: SQL*Loader).
This doesn't prevent you from using specific hints either (like /*+ APPEND */).
I've been pleasantly surprised each time I've turned to that kind of solution.
Cheers,
Rollo

Resources