Oracle different procedure speeds on same instance but different schemas (duplicate) - performance

We have an Oracle instance with duplicate schemas. The same procedure runs on one schema in 7 seconds, but on the copied schema it takes more than 7 hours to complete.
We have rebuilt the indexes and tablespaces (with an in-house tool); that speeds it up a little, but it still takes hours to complete.
The dbf (data & index) files are the same for both schemas.
After one hour and 30 minutes, the alert_bdora10.log file contains these new lines:
Thread 1 advanced to log sequence 35514 (LGWR switch)
Current log# 3 seq# 35514 mem# 0: D:\ORACLE\ORADATA\BDORA10\REDO03.LOG
Fri Aug 25 16:08:57 2017
Time drift detected. Please check VKTM trace file for more details.
Fri Aug 25 17:04:44 2017
Thread 1 cannot allocate new log, sequence 35515
Private strand flush not complete
Current log# 3 seq# 35514 mem# 0: D:\ORACLE\ORADATA\BDORA10\REDO03.LOG
Thread 1 advanced to log sequence 35515 (LGWR switch)
Current log# 1 seq# 35515 mem# 0: D:\ORACLE\ORADATA\BDORA10\REDO01.LOG
I am a little bit lost and don't know where to investigate first.
Sorry, I am a noob at Oracle SQL; any help will be welcome.
Thanks
Jluc

After removing lines one by one, I finally discovered a filter which is time-consuming:
select brm.loc_id , cli.cli_nompatr, cli.cli_nom, cli.cli_prenom, cli.cli_datenaiss
from V_BORDMIXTES brm
INNER JOIN BORDSOMENCSLIGNES bel ON bel.bse_id = brm.bse_id
INNER JOIN REVERSIONS rev ON rev.rev_id = bel.rev_id
INNER JOIN CLISANTES cls ON cls.cls_id = rev.cls_id
INNER JOIN CLIENTS cli ON cli.cli_id = cls.cli_id
where brm.brm_id = 39328
and cli.cli_id = 44517 -- If I add this filter clause the query takes hours; without it, 55 ms
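One way to investigate such a flip in behavior (a sketch, not part of the original question) is to compare the execution plan the optimizer chooses on each schema, with and without the extra filter. The query below reuses the tables from the question:

```sql
-- Capture the plan for the slow form of the query (run once per schema
-- and compare the output; a different join order or access path on the
-- copied schema usually points at stale or missing statistics).
EXPLAIN PLAN FOR
select brm.loc_id, cli.cli_nompatr, cli.cli_nom, cli.cli_prenom, cli.cli_datenaiss
from V_BORDMIXTES brm
INNER JOIN BORDSOMENCSLIGNES bel ON bel.bse_id = brm.bse_id
INNER JOIN REVERSIONS rev ON rev.rev_id = bel.rev_id
INNER JOIN CLISANTES cls ON cls.cls_id = rev.cls_id
INNER JOIN CLIENTS cli ON cli.cli_id = cls.cli_id
where brm.brm_id = 39328
and cli.cli_id = 44517;

-- Display the plan just captured
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

If the two schemas produce different plans for the same statement, the data files being identical does not matter: the optimizer statistics are part of the data dictionary, not the segments, so rebuilding indexes alone will not fix them.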

Related

Which SCN should I refer to if there are two thread# values in v$archived_log (Oracle)

I am taking an Oracle backup using RMAN and saving the current SCN number. I am getting the SCN number from the command below:
select max(next_change#) from v$archived_log where archived = 'YES' group by thread#;
It gives output as below:
MAX(NEXT_CHANGE#)
-----------------
3911392
3903950
I found this in the Oracle documentation: if the log is archived twice, there will be two archived log records with the same THREAD#, SEQUENCE#, and FIRST_CHANGE#, but with different names.
Question 1: Which SCN should I refer to in the SET UNTIL SCN command while restoring?
Question 2: There is one more command for getting the SCN:
select current_scn from v$database;
Its output is:
CURRENT_SCN
-----------
3914145
What's the difference between the SCNs output by these two commands?
Question 3: I have a RAC setup with two Oracle machines. Do those two thread# values have something to do with this?
Let's start with the fact that Oracle has only one system SCN (it does not matter how many nodes you have in your RAC). Each transaction has its own SCN; that is why you see different SCNs on different threads (each thread manages its own transactions). Now, in reply to your questions:
This question is a little strange, because you should refer to the SCN you need: if you want a point-in-time restore from 3 days ago, you refer to the SCN from 3 days ago. If you want to restore/recover your DB up to the last committed transaction, then you do not have to refer to any SCN at all.
Yes: "select dbms_flashback.get_system_change_number from dual", and maybe other methods exist.
The difference between
select max(next_change#) from v$archived_log where archived = 'YES' group by thread#; and select current_scn from v$database; is that in the first case you get the maximum SCN that has been archived (into an archive log), while the second is the current DB SCN, which will always be greater than the first SELECT's result.
Yes, each thread manages its own transactions (and, implicitly, SCNs).
In general you should refer to an SCN in your restore/recovery scenario only when you want point-in-time recovery. select max(next_change#) from v$archived_log where archived = 'YES' group by thread#; does not mean your latest, system-wide SCN; it means the maximum archived SCN (take into consideration that you also have a lot of SCNs in your current online redo logs). Also consider that if your DB were in NOARCHIVELOG mode, that select would return nothing...
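As a sketch of how such an SCN would actually be used (the commands are standard RMAN; the SCN value is simply the lower of the two per-thread figures from the question, since a point-in-time recovery on RAC needs an SCN that every thread's archived redo can reach):

```sql
-- Hypothetical RMAN point-in-time recovery (run inside RMAN, not SQL*Plus)
RUN {
  STARTUP FORCE MOUNT;
  SET UNTIL SCN 3903950;        -- lower per-thread value, so both threads recover
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN RESETLOGS;
}
```

Picking the higher value would ask recovery to apply changes that one thread's archived logs do not contain.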

Magento 1.7 / 1.8 deadlocks from index_process table

I'm having great problems with Magento. Last Friday we upgraded Magento from 1.7 to 1.8.
The issue is that we're having a lot of deadlocks in the MySQL database.
Our server setup is:
1 Load Balancer
4 Webservers (Apache, PHP5, APC)
2 MySQL Servers (64 GB Ram, 30 cores SSD HDD) - 1 Master (Has Memcache for sessions) - 1 Slave (Has Redis for caching)
The deadlocks are less frequent on Magento 1.8 than on 1.7, but they still appear from time to time.
Does anyone have any good ideas on how to get past this problem?
Here's some data from SHOW ENGINE INNODB STATUS;
LATEST DETECTED DEADLOCK
130930 12:03:35
* (1) TRANSACTION:
TRANSACTION 918EEC3B, ACTIVE 37 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 41 lock struct(s), heap size 6960, 50 row lock(s), undo log entries 6
MySQL thread id 51899, OS thread handle 0x7f9774169700, query id 2583719 xxx.xx.xxx.47 dbxxx Updating
UPDATE m17_index_process SET started_at = '2013-09-30 10:03:36' WHERE (process_id='8')
* (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 594 page no 3 n bits 208 index PRIMARY of table xxx.xx.xxx.47 dbxxx.m17_index_process trx id 918EEC3B lock_mode X locks rec but not gap waiting
* (2) TRANSACTION:
TRANSACTION 918EE3E7, ACTIVE 72 sec starting index read
mysql tables in use 1, locked 1
680 lock struct(s), heap size 80312, 150043 row lock(s), undo log entries 294
MySQL thread id 51642, OS thread handle 0x7f8a336c7700, query id 2586254 xxx.xx.xxx.47 dbxxx Updating
UPDATE m17_index_process SET started_at = '2013-09-30 10:03:40' WHERE (process_id='8')
(2) HOLDS THE LOCK(S):
RECORD LOCKS space id 594 page no 3 n bits 208 index PRIMARY of table dbxxx.m17_index_process trx id 918EE3E7 lock mode S locks rec but not gap
(2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 594 page no 3 n bits 208 index PRIMARY of table dbxxx.m17_index_process trx id 918EE3E7 lock_mode X locks rec but not gap waiting
* WE ROLL BACK TRANSACTION (1)
Best Regards.
Rasmus
It seems the deadlocks are due to indexing processes. Try disabling automatic indexing (see Magento - Programmatically Disable Automatic Indexing) and running the indexes manually.
Also try disabling cron for some time and check whether the issues reoccur.
It's possible that many store admins are saving products from different stores; in that case, a product save may be deadlocking with the index processes.
Thanks

Find table name from v$datafile.name column

When you look at wait events (i.e. with Toad), you see a file# parameter.
How can I get more useful information, such as the table name?
Is it even possible to know the number of records read from that table?
In another forum I found this advice, but it doesn't seem to work:
select segment_name
from dba_extents ext
where ext.file_id = 828
and 10711 between ext.block_id and ext.block_id + ext.blocks - 1
and rownum = 1
Let's talk files, blocks, segments and extents.
A segment is a database object that is stored. It may be a table, index, (sub)partition, cluster or LOB. Mostly you'll be interested in tables and indexes.
A segment is made up of extents. If you think of a segment as a book, an extent is a chapter. A segment (generally) starts with at least one extent. When it needs to store more data and it doesn't have room in the existing extents, it adds another extent to the segment.
An extent lives in a datafile. A datafile can have lots of extents each starting at a different point in the file and having a size. You may have one extent of 15 blocks starting in file 1 at block 10.
A wait event should identify the file and block (and row). If your wait event is for file #1 and block 12, you go off to USER_EXTENTS (or DBA_EXTENTS) and look for the extent in file# 1 where 12 is between the starting block location and the starting block location plus the number of blocks. So block 12 would be between starting block 10 and end block 25 (start plus size).
Once you've identified the extent, you track it back to its parent segment (USER_SEGMENTS / DBA_SEGMENTS) which will give you the table/index name.
A theoretical SQL is as follows:
select username, sid, serial#,
row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#,
ext.*
from v$session s
join dba_extents ext on ext.file_id = row_wait_file#
and row_wait_block# between ext.block_id and ext.block_id + ext.blocks - 1
where username = 'HR'
and status = 'ACTIVE'
For this one I purposefully blocked a session so that it was waiting on a row lock.
828 is a rather large file id. It isn't impossible, but it is unusual. Do a select from DBA_DATA_FILES and see whether you have such a file. If not, and you've only got a few files, look at all the objects that match the "10711 between ext.block_id and ext.block_id + ext.blocks - 1" criterion without the file id. You should be able to find a likely candidate from there.
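Those two checks might look like this (file and block numbers taken from the question above):

```sql
-- Does file# 828 actually exist?
select file_id, file_name
from   dba_data_files
where  file_id = 828;

-- If not, search every file for a segment covering block 10711
-- and see which candidate looks plausible.
select owner, segment_name, segment_type, file_id
from   dba_extents
where  10711 between block_id and block_id + blocks - 1;
```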
The exception is if the problem was on a temporary segment. Since these get disposed of at the end of the operation, there's no permanent object recorded. In that case the 'name' of the table/index isn't applicable and you need to tackle any performance issue another way (e.g. look at the SQL and its explain plan and work out whether it is correct in using lots of temp space).

Oracle insert performs too long

I'm confused about the time Oracle 10g XE takes to perform an insert. I implemented a bulk insert from an XML file into several tables, with programmatic transaction management. Why does one insert complete in a moment while another takes more than 10 minutes? I couldn't wait any longer and stopped it. I think there's something more complex going on that I have not paid attention to yet.
Update:
I found lock using Monitor.
Waits
Event enq: TX - row lock contention
name|mode 1415053316
usn<<16 | slot 327711
sequence 162
SQL
INSERT INTO ESKD$SERVICESET (ID, TOUR_ID, CURRENCY_ID) VALUES (9, 9, 1)
What does it mean and how should I resolve it?
TX enqueues are well known, and a quick Google search will give you a clear answer.
From that article:
1) Waits for TX in mode 6 occurs when a session is waiting for a row level lock that is already held by another session. This occurs when one user is updating or deleting a row, which another session wishes to update or delete. This type of TX enqueue wait corresponds to the wait event enq: TX - row lock contention.
If you have lots of simultaneous inserts and updates to a table, you want each transaction to be as short as possible. Get in, get out... the longer things sit in between, the longer the delays for OTHER transactions.
PURE GUESS:
I have a feeling that your mention of "programmatic transaction management" means you're trying to use a table like a QUEUE: inserting a start record, updating it frequently to change the status, and then deleting the 'finished' ones. That is always trouble.
This question will be really hard to answer with so little specific information. All that I can tell you is why this could be.
If you are doing an INSERT ... SELECT ... bulk insert, then perhaps your SELECT query is performing poorly. There may be a large number of table joins, inefficient use of inline views, and other factors that may be negatively impacting the performance of your INSERT.
Try executing your SELECT query with an Explain Plan to see how the Optimizer is deriving the plan and to evaluate the COST of the query.
The other thing that you mentioned was a possible lock. This could be the case; however, you will need to analyze it with the OEM tool to tell for sure.
Another thing to consider is that you may not have indexes on your tables, OR the statistics on these tables may be out of date. Out-of-date statistics can GREATLY impact the performance of queries on large tables.
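Both checks can be sketched in a couple of statements; the schema name below is a placeholder, and the table name is taken from the INSERT in the question:

```sql
-- When were statistics last gathered on the table?
select table_name, num_rows, last_analyzed
from   dba_tables
where  table_name = 'ESKD$SERVICESET';

-- Refresh them (cascade => TRUE also gathers index statistics)
EXEC DBMS_STATS.GATHER_TABLE_STATS( -
       ownname => 'MYSCHEMA', -        -- placeholder schema
       tabname => 'ESKD$SERVICESET', -
       cascade => TRUE);
```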
see sites.google.com/site/embtdbo/wait-event-documentation/oracle-enqueues
The locking wait indicates a conflict that could easily be the cause of your performance issues. On the surface it looks likely that the problem is inserting a duplicate key value while the first insert of that key value had not yet committed. The lock you see, "enq: TX - row lock contention", happens because one session is trying to modify uncommitted data from another session. There are 4 common reasons for this particular lock wait event:
update/delete of the same row
inserting the same unique key
modifying the same bitmap index chunk
deleting/updating a parent value to a foreign key
We can eliminate the first and last cases, as you are doing an insert.
You should be able to identify the 2nd if you have no bitmap indexes involved. If you have bitmap indexes involved, and you also have unique keys involved, then you could investigate easily if you had Active Session History (ASH) data; unfortunately, Oracle XE doesn't have it. On the other hand, you can collect it yourself with S-ASH, see: http://ashmasters.com/ash-simulation/ . With ASH or S-ASH you can run a query like:
col event for a22
col block_type for a18
col objn for a18
col otype for a10
col fn for 99
col sid for 9999
col bsid for 9999
col lm for 99
col p3 for 99999
col blockn for 99999
select
to_char(sample_time,'HH:MI') st,
substr(event,0,20) event,
ash.session_id sid,
mod(ash.p1,16) lm,
ash.p2,
ash.p3,
nvl(o.object_name,ash.current_obj#) objn,
substr(o.object_type,0,10) otype,
CURRENT_FILE# fn,
CURRENT_BLOCK# blockn,
ash.SQL_ID,
BLOCKING_SESSION bsid
--,ash.xid
from v$active_session_history ash,
all_objects o
where event like 'enq: TX %'
and o.object_id (+)= ash.CURRENT_OBJ#
Order by sample_time
/
Which would output something like:
ST EVENT SID LM P2 P3 OBJ OTYPE FN BLOCKN SQL_ID BSID
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
showing the object name ("OBJN") and the object type ("OTYPE") involved in the contention, and that the type is an INDEX. From there you could look up the type of the index to verify that it is a bitmap.
IF the problem is a bitmap index, then you should probably re-evaluate using bitmap indexes or revisit the way that data is loaded and/or modify to reduce conflicts.
If the problem isn't BITMAP indexes, then it's trying to insert a duplicate key. Some other process had inserted the same key value and not yet committed. Then your process tries to insert the same key value and has to wait for the first session to commit or rollback.
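To distinguish the two cases, a quick look at the indexes on the target table can help (the table name comes from the INSERT in the question):

```sql
-- List the indexes on the insert target and their types.
-- INDEX_TYPE = 'BITMAP' points at the bitmap-contention case;
-- UNIQUENESS = 'UNIQUE' indexes are candidates for the duplicate-key case.
select index_name, index_type, uniqueness
from   all_indexes
where  table_name = 'ESKD$SERVICESET';
```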
For more information see this link: lock waits
It means your sequence cache is too small. Increase it.
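For completeness, a minimal sketch of that change, assuming the ID column is populated from a sequence; the sequence name here is purely hypothetical, so substitute your own:

```sql
-- Larger cache = fewer trips to the data dictionary and less
-- contention between concurrent inserting sessions.
ALTER SEQUENCE ESKD_SERVICESET_SEQ CACHE 1000;
```

Note that a larger cache can leave bigger gaps in the ID values after an instance restart; that is normal and harmless for surrogate keys.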

Power failure and Oracle data recovery

Database is OracleXE and here is the problem:
data gets entered in tables
UPS does not survive power shock
Oracle server reboots after power failure
everything seems normal
after some time we realize that some data is missing from a few tables (this is OK, because all the inserts happened in one transaction), and some data seems half-committed
couple of reboots done by employee
strangest thing: the half-committed data recovered to normal!
I guess data loss is possible, but is it possible to lose part of a transaction?
Does Oracle have some sort of recovery after these situations?
Scenario is written on the basis of my app logs and Oracle logs because it's remote system.
[EDIT]
My DBA is at home sick.
listener.log seems ok and I'm not much of a reader of alert_xe.log :)
I guess this is relevant info:
Oracle Data Guard is not available in this edition of Oracle.
Thu Oct 15 10:52:05 2009
alter database mount exclusive
Thu Oct 15 10:52:09 2009
Setting recovery target incarnation to 2
Thu Oct 15 10:52:09 2009
Successful mount of redo thread 1, with mount id 2581406229
Thu Oct 15 10:52:09 2009
Database mounted in Exclusive Mode
Completed: alter database mount exclusive
Thu Oct 15 10:52:09 2009
alter database open
Thu Oct 15 10:52:10 2009
Beginning crash recovery of 1 threads
Thu Oct 15 10:52:10 2009
Started redo scan
Thu Oct 15 10:52:10 2009
Completed redo scan
3923 redo blocks read, 520 data blocks need recovery
Thu Oct 15 10:52:10 2009
Started redo application at
Thread 1: logseq 649, block 88330
Thu Oct 15 10:52:12 2009
Recovery of Online Redo Log: Thread 1 Group 2 Seq 649 Reading mem 0
Mem# 0 errs 0: C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_558PBOPG_.LOG
Thu Oct 15 10:52:14 2009
Completed redo application
Thu Oct 15 10:52:14 2009
Completed crash recovery at
Thread 1: logseq 649, block 92253, scn 7229931
520 data blocks read, 498 data blocks written, 3923 redo blocks read
Thu Oct 15 10:52:15 2009
Thread 1 advanced to log sequence 650
Thread 1 opened at log sequence 650
[EDIT:]
"Write Caching" was left enabled by mistake.
That explains the data loss.
Sounds very odd to me. Data either is or is not committed. I suspect skulduggery by one of your colleagues.
From your alert log, it looks like a normal automatic instance recovery. The last two lines indicate to me that the database is open and writing redo logs. There's no way I'd believe that a partial transaction existed. It's either committed or not - no in-between state exists.