The database is Oracle XE, and here is the problem:
data gets entered into tables
the UPS does not survive the power surge
the Oracle server reboots after the power failure
everything seems normal
after some time we realize that some data is missing from a few tables (this is OK, because all the inserts happened in one transaction), and some data seems half-committed
a couple of reboots are done by an employee
strangest thing: the half-committed data has recovered to normal!
I guess data loss is possible, but is it possible to lose part of a transaction?
Does Oracle have some sort of recovery after these situations?
The scenario is reconstructed from my app logs and the Oracle logs, because it is a remote system.
[EDIT]
My DBA is at home sick.
listener.log seems OK, and I'm not much of a reader of alert_xe.log :)
I guess this is relevant info:
Oracle Data Guard is not available in this edition of Oracle.
Thu Oct 15 10:52:05 2009
alter database mount exclusive
Thu Oct 15 10:52:09 2009
Setting recovery target incarnation to 2
Thu Oct 15 10:52:09 2009
Successful mount of redo thread 1, with mount id 2581406229
Thu Oct 15 10:52:09 2009
Database mounted in Exclusive Mode
Completed: alter database mount exclusive
Thu Oct 15 10:52:09 2009
alter database open
Thu Oct 15 10:52:10 2009
Beginning crash recovery of 1 threads
Thu Oct 15 10:52:10 2009
Started redo scan
Thu Oct 15 10:52:10 2009
Completed redo scan
3923 redo blocks read, 520 data blocks need recovery
Thu Oct 15 10:52:10 2009
Started redo application at
Thread 1: logseq 649, block 88330
Thu Oct 15 10:52:12 2009
Recovery of Online Redo Log: Thread 1 Group 2 Seq 649 Reading mem 0
Mem# 0 errs 0: C:\ORACLEXE\APP\ORACLE\FLASH_RECOVERY_AREA\XE\ONLINELOG\O1_MF_2_558PBOPG_.LOG
Thu Oct 15 10:52:14 2009
Completed redo application
Thu Oct 15 10:52:14 2009
Completed crash recovery at
Thread 1: logseq 649, block 92253, scn 7229931
520 data blocks read, 498 data blocks written, 3923 redo blocks read
Thu Oct 15 10:52:15 2009
Thread 1 advanced to log sequence 650
Thread 1 opened at log sequence 650
[EDIT]
"Write Caching" was left enabled by mistake.
That explains the data loss.
Sounds very odd to me. Data either is or is not committed. I suspect skulduggery by one of your colleagues.
From your alert log, it looks like a normal automatic instance recovery. The last two lines indicate to me that the database is open and writing redo logs. There's no way I'd believe that a partial transaction existed. It's either committed or not; no in-between state exists.
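If you want to confirm that yourself, a quick sanity check from SQL*Plus (a sketch, assuming SYSDBA access):

select status from v$instance;               -- should report OPEN
select group#, sequence#, status from v$log; -- one group should be CURRENT, matching the "opened at log sequence 650" line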
Related
I am using SQL*Loader to load a huge file, and for this I need to change the commit point.
I have used rows=1000, but it is not reflected during execution. I have tried the command below:
>sqlldr user/pass@db control=myctl.ctl log=mylog.log rows=1000
The above command is not changing the commit point for me. Is there an Oracle environment file I need to modify to change the commit point?
There is also the bindsize parameter to consider. Bump that up to ensure that you get the rows value you want, e.g.
C:\temp>sqlldr control=emp.ctl userid=/@db18_pdb1
SQL*Loader: Release 18.0.0.0.0 - Production on Mon Oct 7 16:00:54 2019
Version 18.6.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Path used: Conventional
Commit point reached - logical record count 250
Commit point reached - logical record count 500
Commit point reached - logical record count 750
Commit point reached - logical record count 1000
....
C:\temp>sqlldr control=emp.ctl userid=/@db18_pdb1 rows=1000 bindsize=8000000
SQL*Loader: Release 18.0.0.0.0 - Production on Mon Oct 7 16:01:19 2019
Version 18.6.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Path used: Conventional
specified value for readsize(1048576) less than bindsize(8000000)
Commit point reached - logical record count 1000
Commit point reached - logical record count 2000
Commit point reached - logical record count 3000
Commit point reached - logical record count 4000
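The readsize warning in the output above is harmless, but if you want the read buffer consistent with the bind buffer, readsize can be raised to match bindsize (same control file and connect string as above):

C:\temp>sqlldr control=emp.ctl userid=/@db18_pdb1 rows=1000 bindsize=8000000 readsize=8000000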
I need some code help in Oracle.
I have a table with the following structure. It is an aggregate table grouped by Account_ID and month (the month rolled back to the first day of the month), with the number of transactions performed in that month.
Account_ID  Date       Trans_cnt
--------------------------------
A00001      01Jan2018  12
A00002      01Jan2018  14
A00002      01Feb2018  01
A00003      01Feb2018  02
A00001      01Mar2018  12
I need to find accounts which have had 6 continuous months of transactions, and likewise 3 continuous months.
For example, if account A00001 is analyzed over 6 months, then within a given year's time, say from Jan 2018 to Dec 2018, the account should have continuous transactions from Jan to Jun, or Feb to Jul, or Mar to Aug, and so on.
Let me know how to come up with SQL for this.
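One common way to attack this is the gaps-and-islands (Tabibitosan) technique: within a run of consecutive months, an integer month number minus a per-account row number stays constant. A sketch, assuming the table is named ACCT_MONTHLY and the date column is MONTH_START (hypothetical names, since DATE itself is a reserved word):

with runs as (
  select account_id,
         month_start,
         -- month number minus row position: constant within a consecutive run
         months_between(month_start, date '2000-01-01')
           - row_number() over (partition by account_id order by month_start) as grp
  from acct_monthly
)
select account_id,
       min(month_start) as run_start,
       max(month_start) as run_end,
       count(*)         as run_months
from runs
group by account_id, grp
having count(*) >= 6;  -- change to >= 3 for the three-month variant

To restrict the analysis to a window such as Jan 2018 to Dec 2018, filter on month_start inside the RUNS subquery, before the rows are numbered.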
When I try to start the Oracle DB, it says:
ERROR at line 1:
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/u01/oradata/oracle/system01.dbf'
When I try to recover using the redo logs, I get:
SQL> recover database using backup controlfile;
ORA-00279: change 4925223599 generated at 02/05/2018 10:24:32 needed for thread 1
ORA-00289: suggestion :
/mnt/backup/oracle/ORACLE/archivelog/2018_02_05/o1_mf_1_186975_%u_.arc
ORA-00280: change 4925223599 for thread 1 is in sequence #186975
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/oradata/oracle/redo01.log
ORA-00310: archived log contains sequence 186973; sequence 186975 required
ORA-00334: archived log: '/u01/oradata/oracle/redo01.log'
So, in the redo log I only have sequence 186973. How can I revert the whole Oracle world to sequence 186973 and forget about the next two sequences? I need to bring the DB up anyhow, and I can afford to lose some chunk of the last data.
The problem was solved by running
recover automatic database;
Yeah! That simple. Thank you all.
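For anyone landing here later, a minimal sketch of the whole sequence (assuming SYSDBA access on the server):

sqlplus / as sysdba
SQL> startup mount
SQL> recover automatic database;
SQL> alter database open;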
So I have Oracle 12c installed on CentOS. The machine has 96 GB of RAM and SSD disks, which are pretty fast, and an application also runs on the same server, doing a lot of conversion and processing of the data.
This was a pretty much brand-new system that I had installed the database and application on, and after a couple of hours I saw it give the error
"got minus one from a read call"
After digging around I saw it was the session and process limits causing it, so I increased those, but even after 24 hours the processes and sessions keep increasing and never decrease. Below are snapshots of the queries I ran over a couple of hours.
select program,username,status,schemaname,count(*) from v$session
group by username,status,osuser,schemaname,program
order by count(*) desc,status
fetch first 3 rows only
2:00 PM 11 December
JDBC Thin Client oracleuser INACTIVE schemeName 233
SQL Developer SYS INACTIVE SYS 4
JDBC Thin Client oracleuser ACTIVE schemeName 3
select * from v$resource_limit where resource_name in ('processes','sessions');
6:55 PM 11 December
RESOURCE_NAME  CURRENT_UTILIZATION  MAX_UTILIZATION  INITIAL_ALLOCATION  LIMIT_VALUE  CON_ID
processes      490                  492              1200                1200         0
sessions       500                  504              1920                1920         0
After I observed that it kept increasing, I changed the idle_time in the user's profile to 30 minutes (a sketch of the change follows the settings below):
IDLE_TIME 30
CONNECT_TIME UNLIMITED
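For reference, a sketch of the profile change (APP_PROFILE is a hypothetical profile name; profile limits are only enforced while the RESOURCE_LIMIT parameter is TRUE, which is the default in 12c):

alter profile app_profile limit idle_time 30;  -- minutes
alter system set resource_limit = true;        -- enforce profile resource limits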
After that I observed them for a couple more hours, ran the same queries as above, and here are the results, respectively:
8:25 PM 11 December
JDBC Thin Client oracleUser KILLED schemename 683
JDBC Thin Client oracleuser INACTIVE schemeName 241
SQL Developer SYS INACTIVE SYS 4
8:25 AM 12 December
processes 995 998 1200 1200 0
sessions 1003 1007 1920 1920 0
Now I see killed sessions and fewer inactive sessions, but the number of sessions keeps increasing; it does not drop. I was expecting that changing idle_time in the profile would kill inactive sessions and release the sessions and processes.
My concern is that when the sessions and processes hit the limits I set, I will again get the same minus-one error, and it will kill the application too:
"got minus one from a read call"
Any help will be highly appreciated.
Regards
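One thing worth knowing: a session that shows up as KILLED (or SNIPED) typically keeps its OS server process until the client next touches it or PMON cleans it up, so the process count will not drop on its own. A sketch for releasing them explicitly (the sid and serial# values below are placeholders):

select sid, serial# from v$session where status = 'KILLED';
alter system disconnect session '123,45' immediate;  -- one per session; terminates the server process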
We have an Oracle instance with duplicate schemas. The same procedure runs on one schema in 7 seconds but takes more than 7 hours to complete on the copied schema.
We have rebuilt the indexes and tablespaces (with an in-house tool); it speeds things up a little, but it still takes hours to complete.
The dbf (data & index) files are the same for both schemas.
After an hour and 30 minutes, the alert_bdora10.log file contains these new lines:
Thread 1 advanced to log sequence 35514 (LGWR switch)
Current log# 3 seq# 35514 mem# 0: D:\ORACLE\ORADATA\BDORA10\REDO03.LOG
Fri Aug 25 16:08:57 2017
Time drift detected. Please check VKTM trace file for more details.
Fri Aug 25 17:04:44 2017
Thread 1 cannot allocate new log, sequence 35515
Private strand flush not complete
Current log# 3 seq# 35514 mem# 0: D:\ORACLE\ORADATA\BDORA10\REDO03.LOG
Thread 1 advanced to log sequence 35515 (LGWR switch)
Current log# 1 seq# 35515 mem# 0: D:\ORACLE\ORADATA\BDORA10\REDO01.LOG
I am a little bit lost and don't know where to investigate first.
Sorry, I am a noob at Oracle SQL, and any help will be welcome.
Thanks
Jluc
After removing lines one after another, I finally discovered a filter that is time-consuming:
select brm.loc_id , cli.cli_nompatr, cli.cli_nom, cli.cli_prenom, cli.cli_datenaiss
from V_BORDMIXTES brm
INNER JOIN BORDSOMENCSLIGNES bel ON bel.bse_id = brm.bse_id
INNER JOIN REVERSIONS rev ON rev.rev_id = bel.rev_id
INNER JOIN CLISANTES cls ON cls.cls_id = rev.cls_id
INNER JOIN CLIENTS cli ON cli.cli_id = cls.cli_id
where brm.brm_id = 39328
and cli.cli_id = 44517 -- if I add this filter clause, the query takes hours; without it, 55 ms
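Comparing the execution plans with and without that predicate is the usual next step; stale optimizer statistics on the copied schema would be a prime suspect. A sketch using the standard DBMS_XPLAN package:

explain plan for
select brm.loc_id, cli.cli_nompatr, cli.cli_nom, cli.cli_prenom, cli.cli_datenaiss
from V_BORDMIXTES brm
INNER JOIN BORDSOMENCSLIGNES bel ON bel.bse_id = brm.bse_id
INNER JOIN REVERSIONS rev ON rev.rev_id = bel.rev_id
INNER JOIN CLISANTES cls ON cls.cls_id = rev.cls_id
INNER JOIN CLIENTS cli ON cli.cli_id = cls.cli_id
where brm.brm_id = 39328
and cli.cli_id = 44517;

select * from table(dbms_xplan.display);  -- run once with and once without the cli_id filter, then compare

-- if statistics turn out to be the culprit (COPIED_SCHEMA is a hypothetical name):
exec dbms_stats.gather_schema_stats(ownname => 'COPIED_SCHEMA')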