database does not open and is not mounted [migrated] - oracle

After an unexpected shutdown, the database does not open:
```
ORA-01507: database not mounted
SQL> alter database mount;
ORA-00214: control file 'E:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\CLUSTER\CONTROLFILE\O1_MF_HB1484JB_.CTL' version 359456 inconsistent with file 'E:\APP\ADMINISTRATOR\ORADATA\CLUSTER\CONTROLFILE\O1_MF_HB114848J_.CTL'
```
I copied both control files to an external hard drive, overwrote the one with the lower version number with the higher-version copy, and then executed:
```
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down
SQL> startup mount;
Total System Global Area 2221395968 bytes
Fixed Size                  2177656 bytes
Variable Size            1677723016 bytes
Database Buffers          536870912 bytes
Redo Buffers                4624384 bytes
ORA-00205: error in identifying control file, check alert log for more info
```

One of your control files is either corrupted or contains an older version of the data than the other. Did you run out of storage?
- Make a copy of both control files.
- Overwrite one control file with the other (the smaller by the bigger, or the older by the newer).
- Try to start the database, as sketched below.
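A minimal sketch of that procedure from SQL*Plus, reusing the two control file paths from the ORA-00214 message above (the F:\backup destination is illustrative, and which copy is newer must be read off the version numbers in the error; the copy direction shown is an assumption):
```
SQL> shutdown abort
-- back up both control files first (HOST runs an OS command from SQL*Plus)
SQL> host copy E:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\CLUSTER\CONTROLFILE\O1_MF_HB1484JB_.CTL F:\backup\
SQL> host copy E:\APP\ADMINISTRATOR\ORADATA\CLUSTER\CONTROLFILE\O1_MF_HB114848J_.CTL F:\backup\
-- overwrite the older copy with the newer one
SQL> host copy E:\APP\ADMINISTRATOR\ORADATA\CLUSTER\CONTROLFILE\O1_MF_HB114848J_.CTL E:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\CLUSTER\CONTROLFILE\O1_MF_HB1484JB_.CTL
SQL> startup
```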

Related

oracle control file and undo datafile were deleted. Is there a way to get them back?

I recreated the control file; this is the code:
```
CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS NOARCHIVELOG
    MAXLOGFILES 5
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 226
LOGFILE
    GROUP 1 '/home/oracle/app/oradata/orcl/redo01.log' SIZE 50M,
    GROUP 2 '/home/oracle/app/oradata/orcl/redo02.log' SIZE 50M,
    GROUP 3 '/home/oracle/app/oradata/orcl/redo03.log' SIZE 50M
DATAFILE
    '/home/oracle/app/oradata/orcl/osc_zb.dbf',
    ......
CHARACTER SET ZHS16GBK;
```
Then, opening the database gave the following result:
```
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/home/oracle/app/oradata/orcl/system01.dbf'
```
Running `recover datafile 1` gives:
```
ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.
```
Then I used hidden parameters in the pfile to start the database:
```
undo_management='manual'
undo_tablespace='UNDOTBS01'
_allow_resetlogs_corruption=true
```
That also doesn't work:
```
SQL> startup pfile=/home/oracle/initoracle.ora
ORACLE instance started.
Total System Global Area 1586708480 bytes
Fixed Size                  2253624 bytes
Variable Size             973081800 bytes
Database Buffers          603979776 bytes
Redo Buffers                7393280 bytes
Database mounted.
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/home/oracle/app/oradata/orcl/system01.dbf'
```
It just cycles like this:
```
SQL> recover datafile 1
ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.
```
I have no idea how to restore this database. Gurus, please help!
Can you start it to MOUNT status? Maybe you can try the following method.
First, find the 'CURRENT' redo group:
```
select group#, sequence#, status, first_time, next_change# from v$log;
```
And find the redo file locations:
```
select * from v$logfile;
```
Then recover the database through that redo log:
```
SQL> recover database until cancel using backup controlfile;
ORA-00279: change 4900911271334 generated at 03/06/2018 05:46:29 needed for
thread 1
ORA-00289: suggestion :
/home/wonders/app/wonders/flash_recovery_area/ORCL/archivelog/2018_03_12/o1_mf_1
_4252_%u_.arc
ORA-00280: change 4900911271334 for thread 1 is in sequence #4252
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/home/wonders/app/wonders/oradata/orcl/redo01.log
Log applied.
Media recovery complete.
```
Finally, open the database with RESETLOGS.
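For completeness, that final step is the standard command:
```
SQL> alter database open resetlogs;
```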

oracle dbf file is normal, but cannot mount

It started when the Oracle processes were not closed as I shut down the system; after I executed startup, the errors ORA-01157 and ORA-01110 appeared.
I am quite sure the dbf file exists, and when I check the file with dbv everything is normal.
As a last resort I tried to offline drop those dbf files, but I cannot recover them.
Please give me some help, thank you very much!
Mount your database:
```
SQL> startup mount;
```
Provided your database is in NOARCHIVELOG mode, issue the following queries:
```
SQL> select min(first_change#) min_first_change
     from v$log l inner join v$logfile f on (l.group# = f.group#);
SQL> select change# ch_number from v$recover_file;
```
If the ch_number is greater than the min_first_change of your logs, the datafile can be recovered.
If the ch_number is less than the min_first_change of your logs, the file cannot be recovered. In that case, restore the most recent full backup (and thus lose all changes to the database since then) or recreate the tablespace.
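A single query that applies this check directly (a sketch that just combines the two views above):
```
SQL> select f.file#, f.change# ch_number, l.min_first_change,
            case when f.change# >= l.min_first_change
                 then 'recoverable from the online logs'
                 else 'restore from backup or recreate the tablespace'
            end verdict
     from v$recover_file f,
          (select min(first_change#) min_first_change from v$log) l;
```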
Recover the datafile (if the unrecoverable case above does not apply):
```
SQL> recover datafile '/opt/oracle/resource/undotbs02.dbf';
```
Confirm each of the logs that you are prompted for until you receive the message Media Recovery Complete. If you are prompted for a non-existent archived log, Oracle probably needs one or more of the online logs to proceed with the recovery. Compare the sequence number referenced in the ORA-00280 message with the sequence numbers of your online logs, then enter the full path name of one of the members of the redo group whose sequence number matches the one you are being asked for. Keep entering online logs as requested until you receive the message Media Recovery Complete.
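These standard queries give the sequence numbers and member paths needed for that comparison:
```
SQL> select group#, sequence#, status from v$log;
SQL> select group#, member from v$logfile order by group#;
```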
If the database is in MOUNT state, open it:
```
SQL> alter database open;
```
If the DBF file fails to mount, check the source of the DBF file: whether it was imported from another database or converted with some other tool. Generally, if the DBF file does not have the expected format it cannot be mounted; troubleshoot the Oracle DBF file by following the steps at
https://docs.cloud.oracle.com/iaas/Content/File/Troubleshooting/exportpaths.htm
If the database is still causing problems, there could be issues with other components; before mounting, fix them with a professional database recovery tool such as https://www.filerepairtools.com/oracle-database-recovery.html

Lost Redologs and Archivelogs

I am using Oracle XE 11g R2, and due to a mistake all the archivelogs were deleted by running the delete archivelog all; command in RMAN.
One set of redo logs was also deleted, i.e. redo_g02a.log, redo_g02b.log and redo_g02c.log.
The other redo logs are available, i.e. redo_g01a.log, redo_g01b.log, redo_g01c.log and redo_g03a.log, redo_g03b.log and redo_g03c.log.
Is there a way I can start up the database now? It is a production database and I am really worried.
I tried copying redo_g01a.log to redo_g02a.log ... but the alert log says:
```
ORA-00312: online log 2 thread 1: '/u01/app/oracle/fast_recovery_area/XE/onlinelog/redo_g02a.log'
USER (ospid: 30663): terminating the instance due to error 341
```
Any help will be much much appreciated.
First, make a copy of your datafiles, redo logs, and control file; that way you can get back to this point.
If the database was shut down cleanly, you can try clearing the group and it will be recreated for you:
```
SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 1068937216 bytes
Fixed Size                  2260048 bytes
Variable Size             675283888 bytes
Database Buffers          385875968 bytes
Redo Buffers                5517312 bytes
Database mounted.
SQL> alter database clear logfile group 2;
Database altered.
SQL> alter database open;
Database altered.
SQL>
```
If not, you will need to recover and open with the RESETLOGS option, as sketched below. Unfortunately, because you lost an entire log group, you may also have lost data.
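A rough sketch of that fallback, assuming no usable archived logs remain (any changes not yet written to the datafiles are lost):
```
SQL> startup mount
SQL> recover database until cancel;
-- answer CANCEL at the prompt once no further logs can be applied
SQL> alter database open resetlogs;
```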

oracle bigfile import: inappropriate ioctl for device [closed]

Oracle 11g on Linux with 200GB of disk space; about 25GB were already occupied by some files.
I received a ".dmp" file of 50GB and, in order to perform the import, I created a bigfile tablespace with a 100GB bigfile datafile (because I thought it would be enough).
The import command:
```
imp user/pwd full=Y log=/home/oracle/log.txt file=/usr/oracle/public/dump.dmp
```
But after importing a lot of tables and inserting a lot of data into them, I received the following error message:
```
IMP-00017: following statement failed with ORACLE error 603:
 "CREATE UNIQUE INDEX "PR_TABLE1" ON "TABLE1" ("TABLE1_ID" , "ROW_NUM" , "COL_NUM" ) "
 " PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 4294967294 FREELISTS 1 "
 "FREELIST GROUPS 1 BUFFER_POOL DEFAULT) NOLOGGING"
IMP-00003: ORACLE error 603 encountered
ORA-00603: ORACLE server session terminated by fatal error
ORA-01114: IO error writing block to file 201 (block # 1719438)
ORA-27072: File I/O error
Linux-x86_64 Error: 25: Inappropriate ioctl for device
Additional information: 4
Additional information: 1719438
Additional information: 114688
ORA-01114: IO error writing block to file 201 (block # 1719438)
ORA-27072: File I/O error
Linux-x86_64 Error: 25: Inappropriate ioctl for device
Additional information: 4
Additional information: 1719438
Additional info
IMP-00017: following statement failed with ORACLE error 3114:
 "BEGIN SYS.DBMS_EXPORT_EXTENSION.SET_IMP_SKIP_INDEXES_OFF; END;"
IMP-00003: ORACLE error 3114 encountered
ORA-03114: not connected to ORACLE
IMP-00000: Import terminated unsuccessfully
```
About 15GB were consumed by the "temp" datafile.
It seems it ran out of space, according to what I've read online about datafiles trying to extend themselves. Linux shows around 8GB free, but it might have that space reserved for something else.
The question is: is there a way to know how much disk space I should provide to perform the import successfully?
I ask because it takes a long time to upload the ".dmp" file to the server, and much longer to create the ".dbf" files and run the import.
The client doesn't know this info. I'm not an Oracle DBA, but I once successfully connected a .NET application to an Oracle database, so now I'm supposed to be the expert here.
I'm assuming your datafiles and tempfiles are on a filesystem?
Oracle creates temp files as sparse files. If you have a filesystem with, say, 100MB of free space, you could still create a 2GB temp tablespace in it, so it's easy to over-allocate space. Then, as soon as your temp usage exceeded 100MB, you'd get the error above, "inappropriate ioctl for device".
So please sanity-check the sizes of your data and temp files against the amount of free space you actually have. Note that once you start using the space in the temp file, the size of the file (as reported by the OS) probably won't change, but the free space as reported by the OS will decrease until it hits zero and the filesystem is full.
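A quick way to compare allocated versus actually used temp space from inside the database (standard dictionary views):
```
SQL> select file_name, bytes/1024/1024 alloc_mb from dba_temp_files;
SQL> select tablespace_name,
            sum(bytes_used)/1024/1024 used_mb,
            sum(bytes_free)/1024/1024 free_mb
     from v$temp_space_header
     group by tablespace_name;
```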
Note also that index creation on a large table can consume significant temp space, particularly if you're using WORKAREA_SIZE_POLICY=AUTO and PGA_AGGREGATE_TARGET. (If you're new to Oracle and not a DBA, that may not make much sense; don't worry about it.)
What you may want to do is run the import with INDEXES=N to import just the data, then create the indexes afterwards from a script. That way you can set WORKAREA_SIZE_POLICY=MANUAL and use very large sort_area_size and sort_area_retained_size values, which may (or may not, depending on the size of the indexes being created) reduce the temp tablespace usage. A sketch of that flow follows.
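A sketch of that two-pass approach, reusing the dump file path from the question (classic imp's INDEXFILE parameter writes the index DDL to a script without importing anything):
```
imp user/pwd full=Y indexes=N file=/usr/oracle/public/dump.dmp log=/home/oracle/imp_data.txt
imp user/pwd full=Y indexfile=create_indexes.sql file=/usr/oracle/public/dump.dmp
```
Then, in the SQL*Plus session that runs create_indexes.sql, switch to manual workareas first (the 256MB figure is illustrative):
```
SQL> alter session set workarea_size_policy = manual;
SQL> alter session set sort_area_size = 268435456;
SQL> alter session set sort_area_retained_size = 268435456;
SQL> @create_indexes.sql
```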
So, in summary: first check the sizes of your temp files and the space left in your filesystem. If initially sparse files have now overcommitted you on space, you need to resolve that. Your best bet may be to drop the temp tablespace, check how much free space the filesystem then has, and create a new temp tablespace appropriate to your situation, along the lines sketched below.
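A sketch of swapping in a fresh temp tablespace (the path and the 2G size are illustrative; adjust to your system):
```
SQL> create temporary tablespace temp2
     tempfile '/usr/oracle/oradata/temp02.dbf' size 2G autoextend off;
SQL> alter database default temporary tablespace temp2;
SQL> drop tablespace temp including contents and datafiles;
```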
Second, see if you can tune the temp space consumption during index builds, using a manual workarea size policy and explicitly setting sort_area_size/sort_area_retained_size.
Hope that helps.

Resolving ORA-4031 "unable to allocate x bytes of shared memory"

I need some pointers on how to diagnose and fix this problem. I don't know if this is a simple server setup problem or an application design problem (or both).
Once or twice every few months this Oracle XE database reports ORA-4031 errors. It doesn't consistently point to any particular part of the SGA. A recent example is:
```
ORA-04031: unable to allocate 8208 bytes of shared memory ("large pool","unknown object","sort subheap","sort key")
```
When this error comes up, if the user keeps refreshing and clicking on different links, they'll generally get more of these errors at different times, and soon afterwards "404 not found" page errors.
Restarting the database usually resolves the problem for a while; then a month or so later it comes up again, but rarely at the same location in the program (i.e. it doesn't seem linked to any particular portion of code). The example error above was raised from an Apex page that was sorting 5000+ rows from a table.
I've tried increasing sga_max_size from 140M to 256M and hope this will help things. Of course, I won't know if this has helped since I had to restart the database to change the setting :)
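For reference, sga_max_size is a static parameter, so changing it goes through the spfile and a restart:
```
SQL> alter system set sga_max_size = 256M scope=spfile;
SQL> shutdown immediate
SQL> startup
```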
I'm running Oracle XE 10.2.0.1.0 on an Oracle Enterprise Linux 5 box with 512MB of RAM. The server only runs the database, Oracle Apex (v3.1.2) and the Apache web server. I installed it with pretty much all default parameters and it has been running quite well for a year or so. Most issues I've been able to resolve myself by tuning the application code; it's not intensively used and isn't a business-critical system.
These are some current settings I think may be relevant:
```
pga_aggregate_target         41,943,040
sga_max_size                268,435,456
sga_target                  146,800,640
shared_pool_reserved_size     5,452,595
shared_pool_size            104,857,600
```
If it's any help, here are the current SGA sizes:
```
Total System Global Area  268435456 bytes
Fixed Size                  1258392 bytes
Variable Size             251661416 bytes
Database Buffers           12582912 bytes
Redo Buffers                2932736 bytes
```
Even though you are using ASMM, you can set a minimum size for the large pool (MMAN will not shrink it below that value).
You can also try pinning some objects and increasing SGA_TARGET.
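A sketch of both suggestions; the 16M floor and the package to pin are illustrative, and DBMS_SHARED_POOL may first need to be installed by running ?/rdbms/admin/dbmspool.sql as SYS:
```
-- under ASMM, an explicitly set pool size acts as a floor MMAN will not shrink below
SQL> alter system set large_pool_size = 16M scope=both;
-- raise the overall ASMM target (must stay <= sga_max_size)
SQL> alter system set sga_target = 256M scope=both;
-- pin a heavily used package so it is not aged out of the shared pool
SQL> exec dbms_shared_pool.keep('SYS.STANDARD', 'P');
```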
Don't forget about fragmentation.
If you have a lot of traffic, your pools can become fragmented, and even if you have several MB free in total, there may be no contiguous free chunk larger than 4KB.
Check the size of the largest free chunks with a query like this (X$KSMSP is a fixed table, typically only visible when connected as SYS):
```
select '0 (<140)' BUCKET, KSMCHCLS, KSMCHIDX,
       10*trunc(KSMCHSIZ/10) "From",
       count(*) "Count",
       max(KSMCHSIZ) "Biggest",
       trunc(avg(KSMCHSIZ)) "AvgSize",
       trunc(sum(KSMCHSIZ)) "Total"
from x$ksmsp
where KSMCHSIZ < 140
  and KSMCHCLS = 'free'
group by KSMCHCLS, KSMCHIDX, 10*trunc(KSMCHSIZ/10)
UNION ALL
select '1 (140-267)' BUCKET, KSMCHCLS, KSMCHIDX,
       20*trunc(KSMCHSIZ/20),
       count(*),
       max(KSMCHSIZ),
       trunc(avg(KSMCHSIZ)) "AvgSize",
       trunc(sum(KSMCHSIZ)) "Total"
from x$ksmsp
where KSMCHSIZ between 140 and 267
  and KSMCHCLS = 'free'
group by KSMCHCLS, KSMCHIDX, 20*trunc(KSMCHSIZ/20)
UNION ALL
select '2 (268-523)' BUCKET, KSMCHCLS, KSMCHIDX,
       50*trunc(KSMCHSIZ/50),
       count(*),
       max(KSMCHSIZ),
       trunc(avg(KSMCHSIZ)) "AvgSize",
       trunc(sum(KSMCHSIZ)) "Total"
from x$ksmsp
where KSMCHSIZ between 268 and 523
  and KSMCHCLS = 'free'
group by KSMCHCLS, KSMCHIDX, 50*trunc(KSMCHSIZ/50)
UNION ALL
select '3-5 (524-4107)' BUCKET, KSMCHCLS, KSMCHIDX,
       500*trunc(KSMCHSIZ/500),
       count(*),
       max(KSMCHSIZ),
       trunc(avg(KSMCHSIZ)) "AvgSize",
       trunc(sum(KSMCHSIZ)) "Total"
from x$ksmsp
where KSMCHSIZ between 524 and 4107
  and KSMCHCLS = 'free'
group by KSMCHCLS, KSMCHIDX, 500*trunc(KSMCHSIZ/500)
UNION ALL
select '6+ (4108+)' BUCKET, KSMCHCLS, KSMCHIDX,
       1000*trunc(KSMCHSIZ/1000),
       count(*),
       max(KSMCHSIZ),
       trunc(avg(KSMCHSIZ)) "AvgSize",
       trunc(sum(KSMCHSIZ)) "Total"
from x$ksmsp
where KSMCHSIZ >= 4108
  and KSMCHCLS = 'free'
group by KSMCHCLS, KSMCHIDX, 1000*trunc(KSMCHSIZ/1000);
```
All of the current answers address the symptom (shared pool exhaustion) rather than the problem, which is likely a failure to use bind variables in your SQL/JDBC queries, even where it does not seem necessary to do so. Passing queries without bind variables forces Oracle to "hard parse" every query, determining its execution plan each time.
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:528893984337
Some snippets from the above link:
"Java supports bind variables, your developers must start using prepared statements and bind inputs into it. If you want your system to ultimately scale beyond say about 3 or 4 users -- you will do this right now (fix the code). It is not something to think about, it is something you MUST do. A side effect of this - your shared pool problems will pretty much disappear. That is the root cause. "
"The way the Oracle
shared pool (a very important shared memory data structure)
operates is predicated on developers using bind variables."
" Bind variables are SO MASSIVELY important -- I cannot in any way shape or form OVERSTATE their importance. "
The following are not needed, as they do not fix the error:
- ps -ef | grep oracle
- find the smon process and kill its pid
- SQL> startup mount
- SQL> create pfile from spfile;
Restarting the database flushes your pools, which addresses a symptom, not the problem.
Fix your large pool so it cannot shrink below a certain point, or add memory and set a higher maximum memory.
This is an Oracle bug: a memory leak in the shared pool, most likely in a database managing lots of partitions.
Solution: in my opinion no patch exists; check with Oracle Support. You can try using subpools, or enabling/disabling AMM ...
The error:
```
ORA-04031: unable to allocate 4064 bytes of shared memory ("shared pool","select increment$,minvalue,m...","sga heap(3,0)","kglsim heap")
```
Solution: by nepasoft nepal
1.-
```
ps -ef | grep oracle
```
2.- Find the smon process and kill its pid.
3.-
```
SQL> startup mount
ORACLE instance started.
Total System Global Area 4831838208 bytes
Fixed Size                  2027320 bytes
Variable Size            4764729544 bytes
Database Buffers           50331648 bytes
Redo Buffers               14749696 bytes
Database mounted.
```
4.-
```
SQL> alter system set shared_pool_size=100M scope=spfile;
System altered.
```
5.-
```
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
```
6.-
```
SQL> startup
ORACLE instance started.
Total System Global Area 4831838208 bytes
Fixed Size                  2027320 bytes
Variable Size            4764729544 bytes
Database Buffers           50331648 bytes
Redo Buffers               14749696 bytes
Database mounted.
Database opened.
```
7.-
```
SQL> create pfile from spfile;
File created.
```
SOLVED
