Lost Redologs and Archivelogs - oracle

I am using Oracle XE 11g R2 and, due to a mistake, all the archivelogs were deleted by running the delete archivelog all; command in RMAN.
One set of redo logs was also deleted, i.e. redo_g02a.log, redo_g02b.log and redo_g02c.log.
The other redo logs are available, i.e. redo_g01a.log, redo_g01b.log, redo_g01c.log and redo_g03a.log, redo_g03b.log and redo_g03c.log.
Is there a way I can start up the database now? It is a production database and I am really worried.
I tried copying redo_g01a.log to redo_g02a.log ... but the alert log says:
ORA-00312: online log 2 thread 1: '/u01/app/oracle/fast_recovery_area/XE/onlinelog/redo_g02a.log'
USER (ospid: 30663): terminating the instance due to error 341
Any help will be much much appreciated.

First make a copy of your datafiles, redo logs, and control file. That way you can get back to this point.
If the database was shut down clean you can try clearing the group and it will be recreated for you.
SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 1068937216 bytes
Fixed Size 2260048 bytes
Variable Size 675283888 bytes
Database Buffers 385875968 bytes
Redo Buffers 5517312 bytes
Database mounted.
SQL> alter database clear logfile group 2;
Database altered.
SQL> alter database open;
Database altered.
SQL>
If not, you will need to recover and open with the RESETLOGS option. Unfortunately, because you lost an entire log group, you may also have lost data.
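If clearing the group fails, the fallback is incomplete recovery. A minimal sketch of that path, assuming the file copies suggested above are already in place (the exact prompts and log paths will differ on your system):

```sql
-- Incomplete recovery followed by RESETLOGS. Changes that only existed
-- in the lost log group cannot be recovered.
SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE UNTIL CANCEL;
-- Type CANCEL at the prompt once no more logs can be applied.
SQL> ALTER DATABASE OPEN RESETLOGS;
```

Take a full backup immediately after a RESETLOGS open, since it starts a new incarnation of the database.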

database does not open and not mounted
After an unexpected shutdown the database does not open:
ORA-01507: database not mounted
SQL> alter database mount;
ORA-00214: control file 'E:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\CLUSTER\CONTROLFILE\OO1_MF_HB1484JB_.CTL' version 359456 inconsistent with file 'E:\APP\ADMINISTRATOR\ORADATA\CLUSTER\CONTROLFILE\O1_MF_HB114848J_.CTL'
I took a copy of both control files on an external hard drive, replaced the lower-version file with the higher-version one, and then executed:
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down
SQL> startup mount;
Total System Global Area 2221395968 bytes
Fixed Size 2177656 bytes
Variable Size 1677723016 bytes
Database Buffers 536870912 bytes
Redo Buffers 4624384 bytes
ORA-00205: error in identifying control file, check alert log for more info
One of your control files is either corrupted or contains an older version of the data than the other. Did you run out of storage?
Make a copy of both control files.
Overwrite one control file with the other (the older one with the newer one).
Try to start the database.
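A sketch of the sequence. The control file locations here come from the error message above; if unsure, the instance can report them even before the database mounts:

```sql
-- Confirm the configured control file locations before copying anything.
SQL> STARTUP NOMOUNT;
SQL> SHOW PARAMETER control_files;
-- Copy the newer control file over the older one at the OS level, then:
SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN;
```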

oracle control file and undo datafile were deleted. Is there a way to get them back?

Recreate the control file; this is the code:
CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS NOARCHIVELOG
MAXLOGFILES 5
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 226
LOGFILE
GROUP 1 '/home/oracle/app/oradata/orcl/redo01.log' SIZE 50M,
GROUP 2 '/home/oracle/app/oradata/orcl/redo02.log' SIZE 50M,
GROUP 3 '/home/oracle/app/oradata/orcl/redo03.log' SIZE 50M
DATAFILE
'/home/oracle/app/oradata/orcl/osc_zb.dbf',
......
CHARACTER SET ZHS16GBK;
Then open the database; the result is as follows:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/home/oracle/app/oradata/orcl/system01.dbf'
recover datafile 1:
ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.
Then I used hidden parameters to start the database:
undo_management='manual'
undo_tablespace='UNDOTBS01'
_allow_resetlogs_corruption=true
This also doesn't work:
SQL> startup pfile=/home/oracle/initoracle.ora
ORACLE instance started.
Total System Global Area 1586708480 bytes
Fixed Size 2253624 bytes
Variable Size 973081800 bytes
Database Buffers 603979776 bytes
Redo Buffers 7393280 bytes
Database mounted.
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '/home/oracle/app/oradata/orcl/system01.dbf'
And the cycle repeats:
SQL> recover datafile 1
ORA-00283: recovery session canceled due to errors
ORA-16433: The database must be opened in read/write mode.
I have no idea how to restore the database. Gurus, please help me.
Can you get to MOUNTED status? Maybe you can try the following method.
First, find the 'CURRENT' redo group:
select group#, sequence#, status, first_time, next_change# from v$log;
Then find the redo file location:
select * from v$logfile;
Then use that redo log to recover the database:
SQL> recover database until cancel using backup controlfile;
ORA-00279: change 4900911271334 generated at 03/06/2018 05:46:29 needed for
thread 1
ORA-00289: suggestion :
/home/wonders/app/wonders/flash_recovery_area/ORCL/archivelog/2018_03_12/o1_mf_1
_4252_%u_.arc
ORA-00280: change 4900911271334 for thread 1 is in sequence #4252
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/home/wonders/app/wonders/oradata/orcl/redo01.log
Log applied.
Media recovery complete.
Finally, open the database with RESETLOGS.
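The final step, as a sketch:

```sql
-- Open with RESETLOGS after media recovery completes. This creates a new
-- incarnation of the database, so take a full backup immediately afterwards.
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> SELECT open_mode FROM v$database;
```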

oracle dbf file is normal, but cannot mount

The Oracle processes were not closed cleanly when I shut down the system; afterwards, running startup raises errors ORA-01157 and ORA-01110.
I am very sure the dbf file exists, and when I check it with dbv everything is normal.
Finally, I tried offline dropping those dbf files, but I cannot recover them.
Please give me some help, thank you very much!
Mount your database:
SQL> startup mount;
Provided your database is in NOARCHIVELOG mode, issue the following queries:
SQL> select min(first_change#) min_first_change
from v$log l inner join v$logfile f on ( l.group# = f.group# );
SQL> select change# ch_number from v$recover_file;
If the ch_number is greater than the min_first_change of your logs, the datafile can be recovered.
If the ch_number is less than the min_first_change of your logs,
the file cannot be recovered.
In that case, either restore the most recent full backup (and thus lose all changes made to the database since it was taken), or recreate the tablespace.
Recover the datafile (if the unrecoverable case above does not apply):
SQL> recover datafile '/opt/oracle/resource/undotbs02.dbf';
Confirm each of the logs that you are prompted for until you receive the message Media Recovery Complete. If you are prompted for a non-existing
archived log, Oracle probably needs one or more of the online logs to proceed with the recovery. Compare the sequence number referenced in the
ORA-00280 message with the sequence numbers of your online logs. Then enter the full path name of one of the members of the redo group whose sequence
number matches the one you are being asked for. Keep entering online logs as requested until you receive the message Media Recovery Complete.
If the database is at mount point, open it :
SQL> alter database open;
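When the prompt asks for an archived log that never existed, the following sketch helps match the requested sequence number to an online log member whose full path can be pasted at the recovery prompt:

```sql
-- List each online log group's sequence number alongside its member paths.
-- Compare sequence# with the one in the ORA-00280 message.
SQL> SELECT l.group#, l.sequence#, l.status, f.member
     FROM v$log l JOIN v$logfile f ON l.group# = f.group#;
```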
If the DBF file fails to mount, check the source of the DBF file: whether it was imported from another database or converted with another tool. Generally, if the DBF file does not have the expected format it cannot be mounted. Troubleshoot the Oracle DBF file by following these steps:
https://docs.cloud.oracle.com/iaas/Content/File/Troubleshooting/exportpaths.htm
If the database is still causing problems, there could be problems with other components; before mounting, fix them with a professional database recovery tool such as https://www.filerepairtools.com/oracle-database-recovery.html

alter system flush shared_pool oracle

I have two questions.
First, is there a substantial difference between executing the ALTER SYSTEM FLUSH SHARED_POOL command on the server versus on a client? At my company they taught me to execute that command directly on the server, but I think it is just a command that goes over the network and a flush message that gets delivered, so it shouldn't matter much, unlike operations that move a lot of data. I'm talking about a system that takes roughly 5 minutes to flush.
Second, how can I flush one instance from another instance?
ALTER SYSTEM FLUSH SHARED_POOL; can be run from either the client or the server; it doesn't matter.
Many DBAs will run the command from the server, for two reasons. First, many DBAs run all commands from the server, usually because they never learned the importance of an IDE. Second, the command ALTER SYSTEM FLUSH SHARED_POOL; only affects one instance in a clustered database. Connecting directly to the server is usually an easy way of ensuring you connect to each database instance of a cluster.
But you can easily flush the shared pool from all instances without directly connecting to each instance, using the below code. (Thanks to berxblog for this idea.)
--Assumes you have elevated privileges, like DBA role or ALTER SYSTEM privilege.
create or replace function flush_shared_pool return varchar2 authid current_user as
begin
execute immediate 'alter system flush shared_pool';
return 'Done';
end;
/
select *
from table(gv$(cursor(
select instance_number, flush_shared_pool from v$instance
)));
INSTANCE_NUMBER FLUSH_SHARED_POOL
--------------- -----------------
1 Done
3 Done
2 Done
I partially disagree with @sstan: flushing the shared pool should be rare in production, but it may be relatively common in development. Flushing the shared pool and buffer cache can help imitate running queries "cold".
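A sketch of imitating a "cold" run in a development environment (requires the ALTER SYSTEM privilege; avoid doing this casually in production):

```sql
SQL> ALTER SYSTEM FLUSH SHARED_POOL;   -- discard parsed SQL and PL/SQL
SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;  -- discard cached data blocks
```

Note that neither command touches the OS file system cache, so a repeat run may still be faster than a truly cold one.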

Oracle 12c extended to support varchar2 > 4000 bytes doesn't work for user who is not sysdba

On Oracle 12c with compatible=12.0.0, I changed max_string_size to EXTENDED with sysdba privileges.
I can now create a table with a varchar2(16000) column and insert a string > 4000 bytes, but only when connected as sysdba.
When connected as a normal user rather than sysdba, I cannot work with varchar2 > 4000 bytes; an ORA-60019 error is thrown. Can anyone explain why?
The parameters max_string_size=EXTENDED and compatible=12.0.0 are still in effect when logged in as a user who is not a sysdba.
Do the following steps and let me know if the issue is resolved. I am asking you to set the parameter again just to make sure everything is in order.
1) Back up your spfile ( get location of spfile)
sqlplus / as sysdba
show parameter spfile;
2) Shut down the database.
sqlplus / as sysdba
shutdown immediate
3) Restart the database in UPGRADE mode.
startup upgrade
4) Change the setting of MAX_STRING_SIZE to EXTENDED.
alter system set MAX_STRING_SIZE ='EXTENDED' scope=spfile;
5) Run the scripts (in SQL*Plus, scripts are executed with @):
sqlplus / as sysdba
@%ORACLE_HOME%\RDBMS\ADMIN\utl32k.sql
@%ORACLE_HOME%\RDBMS\ADMIN\utlrp.sql
Note: The utl32k.sql script increases the maximum size of the
VARCHAR2, NVARCHAR2, and RAW columns for the views where this is
required. The script does not increase the maximum size of the
VARCHAR2, NVARCHAR2, and RAW columns in some views because of the way
the SQL for those views is written.
rdbms/admin/utlrp.sql script helps to recompile invalid objects. You
must be connected AS SYSDBA to run the script.
6) Restart the database in NORMAL mode.
sqlplus / as sysdba
shutdown immediate
startup;
show parameter MAX_STRING_SIZE;
7) Create a new table with a varchar2 column larger than 4000 bytes.
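A quick verification sketch for that last step (the table and column names here are illustrative):

```sql
SQL> SHOW PARAMETER max_string_size;
-- Should report EXTENDED. Then try an extended column as a normal user:
SQL> CREATE TABLE t_extended_demo (big_col VARCHAR2(16000));
SQL> INSERT INTO t_extended_demo VALUES (RPAD('x', 5000, 'x'));
```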
You must change your TNSNAMES.ORA file to connect to the PDB.
I had the same problem.
I solved it with the information at the link below:
https://dba.stackexchange.com/questions/240761/in-oracle-12c-tryiyng-to-create-table-with-columns-greater-than-4000
The reason for that behaviour is that you are in a multi-tenant environment, i.e. a master container called the CDB ("Container Database"), and any number of PDBs ("Pluggable Databases").
The CDB ("container") is a kind of "system" database that is there to contain the actual customer databases ("pluggable databases" or PDBs). The CDB is not intended to receive any customer data whatsoever. Everything goes into one or more PDBs.
When you connect without specifying any service, you are automatically placed in the CDB. The extended strings parameter is ignored for the CDB: the limit remains 4000 bytes. Connecting that way and creating a table with a long string is rejected, just like in your case.
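To check which container a session is in and switch into a PDB, a sketch (PDB1 is a hypothetical pluggable database name; substitute your own):

```sql
SQL> SHOW CON_NAME;                       -- e.g. CDB$ROOT when in the CDB
SQL> ALTER SESSION SET CONTAINER = PDB1;  -- switch into the PDB
SQL> SHOW CON_NAME;                       -- now reports PDB1
```

Creating the extended varchar2 column should then succeed inside the PDB, provided max_string_size is EXTENDED there.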
