After an unexpected shutdown the database does not open:
ORA-01507: database not mounted
SQL> alter database mount;
ORA-00214: control file 'E:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\CLUSTER\CONTROLFILE\O1_MF_HB1484JB_.CTL' version 359456 inconsistent with file 'E:\APP\ADMINISTRATOR\ORADATA\CLUSTER\CONTROLFILE\O1_MF_HB114848J_.CTL'
I took a copy of both control files to an external hard drive, replaced the file with the lower version number with the one with the higher version number, and then executed:
SQL> shutdown immediate;
ORA-01507: database not mounted
ORACLE instance shut down
SQL> startup mount;
Total System Global Area 2221395968 bytes
Fixed Size                  2177656 bytes
Variable Size            1677723016 bytes
Database Buffers          536870912 bytes
Redo Buffers                4624384 bytes
ORA-00205: error in identifying control file, check alert log for more info
One of your control files is either corrupted or contains an older version of the data than the other. Did you run out of storage?
Make a copy of both.
Overwrite one control file with the other one (the smaller by the bigger, or the older by the newer).
Try to start the database, as in the sketch below.
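A minimal SQL*Plus sketch of that sequence (the overwrite itself happens at the OS level, using the control file paths from the question):
SQL> startup nomount;
SQL> show parameter control_files;   -- lists every control file location the instance expects
-- copy the newer .CTL over the older one at the OS level, then:
SQL> alter database mount;
SQL> alter database open;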
It started when the Oracle processes were not closed as I shut down the system; after I executed startup, errors ORA-01157 and ORA-01110 appeared.
I am quite sure the .dbf files exist, and when I inspect them with dbv everything looks normal.
As a last resort I tried to offline drop those .dbf files, but then I cannot recover them.
Please give me some help, thank you very much!
Mount your database:
SQL> startup mount;
Provided your database is in NOARCHIVELOG mode, issue the following queries:
SQL> select min(first_change#) min_first_change
from v$log l inner join v$logfile f on ( l.group# = f.group# );
SQL> select change# ch_number from v$recover_file;
If ch_number is greater than the min_first_change of your logs, the datafile can be recovered, because the redo it needs is still present in the online logs.
If ch_number is less than the min_first_change of your logs, the file cannot be recovered, because the redo it needs has already been overwritten. For example, if min_first_change is 1,500,000 and ch_number is 1,480,000, the changes between those two SCNs are no longer available.
In that case, restore the most recent full backup (and thus lose all changes to the database since then) or recreate the tablespace.
Recover the datafile (if the unrecoverable case above does not apply):
SQL> recover datafile '/opt/oracle/resource/undotbs02.dbf';
Confirm each of the logs that you are prompted for until you receive the message Media Recovery Complete. If you are prompted for a non-existent archived log, Oracle probably needs one or more of the online logs to proceed with the recovery. Compare the sequence number referenced in the ORA-00280 message with the sequence numbers of your online logs. Then enter the full path name of one of the members of the redo group whose sequence number matches the one you are being asked for. Keep entering online logs as requested until you receive the message Media Recovery Complete.
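The sequence numbers and member paths to compare against can be listed with a query along these lines (standard v$log / v$logfile columns):
select l.sequence#, l.group#, f.member
from v$log l join v$logfile f on ( l.group# = f.group# )
order by l.sequence#;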
If the database is in the mount state, open it:
SQL> alter database open;
If the DBF file fails to mount, check the source of the DBF file: whether it was imported from another database or converted with some other tool. Generally, a DBF file that is not in the expected format cannot be mounted. Troubleshoot the Oracle DBF file by following these steps:
https://docs.cloud.oracle.com/iaas/Content/File/Troubleshooting/exportpaths.htm
If the database is still causing problems, there could be issues with other components; before mounting, fix them with a professional database recovery tool like https://www.filerepairtools.com/oracle-database-recovery.html
I want to load about 2 million rows from a CSV-formatted file into a database, run some SQL statements for analysis, and then remove the data. The file is 2 GB in size. The data is web server log messages.
I did some research and found that the H2 in-memory database seems to be faster, since it keeps the data in memory. When I tried to load the data I got an OutOfMemory error because of 32-bit Java. I am planning to try with 64-bit Java.
I am looking for any optimization options to load the data quickly and run the SQL.
test.sql
CREATE TABLE temptable (
f1 varchar(250) NOT NULL DEFAULT '',
f2 varchar(250) NOT NULL DEFAULT '',
f3 varchar(250) NOT NULL DEFAULT ''
) as select * from CSVREAD('log.csv');
Running it like this with 64-bit Java:
java -Xms256m -Xmx4096m -cp h2*.jar org.h2.tools.RunScript -url 'jdbc:h2:mem:test;LOG=0;CACHE_SIZE=65536;LOCK_MODE=0;UNDO_LOG=0' -script test.sql
If any other database is available to use on AIX, please let me know.
Thanks.
If the CSV file is 2 GB, it will need more than 4 GB of heap memory when using a pure in-memory database. The exact memory requirements depend a lot on how redundant the data is: if the same values appear over and over again, the database will need less memory, as common objects are re-used (no matter whether it's a string, long, timestamp, ...).
Please note that LOCK_MODE=0, UNDO_LOG=0, and LOG=0 are not needed when using CREATE TABLE ... AS SELECT. In addition, CACHE_SIZE does not help when using the mem: prefix (but it does help for in-memory file systems).
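With those settings dropped, the URL from the question reduces to simply:
jdbc:h2:mem:test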
I suggest trying the in-memory file system first (memFS: instead of mem:), which is slightly slower than mem: but usually needs less memory:
jdbc:h2:memFS:test;CACHE_SIZE=65536
If this is not enough, try the compressed in-memory mode (memLZF:), which is again slower but uses even less memory:
jdbc:h2:memLZF:test;CACHE_SIZE=65536
If this is still not enough, I suggest trying the regular persistent mode and seeing how fast it is:
jdbc:h2:~/data/test;CACHE_SIZE=65536
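Once the analysis is done, the data can be dropped to release the memory. A minimal sketch against the temptable from the question (the f1 grouping is just an illustrative analysis query):
-- example analysis over the loaded log data
SELECT f1, COUNT(*) AS hits FROM temptable GROUP BY f1 ORDER BY hits DESC;
-- release the memory when finished
DROP TABLE temptable;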
Perhaps this is normal, but in my Oracle 11g database I am seeing programmers using Oracle's SQL Developer regularly consume more than 100 MB of combined UGA and PGA memory. I'd like to know if this is normal and what can be done about it. Our database is on the 32-bit version of Windows 2008, so memory limitations are becoming an increasing concern. I am using the following query to show the memory usage:
SELECT e.sid, e.username, e.status, b.pga_memory
FROM v$session e
LEFT JOIN
  (SELECT y.sid, y.value pga,
          TO_CHAR(ROUND(y.value/1024/1024), '99999999') || ' MB' pga_memory
   FROM v$sesstat y, v$statname z
   WHERE y.statistic# = z.statistic# AND z.name = 'session pga memory') b
ON e.sid = b.sid
WHERE b.pga/1024/1024 > 20
ORDER BY b.pga DESC;
It seems that the resource usage goes up any time a table is opened in SQL Developer, but even when it is closed the memory is not released. The problem is worse if the table was sorted while it was open, as that seems to use even more memory. I understand how this would use memory while it is sorting, and perhaps even while it is still open, but using memory after it is closed seems wrong to me. Can anyone confirm this?
Update:
I discovered that my numbers were off because I had not understood that the UGA is stored in the PGA under dedicated server mode. This makes the numbers lower than they were, but the problem remains that SQL Developer seems to use excessive PGA.
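For reference, here is a sketch that breaks the two statistics out per session ('session pga memory' and 'session uga memory' are the documented statistic names):
select s.sid,
       round(max(decode(n.name, 'session pga memory', st.value))/1024/1024) pga_mb,
       round(max(decode(n.name, 'session uga memory', st.value))/1024/1024) uga_mb
from v$sesstat st
join v$statname n on n.statistic# = st.statistic#
join v$session s on s.sid = st.sid
where n.name in ('session pga memory', 'session uga memory')
group by s.sid
order by pga_mb desc;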
Perhaps SQL Developer doesn't close the cursors it has opened.
So if you run a query which sorts a million rows and SQL Developer fetches only the first 20 rows, it needs to keep the cursor open in case you want to scroll down and fetch more.
So it needs to keep some of the PGA memory associated with the cursor's sort area allocated (this is called the retained sort area) for as long as the cursor is open and hasn't reached EOF (end of fetch).
Pick a session and run:
select sql_id, operation_type, actual_mem_used, max_mem_used, tempseg_size
from v$sql_workarea_active
where sid = &SID_OF_INTEREST;
This should show whether some cursors are still kept open with their memory...
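To check which cursors a session still holds open, v$open_cursor can be queried as a companion (it is a documented view):
select sql_id, sql_text
from v$open_cursor
where sid = &SID_OF_INTEREST;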
Are you using Automatic Memory Management? If yes, I would not worry about the PGA memory used.
See docs:
Automatic Memory Management: http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/memory003.htm#ADMIN11011
MEMORY_TARGET: http://download.oracle.com/docs/cd/B28359_01/server.111/b28320/initparams133.htm
Is there a reason you are using 32-bit Oracle? Most recent hardware supports 64-bit.
Oracle, especially with AMM, will use every bit of memory on the machine that you give it. If it doesn't have a reason to de-allocate memory, it will not do so. It is the same with storage space: if you delete 20 GB of user data, that space is not returned to the OS; Oracle will hold on to it unless you explicitly compact the tablespaces, as sketched below.
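For the storage analogy, compaction looks something like this (the owner, table, and datafile names are hypothetical):
-- shrink a segment back down after large deletes (requires ASSM and row movement)
alter table app_owner.big_table enable row movement;
alter table app_owner.big_table shrink space;
-- then, if the freed space sits at the end of the datafile, hand it back to the OS
alter database datafile '/u01/oradata/users01.dbf' resize 500M;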
I believe a simple test should relieve your concerns. If it's 32 bit, and each SQL Developer session is using 100MB+ of RAM, then you'd only need a few hundred sessions open to cause a low-memory problem...if there really is one.
I need some pointers on how to diagnose and fix this problem. I don't know if this is a simple server setup problem or an application design problem (or both).
Once or twice every few months this Oracle XE database reports ORA-04031 errors. It doesn't consistently point to any particular part of the SGA. A recent example is:
ORA-04031: unable to allocate 8208 bytes of shared memory ("large pool","unknown object","sort subheap","sort key")
When this error comes up, if the user keeps refreshing and clicking on different links, they'll generally get more of these kinds of errors at different times, and soon they'll get "404 not found" page errors.
Restarting the database usually resolves the problem for a while; then a month or so later it comes up again, but rarely at the same location in the program, i.e. it doesn't seem linked to any particular portion of code (the example error above was raised from an Apex page which was sorting 5000+ rows from a table).
I've tried increasing sga_max_size from 140M to 256M and hope this will help. Of course, I won't know whether it has helped, since I had to restart the database to change the setting :)
I'm running Oracle XE 10.2.0.1.0 on an Oracle Enterprise Linux 5 box with 512 MB of RAM. The server runs only the database, Oracle Apex (v3.1.2), and the Apache web server. I installed it with pretty much all default parameters and it had been running quite well for a year or so. Most issues I've been able to resolve myself by tuning the application code; it's not intensively used and isn't a business-critical system.
These are some current settings I think may be relevant:
pga_aggregate_target 41,943,040
sga_max_size 268,435,456
sga_target 146,800,640
shared_pool_reserved_size 5,452,595
shared_pool_size 104,857,600
If it's any help, here are the current SGA sizes:
Total System Global Area 268435456 bytes
Fixed Size 1258392 bytes
Variable Size 251661416 bytes
Database Buffers 12582912 bytes
Redo Buffers 2932736 bytes
Even though you are using ASMM, you can set a minimum size for the large pool (MMAN will not shrink it below that value).
You can also try pinning some objects and increasing SGA_TARGET; see the sketch below.
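A hedged sketch of both ideas (DBMS_OUTPUT is just a stand-in for whatever package the application calls constantly; dbms_shared_pool may need to be installed via dbmspool.sql):
-- floor under the large pool: ASMM can still grow it, but will not shrink it below 16M
alter system set large_pool_size = 16M scope=both;
-- pin a heavily used package so it is never aged out of the shared pool
exec dbms_shared_pool.keep('SYS.DBMS_OUTPUT', 'P');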
Don't forget about fragmentation.
If you have a lot of traffic, your pools can become fragmented, and even if you have several MB free, there may be no free block larger than 4 KB.
Check the size of the largest free block with a query like:
select
'0 (<140)' BUCKET, KSMCHCLS, KSMCHIDX,
10*trunc(KSMCHSIZ/10) "From",
count(*) "Count" ,
max(KSMCHSIZ) "Biggest",
trunc(avg(KSMCHSIZ)) "AvgSize",
trunc(sum(KSMCHSIZ)) "Total"
from
x$ksmsp
where
KSMCHSIZ<140
and
KSMCHCLS='free'
group by
KSMCHCLS, KSMCHIDX, 10*trunc(KSMCHSIZ/10)
UNION ALL
select
'1 (140-267)' BUCKET,
KSMCHCLS,
KSMCHIDX,
20*trunc(KSMCHSIZ/20) ,
count(*) ,
max(KSMCHSIZ) ,
trunc(avg(KSMCHSIZ)) "AvgSize",
trunc(sum(KSMCHSIZ)) "Total"
from
x$ksmsp
where
KSMCHSIZ between 140 and 267
and
KSMCHCLS='free'
group by
KSMCHCLS, KSMCHIDX, 20*trunc(KSMCHSIZ/20)
UNION ALL
select
'2 (268-523)' BUCKET,
KSMCHCLS,
KSMCHIDX,
50*trunc(KSMCHSIZ/50) ,
count(*) ,
max(KSMCHSIZ) ,
trunc(avg(KSMCHSIZ)) "AvgSize",
trunc(sum(KSMCHSIZ)) "Total"
from
x$ksmsp
where
KSMCHSIZ between 268 and 523
and
KSMCHCLS='free'
group by
KSMCHCLS, KSMCHIDX, 50*trunc(KSMCHSIZ/50)
UNION ALL
select
'3-5 (524-4107)' BUCKET,
KSMCHCLS,
KSMCHIDX,
500*trunc(KSMCHSIZ/500) ,
count(*) ,
max(KSMCHSIZ) ,
trunc(avg(KSMCHSIZ)) "AvgSize",
trunc(sum(KSMCHSIZ)) "Total"
from
x$ksmsp
where
KSMCHSIZ between 524 and 4107
and
KSMCHCLS='free'
group by
KSMCHCLS, KSMCHIDX, 500*trunc(KSMCHSIZ/500)
UNION ALL
select
'6+ (4108+)' BUCKET,
KSMCHCLS,
KSMCHIDX,
1000*trunc(KSMCHSIZ/1000) ,
count(*) ,
max(KSMCHSIZ) ,
trunc(avg(KSMCHSIZ)) "AvgSize",
trunc(sum(KSMCHSIZ)) "Total"
from
x$ksmsp
where
KSMCHSIZ >= 4108
and
KSMCHCLS='free'
group by
KSMCHCLS, KSMCHIDX, 1000*trunc(KSMCHSIZ/1000);
Code from
All of the current answers address the symptom (shared memory pool exhaustion) and not the problem, which is likely not using bind variables in your SQL/JDBC queries, even when it does not seem necessary to do so. Passing queries without bind variables causes Oracle to "hard parse" the query each time, determining its plan of execution, etc.
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:528893984337
Some snippets from the above link:
"Java supports bind variables, your developers must start using prepared statements and bind inputs into it. If you want your system to ultimately scale beyond say about 3 or 4 users -- you will do this right now (fix the code). It is not something to think about, it is something you MUST do. A side effect of this - your shared pool problems will pretty much disappear. That is the root cause. "
"The way the Oracle
shared pool (a very important shared memory data structure)
operates is predicated on developers using bind variables."
" Bind variables are SO MASSIVELY important -- I cannot in any way shape or form OVERSTATE their importance. "
The following are not needed, as they do not fix the error:
ps -ef|grep oracle
Find the smon and kill the pid for it
SQL> startup mount
SQL> create pfile from spfile;
Restarting the database will flush your pool; that addresses an effect, not the problem.
Fix your large_pool so it cannot go below a certain point, or add memory and set a higher maximum memory.
This is an Oracle bug: a memory leak in the shared_pool, most likely with a database managing lots of partitions.
Solution: in my opinion no patch exists; check with Oracle Support. You can try with subpools or enabling/disabling AMM ...
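For the subpool route, the knob usually cited is a hidden parameter, so treat this strictly as something to confirm with Oracle Support first:
-- split the shared pool into two subpools (hidden parameter; restart required)
alter system set "_kghdsidx_count" = 2 scope=spfile;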
Error
ORA-04031: unable to allocate 4064 bytes of shared memory ("shared pool","select increment$,minvalue,m...","sga heap(3,0)","kglsim heap")
Solution: by nepasoft nepal
1.-
ps -ef|grep oracle
2.- Find the smon and kill the pid for it
3.-
SQL> startup mount
ORACLE instance started.
Total System Global Area 4831838208 bytes
Fixed Size 2027320 bytes
Variable Size 4764729544 bytes
Database Buffers 50331648 bytes
Redo Buffers 14749696 bytes
Database mounted.
4.-
SQL> alter system set shared_pool_size=100M scope=spfile;
System altered.
5.-
SQL> shutdown immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
6.-
SQL> startup
ORACLE instance started.
Total System Global Area 4831838208 bytes
Fixed Size 2027320 bytes
Variable Size 4764729544 bytes
Database Buffers 50331648 bytes
Redo Buffers 14749696 bytes
Database mounted.
Database opened.
7.-
SQL> create pfile from spfile;
File created.
SOLVED