Out of process memory issue with the error below - Oracle

Previously I was getting a PGA_AGGREGATE_LIMIT exceeded error, so I changed PGA_AGGREGATE_LIMIT to 0 (no limit).
We have 47GB of RAM, PGA_AGGREGATE_LIMIT=0, and PGA_AGGREGATE_TARGET=10GB, yet we are still getting an "out of process memory" error. Any suggestions here will be appreciated.
Below is the error:
java.sql.SQLException: ORA-04030: out of process memory when trying to allocate 107096 bytes (kolarsCreateCt,qmemNextBuf:Large Alloc)
ORA-06512: at "SYS.XMLTYPE", line 138.
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1059)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:522)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
at oracle.jdbc.driver.T4CPreparedStatement.fetch(T4CPreparedStatement.java:1066)
at oracle.jdbc.driver.OracleStatement.fetchMoreRows(OracleStatement.java:3716)

There are many, many things that could affect this situation and cause this error, including your PGA_AGGREGATE_LIMIT, your OS and kernel configuration, ulimit settings for the oracle user (on Linux), etc. If you have plenty of physical RAM, then I suspect ulimit values may be your problem, artificially limiting the amount of memory the OS will allocate to your processes. A quick way to check actual PGA usage is sketched after the links below. See these links for additional troubleshooting tips:
https://asktom.oracle.com/pls/apex/asktom.search?tag=error-message-ora-04030
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=454361477893454&parent=EXTERNAL_SEARCH&sourceId=HOWTO&id=1934141.1&_afrWindowMode=0&_adf.ctrl-state=2l47phfis_4
http://www.dba-oracle.com/t_ora_04030_out_process_memory.htm
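Before digging into OS limits, it can help to see how much PGA the instance and the individual server processes are actually using. This is just a minimal starting point using the standard v$pgastat and v$process views; the statistics and columns shown are one reasonable selection, not the only ones worth looking at:

-- Instance-wide PGA statistics
SELECT name, ROUND(value/1024/1024) AS mb
FROM   v$pgastat
WHERE  name IN ('total PGA allocated', 'total PGA inuse', 'maximum PGA allocated');

-- Processes ordered by allocated PGA, to spot the session doing the large XML work
SELECT pid, spid,
       ROUND(pga_used_mem/1024/1024)  AS used_mb,
       ROUND(pga_alloc_mem/1024/1024) AS alloc_mb,
       ROUND(pga_max_mem/1024/1024)   AS max_mb
FROM   v$process
ORDER  BY pga_alloc_mem DESC;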

I assume this question is related to this one, and that you are loading large XML files that are using up all of the memory. Adjusting the memory settings like pmdba suggests is a good idea, but it will only get you so far if individual files are ginormous.
Applications don't typically read 1GB files from a database table. Normally, application performance improves by batching commands and processing multiple items at once. With such large files, batching will quickly eat up memory, so you should try adjusting your settings to process things row by row as much as possible. Disable prefetch for this query. If that doesn't work, try executing multiple queries, each of which returns only a single row.
Alternatively, perhaps you could change the way the XML is stored on the database and create XML indexes to improve query performance. I haven't used these features, but Oracle provides different ways to store and index XML, and I'd bet that one of those methods allows you to read from the XML without loading the entire thing into memory. See the XML DB Developer's Guide for more information.
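To make that concrete, and with the same caveat that this is speculative: one possible direction is to store the XMLType column as binary XML in a SecureFile LOB and add an XMLIndex, which lets queries extract fragments without materializing whole documents in memory. The table and index names below are hypothetical:

-- Hypothetical table that stores large documents as binary XML rather than CLOB
CREATE TABLE xml_docs (
  id  NUMBER PRIMARY KEY,
  doc XMLTYPE
)
XMLTYPE COLUMN doc STORE AS SECUREFILE BINARY XML;

-- Optional structure-aware index to speed up fragment extraction
CREATE INDEX xml_docs_xidx ON xml_docs (doc) INDEXTYPE IS XDB.XMLINDEX;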

Related

Oracle parameter _enable_shared_pool_durations: setting it to true or false

Could anyone please help me identify whether this parameter change could affect the operation of the application?
Oracle 11gR2 DB.
alter system set "_enable_shared_pool_durations"=false scope=spfile sid='*'
Could this parameter change also affect the database or produce new errors?
This seems to be a setting that is used as a workaround for a small number of unpublished bugs around shared pools and ASMM.
It seems to be something you should only do if directed to by Oracle Support.
From MOS:
_enable_shared_pool_durations=false
This affects the architecture of memory in the pools.
When _enable_shared_pool_durations is FALSE, subpools within the SGA will no longer have 4 durations.
Instead, each subpool will have only a single duration.
This mimics the behavior in 9i, and the shared pool will no longer be able to shrink.
The advantage of this is that the performance issues such as sudden drop/resize/shrink of buffer cache (Doc ID 1344228.1) can be avoided.
A duration will not encounter memory exhaustion while another duration has free memory.
The disadvantage is that the shared pool (and streams pool) are not able to shrink, mostly negating the benefits of ASMM.
PLEASE NOTE: Even if you have AMM/ASMM disabled, similar behavior might be seen, as per the note.
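If you want to confirm what the parameter is currently set to before and after the change, a query along these lines can be used. It has to be run as SYS, since it reads the X$ fixed tables that back hidden parameters:

-- Show the current value of the hidden parameter (run as SYS)
SELECT i.ksppinm  AS parameter,
       v.ksppstvl AS value,
       v.ksppstdf AS is_default
FROM   x$ksppi  i
JOIN   x$ksppcv v ON v.indx = i.indx
WHERE  i.ksppinm = '_enable_shared_pool_durations';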

CFSpreadSheet functions using up memory for large data sets

We have a ColdFusion application that is running a large query (up to 100k rows) and then displaying it in HTML. The UI then offers an Export button that triggers writing the report to an Excel spreadsheet in .xlsx format using the cfspreadsheet tag and spreadsheet functions, in particular spreadsheetSetCellValue for building out row/column values, and spreadsheetFormatRow and spreadsheetFormatCell for formatting. The ssObj is then written to a file using:
<cfheader name="Content-Disposition" value="attachment; filename=OES_#sel_rtype#_#Dateformat(now(),"MMM-DD-YYYY")#.xlsx">
<cfcontent type="application/vnd.ms-excel" variable="#ssObj#" reset="true">
where ssObj is the spreadsheet object. We are seeing file sizes of about 5-10 MB.
However, the memory usage for creating this report and writing the file jumps up by about 1GB. The compounding problem is that the memory is not released by the Java GC right away after the export completes. When we have multiple users running and exporting this type of report, the memory keeps climbing until it reaches the allocated heap size and kills the server's performance, to the point that it brings down the server. A reboot is usually necessary to clear it out.
Is this normal/expected behavior, or how should we be dealing with this issue? Is it possible to easily release the memory used by this operation on demand after the export has completed, so that others running the report can get access to the freed-up space for their reports? Is this kind of memory usage for a 5-10 MB file common with the cfspreadsheet functions and writing the object out?
We have tried temporarily removing the expensive formatting functions, and still the memory usage is large for the creation and writing of the .xlsx file. We have also tried the spreadsheetAddRows approach and the cfspreadsheet action="write" query="queryname" tag, passing in a query object, but this too took up a lot of memory.
Why are these functions such memory hogs? What is the optimal way to generate Excel spreadsheet files without this out-of-memory issue?
I should add that the server is running in an Apache/Tomcat container on Windows and we are using CF2016.
How much memory do you have allocated to your CF instance?
How many instances are you running?
Why are you allowing anyone to view 100k records in HTML?
Why are you allowing anyone to export that much data on the fly?
We had issues of this sort (CF and memory) at my last job. Large file uploads consumed memory, large Excel exports consumed memory; it's just going to happen. As your application's user base grows, you'll hit a point where these memory-hogging requests kill the site for other users.
Start with your memory settings. You might get a boost across the board by doubling or tripling what the app is allotted. Also, make sure you're on the latest version of the supported JDK for your version of CF. That can make a huge difference too.
Large file uploads would impact the performance of the instance making the request. This meant that others on the same instance doing normal requests were waiting for those resources needlessly. We dedicated a pool of instances to only handle file uploads. Specific URLs were routed to these instances via a load balancer and the application was much happier for it.
That app also handled an insane amount of data, and users constantly wanted "all of it". We had to force search results and certain data sets to reduce the amount shown on screen. The DB was quite happy with that decision. Data exports were moved to a queue so those large Excel files could be crafted outside of normal page requests. Maybe users got their data immediately, maybe they waited a while for a notification. Either way, the application performed better across the board.
Presumably a bit late for the OP, but since I ended up here others might too. Whilst there is plenty of general memory-related sound advice in the other answer+comments here, I suspect the OP was actually hitting a genuine memory leak bug that has been reported in the CF spreadsheet functions from CF11 through to CF2018.
When generating a spreadsheet object and serving it up with cfheader+cfcontent without writing it to disk, even with careful variable scoping, the memory never gets garbage collected. So if your app runs enough Excel exports using this method then it eventually maxes out memory and then maxes out CPU indefinitely, requiring a CF restart.
See https://tracker.adobe.com/#/view/CF-4199829 - I don't know if he's on SO but credit to Trevor Cotton for the bug report and this workaround:
Write the spreadsheet to a temporary file,
read the spreadsheet from the temporary file back into memory,
delete the temporary file,
stream the spreadsheet from memory to the user's browser.
So, given a spreadsheet object that was created in memory with spreadsheetNew() and never written to disk, this causes a memory leak:
<cfheader name="Content-disposition" value="attachment;filename=#arguments.fileName#" />
<cfcontent type="application/vnd.ms-excel" variable = "#SpreadsheetReadBinary(arguments.theSheet)#" />
...but this does not:
<cfset local.tempFilePath = getTempDirectory()&CreateUUID()&arguments.filename />
<cfset spreadsheetWrite(arguments.theSheet, local.tempFilePath, "", true) />
<cfset local.theSheet = spreadsheetRead(local.tempFilePath) />
<cffile action="delete" file="#local.tempFilePath#" />
<cfheader name="Content-disposition" value="attachment;filename=#arguments.fileName#" />
<cfcontent type="application/vnd.ms-excel" variable = "#SpreadsheetReadBinary(local.theSheet)#" />
It shouldn't be necessary, but Adobe don't appear to be in a hurry to fix this, and I've verified that this works for me in CF2016.

An issue of partial insertion of data into the target when a job fails

We have a 17-record data set in one of the source tables, with erroneous data in the 14th record that causes the job to fail. Because the commit size in the tMysqlOutput component is set to 10, only 10 records are inserted into the target before the job fails. In the next execution, after correcting the erroneous record, the job fetches all 17 records and completes successfully, which leaves duplicates in the target.
What we tried:
To overcome this, we tried the tMysqlRollback component together with the tMysqlConnection and tMysqlCommit components.
Q1: Is there any option to use tMysqlRollback without the tMysqlConnection and tMysqlCommit components?
We explored the tMysqlRollback and tMysqlCommit components in the documentation:
https://help.talend.com/reader/QgrwjIQJDI2TJ1pa2caRQA/7cjWwNfCqPnCvCSyETEpIQ
but we are still looking for clues on how to design the above process in an efficient manner.
Q2: We'd also like to know about RAM usage and disk space consumption from a performance perspective.
Any help would be much appreciated.
No, the only way to do transactions in Talend is to open a connection using tMysqlConnection, then either commit using a tMysqlCommit or rollback using tMysqlRollback.
Without knowing what you're doing in your job (lookups, transformations, etc.), it's hard to advise you on RAM consumption and performance. But if you only have a source and a target, then RAM consumption should be minimal (make sure you enable stream on the tMysqlInput component). If you have another database as your source, then RAM consumption depends on how that database's driver is configured (JDBC drivers usually accept a parameter telling them to fetch only a certain number of records at a time).
Lookups and components that process data in memory (tSortRow, tUniqRow, tAggregateRow, etc.) are what cause memory issues, but it's possible to tweak their usage (by using disk, among other methods).

How do I load the Oracle schema into memory instead of the hard drive?

I have a certain web application that makes upwards of ~100 updates to an Oracle database in succession. This can take anywhere from 3-5 minutes, which sometimes causes the webpage to time out. A re-design of the application is scheduled soon, but someone told me that there is a way to configure a "loader file" which loads the schema into memory and runs the transactions there instead of on the hard drive, supposedly improving speed by several orders of magnitude. I have tried to research this "loader file", but all I can find is information about the SQL*Loader bulk data loader. Does anyone know what he's talking about? Is this really possible, and is it a feasible quick fix, or should I just wait until the application is re-designed?
Oracle already does its work in memory - disk I/O is managed behind the scenes. Frequently accessed data stays in memory in the buffer cache. Perhaps your informant was referring to "pinning" an object in memory, but that's really not effective in modern releases of Oracle (since V8), particularly for table data. Let Oracle do its job - it's actually very good at it (probably better than we are). Face it: 100K updates is going to take a while.
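For what it's worth, the "pinning" mentioned above usually refers to one of the two statements below. They are shown only to illustrate what your informant may have meant, not as a recommendation, and the object names are hypothetical:

-- Keep a PL/SQL object resident in the shared pool (library cache pinning)
EXEC DBMS_SHARED_POOL.KEEP('SCOTT.SOME_PACKAGE');

-- Ask Oracle to cache a table's blocks in the KEEP buffer pool
ALTER TABLE scott.some_table STORAGE (BUFFER_POOL KEEP);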

What is the Oracle KGL SIMULATOR?

What is this thing called a KGL SIMULATOR and how can its memory utilisation be managed by application developers?
The background to the question is that I'm occasionally getting errors like the following and would like to get a general understanding of what is using this heap-space?
ORA-04031: unable to allocate 4032 bytes of shared memory ("shared pool","select text from view$ where...","sga heap(3,0)","kglsim heap")
I've read forum posts through Google suggesting that the kglsim is related to the KGL SIMULATOR, but there is no definition of that component, or any tips for developers.
KGL = Kernel General Library cache manager; as the name says, it deals with library objects such as cursors and cached stored object definitions (PL/SQL stored procedures, table definitions, etc.).
The KGL simulator is used for estimating the benefit of caching if the cache were larger than it currently is. The general idea is that when a library cache object is flushed out, its hash value (and a few other bits of info) is still kept in the KGL simulator hash table. This stores a history of objects which were in memory but got flushed out.
When loading a library cache object (which means that no existing such object is in the library cache), Oracle checks the KGL simulator hash table to see whether an object with a matching hash value is in there. If a matching object is found, that means the required object had been in the cache in the past but was flushed out due to space pressure.
Using that information about how many library cache object (re)loads could have been avoided if the cache had been bigger (thanks to the KGL simulator history), and knowing how much time the object reloads took, Oracle can predict how much response time would have been saved instance-wide if the shared pool were bigger. This is seen from v$library_cache_advice.
Anyway, this error was probably raised by a victim session that ran out of shared pool space. In other words, someone else may have used up all the memory (or all the large enough chunks), and this allocation for the KGL simulator failed because of that.
v$sgastat would be the first place to look when troubleshooting ORA-04031 errors; you need to identify how much free memory you have in the shared pool (and who is using up most of the memory). A starting query is sketched below.
--
Tanel Poder
http://blog.tanelpoder.com
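As a concrete starting point for the v$sgastat check mentioned above, a query like this shows how much free memory is left in the shared pool and which allocations are using the rest (the ordering and rounding are just one choice):

-- Free memory and largest consumers in the shared pool
SELECT pool, name, ROUND(bytes/1024/1024, 1) AS mb
FROM   v$sgastat
WHERE  pool = 'shared pool'
ORDER  BY bytes DESC;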
I've found that KGL stands for "Kernel Generic Library".
Your issue could be a memory leak within Oracle. You probably should open a case with Oracle support.
