Out of Memory Error
ORA-04030: out of process memory when trying to allocate 64528 bytes (short subheap, sort key)
Generally speaking, this is an actual out-of-memory error on the Oracle server, typically caused by a PGA that is too small.
The most likely resolution (depending on your server version) is to increase the PGA_AGGREGATE_TARGET initialization parameter.
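A minimal sketch of checking and raising that target from SQL*Plus (the 2G value is purely illustrative; size it to your server's RAM):
show parameter pga_aggregate_target
-- raise the target dynamically (the value is an assumption, adjust to your hardware)
alter system set pga_aggregate_target = 2G scope = both;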
Related
I know that SGA (which contains data and control information for one Oracle Database instance) stands for System Global Area and PGA (which contains data and control information exclusively for use by an Oracle process) stands for Program Global Area, but I don't really understand what these settings actually do for the database. How would it help when retrieving data if the SGA is configured to be, say, 10 times larger than the PGA?
The SGA is a memory structure on the server that contains pools to hold code, SQL, classes, cursors, etc., and caches to hold data. So when a client sends a query to the server, the code and data sit in the SGA while the RDBMS processes them on the server.
The PGA is a private memory area for a user's server process and is used for temporary storage and work areas. Oracle uses the PGA and temp tablespaces to produce a result set, which is passed back to the client; the PGA for the session is then freed.
There is no fixed ratio between the two. The SGA is sized according to how much code and data is getting sent to the server, while the PGA grows dynamically according to how many processes are active. If there are thousands of processes, the total PGA can easily be double the SGA. The SGA should be sized very carefully, though; making it bigger does not necessarily make performance better.
There is also a UGA (User Global Area), which is the per-session memory area (held in the PGA or the SGA depending on whether you use dedicated or shared server).
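If you want to see how the two areas are actually sized on a given instance, a quick read-only sketch against the standard V$ views:
-- SGA component sizes
select * from v$sga;
-- PGA target and actual allocation
select name, value from v$pgastat
 where name in ('aggregate PGA target parameter', 'total PGA allocated', 'maximum PGA allocated');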
We are using SQL*Plus to offload data from Oracle from a large table with 500+ columns and around 15 million records per day.
The query fails because Oracle is not able to allocate the required memory for the result set.
Fine-tuning the Oracle DB server to increase memory allocation is ruled out, since it is used across teams and is critical.
This is a simple select with a filter on a column.
What options do I have to make it work?
1) Should I break my query down into multiple chunks and run it in a nightly batch mode?
If so, how can a select query be broken down?
2) Are there any optimization techniques I can use with SQL*Plus for a select query on a large table?
3) Is there any Java/ojdbc-based solution that can break a select into chunks and reduce the load on the DB server?
Any pointers are highly appreciated.
Here is the error message thrown:
ORA-04030: out of process memory when trying to allocate 169040 bytes (pga heap,kgh stack)
ORA-04030: out of process memory when trying to allocate 16328 bytes (koh-kghu sessi,pl/sql vc2)
ORA-4030 indicates that the process needs more memory (UGA in the SGA or PGA, depending on the server architecture) to execute the job.
This could be caused by a shortage of RAM (in a dedicated server mode environment), a PGA that is too small, or perhaps an operating system setting that restricts how much RAM can be allocated.
This MOS note describes how to diagnose and resolve the ORA-04030 error:
Diagnosing and Resolving ORA-4030 Errors (Doc ID 233869.1)
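Before digging into the note, it can help to see which server processes are actually consuming process memory. A minimal diagnostic sketch using the standard V$PROCESS columns (run as a DBA user):
select pid, spid, pga_used_mem, pga_alloc_mem, pga_max_mem
  from v$process
 order by pga_max_mem desc;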
Your option 1 seems to be within your control. Breaking down the query will require knowledge of the query/data. Either a column in the data might work as the split key (a concrete sketch follows below); i.e.
query1: select ... where col1 <= <value>
query2: select ... where col1 > <value>
... or ... you might have to build more code around the problem.
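For instance, a sketch of that chunking idea, assuming a hypothetical table BIG_TABLE with a LOAD_DATE column to range over (both names are placeholders for your own schema):
-- chunk 1: one day's worth of rows
select * from big_table
 where load_date >= date '2024-01-01' and load_date < date '2024-01-02';
-- chunk 2: the next day, and so on
select * from big_table
 where load_date >= date '2024-01-02' and load_date < date '2024-01-03';
Each chunk can then be spooled separately from SQL*Plus, so no single run has to materialize all 15 million rows at once.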
A thought: does the query involve sorting/grouping? Can you live without it? Those operations take up more memory.
I executed an improper SQL statement that was syntactically correct but caused MonetDB to fail while trying to allocate 490G of heap memory. As a result, the disk ran out of space and it seems MonetDB never cleaned up. One of the subdirectories in /bat holds 127G, which I think is the one generated during that query execution.
How can I reclaim that space?
Also, in which directories does the actual data reside that represents columns?
I was able to fix the issue, so I decided to self-answer in case someone else runs into the same problem. Because the partition with the dbfarm was completely filled, mclient would not start. After I freed some space and ran mclient again, MonetDB was able to clean up and recover the disk space.
When loading a Spatial Network into memory, where is the memory actually occupied: on the client or the server side?
P.S. Here is an example of loading a network into memory:
PL/SQL:
sdo_net_mem.network_manager.read_network(net_mem=>'XXX', allow_updates=>'TRUE');
Java:
NetworkMetadata metadata = LODNetworkManager.getNetworkMetadata(sql.getConnection(), "XXX", "XXX");
NetworkIO networkIO = LODNetworkManager.getNetworkIO(sql.getConnection(), "XXX", "XXX", metadata);
networkIO.readLogicalNetwork(1);
When using the LOD APIs, the memory is allocated on the client side, i.e. wherever your client application is running. Please check this whitepaper: A Load-On-Demand Approach to Handling Large Networks in the Oracle Spatial Network Data Model
It's on the client (i.e. the host application). If you're using PL/SQL, then the database itself is the host application. If you're using Java and run your code on the application server, then it's on the appserver.
A recommended approach is LOD. In contrast to the in-memory API, you can fine-tune how big the partitions are and how many should be loaded into memory at once, so you have good control over the memory consumption. In-memory can be considered a corner case of LOD with just one unlimited partition and everything loaded into memory.
The drawback of LOD is the necessity to partition your network.
I've got an Oracle database that is used as storage for web services. Most of the time the data is read-only and cached in RAM directly by the service. However, during system startup all data is pulled once from Oracle, and the database tries to be smart and keeps the data in RAM (1 GB).
How can I limit/control the amount of RAM available to the Oracle 9 instance?
The short answer is: modify SGA_MAX_SIZE. The long one follows.
If you are referring to the "data", you have to check DB_CACHE_SIZE (the size of the buffer cache) and, related to this, SGA_MAX_SIZE (the maximum memory usage for the SGA of the instance).
Because SGA_MAX_SIZE refers to the SGA memory as a whole (buffers, shared pool and redo buffers), if you want to shrink the buffers you also have to decrease SGA_MAX_SIZE.
Take a look at Setting Initialization Parameters that Affect the Size of the SGA, or give more details.
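For illustration, a minimal sketch of capping both parameters (the 512M/256M values are assumptions; the instance must use an spfile, and on Oracle 9i SGA_MAX_SIZE only takes effect after a restart):
alter system set sga_max_size = 512M scope = spfile;   -- requires an instance restart
alter system set db_cache_size = 256M scope = both;    -- buffer cache can be resized online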
There are several database parameters that control memory usage in Oracle. Here is a reasonable starting point - it's not a trivial exercise to get it right. In particular, you probably want to look at DB_CACHE_SIZE.