SGA and PGA in Oracle Database Configuration

I know that SGA (which contains data and control information for one Oracle Database instance) stands for System Global Area, and PGA (which contains data and control information exclusively for use by an Oracle process) stands for Program Global Area, but I don't really understand what these settings actually do for the database. How would it help when retrieving data if the SGA is configured to be, say, 10 times larger than the PGA?

The SGA is a memory structure on the server that contains pools to hold code, SQL, classes, cursors, etc., and caches to hold data. So when a client sends a query to the server, the code and data sit in the SGA to be processed by the RDBMS on the server.
The PGA is a private (non-shared) memory area for a single server process and is used for temporary storage and work areas. Oracle uses the PGA (and temp tablespaces) to work towards a result set, which is passed back to the client; the PGA for the session is then freed.
There is no fixed ratio between the two. The SGA is sized according to how much code and data is sent to the server, while total PGA usage is dynamic, growing with the number of active processes. If there are thousands of processes, aggregate PGA can easily be double the SGA. The SGA should be sized VERY carefully, though; making it bigger does not necessarily make performance better.
There is also a UGA (User Global Area), which is the memory area for each client (non-server) process.
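
If you want to see the two areas on a live instance, you can query the V$SGA and V$PGASTAT dynamic views. A minimal JDBC sketch (the connection string and credentials are placeholders, and you need SELECT privilege on the V$ views):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MemoryAreas {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust host, service, and credentials.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             Statement stmt = conn.createStatement()) {

            // V$SGA breaks the SGA down into fixed area, variable area,
            // database buffers, and redo buffers (values in bytes).
            try (ResultSet rs = stmt.executeQuery("SELECT name, value FROM v$sga")) {
                while (rs.next()) {
                    System.out.printf("SGA  %-25s %,15d%n",
                            rs.getString("name"), rs.getLong("value"));
                }
            }

            // V$PGASTAT shows aggregate PGA usage across all server processes.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT name, value FROM v$pgastat WHERE name LIKE 'total PGA%'")) {
                while (rs.next()) {
                    System.out.printf("PGA  %-25s %,15d%n",
                            rs.getString("name"), rs.getLong("value"));
                }
            }
        }
    }
}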

Related

Best practices for loading large data from Oracle to Tabular model

We have created some new SSAS Tabular models which fetch data directly from Oracle. But after some testing, we found that with real customer data (a few million rows), the processing times get close to 4 hours. Our goal is to keep them under about 15 minutes (due to existing system performance). We fetch from Oracle tables, so query performance is not the bottleneck.
Are there any general design guides/best practices to handle such a scenario?
Check your application-side array fetch size, as you could be experiencing network latency.
Array fetch size note:
As per the Oracle documentation, the fetch buffer size is an application-side memory setting that affects the number of rows returned by a single fetch. Generally, you balance the number of rows returned by a single fetch (a.k.a. the array fetch size) against the total number of rows that need to be fetched.
A low array fetch size compared to the number of rows to be returned will manifest as delays from the increased network and client-side processing needed for each fetch (i.e. the high cost of each network round trip in the SQL*Net protocol).
If this is the case, on the Oracle side you will likely see very high waits on “SQL*Net message from client”. [This wait event is posted by the session when it is waiting for a message from the client to arrive. Generally, this means that the session is just sitting idle; however, in a client/server environment it could also mean that either the client process is running slowly or there are network latency delays. Database performance is not degraded by high wait times for this wait event.]
As I like to say, “SQL*Net is a chatty protocol”; so even though Oracle may be done with its processing of the query, excessive network round trips result in slower response times on the client side. One should suspect that a low array fetch size is contributing to the slowness if the elapsed time to get the data into the application is much longer than the elapsed time for the DB to run the SQL; in this case, app-side processing time can also be a contributing factor [you can look into app-specific ways to troubleshoot/tune app-side processing].
Array fetch size is not an attribute of the Oracle account, nor is it an Oracle-side session setting. It can only be set at the client; there is no DB setting for the array fetch size the client will use. Every client application has a different mechanism for specifying it:
- Informatica: ?? config. file param ??? setting at the connection or result set level ??
- Cognos: http://www-01.ibm.com/support/docview.wss?uid=swg21981559
- SQL*Plus: set arraysize n
- Java/JDBC: setFetchSize(int rows), a method on the Statement, PreparedStatement, CallableStatement, and ResultSet objects; alternatively, set the “defaultRowPrefetch” property via the Properties object's put method (see the sketch below). References: http://download.oracle.com/otn_hosted_doc/jdeveloper/905/jdbc-javadoc/oracle/jdbc/OracleDriver.html and http://www.oracle.com/technetwork/database/enterprise-edition/jdbc-faq-090281.html
- .NET: per the Oracle .NET Developer's Guide, the FetchSize property represents the total memory size in bytes that ODP.NET allocates to cache the data fetched from a database round trip. The FetchSize property can be set on the OracleCommand, OracleDataReader, or OracleRefCursor object, depending on the situation. It controls the fetch size for filling a DataSet or DataTable using an OracleDataAdapter.
- ODBC driver: ?? something like: SetRowsetSize ??
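
To make the Java/JDBC entry concrete, here is a minimal sketch showing both mechanisms: the driver-level “defaultRowPrefetch” connection property and the per-statement setFetchSize() override. The connection details and the big_table query are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Properties;

public class FetchSizeDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "scott");        // placeholder credentials
        props.put("password", "tiger");
        // Driver-level default: statements on this connection prefetch
        // 500 rows per round trip unless overridden.
        props.put("defaultRowPrefetch", "500");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", props);
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT id, payload FROM big_table")) {  // hypothetical table

            // Statement-level override: this query fetches 1000 rows per round trip.
            ps.setFetchSize(1000);

            try (ResultSet rs = ps.executeQuery()) {
                long rows = 0;
                while (rs.next()) {
                    rows++;  // real code would process each row here
                }
                System.out.println("Fetched " + rows + " rows");
            }
        }
    }
}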

JDBC pre-fetch when fetching SYS_REFCURSOR opened in an Oracle stored function

I was reading some interesting stuff about JDBC pre-fetch size, but I cannot find any answer to a few questions:
The Java app I'm working on is designed to fetch rows from cursors opened and returned by functions within PL/SQL packages. I was wondering whether the pre-fetch default setting of the JDBC driver actually affects the fetching process or not, given that the SQL statements are parsed and the cursors opened within the Oracle database. I tried setting the fetch size in the JBoss configuration file and printing the value set via setFetchSize(). The new value (100, just for testing purposes) was returned, but I see no difference in how the application performs.
I also read that pre-fetching enhances performance by reducing the number of round trips between the client and the database server, but how can I measure the number of round trips in order to verify and quantify the actual benefits I might get by tuning the pre-fetch size?
Yes, the Oracle JDBC thin driver will use the configured prefetch size when fetching from any cursor, whether the cursor was opened by the client or from within a stored procedure.
The easiest way to count the round trips is to look at the sqlnet trace. You can turn on sqlnet tracing on the server side by adding trace_level_server = 16 to your sqlnet.ora file (again, on the server, as JDBC thin doesn't use sqlnet.ora). Each foreground process will then dump the network traffic to a trace file. You can then see the network packets exchanged with the client and count the round trips. By default the driver fetches rows 10 by 10, but since you have increased the fetch size to 100 it should fetch up to that number of rows in a single round trip.
Note that unless your client is far away from your server (significant ping time), the cost of a round trip won't be high, and unless you're fetching a very high number of rows (tens of thousands) you won't see much difference in performance from increasing the fetch size. The default of 10 usually works fine for most OLTP applications. If your client is far away, you can also consider increasing the SDU size (the maximum size of a sqlnet packet). The default is 8k, but you can increase it up to 2MB in 12.2.
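
Short of reading a sqlnet trace, a rough way to quantify the effect is to time the same ref-cursor fetch at different fetch sizes. A sketch, assuming a hypothetical function my_pkg.get_rows returning SYS_REFCURSOR (timings are only indicative, since server-side caching affects repeat runs):

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import oracle.jdbc.OracleTypes;

public class RefCursorTiming {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
            for (int fetchSize : new int[] {10, 100, 1000}) {
                long start = System.nanoTime();
                long rows = 0;
                try (CallableStatement cs = conn.prepareCall(
                        "{ ? = call my_pkg.get_rows() }")) {  // hypothetical function
                    cs.registerOutParameter(1, OracleTypes.CURSOR);
                    cs.execute();
                    try (ResultSet rs = (ResultSet) cs.getObject(1)) {
                        rs.setFetchSize(fetchSize);  // applies to the ref cursor's fetches
                        while (rs.next()) {
                            rows++;
                        }
                    }
                }
                long ms = (System.nanoTime() - start) / 1_000_000;
                // Expect roughly ceil(rows / fetchSize) round trips per run.
                System.out.printf("fetchSize=%-5d rows=%-8d elapsed=%d ms%n",
                        fetchSize, rows, ms);
            }
        }
    }
}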

How does the Oracle DB writer decide whether or not to do multiblock / sequential writes

We have a test system which matches our production system like for like. 6 months ago we did some testing on new hardware, and found the performance limit of our system.
However, now we are re-doing the testing with a view to adding further hardware, and we have found the system doesn't perform as it used to.
The reason for this is that on one specific volume we are now doing random I/O which used to be sequential. Further to this, it has turned out that the activity on this volume by Oracle, which is 100% writes, is actually in 8k blocks, whereas before it was in writes of up to 128k.
So something has caused the Oracle DB writer to stop batching up its writes.
We've extensively checked our config and cannot see any difference between our test and production systems. We've also opened a call with Oracle, but at this stage information has been slow in coming.
So ultimately this is two related questions:
Can you rely on Oracle multiblock writes? Is that a safe thing to engineer/tune your system for?
Why would Oracle change its behaviour?
We're not at this stage necessarily blaming Oracle - it may well be reacting to something in the environment - but what?
The OS/arch is Solaris/SPARC.
Oh, I forgot to mention: the insert table has no indexes and only a couple of foreign keys - it's designed as a bucket for as fast an insert as possible. It's also partitioned on the key field.
Thanks for any tips!
More description of the workload would allow some hypotheses.
If you are updating random blocks, then the DBWR process(es) have little choice but to do single-block writes. Indexes especially are likely to have writes all over the place. If you have an index of character values and need to insert a new 'M' record where there isn't room, Oracle will get a new block for the index and split the current block. You'll have some of those 'M' records in the original block and some in the new block (which will be the last [used] block in the last extent).
I suspect you are most likely to get multi-block writes when bulk inserting into tables, as new blocks will be allocated and written to. Potentially, you initially had (say) 1 GB of extents allocated and were writing into that space. Now you might have reached the limit of that and be creating new extents (say 50 MB), which may be coming from scattered file locations (e.g. space freed by other tables that have been dropped). A rough way to check whether writes are being batched is sketched below.
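
One way to watch the batching is to compare the V$SYSSTAT statistics 'physical writes' (blocks written) and 'physical write IO requests' (write calls issued); dividing the first by the second gives the average number of blocks per write call. A hedged sketch, assuming SELECT privilege on V$SYSSTAT and a version recent enough to expose both statistics (connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class WriteBatchingCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name, value FROM v$sysstat " +
                     "WHERE name IN ('physical writes', 'physical write IO requests')")) {
            long blocks = 0, requests = 0;
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    if (rs.getString("name").equals("physical writes")) {
                        blocks = rs.getLong("value");    // blocks written since startup
                    } else {
                        requests = rs.getLong("value");  // write calls issued since startup
                    }
                }
            }
            if (requests > 0) {
                // Values near 1.0 suggest single-block writes; higher values
                // suggest DBWR is batching multiple blocks per I/O call.
                System.out.printf("avg blocks per write call: %.1f%n",
                        (double) blocks / requests);
            }
        }
    }
}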

How to limit RAM usage in Oracle 9

I've got an Oracle database that is used as storage for web services. Most of the time the data is in read-only mode and cached in RAM directly by the service. However, during system startup all the data is pulled once from Oracle, and the database tries to be smart and keeps the data in RAM (1 GB).
How can I limit/control the amount of RAM available to the Oracle 9 instance?
The short answer is: modify SGA_MAX_SIZE. The long one follows.
If you are referring to the "data", you have to check DB_CACHE_SIZE (the size of the memory buffers) and, related to this, SGA_MAX_SIZE (the maximum memory usage for the SGA of the instance).
Because SGA_MAX_SIZE covers all of the SGA memory (buffers, shared pool, and redo buffers), if you want to shrink the buffers you also have to decrease SGA_MAX_SIZE.
Take a look at Setting Initialization Parameters that Affect the Size of the SGA, or give more details.
There are several database parameters that control memory usage in Oracle, and it's not a trivial exercise to get them right. In particular, you probably want to look at DB_CACHE_SIZE.
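
To see what the instance is currently configured with before changing anything, you can query V$PARAMETER for the memory-related settings. A minimal sketch (connection details are placeholders; note that SGA_MAX_SIZE itself cannot be changed without restarting the instance):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class MemoryParams {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT name, value FROM v$parameter " +
                     "WHERE name IN ('sga_max_size', 'db_cache_size', 'shared_pool_size')")) {
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Values are reported as strings (bytes) by V$PARAMETER.
                    System.out.printf("%-18s = %s%n",
                            rs.getString("name"), rs.getString("value"));
                }
            }
        }
    }
}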

Loading data to shared memory from database tables

Any ideas about loading data from the database into shared memory? The idea is to speed up data retrieval from frequently used tables.
The server will automatically cache frequently used tables, so I would not optimize on the server side. Now, if the client is querying remotely, you might consider copying the data to a local database (like the free SQL Express).
You are talking about a cache.
It is easy to implement, but there are some tricks you need to remember:
You will need to log changes to the underlying table and reload the cache when they happen (poll a change table) - see the sketch after this answer.
Some operations might be faster inside the database than in your own memory structure.
(If you are interested in fast data access with no work at all, there are in-memory databases that can do the trick for you.)
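
As a concrete illustration of the change-table polling mentioned above, here is a minimal sketch. The change_log and lookup_table names and columns are hypothetical, and a production cache would need proper error handling:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.HashMap;
import java.util.Map;

public class PollingCache {
    private volatile Map<Long, String> cache = new HashMap<>();
    private long lastChangeId = 0;  // high-water mark in the change table

    // Poll the (hypothetical) change_log table; reload the cache if new
    // changes have been logged since the last poll.
    public void refreshIfChanged(Connection conn) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                     "SELECT NVL(MAX(change_id), 0) FROM change_log");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            long latest = rs.getLong(1);
            if (latest > lastChangeId) {
                cache = loadAll(conn);  // atomic reference swap, readers never block
                lastChangeId = latest;
            }
        }
    }

    // Full reload of the cached table; an incremental reload driven by the
    // change_log rows would also work for larger tables.
    private Map<Long, String> loadAll(Connection conn) throws Exception {
        Map<Long, String> fresh = new HashMap<>();
        try (PreparedStatement ps = conn.prepareStatement(
                     "SELECT id, value FROM lookup_table");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                fresh.put(rs.getLong("id"), rs.getString("value"));
            }
        }
        return fresh;
    }

    public String get(long id) {
        return cache.get(id);
    }
}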
