When loading a Spatial Network into memory, where is the memory actually occupied? On the client or the server side?
PS: examples for loading a network into memory:
PL/SQL:
sdo_net_mem.network_manager.read_network(net_mem=>'XXX', allow_updates=>'TRUE');
Java:
NetworkMetadata metadata = LODNetworkManager.getNetworkMetadata(sql.getConnection(), "XXX", "XXX");
NetworkIO networkIO = LODNetworkManager.getNetworkIO(sql.getConnection(), "XXX", "XXX", metadata);
networkIO.readLogicalNetwork(1);
When using the LOD APIs, the memory is allocated on the client side, i.e. wherever your client application is running. Please check this whitepaper: A Load-On-Demand Approach to Handling Large Networks in the Oracle Spatial Network Data Model
It's on the client (i.e. the host application). If you're using PL/SQL, then the database itself is the host application. If you're using Java and run your code on an application server, then it's on the application server.
The recommended approach is LOD. In contrast to the in-memory API, you can fine-tune how big the partitions are and how many are loaded into memory at once, so you have good control over memory consumption. In-memory can be considered a corner case of LOD with just one unlimited partition and everything loaded into memory.
The drawback of LOD is the necessity to partition your network.
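For reference, a partitioning call might look roughly like this. This is a sketch only: 'XXX' and the table/log names are placeholders matching the examples above, 5000 is an arbitrary partition size, and the exact SDO_NET signature varies by release, so check the Spatial documentation for your version.

```sql
-- Hedged sketch: partition the logical network 'XXX' so that no
-- partition holds more than 5000 nodes. All names are placeholders.
begin
  sdo_net.logical_partition(
    network              => 'XXX',
    partition_table_name => 'XXX_PART',
    max_num_nodes        => 5000,
    log_loc              => 'LOG_DIR',       -- assumed directory location
    log_file             => 'partition.log');
end;
/
```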
I know that the SGA (which contains data and control information for one Oracle Database instance) stands for System Global Area, and the PGA (which contains data and control information for the exclusive use of an Oracle process) stands for Program Global Area, but I don't really understand what these settings do for the database. How would it help data retrieval if the SGA is configured, say, 10 times larger than the PGA?
The SGA is a memory structure on the server that contains pools to hold code, SQL, classes, cursors, etc. and caches to hold data. So when a client sends a query to the server, the code and data sits in the SGA to get processed by the RDBMS on the server.
The PGA is a private memory area for each server process and is used for temporary storage and work areas. Oracle uses the PGA and temp tablespaces to produce a result set, which is passed back to the client; then the PGA for the session is freed.
There is no fixed ratio between the two. The SGA is sized according to how much code and data is sent to the server, and total PGA usage is dynamic, depending on how many processes are active. If there are thousands of processes, the combined PGA can easily be double the SGA. The SGA should be sized VERY carefully though; making it bigger does not necessarily make performance better.
There is also a UGA (User Global Area), which is the per-session memory area; with dedicated servers it lives in the PGA, with shared servers in the SGA.
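To see the actual numbers on a running instance, you can query the dynamic performance views (V$ privileges required; V$SGAINFO exists in 10g and later):

```sql
-- SGA size and its components:
select * from v$sga;
select * from v$sgainfo;

-- Current PGA usage across all processes:
select name, round(value / 1024 / 1024) as mb
from v$pgastat
where name in ('total PGA allocated', 'aggregate PGA target parameter');
```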
I was reading some interesting stuff about JDBC pre-fetch size, but I cannot find any answer to a few questions:
The Java app I'm working on is designed to fetch rows from cursors opened and returned by functions within PL/SQL packages. I was wondering whether the pre-fetch default setting of the JDBC driver actually affects the fetching process, given that the SQL statements are parsed and the cursors opened inside the Oracle database. I tried setting the fetch size in the JBoss configuration file and printing the value back with getFetchSize(). The new value (100, just for testing purposes) was returned, but I see no difference in how the application performs.
I also read that pre-fetching enhances performance by reducing the number of round-trips between the client and the database server, but how can I measure the number of round-trips in order to verify and quantify the actual benefit I can get by tuning the pre-fetch size?
Yes, the Oracle JDBC thin driver will use the configured prefetch size when fetching from any cursor, whether the cursor was opened by the client or from within a stored procedure.
The easiest way to count the roundtrips is to look at the sqlnet trace. You can turn on sqlnet tracing on the server side by adding trace_level_server = 16 to your sqlnet.ora file (on the server, that is, as JDBC thin doesn't use sqlnet.ora). Each foreground process will then dump the network traffic into a trace file. You can then see the network packets exchanged with the client and count the roundtrips. By default the driver fetches rows 10 by 10. But since you have increased the fetch size to 100, it should fetch up to that number of rows in one single roundtrip.
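For example, the server-side sqlnet.ora might contain the following (the trace directory is a made-up path; level-16 traces are verbose, so remove the setting when you are done):

```
# server-side sqlnet.ora
# (the trace directory below is a hypothetical path)
TRACE_LEVEL_SERVER = 16
TRACE_DIRECTORY_SERVER = /tmp/sqlnet_traces
```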
Note that unless your client is far away from your server (significant ping time), the cost of a roundtrip won't be high, and unless you're fetching a very high number of rows (tens of thousands), you won't see much difference in performance from increasing the fetch size. The default of 10 usually works fine for most OLTP applications. If your client is far away, you can also consider increasing the SDU size (the maximum size of a SQL*Net packet). The default is 8KB, but you can increase it up to 2MB in 12.2.
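As a back-of-the-envelope sketch (pure arithmetic, no database involved): the number of fetch roundtrips is roughly ceil(rows / fetchSize), so the fetch size only matters when both the row count and the per-roundtrip latency are large.

```java
// Rough model: fetching N rows with fetch size F costs about ceil(N / F) roundtrips.
public class FetchSizeMath {

    // Integer ceiling division: (rows + fetchSize - 1) / fetchSize.
    public static long roundtrips(long rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize;
    }

    public static void main(String[] args) {
        // 100,000 rows: default fetch size 10 vs 100.
        System.out.println(roundtrips(100_000, 10));  // 10000
        System.out.println(roundtrips(100_000, 100)); // 1000
        // A typical OLTP result set: the difference is negligible.
        System.out.println(roundtrips(50, 10));       // 5
        System.out.println(roundtrips(50, 100));      // 1
        // With a 1 ms ping, 100,000 rows at fetch size 10 spend ~10 s on roundtrips alone.
    }
}
```

This is why the answer above says the default is fine for OLTP: at 50 rows you save four roundtrips, at 100,000 rows you save nine thousand.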
When I run a query to copy data from schemas, does it perform all SQL on the server end or copy data to a local application and then push it back out to the DB?
The two tables sit in the same DB, but the DB is accessed through a VPN. Would it change if it was across databases?
For instance (running in Toad Data Point):
create table schema2.table2 as
select sum(row1) as row1_sum,
       row2
from schema1.table1
group by row2;
The reason I ask is that I'm getting quotes for a virtual machine in Azure Cloud and want to make sure I'm not going to break the bank on data costs.
The processing of SQL statements on the same database usually takes place entirely on the server and generates little network traffic.
In Oracle, schemas are a logical object. There is no physical barrier between them. In a SQL query using two tables it makes no difference if those tables are in the same schema or in different schemas (other than privilege issues).
Some exceptions:
Real Application Clusters (RAC) - RAC may share a huge amount of data between the nodes. For example, if the table was cached on one node and the processing happened on another, it could send all the table data through the network. (I'm not sure how this works on the cloud though. Normally the inter-node traffic is done with a separate, dedicated network connection.)
Database links - It should be obvious if your application is using database links though.
Oracle Reports and Forms(?) - A few rare tools have client-side PL/SQL processing. Possibly those programs might send data to the client for processing. But I still doubt it would do something crazy like send an entire table to the client to be sorted, and then return the results to the server.
Backups/archive logs - I assume all the data will be backed up. I'm not sure how that's counted, but possibly that means all data written will also be counted as network traffic eventually.
The queries below are examples of different ways to check the network traffic being generated.
--SQL*Net bytes sent for a session.
select *
from gv$sesstat s
join gv$statname n
  on s.inst_id = n.inst_id
 and s.statistic# = n.statistic#
--You probably also want to filter for a specific INST_ID and SID here.
where lower(n.display_name) like '%sql*net%';
--SQL*Net bytes sent for the entire system.
select *
from gv$sysstat
where lower(name) like '%sql*net%'
order by value desc;
I've got an Oracle database that is used as storage for web services. Most of the time the data is read-only and cached in RAM directly by the service. However, during system startup all data is pulled once from Oracle, and the database tries to be smart and keeps the data in RAM (1GB).
How can I limit/control the amount of RAM available to the Oracle 9 instance?
The short answer is: modify SGA_MAX_SIZE. The long one follows.
If you are referring to the "data", you have to check DB_CACHE_SIZE (the size of the buffer cache) and, related to it, SGA_MAX_SIZE (the maximum memory usage of the SGA for the instance).
Because SGA_MAX_SIZE refers to the whole SGA (buffer cache, shared pool, and redo buffers), if you want to shrink the buffer cache you also have to decrease SGA_MAX_SIZE.
Take a look at Setting Initialization Parameters that Affect the Size of the SGA, or give more details.
There are several database parameters that control memory usage in Oracle. Here is a reasonable starting point - it's not a trivial exercise to get it right. In particular, you probably want to look at DB_CACHE_SIZE.
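A minimal sketch of the change (the sizes are placeholders, not recommendations; this assumes the instance uses an spfile, otherwise edit the init.ora directly). SGA_MAX_SIZE is a static parameter, so the instance must be restarted for it to take effect:

```sql
alter system set db_cache_size = 256M scope = spfile;
alter system set sga_max_size  = 512M scope = spfile;
-- shutdown immediate; startup;  -- restart to pick up the new SGA_MAX_SIZE
```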
Any ideas on loading data from the database into shared memory? The idea is to speed up data retrieval from frequently used tables.
The server will automatically cache frequently used tables, so I would not optimize on the server side. Now, if the client is querying remotely, you might consider copying the data to a local database (like the free SQL Server Express).
You are talking about a cache. It is easily implemented, but there are some tricks you need to remember:
You will need to log changes in the underlying table and reload the cache when they happen (e.g. by polling a change table).
Some operations might be faster inside the database than in your own memory structure.
(If you are interested in fast data access with no work at all, there are in-memory databases that can do the trick for you.)
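The "poll a change table" trick can be sketched like this. It is a hypothetical illustration only: the base table and the change counter are stubbed with in-memory structures instead of a real database, and all names are made up.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

/** Minimal sketch of a client-side cache that reloads when a change counter advances. */
public class PollingCache {
    private final AtomicLong sourceVersion = new AtomicLong(); // stands in for a change table
    private final Map<Integer, String> source = new HashMap<>(); // stands in for the real table
    private long cachedVersion = -1;
    private Map<Integer, String> cache = Map.of();

    // A write to the "table" also logs a change by bumping the counter.
    public void update(int key, String value) {
        source.put(key, value);
        sourceVersion.incrementAndGet();
    }

    // Reads poll the change counter and reload the cache when it is stale.
    public String get(int key) {
        long v = sourceVersion.get();
        if (v != cachedVersion) {
            cache = new HashMap<>(source); // reload the whole cache
            cachedVersion = v;
        }
        return cache.get(key);
    }

    public static void main(String[] args) {
        PollingCache c = new PollingCache();
        c.update(1, "a");
        System.out.println(c.get(1)); // a
        c.update(1, "b");
        System.out.println(c.get(1)); // b (the cache reloaded)
    }
}
```

In a real system the counter would be a timestamp or sequence value read from the change table on a timer, rather than checked on every get.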