Memory usage on the Oracle server when using JDBC setFetchSize()

When I use the setFetchSize() method for a select statement, for example
select * from tablename
on a large table with the Oracle JDBC driver, it limits memory usage in the JDBC client.
However, what I am curious about is: will this statement cause the Oracle server to store all the rows in server memory, ignoring the fetch size, and so lead to an out-of-memory error on the Oracle server?

No. When Oracle processes the cursor (the SELECT), it does not pull all the rows of the table into memory at once.
Oracle has a complex and robust architecture.
Oracle applies a number of criteria to decide whether a table is "large" or "small".
When a cursor is used normally (through the SQL engine), you cannot get an out-of-memory condition in the server process this way.
For example, if your server-side code processes data through PL/SQL collections, you can fetch data in the server process without specifying a limit on the number of rows retrieved; if the server process then reaches the PGA limit (PGA_AGGREGATE_LIMIT), the process is terminated (and all resources occupied by the process are freed).
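As an illustration of that server-side behaviour, here is a minimal PL/SQL sketch (tablename is the table from the question; the LIMIT value of 500 is arbitrary) that bounds its memory use the same way the JDBC fetch size does on the client:
DECLARE
  CURSOR c IS SELECT * FROM tablename;
  TYPE t_rows IS TABLE OF c%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c;
  LOOP
    -- only up to 500 rows are held in the collection (PGA) at any time
    FETCH c BULK COLLECT INTO l_rows LIMIT 500;
    EXIT WHEN l_rows.COUNT = 0;
    -- ... process l_rows here ...
  END LOOP;
  CLOSE c;
END;
/
Without the LIMIT clause, a single BULK COLLECT would try to materialize the whole result set in the PGA, which is the PGA_AGGREGATE_LIMIT scenario mentioned above.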
This topic is not simple; the database mechanics are hard to explain fully in one post.
If you are interested in understanding this in more detail, I think the following links may be useful.
Additional links:
SQL Processing
Working with Cursors
Oracle Relational Data Structures
Oracle Data Access
Oracle Database Storage Structures
Process Architecture

Related

The proper way to record DML and DDL changes to specified tables, schemas or an entire Oracle database

I am looking for a way to record DML and DDL changes made to specified Oracle schemas or tables dynamically, meaning that the monitored schemas and tables can be changed at application run time.
In short, I want to build an Oracle database probe, not synchronize databases.
Updated
For example, I set a monitor on a table test in database db. I want to retrieve all changes made to test, such as dropping/adding/modifying a column or inserting/updating/deleting records. I need to analyze the changes and send them to a blockchain, e.g. "table test added a column field1"; that's why I want to get all executed SQL for the monitored tables.
I have read the Oracle docs about Data Guard and Streams.
The Data Guard doc says:
SQL Apply (logical standby databases only)
Reconstitutes SQL statements from the redo received from the primary database and executes the SQL statements against the logical standby database.
Logical standby databases can be opened in read/write mode, but the target tables being maintained by the logical standby database are opened in read-only mode for reporting purposes (providing the database guard was set appropriately). SQL Apply enables you to use the logical standby database for reporting activities, even while SQL statements are being applied.
The Streams doc says:
Oracle Streams provides two ways to capture database changes implicitly: capture processes and synchronous captures. A capture process can capture DML changes made to tables, schemas, or an entire database, and DDL changes. A synchronous capture can capture DML changes made to tables. Rules determine which changes are captured by a capture process or synchronous capture.
Before this, I had already tried to get the SQL changes by analyzing the redo log with Oracle LogMiner, and I finally got it working.
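For reference, a LogMiner session of the kind described above looks roughly like this (the log file path, schema and table names are placeholders, and it requires LogMiner privileges that a plain read-only user will not have):
BEGIN
  -- register a redo or archive log to mine (the path is a placeholder)
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/arch/arch_0001.log',
                          options     => DBMS_LOGMNR.NEW);
  -- use the online catalog as the LogMiner dictionary
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/
-- SQL_REDO holds the reconstituted statements for the monitored table
SELECT scn, timestamp, operation, sql_redo
FROM   v$logmnr_contents
WHERE  seg_owner = 'MY_SCHEMA'
AND    table_name = 'TEST';

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/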
Oracle Streams seems to be the most appropriate way to achieve my purpose, but its setup steps are complicated and manual. In fact, there is an open-source tool for MySQL published by Alibaba, named canal: canal presents itself as a replication slave so that MySQL dumps the binlog and pushes it to the canal service, and canal then reconstitutes the original SQL from the binlog.
I think an Oracle standby database is similar to a MySQL slave, so the probe could be implemented in a similar way. I would like to use the Data Guard approach, but I don't want to analyze the redo log myself, since that requires elevated privileges to shut down the database and enable certain features, and in production I only have a read-only user. I want to use a logical standby database, but the problem is that I don't see how to get the reconstituted SQL statements described above.
So, can any pros make some suggestions?
Thanks a lot.

SSIS - Iterating with SQL Server Data in ForEachLoop to Dataflow with Oracle Backend and Inserting Results to SQL Server

Hey EXPERIENCED SSIS DEVELOPERS, I need your help.
High-Level Requirements
Query a SQL Server table (on a different server than my SSIS server), resulting in a result set of about 200-300k records.
Use three output columns from each row to look up data in an Oracle database.
Insert or Update SQL Server table with results.
Use SSIS.
SQL Server 2008
Sounds easy, right?
Here is what I have done:
Created an Execute SQL Task on the Control Flow that gets a recordset from SQL Server. Very fast, easy query, like select field1, field2, field3 from table where condition > 0. That's it. Takes less than a second.
Created a variable (evaluated as an expression) for the Oracle query that uses the result set from the above in the WHERE clause (a sketch of the query shape appears after these steps).
Created a ForEachLoop Container that takes each row of the recordset (from #1 above) and runs it through a Data Flow that uses the Oracle query (from #2 above) with Data access mode: SQL command from variable against an Oracle data source. Fast, simple query with only about 6 columns returned.
Data Conversion - obvious reasons - changing 3 columns from Oracle data types to SQL Server data types.
OLE DB Destination to insert to SQL Server using Fast Load to staging table.
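For context, the Oracle query built by the variable expression in step 2 has roughly this shape for each row of the recordset (the table and column names here are made up):
SELECT col1, col2, col3, col4, col5, col6
FROM   ora_schema.lookup_table
WHERE  key1 = 'value-from-row-field1'
AND    key2 = 'value-from-row-field2'
AND    key3 = 'value-from-row-field3';
One such statement is issued per row of the recordset, which is why the loop feels like the bottleneck described below.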
It works perfectly! Hooray! Bad news - it is very, very slow. When I say slow, I mean it processes about 3,000 records per hour. Holy moly - so freaking slow.
Question: am I missing a way to speed it up? It seems like the ForEachLoop Container is the bottleneck. Growl.
Important Points:
- I have NO write access in the Oracle environment, so don't even suggest a potential solution that requires it. Not a possibility. At all.
- Oracle sources do not allow for direct parameter definition. So no SELECT FIELD FROM TABLE WHERE ?. Don't suggest it - it doesn't work.
Ideas
- Should I find a way to break down the results of the Execute SQL task and send them through several ForEachLoop Containers for faster processing?
- Is there another design that is more appropriate?
- Is there a script I can use that is faster?
- Would it be faster to create a temporary table in memory and populate it - then use the results to bulk insert to SQL Server? Does this work when using an Oracle data source?
- ANY OTHER IDEAS?

Why has the external table concept been established in Oracle?

SQL*Loader: Oracle uses this functionality, through the ORACLE_LOADER access driver, to move data from a flat file into the database;
Data Pump: it uses a Data Pump access driver to move data out of the database into a file in a proprietary Oracle format, and back into the database from files of that format.
Given that a data load can be done by either the SQL*Loader or Data Pump utilities, and a data unload can also be done by the Data Pump utility:
Are there any extra benefits that can be achieved by using external tables, that none of the previously mentioned utilities can do by themselves?
The Oracle table creation command below creates a table which looks like an ordinary Oracle table. Why then does Oracle tell us to call it an external table?
create table export_empl_info organization external
( type oracle_datapump
default directory xtern_data_dir
location ('empl_info_rpt.dmp')
) as select * from empl_info;
"Are there any extra benefits that can be achieved by using external
tables, that none of the previously mentioned utilities can do by
themselves?"
SQL*Loader and Data Pump both require us to load the data into tables before we can access it within the database, whereas external tables are accessed directly through SELECT statements. It's a much more flexible mechanism.
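For instance, a minimal ORACLE_LOADER sketch (the file and column names are hypothetical; xtern_data_dir is the directory object from the example above) where the data stays in the OS file and is simply queried in place:
CREATE TABLE empl_info_ext (
  empl_id   NUMBER,
  empl_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE oracle_loader
  DEFAULT DIRECTORY xtern_data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('empl_info.csv')
);

-- no separate load step: query the file like any other table
SELECT * FROM empl_info_ext WHERE empl_id > 100;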
"Why are then Oracle telling us to call it as an external table?"
umm, because it is external. The data resides in a file (or files) which is controlled by the OS. We can change the data in an external table by running an OS command like
$> cp whatever.csv external_table_data.csv
There's no redo, rollback, flashback query or any of the other appurtenances of an internal database table.
I think that the primary benefits of external tables for me have been:
i) Not having to execute a host command to import data, which supports a trend in Oracle of controlling the entire code base from inside the database. Preprocessing in 11g allows access to remote files through ftp, use of compressed files, combining multiple files into one, etc.
ii) More efficient loads, by means of applying complex data transformations during the load process: aggregations, merges, multi-table inserts, etc.
I've used it for data warehouse loads, but any scenario requiring loading of or access to standard data files is a candidate for use of external tables. SQL*Loader still has its place as a tool for loading to an Oracle database from a client or other host system. Data pump is for transfer of data between Oracle databases, so it's rather different.
One limitation of external tables is that they won't process stream data - records have to be delimited. This was true in 10.2; I'm not sure whether it has been permitted since then.
Use the data dictionary views ALL/DBA/USER_EXTERNAL_TABLES for information on them.
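For example, a quick check against the current schema (the column list is assumed from the standard dictionary view definition):
SELECT table_name, type_name, default_directory_name, reject_limit
FROM   user_external_tables;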
RE: Why external tables vs SQL*Loader for loading data? Mainly to have server-managed parallelism vs client-managed parallelism.

How to invalidate a SQL statement in the Oracle SQL area so that a new plan is produced when collecting statistics

I have a table and a query (within a PL/SQL package) accessing that table. Statistics are normally collected weekly.
A large update has been run on the table, resulting in significantly different data distribution on a particular indexed column. The query plan used by Oracle (which I can see from v$sqlarea) is sub-optimal. If I take an explain plan on the same* query from SQL*Plus, a good plan is returned.
I have since collected statistics on the table. Oracle is still using the query plan that it originally came up with. v$sqlarea.last_load_time suggests this was a plan generated prior to the statistics generation. I thought regenerating statistics would have invalidated plans in the SQL cache.
Is there any way to remove just this statement from the SQL cache?
(* Not character-for-character, matches-in-the-SQL-cache same, but the same statement).
If you are using 10.2.0.4 or later, you should be able to use the DBMS_SHARED_POOL package to purge a single cursor from the shared pool.
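Something along these lines (the SQL_ID is a placeholder; on 10.2.0.4 purging a single cursor may additionally require a patch/event to be enabled):
DECLARE
  l_name VARCHAR2(64);
BEGIN
  -- build the 'address,hash_value' identifier of the offending cursor
  SELECT address || ',' || hash_value
    INTO l_name
    FROM v$sqlarea
   WHERE sql_id = '7zv5b2cgpzg9q';        -- placeholder SQL_ID
  DBMS_SHARED_POOL.PURGE(l_name, 'C');    -- 'C' = purge a cursor
END;
/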
I found out (when researching something else) that what I should have done was to use
no_invalidate => FALSE
when collecting the statistics by calling gather_table_stats. This would have caused all SQL plans referencing the table to be invalidated immediately.
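In other words, something like this (the schema and table names are placeholders):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname       => 'APP_OWNER',    -- placeholder schema
    tabname       => 'BIG_TABLE',    -- placeholder table
    cascade       => TRUE,           -- gather index statistics as well
    no_invalidate => FALSE);         -- invalidate dependent cursors immediately
END;
/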
The Oracle docs say:
Does not invalidate the dependent cursors if set to TRUE. The procedure
invalidates the dependent cursors immediately if set to FALSE. Use
DBMS_STATS.AUTO_INVALIDATE. to have Oracle decide when to invalidate dependent
cursors. This is the default.
The default of AUTO_INVALIDATE seems to cause invalidation of SQL statements within the next 5 hours. This is to prevent a massive number of hard parses if you are collecting statistics on lots of objects.

TimesTen - correct way to reinstall schema

I have a TimesTen local store which uses Cache Connect to an Oracle data store.
Sometimes I need to drop the whole Oracle schema (entity changes, etc.), so I simply drop every table and recreate it.
The problem I'm facing at this stage is that I get infinite XLA messages
(on the TimesTen side) for every entity in every table (I get update, add and delete events).
To solve the problem I have to truncate the inner Oracle tables.
I understand that dropping a cached table without doing something with the cache group is problematic.
What is the right way to drop an entire schema?
Is truncating the TimesTen inner tables a good solution?
Thanks,
Udi
There are two issues here:
The best way to change or drop an Oracle schema when TimesTen Cache groups use that schema:
When an Oracle schema needs to be modified or dropped, you should always first stop and drop the associated TimesTen cache groups. Once the Oracle schema has been modified or re-created, you should then re-create and start the associated TimesTen cache groups.
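A minimal sketch of that sequence (the cache group name, table owner, columns and AUTOREFRESH options are placeholders; the exact cache group type depends on your configuration):
UNLOAD CACHE GROUP my_cg;   -- empty the cached TimesTen tables
DROP CACHE GROUP my_cg;     -- drop the cache group before touching Oracle

-- ... modify or re-create the Oracle schema here ...

CREATE READONLY CACHE GROUP my_cg
  AUTOREFRESH INTERVAL 10 SECONDS
  FROM scott.my_table (
    id  NUMBER NOT NULL,
    val VARCHAR2(100),
    PRIMARY KEY (id)
  );

LOAD CACHE GROUP my_cg COMMIT EVERY 256 ROWS;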
Dealing with unwanted XLA messages:
XLA is an asynchronous way to see committed inserts/updates/deletes/merges and DDL on tables of interest.
If you know that you want to drop a table, but do not want to see the XLA messages associated with that table while it is being re-created, then you can just stop the C, C++ or Java program that is listening for those XLA messages. If you have to keep your XLA listening program running while the table is dropped, you can use the XLA API to stop listening to that table until it has been re-created.
