Is there a limit to DB2 JDBC transaction size? - jdbc

I need to insert a large number of rows (up to 100,000) into 6 different DB2 tables. I am using Java JDBC to do it. I would like to do it all in a single database transaction so that the whole thing can be rolled back if any issues are encountered. Is there a limit somewhere (in JDBC driver or in DB2) to how many inserts can be handled in a single transaction? We are using DB2 version 8.

The size of a single transaction is limited by the size of the database transaction logs. With a sufficiently large transaction log you can do what you are asking.
You don't say what platform you are running DB2 on, but for Linux/UNIX/Windows the transaction log size is controlled by three database configuration parameters: LOGFILSIZ (the size of each transaction log file), LOGPRIMARY (the number of primary transaction log files), and LOGSECOND (the number of secondary transaction log files).
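On the JDBC side, a minimal sketch of doing the whole load as a single transaction might look like the following (the driver class, URL, table and column names are placeholders for illustration, not taken from the question):
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SingleTransactionLoad {
    public static void main(String[] args) throws Exception {
        Class.forName("com.ibm.db2.jcc.DB2Driver");   // older drivers need explicit loading
        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://dbhost:50000/MYDB", "user", "password")) {
            con.setAutoCommit(false);                 // everything below is one transaction
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO TABLE1 (ID, PAYLOAD) VALUES (?, ?)")) {
                for (int i = 0; i < 100000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row " + i);
                    ps.addBatch();
                    if (i % 1000 == 0) {
                        ps.executeBatch();            // sends rows to the server, still uncommitted
                    }
                }
                ps.executeBatch();
                // ... repeat for the other five tables on the same connection ...
                con.commit();                         // all inserts become durable together
            } catch (SQLException e) {
                con.rollback();                       // any failure undoes every insert
                throw e;
            }
        }
    }
}
Batching only reduces network round trips; every uncommitted insert still consumes transaction log space until the final commit, which is why the log configuration above is the real limit.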

Related

The memory usage in oracle server when using jdbc setfetchsize

When I use the setFetchSize() method on a select statement, for example
select * from tablename
against a large table with the Oracle JDBC driver, it actually limits the memory usage in the JDBC client.
However, what I am curious about is: will this statement cause the Oracle server to store all the rows in server memory, ignoring the fetch size, and so lead to an out-of-memory condition on the Oracle server?
No. When Oracle processes the cursor (the select), it does not pull all the rows of the table into memory at once.
Oracle has a complex and robust architecture for this.
Oracle has a number of criteria for classifying a table as "large" or "small".
When a cursor is used normally (through the SQL engine), it is not possible to get an out-of-memory condition in the server process.
For example, if your server-side code processes data through PL/SQL collections, you can fetch data into your server process without limiting the number of rows retrieved, and if the server process reaches the PGA limit (PGA_AGGREGATE_LIMIT), the process will be terminated (after which all resources occupied by the process are freed).
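To make the client-side behaviour concrete, here is a minimal JDBC sketch (connection details and table name are placeholders): the fetch size only controls how many rows the driver pulls from the open cursor per round trip, so the client holds at most one batch of rows in memory while the server keeps the cursor open and produces rows as they are fetched.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamLargeTable {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "user", "password");
             Statement stmt = con.createStatement()) {
            stmt.setFetchSize(500);          // rows per round trip, not a limit on the result
            try (ResultSet rs = stmt.executeQuery("select * from tablename")) {
                long count = 0;
                while (rs.next()) {
                    // process one row at a time; the driver refills its buffer
                    // every 500 rows, so client memory use stays roughly constant
                    count++;
                }
                System.out.println("Processed " + count + " rows");
            }
        }
    }
}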
This topic is not simple to explain in a single post, given how much database machinery is involved.
If you are interested in understanding it in more detail, I think the following links may be useful.
Additional links:
SQL Processing
Working with Cursors
Oracle Relational Data Structures
Oracle Data Access
Oracle Database Storage Structures
Process Architecture

Oracle Advanced Queues versus a Small Oracle Database Table

I'm looking for a simple way to communicate between two databases; a database link currently exists between the two.
I want to run a job on database 1 for a batch of records (with a batch code for each batch of records). Once the process has finished on database 1 and all the batches of records have been processed, I want database 2 to see that database 1 has processed a number of batches (batch codes), either by querying an Oracle table or an Oracle Advanced Queue which sits on either database 1 or database 2.
Database 2 will process the batches of records that are on database 1 through a database-linked view, using each batch code, and update the status of each batch to complete.
I want to be able to update the Oracle Advanced Queue or database table with each batch's batch number, progress status ('S' = started, 'C' = completed) and status date.
Table name: batch_records
Table columns: Batch No, Status, status date
Questions:
Can this be done by a simple database table rather than a complex Oracle Advanced Queue?
Can a table be updated over a database link?
Are there any examples of this?
To answer your questions first:
yes, I believe so
yes, it can. But, if there are many rows involved, it can be pretty slow
probably
A database link is the way to communicate between two databases. If those jobs run on database 1 (DB1), I'd suggest you keep the work there - in DB1. Doing everything over a database link invites problems of various kinds: it might be slow, and you can't do everything over a database link (LOBs, for example). One option is to schedule a job (using DBMS_SCHEDULER or DBMS_JOB (which is quite OK for simple things)). Let the procedure maintain job status in some table (that would be the "simple table" from your 1st question) in DB1, which will be read by DB2.
How? Read it directly, or create a materialized view which is refreshed on a schedule (e.g. every morning at 07:00), on demand (not that good an idea), or on commit (once the DB1 procedure does the job and commits its changes, the materialized view will be refreshed).
If there aren't that many rows involved, I'd probably read the DB1 status table directly, and think about other options later (if necessary).
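As a sketch of the "simple table" approach, this is roughly how code connected to database 2 could poll the status table on database 1 over the database link; the table and column names are adapted from the question, and the link name db1_link plus the connection details are assumptions:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BatchStatusPoller {
    public static void main(String[] args) throws Exception {
        // Connect to database 2; db1_link is a hypothetical database link to database 1.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//db2host:1521/DB2SVC", "user", "password");
             PreparedStatement ps = con.prepareStatement(
                 "select batch_no, status, status_date "
               + "from batch_records@db1_link "
               + "where status = 'C'")) {            // 'C' = completed on database 1
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    long batchNo = rs.getLong("batch_no");
                    // hand the completed batch to the DB2-side processing here
                    System.out.println("Batch ready for processing: " + batchNo);
                }
            }
        }
    }
}
If you go the materialized-view route instead, the query would run against the local materialized view rather than batch_records@db1_link, and the refresh schedule determines how fresh the status information is.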

The proper way to record DML and DDL changes to specified tables, schemas or entire oracle database

I am looking for a way to record DML and DDL changes made to specified Oracle schemas or tables dynamically, meaning that the monitored schemas and tables can be changed at application run time.
In short, I am trying to build an Oracle database probe, not to synchronize databases.
Updated
For example, I set a monitor on a table test in database db. I want to retrieve all changes made to test, such as dropping/adding/modifying a column or inserting/updating/deleting records, and I need to analyze those changes and send them to a blockchain (for example, the fact that table test added a column field1); that's why I want to get all executed SQL for the monitored tables.
I have read Oracle docs about data guard and streams.
Data guard doc says:
SQL Apply (logical standby databases only)
Reconstitutes SQL statements from the redo received from the primary database and executes the SQL statements against the logical standby database.
Logical standby databases can be opened in read/write mode, but the target tables being maintained by the logical standby database are opened in read-only mode for reporting purposes (providing the database guard was set appropriately). SQL Apply enables you to use the logical standby database for reporting activities, even while SQL statements are being applied.
Stream doc says:
Oracle Streams provides two ways to capture database changes implicitly: capture processes and synchronous captures. A capture process can capture DML changes made to tables, schemas, or an entire database, and DDL changes. A synchronous capture can capture DML changes made to tables. Rules determine which changes are captured by a capture process or synchronous capture.
Before this, I had already tried to get the SQL changes by analyzing the redo log with Oracle LogMiner, and I eventually got it working.
Oracle Streams seems to be the most appropriate way to achieve my purpose, but the implementation steps are complicated and manual. In fact, there is an open-source project for MySQL published by Alibaba named canal: canal presents itself as a replica, so that MySQL dumps the binlog and pushes it to the canal service, and canal then reconstitutes the original SQL from the binlog.
I think an Oracle standby database is similar to a MySQL replica, so the probe could be implemented in a similar way. I therefore want to use the Data Guard approach, but I don't want to analyze the redo log myself, since that needs administrator privileges to shut down the database and enable some features, and in production I only have a read-only user. I want to use a logical standby database, but the problem is that I don't see how to get at the reconstituted SQL statements described above.
So, can any experts offer some suggestions?
Anyway thanks a lot.

Verify an Oracle database rollback action is successful

How can I verify an Oracle database rollback action is successful? Can I use Number of rows in activity log and Number of rows in event log?
V$TRANSACTION does not contain historical information but it does contain information about all active transactions. In practice this is often enough to quickly and easily monitor rollbacks and estimate when they will complete.
Specifically the columns USED_UBLK and USED_UREC contain the number of UNDO blocks and records remaining. USED_UREC is not always the same as the number of rows; sometimes the number is higher because it includes index entries and sometimes the number is lower because it groups inserts together.
During a long rollback those numbers will decrease until they hit 0. No rows in the table imply that the transactions successfully committed or rolled back. Below is a simple example.
create table table1(a number);
create index table1_idx on table1(a);
insert into table1 values(1);
insert into table1 values(1);
insert into table1 values(1);
select used_ublk, used_urec, ses_addr from v$transaction;
USED_UBLK USED_UREC SES_ADDR
--------- --------- --------
1 6 000007FF1C5A8EA0
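If you want to watch a rollback from a JDBC client, a minimal sketch along the same lines (assuming a user that can select from V$TRANSACTION, and using the SES_ADDR of the transaction you are watching; connection details are placeholders) is to poll those columns until the row disappears:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RollbackMonitor {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "monitor_user", "password");
             PreparedStatement ps = con.prepareStatement(
                 "select used_ublk, used_urec from v$transaction where ses_addr = ?")) {
            ps.setString(1, "000007FF1C5A8EA0");   // the SES_ADDR of the transaction to watch
            while (true) {
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        System.out.println("Transaction gone: commit or rollback has completed");
                        break;
                    }
                    System.out.println("Undo blocks left: " + rs.getLong(1)
                            + ", undo records left: " + rs.getLong(2));
                }
                Thread.sleep(5000);                // poll every few seconds
            }
        }
    }
}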
Oracle LogMiner, which is part of Oracle Database, enables you to query online and archived redo log files through a SQL interface. Redo log files contain information about the history of activity on a database.
LogMiner Benefits
All changes made to user data or to the database dictionary are recorded in the Oracle redo log files so that database recovery operations can be performed.
Because LogMiner provides a well-defined, easy-to-use, and comprehensive relational interface to redo log files, it can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. The following list describes some key capabilities of LogMiner:
Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. These might include errors such as those where the wrong rows were deleted because of incorrect values in a WHERE clause, rows were updated with incorrect values, the wrong index was dropped, and so forth. For example, a user application could mistakenly update a database to give all employees 100 percent salary increases rather than 10 percent increases, or a database administrator (DBA) could accidentally delete a critical system table. It is important to know exactly when an error was made so that you know when to initiate time-based or change-based recovery. This enables you to restore the database to the state it was in just before corruption. See Querying V$LOGMNR_CONTENTS Based on Column Values for details about how you can use LogMiner to accomplish this.
Determining what actions you would have to take to perform fine-grained recovery at the transaction level. If you fully understand and take into account existing dependencies, it may be possible to perform a table-specific undo operation to return the table to its original state. This is achieved by applying table-specific reconstructed SQL statements that LogMiner provides in the reverse order from which they were originally issued. See Scenario 1: Using LogMiner to Track Changes Made by a Specific User for an example. Normally you would have to restore the table to its previous state, and then apply an archived redo log file to roll it forward.
Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes. See Scenario 2: Using LogMiner to Calculate Table Access Statistics for an example.
Performing postauditing. LogMiner can be used to track any data manipulation language (DML) and data definition language (DDL) statements executed on the database, the order in which they were executed, and who executed them. (However, to use LogMiner for such a purpose, you need to have an idea when the event occurred so that you can specify the appropriate logs for analysis; otherwise you might have to mine a large number of redo log files, which can take a long time. Consider using LogMiner as a complementary activity to auditing database use. See the Oracle Database Administrator's Guide for information about database auditing.)
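As a rough illustration of driving LogMiner from JDBC, a minimal sketch (assuming a user with EXECUTE on DBMS_LOGMNR and access to V$LOGMNR_CONTENTS; the log file path, schema and table names are placeholders) could look like this:
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LogMinerSketch {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "miner", "password")) {
            // Register one redo/archive log file and start LogMiner using the online catalog.
            try (CallableStatement cs = con.prepareCall(
                    "begin"
                  + "  dbms_logmnr.add_logfile(logfilename => ?, options => dbms_logmnr.new);"
                  + "  dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);"
                  + " end;")) {
                cs.setString(1, "/path/to/redo_or_archive.log");   // placeholder path
                cs.execute();
            }
            // Pull the reconstructed SQL for the statements recorded in that log.
            try (Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "select username, operation, sql_redo "
                   + "from v$logmnr_contents "
                   + "where seg_owner = 'APP' and seg_name = 'TABLE1'")) {   // filters are examples
                while (rs.next()) {
                    System.out.println(rs.getString("username") + " "
                            + rs.getString("operation") + ": " + rs.getString("sql_redo"));
                }
            }
            // End the LogMiner session.
            try (CallableStatement cs = con.prepareCall("begin dbms_logmnr.end_logmnr; end;")) {
                cs.execute();
            }
        }
    }
}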
Enjoy.

TimesTen - correct way to reinstall schema

I have a TimesTen local store which uses Cache Connect to an Oracle data store.
Sometimes I need to drop the whole Oracle schema (entity changes etc.), so I simply drop every table and recreate it.
The problem I'm facing at this stage is that I get infinite XLA messages
(on the TimesTen side) for every entity in every table (I get update, add and delete events).
To solve the problem I have to truncate the inner Oracle tables.
I understand that dropping a cached table without doing something with the cache group is problematic.
What is the right way to drop an entire schema?
Is truncating the TimesTen inner tables a good solution?
Thanks,
Udi
There are two issues here:
The best way to change or drop an Oracle schema when TimesTen Cache groups use that schema:
When an Oracle schema needs to be modified or dropped, you should always first stop and drop the associated TimesTen cache groups. Once the Oracle schema has been modified or re-created, you should then re-create and start the associated TimesTen cache groups.
Dealing with unwanted XLA messages:
XLA is an asynchronous way to see committed inserts/updates/deletes/merges and DDL on tables of interest.
If you know that you want to drop a table, but do not want to see the XLA messages associated with that table while it is being re-created, then you can just stop the C, C++ or Java program that is listening for those XLA messages. If you have to keep your XLA listening program running while the table is dropped, you can use the XLA API to stop listening to that table until it has been re-created.
