TimesTen - correct way to reinstall schema - Oracle

I have a TimesTen local store which uses Cache Connect to an Oracle data store.
Sometimes I need to drop the whole Oracle schema (entity changes etc.), so I simply drop every table and recreate it.
The problem I'm facing at this stage is that I get infinite XLA messages
(on the TimesTen side) for every entity in every table (I get update, add and delete events).
To solve the problem I have to truncate the inner Oracle tables.
I understand that dropping a cached table without doing something with the cache group is problematic.
What is the right way to drop an entire schema?
Is truncating the TimesTen inner tables a good solution?
Thanks,
Udi

There are two issues here:
The best way to change or drop an Oracle schema when TimesTen Cache groups use that schema:
When an Oracle schema needs to be modified or dropped, you should always first stop and drop the associated TimesTen cache groups. Once the Oracle schema has been modified or re-created, you should then re-create and start the associated TimesTen cache groups.
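For illustration, a minimal sketch of that sequence on the TimesTen side might look like the following (the cache group, schema, table and column names are made up, and the exact CREATE CACHE GROUP options depend on your configuration):

-- drop the cache group (and its cached tables) before touching the Oracle schema
DROP CACHE GROUP cachegrp;
-- ... modify or re-create the schema on the Oracle side ...
-- then re-create the cache group and load it
CREATE READONLY CACHE GROUP cachegrp
AUTOREFRESH INTERVAL 5 MINUTES
FROM scott.emp
 (empno NUMBER(4) NOT NULL,
  ename VARCHAR2(10),
  PRIMARY KEY (empno));
LOAD CACHE GROUP cachegrp COMMIT EVERY 256 ROWS;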
Dealing with unwanted XLA messages:
XLA is an asynchronous way to see committed inserts/updates/deletes/merges and DDL on tables of interest.
If you know that you want to drop a table, but do not want to see the XLA messages associated with that table while it is being re-created, then you can just stop the C, C++ or Java program that is listening for those XLA messages. If you have to keep your XLA listening program running while the table is dropped, you can use the XLA API to stop listening to that table until it has been re-created.
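For instance, assuming your TimesTen release provides the ttXlaSubscribe / ttXlaUnsubscribe built-in procedures for table-level XLA tracking (the table and bookmark names below are hypothetical), the pattern would be roughly:

-- stop XLA change tracking for the table before dropping it
CALL ttXlaUnsubscribe('scott.emp', 'mybookmark');
-- ... drop and re-create the table (and its cache group) ...
-- resume XLA change tracking once the table exists again
CALL ttXlaSubscribe('scott.emp', 'mybookmark');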

Related

The proper way to record DML and DDL changes to specified tables, schemas or an entire Oracle database

I am looking for a way to record DML and DDL changes made to specified Oracle schemas or tables dynamically, meaning that the monitored schemas and tables can be changed at application run time.
In short, I want to build an Oracle database probe, not synchronize databases.
Updated
For example, I set a monitor on a table test in database db. I want to retrieve all changes made to test, such as dropping/adding/modifying a column or inserting/updating/deleting records. I need to analyze all of these changes and send them to a blockchain, e.g. "table test added a column field1"; that's why I want to get all executed SQL for the monitored tables.
I have read Oracle docs about data guard and streams.
The Data Guard doc says:
SQL Apply (logical standby databases only)
Reconstitutes SQL statements from the redo received from the primary database and executes the SQL statements against the logical standby database.
Logical standby databases can be opened in read/write mode, but the target tables being maintained by the logical standby database are opened in read-only mode for reporting purposes (providing the database guard was set appropriately). SQL Apply enables you to use the logical standby database for reporting activities, even while SQL statements are being applied.
The Streams doc says:
Oracle Streams provides two ways to capture database changes implicitly: capture processes and synchronous captures. A capture process can capture DML changes made to tables, schemas, or an entire database, and DDL changes. A synchronous capture can capture DML changes made to tables. Rules determine which changes are captured by a capture process or synchronous capture.
Before this, I had already managed to get the SQL changes by analyzing the redo log with Oracle LogMiner.
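A minimal sketch of that kind of LogMiner session (the owner DB, the table TEST, and mining the last hour via the online catalog are just illustrative; it also needs the LogMiner privileges and ARCHIVELOG mode) looks something like:

-- start LogMiner against the online dictionary, mining redo from the last hour
BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    starttime => SYSDATE - 1/24,
    options   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);
END;
/

-- the reconstituted SQL for the monitored table
SELECT scn, timestamp, operation, sql_redo
  FROM v$logmnr_contents
 WHERE seg_owner = 'DB' AND table_name = 'TEST';

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/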
Oracle Streams seems to be the most appropriate way to achieve my purpose, but its setup steps are too complicated and manual. In fact, there is an open-source project for MySQL published by Alibaba named canal: canal presents itself as a slave, so MySQL dumps the binlog and pushes it to the canal service, and canal then reconstitutes the original SQL from the binlog.
I think an Oracle standby database is like a MySQL slave, so the probe could be implemented in a similar way. I would therefore like to use the Data Guard approach, but I don't want to analyze the redo log myself, since that requires privileges to shut down the database and enable some features, whereas in production I only have a read-only user. I want to use a logical standby database, but the problem is that I don't see how to get the reconstituted SQL statements described above.
So, can any pros make some suggestions?
Anyway thanks a lot.

What is the use of the disable operation in HBase?

I know it is to disallow anyone from performing any operation on a table when a schema change is going to be made.
> disable 'table_name'
But I want more clarification on it. Why should we disallow others from performing any operation on it? Is it just because wrong or unexpected results would be returned if a query is made while a schema change is in progress?
HBase is a strictly consistent NoSQL database for reads and writes.
So maintaining consistency is very important for HBase during DB operations.
HBase requires a table to be disabled before altering its schema or dropping it.
HBase doesn't have a protocol to tell all the regions to pick up a schema change online, so the table has to be disabled before it is altered.
Dropping an HBase table is a two-step procedure:
Closing all the regions, i.e. disabling the table.
Dropping them, i.e. dropping the table.
So we must disable all operations, except a few such as list, is_enabled, is_disabled etc., on the table before dropping it.

How to check whether a delete has occurred in a table at a specified time

Recently, a very strange scenario was reported from one of our sites.
Based on our fields, we found that some kind of delete must have happened for that scenario to occur.
In our application code, there is no delete for that table itself. So we checked in gv$sqlarea (since we use RAC) whether there were any DELETE statements for this table. We found nothing.
Then we tried to do the same kind of delete through PL/SQL Developer. We are able to track all such deletes through gv$sqlarea or gv$session. But when we use the query below, then lock, edit and commit in PL/SQL Developer, there is no trace:
select t.*, t.rowid
from <table> t
Something we were able to find is that sys.mon_mods$ has a count of the deletes, but it is not kept for long, so we cannot trace it back by timestamp.
Can anyone help me track this down?
Oracle Version: 11.1.0.7.0
Type : RAC (5 instances)
gv$sqlarea just shows the SQL statements that are in the shared pool. If a statement is only executed once, then depending on how large the shared pool is and how many distinct SQL statements are executed, it might not stay in the shared pool very long. I certainly wouldn't expect a one-time statement to still be in the shared pool of a reasonably active system after a couple of hours.
Assuming that you didn't enable auditing and that you don't have triggers that record deletes, is the system in ARCHIVELOG mode? Do you have the archived logs from the point in time where the row was deleted? If so, you could potentially use LogMiner to look through the archived logs to find the statement in question.
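A rough sketch of that approach, assuming you can identify and read the archived log(s) covering the suspected time window (the file name, owner and table name below are placeholders):

-- register an archived log that covers the time of the suspected delete
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/arch/1_12345_987654321.arc',
                          options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- look for delete statements against the table in question
SELECT timestamp, session_info, sql_redo
  FROM v$logmnr_contents
 WHERE operation = 'DELETE'
   AND seg_owner = 'APP_OWNER'
   AND table_name = 'YOUR_TABLE';

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/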

Can you use joins with direct path inserts?

I have tried to find examples, but they are all simple ones with a single WHERE clause. Here is the situation: I have a bunch of legacy data transferred from another database. I also have the "good" tables in that same database. I need to transfer (data-conversion) the data from the legacy tables to the new tables. Because this is a different set of tables, the data conversion requires complex joins to put the old data into the new tables correctly.
So: old tables, old data.
The new tables must have the old data, but it requires lots of joins to get that old data into the new tables correctly.
Can I use direct path with lots of joins like this? INSERT ... SELECT (lots of joins)
Does direct path apply to tables that are already in the same database (a transfer between tables)? Or is it only for loading tables from, say, a text file?
Thank you.
The query in your SELECT can be as complex as you'd like with a direct-path insert. The direct-path refers only to the destination table. It has nothing to do with the way that data is read or processed.
If you're doing a direct-path insert, you're asking Oracle to insert the new data above the high water mark of the table so you bypass the normal code that reuses space in existing blocks for new rows to be inserted. It also has to block other inserts since you can't have the high water mark of the table change during a direct-path insert. This probably isn't a big deal if you've got a downtime window in which to do the load but it would be quite problematic if you wanted the existing tables to be available for other applications during the load.
No, on the contrary, it means you need to take a backup after a NOLOGGING load, not that you can't back up the database.
Allow me to elaborate a bit. Normally, when you do DML in Oracle, the before images of the changes you are making get logged in UNDO, and all the changes (including the UNDO changes) are first written to REDO. This is how Oracle manages transactions, instance recovery, and database recovery. If a transaction is aborted or rolled back, Oracle uses the information in UNDO to undo the changes your transaction made. If the instance crashes, then on instance restart, Oracle will use the information in REDO and UNDO to recover up to the last committed transaction. First, Oracle will read the REDO and roll forward, then use UNDO to roll back all the transactions that were not committed at the time of the crash. In this way, Oracle is able to recover up to the last committed transaction.
Now, when you specify an APPEND hint on an insert statement, Oracle will execute the INSERT with direct load. This means that data is loaded into brand new, never before used blocks, from above the highwater mark. Because the blocks being loaded are brand new, there is no "before image", so Oracle can avoid writing UNDO, which improves performance. If the database is in NOARCHIVELOG mode, then Oracle will also not write REDO. On a database in ARCHIVELOG mode, Oracle will still write REDO, unless, before you do the insert /*+ append */, you set the table to NOLOGGING (i.e. alter table tab_name nologging;). In that case, REDO logging is disabled for the table. However, this is where you could run into backup/recovery implications. If you do a NOLOGGING direct load, and then you suffer a media failure, and the datafile containing the segment with the nologging operation is restored from a backup taken before the nologging load, then the redo log will not contain the changes required to recover that segment. So, what happens? Well, when you do a NOLOGGING load, Oracle writes extent invalidation records to the redo log, instead of the actual changes. Then, if you use that redo in recovery, those data blocks will be marked logically corrupt. Any subsequent queries against that segment will get an ORA-26040 error.
So, how do you avoid this? Well, you should always take a backup immediately following any NOLOGGING direct load. If you restore/recover from a backup taken after the nologging load, there is no problem, because the data will be in the data blocks in the file that was restored.
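Putting that together, a sketch of the pattern (target_table and the legacy-style source tables are placeholders) might look like:

-- disable redo logging for the target table (remember the recovery implications)
alter table target_table nologging;

-- direct-path load driven by an arbitrarily complex join
insert /*+ APPEND */ into target_table (id, name)
select s.id, l.name
  from legacy_source s
  join legacy_lookup l on l.id = s.lookup_id;

commit;

-- re-enable logging and take a backup of the affected tablespace straight away
alter table target_table logging;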
Hope that's clear,
-Mark
Yes, there should not be any arbitrary limits on query complexity.
If you do
insert /*+ APPEND */ into target_table select ... from source1, source2, ..., sourceN where ...
It should work fine. Consider, though, that the performance of the load will be limited by the performance of that query, so be sure it's well tuned if you're expecting good performance.
Finally, consider whether setting NOLOGGING on the target table would improve performance significantly. But, also consider the backup recovery implications, if you decide to implement NOLOGGING.
Hope that helps,
-Mark

Oracle: trace all SELECTs

I need to do a task but I have no idea how to do it.
Here is the problem:
I have about 1000 tables in an Oracle database and many processes.
Each process does one or more SELECTs on one or more tables.
Because it's almost impossible to look in the source code to find which process does which SELECT on which tables, I would like to have some kind of trigger on SELECT on every table.
The idea is that I will launch the processes one by one so I can see which tables each one queries.
I know that there is no trigger on SELECT, but is there anything else?
I need to do this as a one-shot, just to gather the necessary info; it will not run every day.
You could activate auditing. You can audit all SELECTs with:
AUDIT SELECT TABLE;
You can specify BY SESSION so that only one record will be written to the audit trail per table accessed per session.
Your AUDIT_TRAIL parameter must be set to either DB or OS. If it is set to DB, the audit trail will be written to the SYS.AUD$ table.
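For example, something along these lines (DBA_AUDIT_TRAIL is the classic view over SYS.AUD$; the user name APP_USER is a placeholder):

-- one audit record per table accessed per session
AUDIT SELECT TABLE BY SESSION;

-- run the process, then see which tables it touched
SELECT username, owner, obj_name, action_name, timestamp
  FROM dba_audit_trail
 WHERE username = 'APP_USER'
 ORDER BY timestamp;

-- switch auditing off again once you have what you need
NOAUDIT SELECT TABLE;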
Assuming that you can map a "process" in your terminology to a particular Oracle session, you could trace the Oracle session. That would show you all the SQL statements executed by that session.
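For instance, assuming you can identify the session's SID and SERIAL# in v$session (the literal values below are placeholders), a sketch using DBMS_MONITOR would be:

-- find the session belonging to the process
SELECT sid, serial#, username, program FROM v$session WHERE username = 'APP_USER';

-- enable SQL trace for that session
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 4567,
                                    waits => TRUE, binds => FALSE);
END;
/

-- ... let the process run, then disable tracing and read the trace file ...
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);
END;
/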
You could also potentially do a SQL*Net trace from whatever the client machine is (note that the "client machine" in a three-tier environment is the application server). A SQL*Net trace tends not to be nearly as easy to work with, however.
