New Oracle Constraint Type

I have to admit that this just caught me by surprise in a production system. I recently added supplemental logging to a few tables for use with Oracle Change Data Capture. A routine check during an unrelated code build revealed that this apparently created disabled constraints in the database of type "S". I can't seem to find any reference to this in the Oracle docs and the single "S" makes it hard to Google for something relevant.
My questions:
Can anyone think of a reason that supplemental logging results in an implicit constraint?
Why is it created in a DISABLED state?
Does anyone have experience with the effects of enabling these? We have a standard clean-up process that runs after deployments to fully enable constraints that may have been disabled or enabled NOVALIDATE for data migration reasons.

The constraint type is documented: http://download.oracle.com/docs/cd/E11882_01/server.112/e10820/statviews_1045.htm#REFRN20047
S stands for "Supplemental logging", which is further explained here: http://download.oracle.com/docs/cd/E11882_01/server.112/e10704/strms_glossary.htm#CHDIHHGA and here: http://download.oracle.com/docs/cd/E11882_01/server.112/e10705/prep_rep.htm#STREP107
I don't have any experience with it. This is the result of searching on tahiti.oracle.com.
Regards,
Rob.

Supplemental logging is necessary to support certain types of asynchronous data capture: Oracle Streams/AQ, for instance, and CDC (both Oracle's and third-party implementations). These mechanisms work by reading the redo logs and applying the changes they describe to some other Oracle database. The point of supplemental logging is to increase the amount of data included in the redo logs, so that the captured changes carry enough information to be applied at the target.
There are two ways of enabling supplemental logging. In more recent versions of the database we can enable minimal supplemental logging at the database level:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
In addition, we can place specific tables and columns in supplemental log groups. The point of this is to include the values of unchanged columns in the change records, which makes it easier to apply the changes in the target database.
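For example, a table-level supplemental log group could be added like this (the table, column and group names below are made up for illustration):

    ALTER TABLE orders ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
    ALTER TABLE orders ADD SUPPLEMENTAL LOG GROUP orders_slg (order_id, order_status) ALWAYS;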
Evidently the S type constraint identifies the columns in a supplemental log group. I think the reason they are disabled is that they do not enforce a data integrity rule (unlike primary keys or check constraints). If so, I think it would be unwise to enable them, and you should rewrite your automatic clean-up to filter out constraints of type S.
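A minimal sketch of such a filter, assuming the clean-up is driven from USER_CONSTRAINTS (the generated statements would still be reviewed before running):

    -- Generate ENABLE statements only for genuine integrity constraints,
    -- deliberately skipping the supplemental logging entries (constraint_type = 'S').
    SELECT 'ALTER TABLE ' || table_name
           || ' ENABLE VALIDATE CONSTRAINT ' || constraint_name
    FROM   user_constraints
    WHERE  status = 'DISABLED'
    AND    constraint_type IN ('P', 'U', 'R', 'C');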

Related

Dynamically list contents of a table in a database that continuously updates

It's kind of a real-world problem and I believe a solution exists, but I couldn't find one.
We have a database called Transactions that contains tables such as Positions, Securities, Bogies, Accounts, Commodities and so on, updated continuously, every second, whenever a new transaction happens. For the time being, we have replicated the master database Transactions to a new database named TRN, on which we do all the querying and updating.
We want a sort of monitoring system (like the htop process viewer in Linux) for the database that dynamically lists the updated rows in its tables at any time.
TL;DR Is there any way to get a continuously updating list of rows in any table in the database?
Currently we are working with Sybase and Oracle DBMSs on the Linux (Ubuntu) platform, but we would like generic answers that cover most platforms and DBMSs (including MySQL), and any tools, utilities or scripts that can do this, so that it can help us easily migrate to other platforms and/or DBMSs in the future.
To list updated rows, you conceptually need one of two things:
The updating statement's effect on the table.
A previous version of the table to compare with.
How you get them and in what form is completely up to you.
The 1st option allows you to list updates with statement granularity while the 2nd is more suitable for time-based granularity.
Some options off the top of my head:
Write to a temporary table
Add a field with a transaction id/timestamp (see the sketch below)
Make clones of the table regularly
AFAICS, Oracle doesn't have built-in facilities to get the affected rows, only their count.
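As an illustration of the timestamp option above, a minimal Oracle sketch (the table and column names are made up; your schema will differ):

    ALTER TABLE positions ADD (last_modified TIMESTAMP);

    CREATE OR REPLACE TRIGGER positions_touch
    BEFORE INSERT OR UPDATE ON positions
    FOR EACH ROW
    BEGIN
      :NEW.last_modified := SYSTIMESTAMP;
    END;
    /

    -- A monitoring loop can then poll for recently changed rows:
    SELECT *
    FROM   positions
    WHERE  last_modified > SYSTIMESTAMP - INTERVAL '10' SECOND;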
Not a lot of details in the question so not sure how much of this will be of use ...
'Sybase' is mentioned but nothing is said about which Sybase RDBMS product (ASE? SQLAnywhere? IQ? Advantage?)
by 'replicated master database transaction' I'm assuming this means the primary database is being replicated (as opposed to the database called 'master' in a Sybase ASE instance)
no mention is made of what products/tools are being used to 'replicate' the transactions to the 'new database' named 'TRN'
So, assuming part of your environment includes Sybase(SAP) ASE ...
MDA tables can be used to capture counters of DML operations (eg, insert/update/delete) over a given time period (see the query sketch below)
MDA tables can capture some SQL text, though the volume/quality could be in doubt if a) MDA is not configured properly and/or b) the DML operations are wrapped up in prepared statements, stored procs and triggers
auditing could be enabled to capture some commands but again, volume/quality could be in doubt based on how the DML commands are executed
also keep in mind that there's a performance hit for using MDA tables and/or auditing, with the level of performance degradation based on individual config settings and the volume of DML activity
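As a rough illustration of the MDA option, something along these lines could show which tables are being hit hardest (assuming MDA/monitoring is properly enabled; column names can vary between ASE versions):

    -- Per-table DML counters since the monitoring counters were last cleared
    SELECT DBName    = db_name(DBID),
           TableName = object_name(ObjectID, DBID),
           RowsInserted, RowsUpdated, RowsDeleted
    FROM   master..monOpenObjectActivity
    WHERE  DBID = db_id('TRN')
    ORDER  BY RowsUpdated DESC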
Assuming you're using the Sybase(SAP) Replication Server product, those replicated transactions sent through repserver likely have all the info you need to know which tables/rows are being affected; so you have a couple options:
route a copy of the transactions to another database where you can capture the transactions in whatever format you need [you'll need to design the database and/or any customized repserver function strings]
consider using the Sybase(SAP) Real Time Data Streaming product (yeah, additional li$ence is required) which is specifically designed for scenarios like yours, ie, pull transactions off the repserver queues and format for use in downstream systems (eg, tibco/mqs, custom apps)
I'm not aware of any 'generic' products that work, out of the box, as per your (limited) requirements. You're likely looking at some different solutions and/or customized code to cover your particular situation.

Forcing Oracle to use Primary Key Index without using Hints

We have an application that generates some temporary tables and then processes the data. I don't really have control over the way the application creates these tables or the subsequent queries involved. What we have noticed is that Oracle uses a full table scan instead of using the index, which is the primary key of the tables. If it used the primary key index the process would run a whole lot faster.
Since I do not have control over the select queries generated by the application, I cannot use hints to force Oracle to use the primary key index. Is there any other setting I could change somewhere that could force Oracle to use the primary key index for the temporary tables?
The two most common reasons for a query not using indexes are:
It's quicker to do a full table scan.
Poor statistics.
If your queries are selecting all of the table, or doing joins without mentioning a primary key in the where clause, etc., chances are it's quicker to do a full scan. Without the query and indexes, and preferably an explain plan as well, it's impossible to tell for certain.
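If you can capture one of the generated statements, an explain plan is straightforward to produce (the query below is only a placeholder):

    EXPLAIN PLAN FOR
      SELECT * FROM temp_results WHERE id = 42;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);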
I would, however, recommend that you ask your DBA to re-gather (or, I suspect, gather for the first time) statistics on the table, using dbms_stats.gather_table_stats with an estimate percentage of 25% or more.
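Something along these lines (the schema and table names are placeholders):

    BEGIN
      dbms_stats.gather_table_stats(
        ownname          => 'APP_SCHEMA',
        tabname          => 'TEMP_RESULTS',
        estimate_percent => 25,
        cascade          => TRUE  -- also gathers statistics on the indexes, including the PK index
      );
    END;
    /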
If the tables are re-created each time the application is run, then try to gather statistics after creation and primary key generation. If they are truncated and re-filled each time, then ask your DBA to rebuild them and the PK and then gather statistics, as stale statistics here can significantly increase query runtime.
With no control over anything I don't see how you can improve the query time any other way.
You can use hints without changing SQL by leveraging SQL Profiles. Wrap your hint(s) into a SQL Profile that takes effect for that particular SQL ID.
I understand you don't have control over the SQL; I have many apps where I encounter the same restriction. After checking the query structure and statistics as in Ben's post, and once you have proved that hinting the index improves performance, why not try a manually created SQL Profile?
Christian Antognini has a great paper here about SQL Profiles and creating them manually. The paper mentions that creating SQL Profiles manually is undocumented. I would agree it is undocumented, but that doesn't necessarily mean unsupported. I would say there is little documentation out there, but if you want proof that Oracle allows manual creation, check the API or look at the coe_xfr_sql_profile.sql file in the SQLT utility directory.
I also posted a cheatsheet on how to quickly manually create a SQL Profile here.
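A hedged sketch of that manual approach (the SQL text, hint and profile name below are invented placeholders; in practice the hint text is usually lifted from DBMS_XPLAN output for a known-good plan):

    DECLARE
      l_sql CLOB := 'SELECT * FROM temp_results WHERE id = :b1';
    BEGIN
      dbms_sqltune.import_sql_profile(
        sql_text    => l_sql,
        profile     => sys.sqlprof_attr('INDEX(@"SEL$1" "TEMP_RESULTS"@"SEL$1" "TEMP_RESULTS_PK")'),
        name        => 'PROFILE_TEMP_RESULTS_PK',
        force_match => TRUE  -- also match statements that differ only in literals
      );
    END;
    /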

Using PostgreSQL Rules/Triggers for debugging purposes

An application I am trying to support is currently running into unique constraint violations. I haven't been able to reproduce this problem in non-production environments. Is it reasonable, for debugging purposes, to create a rule (trigger?) that will just copy every insert to a different table? In effect the new table would be the same as the old table, hopefully without the constraint.
The application is using Spring to manage transactionality, and I haven't been able to find any documentation relating rules to transactions. After the violation, whatever is written so far in the transaction is rolled back - will this affect the rule in any way?
This is Postgres 8.3.
After the violation, whatever is written so far in the transaction is rolled back - will this affect the rule in any way?
That will roll back everything the rule did as well. You could create a trigger that uses dblink to get some work done outside your current transaction. Another option could be a savepoint, but then you would have to change all your current code and transactions.
Unique violations are also logged in the log files; use that information to see what is going wrong. Version 9.0 has a change that will also tell you what the offending values are:
Improve uniqueness-constraint violation error messages to report the values causing the failure (Itagaki Takahiro). For example, a uniqueness constraint violation might now report: Key (x)=(2) already exists.
You can do almost anything you can imagine with rules and triggers. And then some more. Your exact intent remains somewhat unclear, though.
If the transaction is rolled back anyway, as you hint at the end, then everything will be undone, including all side-effects of any rules or triggers involved. Your plan would be futile.
There is a workaround for that, in case that is in fact what you want to achieve: use dblink to connect back to the same database and INSERT into a separate table over that connection. Work done over the dblink connection is not rolled back with your transaction.
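A minimal sketch of that idea for PostgreSQL 8.3 with the dblink contrib module installed (the table, column and connection details are placeholders):

    -- Shadow table receiving a copy of every attempted insert
    CREATE TABLE insert_audit (LIKE my_table);

    CREATE OR REPLACE FUNCTION audit_insert() RETURNS trigger AS $$
    BEGIN
      -- The dblink connection commits independently of the current transaction,
      -- so this copy survives even if the original insert later fails.
      PERFORM dblink_exec(
        'dbname=mydb user=audit_user',
        'INSERT INTO insert_audit VALUES ('
          || quote_literal(NEW.id::text) || ', ' || quote_literal(NEW.payload::text) || ')'
      );
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER my_table_audit
      BEFORE INSERT ON my_table
      FOR EACH ROW EXECUTE PROCEDURE audit_insert();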
However, if it's just for debugging purposes, the database log is a much simpler way to see which duplicates have not been entered. Errors are logged by default; if not, you can set logging up as you need it. See the manual for your options.
As has been said, rules cannot be used for this purpose, as they only serve to rewrite the query; the rewritten query is, just like the original, still part of the transaction.
Rules can be used to enforce constraints that are impossible to implement using regular constraints, such as a key being unique across several tables, or other multi-table rules (these do have the advantage that the "canary" table name shows up in the logs and error messages). But the OP already has too many constraints, it appears...
Tweaking the serialisation (isolation) level also seems indicated (are there multiple sessions involved? does the framework use a connection pool?).

Oracle Database Change Notification and ROWIDs

Oracle's Database Change Notification feature sends ROWIDs (physical row addresses) on row inserts, updates and deletes. As indicated in Oracle's documentation, this feature can be used by the application to build a middle-tier cache. But this seems to contradict what a closer look at how ROWIDs work reveals.
ROWIDs (physical row addresses) can change when various database operations are performed, as indicated by this Stack Overflow thread. In addition, as Tom mentions in this thread, clustered tables can have the same ROWIDs.
Based on the above research, it doesn't seem safe to use the ROWID sent during the database change notification as the key in the application cache, right? This also raises the question: should the Database Change Notification feature be used to build an application server cache at all? Or is the recommendation to restart all the application server clusters (to reload/refresh the cache) when the tables of the cached objects undergo any operation that causes ROWIDs to change? Would that be a good assumption to make for production environments?
It seems to me that none of the operations that can potentially change a ROWID would be carried out in a production environment while the application is running. Furthermore, I've seen a lot of production software that uses the ROWID across transactions (usually just for a few seconds or minutes). That software would probably fail before your cache if the ROWID changed. So creating a database cache based on change notification seems reasonable to me. Just provide a small disclaimer regarding the ROWID.
The only somewhat problematic operation is an update that moves a row to another partition. But that is something that rarely happens, because it would defeat the purpose of the partitioning, at least if it occurred regularly. The designer of a particular database schema will be able to tell you whether such an operation can occur and is relevant for caching. If none of the tables has ENABLE ROW MOVEMENT set, you don't even need to ask the designer.
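That last point can be checked directly in the data dictionary, for example (USER_TABLES and ALL_TABLES expose a ROW_MOVEMENT column):

    -- Lists tables in the current schema whose rows may change ROWID
    -- through partition movement or shrink operations.
    SELECT table_name
    FROM   user_tables
    WHERE  row_movement = 'ENABLED';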
As to duplicate ROWIDs: ROWIDs aren't unique globally, they are unique within a table. And you are given both the ROWID and the table name in the change notification. So the tuple of ROWID and table name is a perfect unique key for building a reliable cache.

Setting Oracle's Column COMMENTS Attribute with NHibernate

One of my larger applications is using NHibernate over an Oracle data store. For testing/development, the application uses NHibernate's schema generation to create/re-create the database when needed. Prior to delivery, one of the things being asked of us by the DBAs is to include Comments for each field in the database (there are a lot). I'm looking for solutions that would let me specify the comment in the mapping file. Has anyone done anything like this? Would NHibernate support this activity with a little effort on my end?
Yes, you can use <database-object> to add any additional artifacts (comments, indexes, triggers, etc) that you need.
See 5.6. Auxiliary Database Objects
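The <create> element of such a <database-object> typically just carries plain DDL; for Oracle column comments that could look like this (the table and column names are placeholders):

    COMMENT ON TABLE  customers            IS 'Customer master data';
    COMMENT ON COLUMN customers.first_name IS 'Customer given name';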
