Using Oracle Streams to implement audit trails

I'm going to implement asynchronous audit trail functionality for a highly loaded system using Oracle Streams (mining the redo and archive logs). In my case the audit trail must not slow down any DML operations on my set of tables. The audit records must also contain additional information about the end user's identity and the date and time of the modification.
Does anyone have experience implementing audit trails with Oracle Streams? Is it a good idea to go this way?
Are there any tutorials with tips and tricks on implementing audit trails with Oracle Streams?

Oracle claims the auditing features in the database create an insignificant amount of overhead. Have you tried those as a test case to see how they perform? They don't require any DML triggers on the tables. I've used them and saw no noticeable difference, but the system resources weren't maxed out either.
Using Streams for auditing sounds possible, but I think it's an overly complicated solution. I suppose you could use Streams to replicate transactions to another database and then use the auditing in that database. You're still going to add I/O load to wherever you store your redo logs.
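As a starting point for such a test, a minimal sketch of the built-in auditing, assuming the AUDIT_TRAIL initialization parameter is set to DB and using a hypothetical APP.ORDERS table:

-- Audit successful DML on the table, one trail record per statement:
AUDIT INSERT, UPDATE, DELETE ON app.orders
BY ACCESS
WHENEVER SUCCESSFUL;

-- The trail records both the database and OS user plus a timestamp:
SELECT username, os_username, action_name, timestamp
FROM dba_audit_trail
WHERE obj_name = 'ORDERS';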

Agree with #JOTN. One more thing to add wrt Oracle Streams: it is deprecated in 12c and is being packaged/offered as 'GoldenGate', which carries a separate license cost.

Related

Oracle users who have read/selected data in my tables

How do I know which users have been accessing a table that is in my schema?
Ex: I have a table in Oracle, myschema.mytable, with a public synonym to it. There are other users in the database.
I would like to know who all the other users are, besides "myschema", who have been accessing "mytable".
Thanks,
The only sure-fire way to know is to enable Database Auditing (Docs).
This would record every session that selected or read data from HR.EMPLOYEES:
AUDIT SELECT ON "HR"."EMPLOYEES"
BY SESSION
WHENEVER SUCCESSFUL;
Once this rule is set, you can start checking your audit trails - reports of who is doing what in terms of audited events, in this case looking at data in HR.EMPLOYEES.
You can simply query the DBA_AUDIT_OBJECT view.
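A hedged example of such a query, using standard columns of the DBA_AUDIT_OBJECT view:

SELECT username, timestamp, action_name
FROM dba_audit_object
WHERE owner = 'HR'
AND obj_name = 'EMPLOYEES'
ORDER BY timestamp DESC;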
Note that this feature does come with a cost - it increases the amount of work required of the database. For every session that looks at the data in EMPLOYEES, Oracle will have to record an entry in this trail.
If you want more granularity, you can record activity by access instead of by session. That will cost even more.
Many people have built their own auditing systems with TRIGGERS, but all of them have drawbacks - mainly that you have to build and maintain the system.
I've only ever seen 100% complete auditing systems succeed using this built-in feature. You just have to be prepared for the potential performance hit, and decide how often you want to clean up the audit trails.
And yes, SQL Developer has an interface for the database auditing feature.

Dynamically list the contents of a table in a database that continuously updates

It's kind of a real-world problem and I believe the solution exists, but I couldn't find one.
We have a database called Transactions that contains tables such as Positions, Securities, Bogies, Accounts, Commodities and so on, updated continuously every second whenever a new transaction happens. For the time being, we have replicated the master database Transactions to a new database named TRN, on which we do all the querying and updating.
We want a sort of monitoring system (like the htop process viewer in Linux) for the database that dynamically lists the updated rows in its tables at any time.
TL;DR Is there any way to get a continuously updating list of rows in any table in the database?
Currently we are working with Sybase and Oracle DBMSs on Linux (Ubuntu), but we would like generic answers covering most platforms and DBMSs (including MySQL), as well as any tools, utilities or scripts that can do this, so that we can easily migrate to other platforms and/or DBMSs in the future.
To list updated rows, you conceptually need one of two things:
The updating statement's effect on the table.
A previous version of the table to compare with.
How you get them and in what form is completely up to you.
The 1st option allows you to list updates with statement granularity while the 2nd is more suitable for time-based granularity.
Some options from the top of my head:
Write to a temporary table
Add a field with transaction id/timestamp (sketched below)
Make clones of the table regularly
AFAICS, Oracle doesn't have built-in facilities to get the affected rows, only their count.
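A hedged sketch of the timestamp-field option; the POSITIONS table and column names are hypothetical:

-- Track the modification time on each row:
ALTER TABLE positions ADD last_modified TIMESTAMP;

-- The monitor polls for anything changed since its previous pass:
SELECT *
FROM positions
WHERE last_modified > :last_poll_time;

The application (or a trigger, with the usual overhead caveats) has to keep last_modified current on every insert/update.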
Not a lot of details in the question so not sure how much of this will be of use ...
'Sybase' is mentioned but nothing is said about which Sybase RDBMS product (ASE? SQLAnywhere? IQ? Advantage?)
by 'replicated master database transaction' I'm assuming this means the primary database is being replicated (as opposed to the database called 'master' in a Sybase ASE instance)
no mention is made of what products/tools are being used to 'replicate' the transactions to the 'new database' named 'TRN'
So, assuming part of your environment includes Sybase(SAP) ASE ...
MDA tables can be used to capture counters of DML operations (eg, insert/update/delete) over a given time period (see the sketch after this list)
MDA tables can capture some SQL text, though the volume/quality could be in doubt if a) MDA is not configured properly and/or b) the DML operations are wrapped up in prepared statements, stored procs and triggers
auditing could be enabled to capture some commands but again, volume/quality could be in doubt based on how the DML commands are executed
also keep in mind that there's a performance hit for using MDA tables and/or auditing, with the level of performance degradation based on individual config settings and the volume of DML activity
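A hedged sketch of the MDA route, assuming an ASE 15.x+ instance with the MDA tables enabled and monitoring configured:

-- Cumulative DML counters per table since the counters were last cleared:
SELECT db_name(DBID) AS db,
       object_name(ObjectID, DBID) AS tbl,
       RowsInserted, RowsUpdated, RowsDeleted
FROM master..monOpenObjectActivity
WHERE DBID = db_id('TRN')

Sampling this at intervals and diffing the counters gives per-table activity over the period; note it tells you which tables changed, not which rows.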
Assuming you're using the Sybase(SAP) Replication Server product, those replicated transactions sent through repserver likely have all the info you need to know which tables/rows are being affected; so you have a couple options:
route a copy of the transactions to another database where you can capture the transactions in whatever format you need [you'll need to design the database and/or any customized repserver function strings]
consider using the Sybase(SAP) Real Time Data Streaming product (yeah, additional li$ence is required) which is specifically designed for scenarios like yours, ie, pull transactions off the repserver queues and format for use in downstream systems (eg, tibco/mqs, custom apps)
I'm not aware of any 'generic' products that work, out of the box, as per your (limited) requirements. You're likely looking at some different solutions and/or customized code to cover your particular situation.

Commits in the absence of locks in CockroachDB

I'm trying to understand how ACID in CockroachDB works without locks, from an application programmer's point of view. Would like to use it for an accounting / ERP application.
When two users update the same database field (e.g. a general ledger account total field) at the same time what does CockroachDB do? Assuming each is updating many other non-overlapping fields at the same time as part of the respective transactions.
Will the aborted application's commit process be informed about this immediately at the time of the commit?
Do we need to take care of more possibilities than in, for example, ACID/locking PostgreSQL when we write the database access code in our application?
Or is writing code for accessing CockroachDB for all practical purposes the same as for accessing a standard RDBMS with respect to commits and in general?
Of course, ignoring performance issues / joins, etc.
I'm trying to understand how ACID in CockroachDB works without locks, from an application programmer's point of view. Would like to use it for an accounting / ERP application.
CockroachDB does have locks, but uses different terminology. Some of the existing documentation that talks about optimistic concurrency control is currently being updated.
When two users update the same database field (e.g. a general ledger account total field) at the same time what does CockroachDB do? Assuming each is updating many other non-overlapping fields at the same time as part of the respective transactions.
One of the transactions will block waiting for the other to commit. If a deadlock between the transactions is detected, one of the two transactions involved in the deadlock will be aborted.
Will the aborted application's commit process be informed about this immediately at the time of the commit?
Yes.
Do we need to take care of more possibilities than in, for example, ACID/locking PostgreSQL when we write the database access code in our application?
Or is writing code for accessing CockroachDB for all practical purposes the same as for accessing a standard RDBMS with respect to commits and in general?
At a high level there is nothing additional for you to do. CockroachDB defaults to serializable isolation, which can result in more transaction restarts than weaker isolation levels, but comes with the advantage that the application programmer doesn't have to worry about anomalies.
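Those restarts are the main practical difference: clients are expected to retry. A hedged sketch of CockroachDB's documented client-side retry protocol, with a hypothetical accounts table:

BEGIN;
SAVEPOINT cockroach_restart;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
RELEASE SAVEPOINT cockroach_restart;
COMMIT;
-- If any statement fails with a retryable error (SQLSTATE 40001), issue
-- ROLLBACK TO SAVEPOINT cockroach_restart and replay the statements.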

Storing data in a secondary database

Our application (Java, Spring, Hibernate) uses Postgres to store data.
We are looking to add an analysis engine to the application. I want to explore using a NoSQL DB to run the analysis on. This is partly an attempt at learning NoSQL and partly to free the main application activity from the performance penalty (as much as possible).
So, I want the data changes to also sync to the NoSQL DB (in addition to Postgres). Any sync mechanism will affect the performance of the main data/transaction activity.
Is it a good idea to push the data changes to a message bus and free the main transaction as early as possible? Can anyone point me to frameworks/technologies/ideas that address this issue of the same data going to two different data stores?
The simplest solution would be sending data to a Postgres read replica and running your analytics queries on that. The performance impact is minimal, and this would save a lot of time compared to the alternative approaches.
Unless you really know what you are doing, I would avoid NoSQL for this kind of application. If your dataset is too big for a Postgres read replica, you might want to use Redshift, which is a columnar datastore optimized for the types of analytics queries typically performed.
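For reference, a hedged sketch of the primary-side setup for a streaming read replica; the role name is hypothetical, and wal_level defaults to 'replica' since PostgreSQL 10:

-- On the primary, create a role the standby can connect with:
CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'secret';
-- The standby is then seeded with pg_basebackup -R from the primary and
-- started in hot standby mode, where the analytics queries are pointed.

The main transaction path is untouched: replication works by shipping WAL, not by doing extra work inside each transaction.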

Performance impact when creating Audit trail using trigger in MS SQL Server 2012

In a SQL Server 2012 database we want to create an audit trail for almost all major tables on Update and Delete operations. Normally we create the audit trail using a trigger on each table and store the rows in a shadow table. Is there any performance impact if huge numbers of records are updated or deleted on a table? Is there any other way to implement an audit trail?
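For reference, a hedged sketch of the trigger-and-shadow-table pattern described above; dbo.Orders and dbo.Orders_Shadow are hypothetical:

-- AFTER trigger copying the pre-change rows into the shadow table:
CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Orders_Shadow (OrderId, Status, AuditAction, AuditedAt)
    SELECT d.OrderId, d.Status,
           CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'U' ELSE 'D' END,
           SYSUTCDATETIME()
    FROM deleted AS d;
END;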
Typically, when I implement an audit trail for DB tables, I implement it via code, not in triggers. When implemented in code, you can provide additional context information, such as who made the change and the reason behind it, which is a very common business requirement. In a typical multi-layer application design, we have DAOs for each table, and the business services which implement the updates are responsible for calling the separate DAOs for the core table update and the history entry insert. This approach is no good if you want a bunch of different sources directly making table updates to the DB, but it's a natural approach if you have a service-oriented architecture and your one set of services is the only way into and out of those tables.
If you implement the audit trail using this approach, you of course need to make sure the audit trail record is inserted in the same transaction as the modification to the core table.
Whether this would perform better than a trigger-based approach, I couldn't say. My guess would be that if you are using bulk insert operations it may run faster, but would probably be slower in the more common scenario where you are updating/deleting one record at a time via SQL. It's another option you could explore, though.
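A hedged T-SQL sketch of what the code-driven approach boils down to at the database level; the tables, columns and parameters are hypothetical:

BEGIN TRANSACTION;

-- Core table modification:
UPDATE dbo.Orders
SET Status = 'CANCELLED'
WHERE OrderId = @OrderId;

-- History entry with the extra business context, in the same transaction:
INSERT INTO dbo.Orders_Audit (OrderId, ChangedBy, ChangeReason, ChangedAt)
VALUES (@OrderId, @UserName, @Reason, SYSUTCDATETIME());

COMMIT TRANSACTION;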
