Overload of session log in Informatica - informatica-powercenter

I have a mapping to update certain columns in a table. Only 10% or fewer of the records should get updated; the remaining records should be rejected by Informatica.
The mapping works just fine, if not for all the records getting logged into the session log file. Is there a way to prevent this other than using a Filter transformation? I am aware it can be eliminated with a Filter transformation, but I just wanted to check whether there is a simpler approach, like selecting an option or something.

Change the tracing level to Terse - you can configure it for the Update Strategy transformation or for the entire session (Config Object / Override tracing).

Well, by design, you should not simply let the records be rejected and sent to the sink. Rather, control the rejection logic yourself, so that if it changes in the future, you have that control in your hands.
Further, the rejected records are logged to the session log by default, since it is considered abnormal behaviour on a mapping's part when some data is not handled properly through the flow.
To avoid getting all that data logged into the session log, you can change the tracing level of the session to Terse. But remember that in that case you won't get a lot of other logging information that is generally useful. This will achieve the end result, but it is not the "ideal" way of achieving it.
I would suggest looking again at your mapping design.

Related

Using Hibernate / Spring, what's the best way to watch a table for changes to individual records?

Q: What is the proper way to watch a table for record level changes using Hibernate / Spring? The DB is a typical relational database system. Our intent is to move to an in-memory solution some time in the future but we can't do it just yet. Q: Are we on the right track or is there a better approach? Examples?
We've thought of two possibilities. One is to load and cache the whole table and the other is to implement a hibernate event listener. Problem is that we aren't interested in events originating in the current VM. What we are interested in is if someone else changes the table. If we load and cache the entire table we'll still have to figure out an efficient way to know when it changes so we may end up implementing both a cache and a listener. Of course a listener might not help us if it doesn't hear changes external to the VM. Our interest is in individual records which is to say that if a record changes, we want Java to update something else based on that record. Ideally we want to avoid re-loading the entire cache, assuming we use one, from scratch and instead update specific records in the cache as they change.

Locking records returned by context? Or perhaps a change to my approach

I'm not sure whether I need a way to lock records returned by the context or simply need a new approach.
Here's the story. We currently have a small number of apps that integrate with our CRM. Some of them open an XrmServiceContext and return a few thousand records to perform updates. These scripts call SaveChanges along the way, but there will still be accounts near the end that are saved a couple of minutes after the context returned them. If a user updates a record during this time, their changes are overwritten by the script.
Is there a way of locking the records until the context has saved the update back or is there a better approach I should be taking?
Kit
In my opinion, this type of database transaction issue is what CRM is currently lacking the most. There is no way to ensure that someone else doesn't monkey with your data; it's always a last-one-in-wins world in CRM.
With that being said, my suggestion would be to only update the attributes you care about. If you're returning all columns for an entity, when you update that entity, you're possibly going to update all the attributes of the entity, even if you only updated one of them.
If you're dealing with a system where you can't tolerate the last-one-in-wins mentality, then you're probably better off not using CRM.
Update 1
CRM 2015 SP1 and above support Optimistic Updates, which allow the use of a version number to ensure that no one has updated the record since you retrieved it.
You have several options here; it just depends on what you want to do. First of all, though, if you can move some of these automated processes to off-peak hours, that's the best option.
Another option would be to retrieve each record one by one instead of 1000+ at a time.
If you are only updating a percentage of the records retrieved, then you would be better off checking before saving whether an update has occurred (by comparing the modified date). If the modified date has changed, do a fresh single retrieve and then save.
At first thought, I would create a field or status that indicates a pending operation and then use JScript in the form OnLoad event to warn/lock the form. When your process completes, it could clear the flag.

Using PostgreSQL Rules/Triggers for debugging purposes

An application I am trying to support is currently running into unique constraint violations. I haven't been able to reproduce this problem in non-production environments. Is it reasonable, for debugging purposes, to create a rule (trigger?) that will in effect just copy every insert to a different table? The new table would then be the same as the old table, hopefully just without the constraint.
The application is using Spring to manage transactionality, and I haven't been able to find any documentation relating rules to transactions. After the violation, whatever is written so far in the transaction is rolled back - will this affect the rule in any way?
This is Postgres 8.3.
After the violation, whatever is written so far in the transaction is rolled back - will this affect the rule in any way?
That will roll back everything the rule did as well. You could create a trigger that uses dblink to get some work done outside your current transaction. Another option could be a savepoint, but then you have to change all your current code and transaction handling.
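A minimal sketch of that dblink idea, assuming a hypothetical source table orders with columns (id, customer), a constraint-free copy orders_debug, and a loopback connection string; because dblink performs the INSERT over its own connection, it is committed independently and survives the rollback of the main transaction:

-- Requires the dblink contrib module; all names here are hypothetical.
CREATE TABLE orders_debug (LIKE orders);  -- same columns, but no unique constraint

CREATE OR REPLACE FUNCTION log_orders_insert() RETURNS trigger AS $$
BEGIN
    -- Write the incoming row through a loopback dblink connection, so it is
    -- committed even if the main transaction later hits a unique violation.
    PERFORM dblink_exec(
        'dbname=mydb user=debug_user',
        'INSERT INTO orders_debug VALUES ('
            || quote_literal(NEW.id::text) || ', '
            || quote_literal(NEW.customer::text) || ')');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_debug_copy
    BEFORE INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE log_orders_insert();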
Unique violations are logged in the log files as well; check there to see what is going wrong. Version 9.0 has a change that will also tell you what the values are:
Improve uniqueness-constraint violation error messages to report the values causing the failure (Itagaki Takahiro). For example, a uniqueness constraint violation might now report: Key (x)=(2) already exists.
You can do almost anything you can imagine with rules and triggers. And then some more. Your exact intent remains somewhat unclear, though.
If the transaction is rolled back anyway, as you hint at the end, then everything will be undone, including all side-effects of any rules or triggers involved. Your plan would be futile.
There is a workaround for that, in case that is in fact what you want to achieve: use dblink to connect back to the same database and INSERT into a table there. That is not rolled back.
However, if it's just for debugging purposes, the database log is a much simpler way to see which duplicates were not entered. Errors are logged by default; if not, you can set that up as needed. See the manual for your options.
As has been said, rules cannot be used for this purpose, as they only serve to rewrite the query. But the rewritten query is, just like the original one, still part of the transaction.
Rules can be used to enforce constraints that are impossible to implement using regular constraints, such as a key being unique across several tables, or other multi-table stuff. (These do have the advantage of the "canary" table name showing up in the logs and error messages.) But the OP already has too many constraints, it appears...
Tweaking the serialisation level also seems indicated (are there multiple sessions involved? does the framework use a connection pool?)
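If you do experiment with the isolation level, a minimal example of running the suspect statements under SERIALIZABLE (which in 8.3 behaves as snapshot isolation):

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- ... the INSERTs that trigger the unique violation go here ...
COMMIT;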

NHibernate doesn't get a chance to update?

I am building a small web application, where the user is granted the ability to rate items.
In my application I am using nhibernate and asp.net mvc.
All the rating requests are sent by jquery (ajax/post).
When the user votes on an item, I check whether the item has been voted on previously. If so, I update the last voting value to the new one received. If not, I just add a new rating to my table.
I have noticed something very strange. This works well, but when I click several times really fast, something gets screwed up. I get multiple ratings; it seems as if NHibernate doesn't bother checking whether the user has previously voted and just returns a false value.
Is this possible? How can I see what's going under the hood?
thank you
You probably have a concurrency problem. I assume that you get a thread and transaction per click. Clicking very fast results in parallel transactions which can't see what others are doing.
You have the typical problem that records which aren't in the database yet (the new votes) can't be locked.
The solutions are:
Use a lock in the application to avoid multiple votes from the same user being stored at the same time. This doesn't work when you have multiple servers (or AppDomains) on the same database, because the lock is restricted to the AppDomain.
Use table locks in the database to lock the whole votes table, so that only one transaction can add votes at a time.
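A rough sketch of the table-lock variant, using PostgreSQL syntax and a hypothetical votes table; the literal values stand in for the bound parameters of the request:

BEGIN;

-- Exclusive lock: only one transaction at a time may touch the votes table.
LOCK TABLE votes IN ACCESS EXCLUSIVE MODE;

-- The existence check and the insert/update can no longer race with another session.
UPDATE votes SET value = 4
 WHERE user_id = 42 AND item_id = 7;

INSERT INTO votes (user_id, item_id, value)
SELECT 42, 7, 4
 WHERE NOT EXISTS (SELECT 1 FROM votes WHERE user_id = 42 AND item_id = 7);

COMMIT;

Locking the whole table is heavy-handed, but it does guarantee that only one vote per user/item pair gets stored.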
Have you turned on NHibernate logging?
Add the following to the hibernate.config.xml file:
<property name="show_sql">true</property>
The sql generated can be seen in the console or test runner output if you are running unit tests. You can also configure log4net to write NHibernate logging information to file (See https://web.archive.org/web/20110514164829/http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/07/01/how-to-configure-log4net-for-use-with-nhibernate.aspx)
Lastly, how are you using NHibernate? Are you using a repository pattern? It's hard to determine what is wrong with your application without some idea of the code.

The best way to track data changes in Oracle

As the title says, what's the best way to track data changes in Oracle? I just want to know which rows are being updated/deleted/inserted.
At first I thought about triggers, but I would need to write a trigger on each table and record the affected rowid into my change table, which isn't great. Then I searched Google and learned about two new concepts: materialized view logs and Change Data Capture.
A materialized view log looks good to me: I can compare it to the original table to get the changed records, even the changed fields. I think this is the same as creating/copying a new table from the original (but I don't know what the difference is).
The Change Data Capture component is complicated for me :), so I don't want to spend too much time researching it.
Does anybody have experience with the best way to track data changes in Oracle?
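For reference, here are minimal sketches of the two approaches mentioned above, with hypothetical table and column names: a per-table trigger that records the affected rowid in a change table, and a materialized view log that Oracle maintains automatically.

-- Approach 1: one trigger per tracked table, writing rowids into a change table.
CREATE TABLE change_log (
    table_name  VARCHAR2(30),
    row_id      ROWID,
    operation   VARCHAR2(6),
    changed_at  DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER trg_t_track
AFTER INSERT OR UPDATE OR DELETE ON t
FOR EACH ROW
DECLARE
    v_rowid ROWID;
    v_op    VARCHAR2(6);
BEGIN
    IF DELETING THEN
        v_rowid := :OLD.ROWID;
        v_op    := 'DELETE';
    ELSIF UPDATING THEN
        v_rowid := :NEW.ROWID;
        v_op    := 'UPDATE';
    ELSE
        v_rowid := :NEW.ROWID;
        v_op    := 'INSERT';
    END IF;
    INSERT INTO change_log (table_name, row_id, operation) VALUES ('T', v_rowid, v_op);
END;
/

-- Approach 2: a materialized view log on the same table, tracked by rowid.
CREATE MATERIALIZED VIEW LOG ON t WITH ROWID;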
You'll want to have a look at the AUDIT statement. It gathers all auditing records in the SYS.AUD$ table.
Example:
AUDIT insert, update, delete ON t BY ACCESS
Regards,
Rob.
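Once auditing is enabled (audit_trail=db), you can inspect what was captured via the DBA_AUDIT_TRAIL view, which is a readable view over SYS.AUD$:

-- Recent audit records for table T.
SELECT username, timestamp, action_name, obj_name
  FROM dba_audit_trail
 WHERE obj_name = 'T'
 ORDER BY timestamp DESC;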
You might want to take a look at Golden Gate. This makes capturing changes a snap, at a price but with good performance and quick setup.
If performance is no issue, triggers and audit could be a valid solution.
If performance is an issue and Golden Gate is considered too expensive, you could also use Logminer or Change Data Capture. Given this choice, my preference would go for CDC.
As you see, there are quite a few options, near realtime and offline.
Coding a solution by hand also has a price; Golden Gate is worth investigating.
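If LogMiner sounds interesting, here is a minimal sketch (the archived log path is hypothetical, and the appropriate LogMiner privileges are required):

-- Register a log file and start LogMiner using the online dictionary.
BEGIN
    DBMS_LOGMNR.ADD_LOGFILE(
        logfilename => '/u01/app/oracle/arch/arch_0001.arc',  -- hypothetical path
        options     => DBMS_LOGMNR.NEW);
    DBMS_LOGMNR.START_LOGMNR(
        options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Which rows of table T were touched, and the SQL that did it.
SELECT operation, table_name, row_id, sql_redo
  FROM v$logmnr_contents
 WHERE table_name = 'T';

-- Release LogMiner resources when done.
EXEC DBMS_LOGMNR.END_LOGMNR;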
Oracle does this for you via redo logs; it depends on what you're trying to do with this info. I'm assuming your need is replication (track changes on the source instance and propagate them to one or more target instances).
If that's the case, you may consider Oracle Streams (there are other options, such as Advanced Replication, but you'll need to consider your needs):
From Oracle:
When you use Streams, replication of a DML or DDL change typically includes three steps:
A capture process or an application creates one or more logical change records (LCRs) and enqueues them into a queue. An LCR is a message with a specific format that describes a database change. A capture process reformats changes captured from the redo log into LCRs, and applications can construct LCRs. If the change was a data manipulation language (DML) operation, then each LCR encapsulates a row change resulting from the DML operation to a shared table at the source database. If the change was a data definition language (DDL) operation, then an LCR encapsulates the DDL change that was made to a shared database object at a source database.
A propagation propagates the staged LCR to another queue, which usually resides in a database that is separate from the database where the LCR was captured. An LCR can be propagated to a number of queues before it arrives at a destination database.
At a destination database, an apply process consumes the change by applying the LCR to the shared database object. An apply process can dequeue the LCR and apply it directly, or an apply process can dequeue the LCR and send it to an apply handler. In a Streams replication environment, an apply handler performs customized processing of the LCR and then applies the LCR to the shared database object.
