If I prevent transaction creation via an ACL, the transaction doesn't fail until after it executes.
The transaction script runs, but the failure only happens at the very end, when the transaction fails to be created because of the ACL.
Is there a way to prevent the execution from ever starting?
To be clear, the transaction is not committed, so the effect is the same. What you are describing is a (useful) optimization, rather than a change in behavior.
Problem Statement: Asynchronous plugins are failing intermittently because entity records are being updated at a very high pace.
Detailed Analysis: We have a process where we frequently share/un-share entity records with an Access Team. Asynchronous plugins execute after create and update. Dynamics CRM is integrated with multiple systems and receives updates from them. These async plugins are failing with a SQL error:
Sql error: Generic SQL error. CRM ErrorCode: -2147204784 Sql ErrorCode: -2146232060 Sql Number: 1205
In the Dynamics 365 web application, if a user creates/updates a record, the plugin executes and succeeds. The issue is that when a record is updated through the integrated systems, the plugins fail. We tried the following to resolve the issue:
Changed the order of execution of Async Plugins
Optimized the plugin code
These have not helped much. Is there any way we can put delays between plugin executions, or any other way we can overcome this hurdle?
***Microsoft says the record update frequency is too high (0.07 seconds between two record updates). The deadlock is caused by the execution of the internal stored procedure "p_CascadeRevokeAccess".
SQL number 1205 is the key: the transaction is causing deadlocks.
Error 1205: Transaction (Process ID) was deadlocked on resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
I would recommend you take care of a few things:
You already mentioned that you optimized the plugin code; make sure you set "no lock" on any QueryExpression or FetchXML query that you use in the plugin code.
If you are updating the same entity record again from within the plugin, try using the pre-operation stage of the create/update message and update the target attributes in the same synchronous transaction instead of spawning many asynchronous transactions.
I'm not sure how many steps are triggered in your post-operation plugin. Even though you have the execution order set properly, only the order of the flow within the same transaction can be controlled. You cannot make step 1 of a second transaction run after step 2 of the first transaction when the same entity record is updated by your upstream systems.
So try to compose the flow you need and shuffle the code across the plugin steps for better handling. Although the CRM platform takes care of some optimistic concurrency techniques, you have to take care of your own implementation; otherwise this will become a maintenance nightmare and future enhancements may be impossible.
You also mentioned sharing/unsharing using access teams; there is a possibility that this issue is related to Principal Object Access (POA) table updates. Check with Microsoft via a premier support ticket if needed.
We have a post-operation sync plugin which writes event data into an Event Hub. However, we have some custom functionality which uses ExecuteTransactionRequest to run batch operations.
If a transaction running under ExecuteTransactionRequest fails, it rolls back; however, the data written to the Event Hub can't be rolled back.
Is there a way to control the firing of post-operation plugins so that they fire only after all the operations have completed?
Your options here are somewhat limited; you could try pre-validation.
Pipeline stages
Pre-validation - Stage in the pipeline for plug-ins that are to execute before the main system operation. Plug-ins registered in this stage may execute outside the database transaction. The pre-validation stage occurs prior to security checks being performed to verify the calling or logged on user has the correct permissions to perform the intended operation.
Pre-operation - Stage in the pipeline for plug-ins that are to execute before the main system operation. Plug-ins registered in this stage are executed within the database transaction.
Post-operation - Stage in the pipeline for plug-ins which are to execute after the main operation. Plug-ins registered in this stage are executed within the database transaction.
Inclusion in database transactions
Any registered plug-in that executes during the database transaction and that passes an exception back to the platform cancels the core operation. This results in a rollback of the core operation. In addition, any pre-event or post-event registered plug-ins that have not yet executed, and any workflow that is triggered by the same event that the plug-in was registered for, will not execute.
If you are using the Event Hub as some sort of logging, I would advise that you don't, as a transaction rollback is likely to wipe out any logs.
You could consider taking your logging outside of CRM. Or, if you have to have it within CRM, then send the data somewhere it can't be rolled back first. For example plugin > external web service > CRM.
You can use an asynchronous plugin. Asynchronous steps are only invoked when the synchronous plugin pipeline has completed without errors.
I'm using Hibernate with Spring on Tomcat. I've been reading and re-reading the oft-pointed-to JBoss wiki page on the topic, and that has been helpful. But it leaves me with some questions.
The idea of starting a transaction for every request troubles me. I guess I could limit the filter to certain controllers -- maybe put all my controllers that need a transaction under a pseudo "tx" path or something. But isn't it a bad idea to use transactions if you don't know if you're going to need one? And if I'm just doing reads in some request -- reads that very likely may come from a cache -- aren't I better off without a transaction?
I've read posts mentioning how they handled the transactions at the service layer, and I'd like to do this with Spring. But then what does the filter code look like? I still want the session available in my view for some lazy loading.
If all I have to do is call sessionFactory.getCurrentSession() in my filter, how does it get "freed" back to the session factory for re-use? (I expected to see a session.close() or something, even when using transactions.) Who is telling the session factory that that session can be reused?
Perhaps it's the beginTransaction() call that binds a given database connection to a given session for the duration of a request? Otherwise, a session pulls db connections from the pool as needed, right?
Thanks for your patience with all my questions.
(And if your answer is going to be a link to the Spring documentation, you'll just make me cry. You don't want that, do you? I'll pay real money if people would stop answering Spring-related questions that way.)
Your concerns are valid; the solution provided on the wiki page is too simplistic. The transaction should not be managed at the web layer; it should be handled at the service layer.
The correct implementation opens a session and binds it to the thread in the filter. No transaction is started. The session is put in flush mode NEVER (effectively read-only). A service call sets the session to flush mode AUTO and starts/commits the transaction. Once the service method finishes, the session's flush mode is reverted back to NEVER.
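A minimal sketch of the service-layer side, assuming Spring's HibernateTransactionManager is wired up and that a read-only session has already been bound to the thread by the filter (OrderService, Order, and approve are made-up names for illustration):

```java
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    @Autowired
    private SessionFactory sessionFactory;

    // Spring starts the transaction here, switches the thread-bound session to
    // flush mode AUTO, commits on return, and then restores the non-flushing mode
    // that the filter set up for the rest of the request (view rendering).
    @Transactional
    public void approve(long orderId) {
        Order order = (Order) sessionFactory.getCurrentSession().get(Order.class, orderId);
        order.setApproved(true);
        // No explicit flush or commit: the transaction boundary takes care of it.
    }
}
```

The nice property of this split is that the controller and the view only ever see the session in its non-flushing mode, so changes made outside a service call are not silently persisted.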
There is also an option not to open the session in the filter. Each service-layer call would open a separate session and transaction; after the service call is done, the session is not closed but registered for deferred close. The session is then closed after the web request processing is complete.
Spring provides OpenSessionInViewFilter, which works as described above. So ignore the JBoss wiki article and just configure OpenSessionInViewFilter; everything will be fine.
SessionFactory.getCurrentSession() internally creates a session and assigns it to a thread local. Each request/thread gets its own session. Once the web request processing is complete, the session is closed. From within your code you just call SessionFactory.getCurrentSession() and you don't have to close it. The code sample on the JBoss wiki page is wrong: it should have a SessionFactory.getCurrentSession().close() in the finally block. Or they might be using a JTA transaction and have configured Hibernate to open/close the session in conjunction with the JTA transaction.
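To make the "freeing" visible, here is a hedged sketch of a hand-rolled session-per-request filter that uses openSession()/close() explicitly instead of the thread-bound getCurrentSession(); closing the session in the finally block is what returns the underlying JDBC connection to the pool. How the session is exposed to your data-access code is up to you; the request attribute used here is just illustrative.

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class SessionPerRequestFilter implements Filter {

    // Wire this in from your Spring context, e.g. in init() via
    // WebApplicationContextUtils (assumption about your setup).
    private SessionFactory sessionFactory;

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // One session for the whole request so the view can still lazy-load.
        Session session = sessionFactory.openSession();
        try {
            // Expose it however your data-access code expects (thread local, request attribute, ...).
            request.setAttribute("hibernate.session", session);
            chain.doFilter(request, response);
        } finally {
            // This is the "freeing": closing the session releases its JDBC connection back to the pool.
            session.close();
        }
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException { }

    @Override
    public void destroy() { }
}
```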
It is not a problem if the filter creates a session for every request: sessions are lightweight, and the underlying database connections come from a connection pool and are reused. From the point of view of the OS, nothing happens.
A Hibernate session is ultimately backed by a TCP (or socket/pipe) connection to the database server. The cost of creating a DB connection depends heavily on the database (PostgreSQL is notably expensive here, although it is very good at almost everything else). But this doesn't really matter, because Hibernate reuses database connections.
The simple Hibernate filter solution also starts a new transaction on the session for every request. It is a transaction from the SQL point of view: a "BEGIN" and a "COMMIT" statement. That is always costly, and it should be reduced.
IMHO a possible solution would be to start the transaction only at the first query of the current request. Maybe Spring has something usable for this.
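Spring does have something in this direction: LazyConnectionDataSourceProxy, which only checks out a physical connection (and therefore only lets BEGIN reach the server) when the first statement is actually executed. A hedged sketch, assuming you already have a pooled DataSource to wrap; exact behavior also depends on how your transaction manager prepares connections:

```java
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy;

public class DataSourceConfig {

    // 'pool' is whatever connection pool you already use (DBCP, c3p0, ...) - assumption.
    public DataSource lazyDataSource(DataSource pool) {
        // Requests that never touch the database (e.g. everything served from cache)
        // never check out a connection and never send BEGIN/COMMIT to the server.
        return new LazyConnectionDataSourceProxy(pool);
    }

    // Point the SessionFactory / transaction manager at the returned proxy
    // instead of at the pool directly.
}
```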
Is there any workaround to remove a deadlock without killing the session?
From the Concepts Guide:
Oracle automatically detects deadlock situations and resolves them by rolling back one of the statements involved in the deadlock, thereby releasing one set of the conflicting row locks.
You don't have to do anything to remove a deadlock; Oracle takes care of it automatically. The session is not killed; it is rolled back to a point just before the triggering statement. The other session is unaffected (i.e. it still waits for the lock until the rolled-back session either commits or rolls back its transaction).
In most situations deadlocks should be exceptionally rare. You can prevent all deadlocks by using FOR UPDATE NOWAIT statements instead of FOR UPDATE.
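As an illustration of the NOWAIT approach, here is a hedged JDBC sketch; the table and column names are invented, and vendor error code 54 corresponds to ORA-00054 (resource busy, NOWAIT specified):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public final class RowLocker {

    // Try to lock the row up front and fail fast instead of blocking, so this
    // session never ends up waiting in a potential deadlock cycle.
    // Assumes conn.setAutoCommit(false), so the lock is held until commit/rollback.
    public static boolean lockOrder(Connection conn, long orderId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id FROM orders WHERE id = ? FOR UPDATE NOWAIT")) {
            ps.setLong(1, orderId);
            ps.executeQuery();
            return true;                  // lock acquired; do the update, then commit
        } catch (SQLException e) {
            if (e.getErrorCode() == 54) { // ORA-00054: another session holds the lock
                return false;             // back off and retry, or report to the user
            }
            throw e;
        }
    }
}
```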
See also
Discussion about removing deadlock on AskTom
Deadlocks are automatically cleared in Oracle by cancelling one of the locked statements. You need not do it manually. One of the sessions will get "ORA-00060" and it should decide whether to retry or roll back.
But from your description it looks like you have a blocking lock, not a deadlock.
Anyway, the blocking session has to release its lock somehow, by committing or rolling back its transaction. You can just wait for it (possibly for a long time). If you can change the code of your application, you can probably rewrite it to release the lock or avoid it. Otherwise, you have to kill the session to unlock the resources immediately.
No, Oracle 10g does not seem to resolve deadlocks automatically in practice. We did have deadlocks, and we had to clear the sessions manually.
This page can help you identify whether you have deadlocks:
Identifying and Resolving Oracle ITL Deadlock
We randomly get warnings such as the one below on our WebLogic server. We'd like to better understand what exactly these warnings are and what we should do to avoid them.
Abandoning transaction after 86,606 seconds:
Xid=BEA1-52CE4A8A9B5CD2587CA9(14534444), Status=Committing, numRepliesOwedMe=0, numRepliesOwedOthers=0, seconds since begin=86605, seconds left=0,
XAServerResourceInfo[JMS_goJDBCStore]=(ServerResourceInfo[JMS_goJDBCStore]=(state=committed,assigned=go_server),xar=JMS_goJDBCStore,re-Registered=true),
XAServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(ServerResourceInfo[weblogic.jdbc.wrapper.JTSXAResourceImpl]=(state=new,assigned=none),xar=weblogic.jdbc.wrapper.JTSXAResourceImpl#1a8fb80,re-Registered=true),
SCInfo[go+go_server]=(state=committed),
properties=({weblogic.jdbc=t3://10.6.202.37:18080}),
local properties=({weblogic.transaction.recoveredTransaction=true}),
OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=go_server+10.6.202.37:18080+go+t3+, XAResources={JMS_goJDBCStore, weblogic.jdbc.wrapper.JTSXAResourceImpl}, NonXAResources={})],
CoordinatorURL=go_server+10.6.202.37:18080+go+t3+)
I do understand the BEA explanation:
Error: Abandoning transaction after secs seconds: tx
Description: When a transaction is abandoned, knowledge of the transaction is removed from the transaction manager that was attempting to drive the transaction to completion. The JTA configuration attribute AbandonTimeoutSeconds determines how long the transaction manager should persist in trying to commit or rollback the transaction.
Cause: A resource or participating server may have been unavailable for the duration of the AbandonTimeoutSeconds period.
Action: Check participating resources for heuristic completions and correct any data inconsistencies.
We have observed that you can get rid of these warnings by deleting the *.tlog files, but this doesn't seem like the right strategy for dealing with them.
The warnings refer to JMS and our JMS store. We do use JMS. We just don't understand why transactions are hanging around out there and why they would be "abandoned".
I know it's not very satisfying, but we do delete *.tlog files before startup in our app hosted on WLS 7.
Our app is an event-processing back-end, largely driven by JMS. We aren't interested in preserving transactions across WLS restarts. If it didn't complete before the shutdown, it tends not to complete after a restart. So doing this *.tlog cleanup just eliminates some warnings and potential flaky behavior.
I don't think JMS is fundamental to any of this, by the way. At least not that I know.
By the way, we moved from JDBC JMS store to local files. That was said to be better performing and we didn't need the location independence we'd get from using JDBC. If that describes your situation also, maybe moving to local files would eliminate the root cause for you?