Update form through a plug-in in CRM 4 - dynamics-crm

I am trying to update an entity with a plugin. I use a pre-image and the Post stage with async execution... The database is updated in real time, but not the form. Does anybody know why I have to "update" twice to see the updated text in the web form, while the value is updated in the database immediately? I want to see it immediately. Thanks

With asynchronous plugin execution, the database is never really updated "in real time"; the asynchronous execution might just happen so quickly that it looks that way to a human's "slow" perception.
On a server process level, however, the code of an asynchronous plugin will run "when there is time", while the code that rebuilds the form and sends it back to the client is running "immediately" and will wait for synchronous plugins, but not for asynchronous ones.
If you want the changes your plugin makes to be reflected in the entity form immediately after the reload, the plugin has to be registered for synchronous execution.
As far as I know, for asynchronous plugins, as well as for workflows, there is no timeframe in which they are guaranteed to run after being triggered.
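To make the synchronous option concrete, here is a minimal sketch for CRM 4, assuming a synchronous pre-stage step on Update; the class name and the attribute "new_comment" are placeholders for illustration:

    using Microsoft.Crm.Sdk;
    using Microsoft.Crm.SdkTypeProxy;

    // Sketch only: register this as a SYNCHRONOUS step (e.g. pre-stage on Update).
    // Changes written to the Target are saved with the record in the same
    // operation, so the form shows them as soon as it reloads.
    public class UpdateFormPlugin : IPlugin
    {
        public void Execute(IPluginExecutionContext context)
        {
            if (context.InputParameters.Properties.Contains("Target") &&
                context.InputParameters.Properties["Target"] is DynamicEntity)
            {
                DynamicEntity target =
                    (DynamicEntity)context.InputParameters.Properties["Target"];

                // "new_comment" is a hypothetical attribute, used for illustration.
                target["new_comment"] = "Updated by plugin";
            }
        }
    }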

Related

Is it possible to have both synchronous and asynchronous plugins running post creation of a record?

We have a business need to create up to 50,000 records. Using a synchronous plugin or JavaScript is not an acceptable solution here because it takes too long; a SQL timeout will occur. Is it possible? Can we run an asynchronous and a synchronous plugin on the same PostOperation Create step of that entity?
Of course, yes. You cannot register the same step as both sync and async, but you can register two steps, one synchronous and the other asynchronous. Make sure you are not running the same logic in both steps of the same plugin.
You can split the logic into two plugins and register the two separate steps carefully, with respect to what is needed in sync mode vs. async mode.
Normally, if you want to roll back the DB transaction when the logic fails, then a synchronous step is needed. If a failure of the logic is not a show-stopper and it can fail silently, then asynchronous is enough (write a plugin trace log entry in a try..catch for later analysis).
An assembly (.dll) can contain two plugins (.cs files), and multiple steps per plugin are possible. But keep it clear, for less complexity and easier maintenance.
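As a rough illustration of the asynchronous half, assuming the CRM 2011+ SDK (Microsoft.Xrm.Sdk): a plugin registered on the asynchronous PostOperation Create step that logs failures to the plugin trace log instead of throwing. The class name and the omitted business logic are placeholders.

    using System;
    using Microsoft.Xrm.Sdk;

    // Registered on the ASYNCHRONOUS PostOperation Create step.
    // A failure here is not a show-stopper, so it is traced and swallowed.
    public class PostCreateAsyncPlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var tracing = (ITracingService)
                serviceProvider.GetService(typeof(ITracingService));

            try
            {
                // ... non-critical follow-up logic for the created record ...
            }
            catch (Exception ex)
            {
                // Leave an entry in the plugin trace log for later analysis.
                tracing.Trace("PostCreateAsyncPlugin failed: {0}", ex);
            }
        }
    }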

How to trigger perforce changes before the submit using TeamCity

I currently have a CI system which triggers on a submit to a particular stream, then builds the change and tests it.
However, as I said, it is done upon submit, meaning the change is merged before the testing.
So my question is: how can I trigger on the changes at an earlier stage? What is the best approach?
We are not using any IDEs for development.
Thanks!
To do it on the Perforce side, you'd use a change-content trigger, which runs prior to submit while the files are available in a staging area on the server (the in-flight change is treated as a shelf and can be accessed using the @=change syntax). This allows a trigger script to access the content in-flight and reject it before it's finalized.
While a content trigger is running, the files are locked, and the submit will block the client session until it's finalized on the server and can report success, so you'd want to be careful about which codelines you enable something like this on.
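For illustration, a change-content entry in the p4 triggers table could look like the following; the trigger name, script path, and depot path are placeholders:

    # Hypothetical entry in the "p4 triggers" table:
    content-check change-content //depot/main/... "/p4/scripts/content-check.sh %changelist%"

    # Inside the script, in-flight file content can be read with the @= syntax, e.g.:
    #   p4 print //depot/main/src/foo.c@=12345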

Workflow Waiting Forever

I have a workflow that runs when an entity is created; it creates two other entities and puts them on a queue. It then waits until each entity's status reason is set to done, after which it continues.
Basically two teams will work an order and then it will continue processing after both teams are done.
Most of the time it works. However, sometimes it waits forever. I'll re-activate and re-resolve the other tasks, but it just never wakes up.
What can I do? The workflows aren't really powerful enough for me to have one poll with a timeout (there are no loops). I'd like to avoid on-change plugins for these other entities, since that would scatter the workflow behavior all about.
Edit:
Restarting the CRM services (not sure which did it, I restarted them all) allowed the workflow to resume. However, I'd still like to know how to make this more reliable.
I had the same problem (and a lot more) with workflows in CRM 2011 and decided not to use them (except for very special purposes).
The main reason is their very limited error handling. Another reason is that it is inconvenient to put them under source control. Further reasons: workflows cannot run offline, and user impersonation is not supported either. For a comparison, look here: http://goo.gl/9ht1QJ
Use plugins instead of workflows, then you have full control.
But keep in mind that plugins (unlike workflows) are not designed for long-running tasks.
They have a default maximum execution time of 120 seconds and are not stateful/persisted. But in most cases (and I think also in your case) that is not a problem.
Just change your eventing a little bit:
Implement and register a plugin step for: entity is created; it creates the two other entities and puts them on a queue.
Implement and register another step for: entity's status reason is set to done; query for the other entity and check its status, and if it is done, continue processing (a rough sketch of this second step follows).
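A sketch of that second step, assuming the CRM 2011+ SDK; the entity/attribute names ("new_task", "new_orderid", "statuscode"), the post-image alias "PostImage", and the "done" status value of 2 are all placeholders:

    using System;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Query;

    // Registered on update of the status reason of the queued entity.
    public class TaskDonePlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)
                serviceProvider.GetService(typeof(IPluginExecutionContext));
            var factory = (IOrganizationServiceFactory)
                serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = factory.CreateOrganizationService(context.UserId);

            // Post-image assumed registered under the alias "PostImage".
            Entity task = context.PostEntityImages["PostImage"];
            EntityReference order = task.GetAttributeValue<EntityReference>("new_orderid");

            // Look for sibling tasks of the same order that are not done yet.
            var query = new QueryExpression("new_task");
            query.Criteria.AddCondition("new_orderid", ConditionOperator.Equal, order.Id);
            query.Criteria.AddCondition("new_taskid", ConditionOperator.NotEqual, context.PrimaryEntityId);
            query.Criteria.AddCondition("statuscode", ConditionOperator.NotEqual, 2);

            if (service.RetrieveMultiple(query).Entities.Count == 0)
            {
                // All tasks are done: continue processing the order here.
            }
        }
    }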
If you really do not want to use plugins for your business logic, you can consider implementing a plugin which restarts/resumes faulted workflows.
But that's not a very nice solution.

How to guarantee a long operation completes

Normally, billings should execute in the background on a scheduled date (I haven't figured out how to do that yet, but that's another topic).
But occasionally, the user may wish to execute a billing manually. Once clicked, I would like to be sure the operation runs to completion regardless of what happens on the user side (e.g. closes browser, machine dies, network goes down, whatever).
I'm pretty sure db.SaveChanges() wraps its DB operations in a transaction, so from a server perspective I believe the whole thing will either finish or fail, with no partial effect.
But what about all the work between the POST and the db.SaveChanges()? Is there a way to be sure the user can't inadvertently or intentionally stop that from completing?
I guess a corollary to this question is what happens to a running Asynchronous Controller or a running Task or Thread if the user disconnects?
My previous project was actually a billing system in MVC. I distinctly remember testing out what would happen if I used a Task and then quickly exited the site. It did all of the calculations just fine, ran a stored procedure in SQL Server, and sent me an e-mail when it was done.
So, to answer your question: if you wrap the operations in a Task, it should finish anyway with no problems.
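A minimal sketch of that pattern in an MVC controller; the controller, action, and ExecuteBilling method are hypothetical:

    using System.Threading.Tasks;
    using System.Web.Mvc;

    public class BillingController : Controller
    {
        [HttpPost]
        public ActionResult RunBilling(int billingId)
        {
            // Fire-and-forget: the task keeps running on the server even if
            // the client disconnects. Note that an app-pool recycle could
            // still interrupt it, so this is not an absolute guarantee.
            Task.Run(() => ExecuteBilling(billingId));
            return RedirectToAction("Index");
        }

        private void ExecuteBilling(int billingId)
        {
            // ... calculations, db.SaveChanges(), notification e-mail ...
        }
    }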

Reverse AJAX? Can data changes be 'PUSHED' to script?

I have noticed that some of my ajax-heavy sites (ones I visit, not ones I have built), have certain auto-refresh features. For example, in GMail, if I get a new message, I see the new message without a page reload. It's the same with the Facebook browser-based IM client. From what I can tell, there aren't any java applets handling the server-browser binding, so I'm left to assume it's being done by AJAX and perhaps some element I'm unaware of. So by my best guess, it's done in one of two ways:
1. The JavaScript does a steady "ping" to a server-side script, checking for any updates that might be available (which would explain why some of these pages bring any other heavy-duty pages to a crawl); or
2. The JavaScript sits idly by and a server-side script actually "pushes" any updates to the browser. But I'm not sure if this is possible. I'd imagine there is some kind of AJAX function that still pings, but all it asks is "any updates?", and the server-side script has a simple boolean that says "nope" or "I'm glad you asked." But if this is the case, any data change would need to call the script directly so that it has the changes ready and flips that boolean.
So is that possible/feasible/how it works? I imagine something like:
Someone sends an email/IM/DB update to the server, the server calls the script using the script's URL plus some relevant GET variable, the script notes the change and updates the "updates available" variable, the AJAX gets the response that there are in fact updates, the AJAX runs its normal "update page" functions, which executes the normal update scripts and outputs them to the browser.
I ask because it seems really inefficient that the js is just doing a constant check which requires a) the server to do work every 1.5 seconds, and b) my browser to do work every 1.5 seconds just so that on my end I can say "Oh boy, I got an IM! just like a real IM client!"
Read about Comet
I've actually been working on a small .NET web app that uses the Ajax long-polling technique described.
Depending on what technology you're using, you could use thread signaling mechanisms to hold your request until an update is retrieved.
With ASP.NET I'm running my server on a single machine, so I store a reference to my Producer object (which contains a thread that processes the data). To initiate the data pull, my service's Subscribe method is called, which creates a Consumer object that's registered with the Producer. If the Consumer is in long-polling mode, it has an AutoResetEvent which is signaled whenever it receives new data, and whenever the web client makes a request for data, the Consumer first waits on the reset event and then returns the data.
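A minimal sketch of that signaling hand-off; the Producer/Consumer names follow the description above, everything else is assumed:

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    public class Consumer
    {
        private readonly AutoResetEvent _signal = new AutoResetEvent(false);
        private readonly ConcurrentQueue<string> _updates = new ConcurrentQueue<string>();

        // Called by the Producer whenever new data arrives.
        public void Publish(string update)
        {
            _updates.Enqueue(update);
            _signal.Set(); // wake up a waiting long-poll request
        }

        // Called from the web request: block until data arrives or the timeout hits.
        public string WaitForUpdate(TimeSpan timeout)
        {
            string update;
            if (_signal.WaitOne(timeout) && _updates.TryDequeue(out update))
                return update;
            return null; // nothing yet; the client should re-poll
        }
    }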
But you mentioned something about PHP - as far as I know, persistence there is maintained through serialization, not by actually keeping the object in memory, so I don't know how you could reference a Producer object using $_CACHE[] or $_SESSION[]. When I developed in PHP I never really knew anything about multithreading, so I didn't play around with it, but I guess you can look into that.
Using infinite loops is going to consume a lot of your processing power - I would exhaust all other options first.
