Using PLINQ / TPL in a custom workflow activity - dynamics-crm

I have a workflow defined to execute when a field on a custom entity is changed.
The workflow calls into a custom activity which in turn uses PLINQ to process a bunch of records.
The code that the custom activity calls into looks like this:
protected override void Execute(CodeActivityContext executionContext)
{
    // Get the context service.
    IWorkflowContext context = executionContext.GetExtension<IWorkflowContext>();
    IOrganizationServiceFactory serviceFactory =
        executionContext.GetExtension<IOrganizationServiceFactory>();
    // Use the context service to create an instance of IOrganizationService.
    IOrganizationService _orgService = serviceFactory.CreateOrganizationService(context.InitiatingUserId);
    int pagesize = 2000;
    // use FetchXML aggregate functions to get the total count of records to process
    // Reference: http://msdn.microsoft.com/en-us/library/gg309565.aspx
    int totalcount = GetTotalCount();
    int totalPages = (int)Math.Ceiling((double)totalcount / (double)pagesize);
    try
    {
        Parallel.For(1,
            totalPages + 1,
            () => new MyOrgserviceContext(_orgService),
            (pageIndex, state, ctx) =>
            {
                var items = ctx.myEntitySet.Skip((pageIndex - 1) * pagesize).Take(pagesize);
                foreach (var item in items)
                {
                    // process item as needed
                    ctx.SaveChanges();
                }
                return ctx;
            },
            ctx => ctx.Dispose()
        );
    }
    catch (AggregateException ex)
    {
        // handle as needed
    }
}
I'm noticing the following error(s) as an aggregate exception (multiple occurrences of the same error in the InnerExceptions):
"Encountered disposed CrmDbConnection when it should not be disposed"
From what I've read:
CRM 2011 Workflow "Invalid Pointer" error
this can happen when you have class-level variables, since the workflow runtime can end up sharing the same class instance across multiple workflow invocations. That is clearly not the case here, and I also don't have multiple instances of this workflow running against multiple records; there is just one instance of this workflow running at any point in time.
The code above works fine when extracted and hosted outside the workflow host (CRMAsyncService).
This is using CRM 2011 Rollup 10.
Any insights much appreciated.

I'm not certain, but this might just be because you are disposing of your connection at ctx.Dispose().
Since each new MyOrgserviceContext(_orgService) object uses the same IOrganizationService, I would suspect that when the first MyOrgserviceContext is disposed, all the other MyOrgserviceContext objects are left with a disposed connection, meaning their service calls will fail and an exception is thrown.
I would suggest removing the dispose to see if this resolves the problem.
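If removing the dispose does resolve it, you could alternatively give each partition its own IOrganizationService, so that every MyOrgserviceContext owns its own connection and the ctx.Dispose() in the localFinally becomes safe again. A rough, untested sketch, assuming serviceFactory and context are still in scope as in the Execute method above:
Parallel.For(1,
    totalPages + 1,
    // each partition gets its own service, and therefore its own connection
    () => new MyOrgserviceContext(
        serviceFactory.CreateOrganizationService(context.InitiatingUserId)),
    (pageIndex, state, ctx) =>
    {
        var items = ctx.myEntitySet.Skip((pageIndex - 1) * pagesize).Take(pagesize);
        foreach (var item in items)
        {
            // process item as needed
            ctx.SaveChanges();
        }
        return ctx;
    },
    ctx => ctx.Dispose() // safe now: no other partition shares this connection
);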

Related

NetworkStream ReadAsync and WriteAsync hang infinitely when using CancellationTokenSource - Deadlock Caused by Task.Result (or Task.Wait)

After reading pretty much every question on Stack Overflow and Microsoft's documentation about NetworkStream, I don't understand what is wrong with my code.
The problem I see is that my method GetDataAsync() hangs very often. I call this method from my Init method, which is invoked like so:
public MyView(string id)
{
    InitializeComponent();
    MyViewModel myViewModel = session.Resolve<MyViewModel>(); // Autofac
    myViewModel.Init(id);
    BindingContext = myViewModel;
}
Above, my View does its initialization, resolves MyViewModel from the Autofac DI container, and then calls the view model's Init() method to do some additional setup on the VM.
The Init method then calls my async method GetDataAsync, which returns an IList, like so:
public void Init()
{
    // call this Async method to populate a ListView
    foreach (var model in GetDataAsync("111").Result)
    {
        // The List<MyModel> returned by GetDataAsync is then used to load the
        // ListView's ObservableCollection<MyModel>. This ObservableCollection
        // is data-bound to a ListView in this View, so the ListView shows its
        // data once the View displays.
    }
}
Here is my GetDataAsync() method, including my comments:
public override async Task<IList<MyModel>> GetDataAsync(string id)
{
    var timeout = TimeSpan.FromSeconds(20);
    try
    {
        byte[] messageBytes = GetMessageBytes(id);
        string msg;
        using (var cts = new CancellationTokenSource(timeout))
        using (TcpClient client = new TcpClient(Ip, Port))
        using (NetworkStream stream = client.GetStream())
        {
            await stream.WriteAsync(messageBytes, 0, messageBytes.Length, cts.Token);
            await stream.FlushAsync(cts.Token);
            byte[] buffer = new byte[1024];
            StringBuilder builder = new StringBuilder();
            int bytesRead = 0;
            await Task.Delay(500);
            while (stream.DataAvailable) // need to Delay to wait for data to be available
            {
                bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length, cts.Token);
                builder.AppendFormat("{0}", Encoding.ASCII.GetString(buffer, 0, bytesRead));
            }
            msg = builder.ToString();
        }
        return ParseMessageIntoList(msg); // parses message into IList<MyModel>
    }
    catch (OperationCanceledException oce)
    {
        return new List<MyModel>();
    }
    catch (Exception ex)
    {
        return new List<MyModel>();
    }
}
I would expect that a ReadAsync or WriteAsync call either completes successfully, throws some exception, or gets cancelled after the timeout, in which case I would catch an OperationCanceledException.
However, it just hangs endlessly when I call the method above. If I am debugging with breakpoints in the code above, I can step through the method entirely, but if I call it a second time, the app just hangs forever.
I am new to Tasks and async programming, so I am also not sure whether I am doing my cancellation and exception handling properly here.
UPDATE AND FIX
I figured out how to fix the deadlock issue. In the hope that this will help others who might run into the same issue, I'll first explain it. The articles that helped me a lot are:
https://devblogs.microsoft.com/pfxteam/await-and-ui-and-deadlocks-oh-my/ by Stephen Toub
https://montemagno.com/c-sharp-developers-stop-calling-dot-result/ by James Montemagno
https://msdn.microsoft.com/en-us/magazine/jj991977.aspx by Stephen Cleary
https://blog.xamarin.com/getting-started-with-async-await/ by Jon Goldberger
@StephenCleary's article was a great help in understanding the issue. Calling Result or Wait (above, I call Result when calling GetDataAsync) will lead to a deadlock.
The context thread (the UI thread in this case) is now waiting for GetDataAsync to complete, but GetDataAsync captures the current context thread (the UI thread) so it can resume on it once it gets data over TCP. Since that context thread is now blocked by the call to Result, it cannot resume.
The end result is that it looks like the call to GetDataAsync has deadlocked, but in reality it is the call to Result that deadlocked.
After reading tons of articles from @StephenToub, @StephenCleary, @JamesMontemagno and @JonGoldberger (thank you all), I started to understand the issue (I am new to TAP/async/await).
Then I discovered continuations in Tasks and how to use them to resolve the issue (thanks to Stephen Toub's article above).
So, instead of calling it like this:
IList<MyModel> models = GetDataAsync("111").Result;
foreach (var model in models)
{
    MyModelsObservableCollection.Add(model);
}
I call it with a continuation like this:
GetDataAsync(id)
    .ContinueWith((antecedant) =>
    {
        foreach (var model in antecedant.Result)
        {
            MyModelsObservableCollection.Add(model);
        }
    }, TaskContinuationOptions.OnlyOnRanToCompletion)
    .ContinueWith((antecedant) =>
    {
        var error = antecedant.Exception.Flatten();
    }, TaskContinuationOptions.OnlyOnFaulted);
This seems to have fixed my deadlocking issue, and now my list loads fine even though it is loaded from the constructor.
So this seems to work just fine. But @JonGoldberger also suggests another solution in his article https://blog.xamarin.com/getting-started-with-async-await/, which is to use Task.Run(async () => {...}); and inside it await GetDataAsync and load the ObservableCollection. I gave that a try as well, and it is not blocking either, so it works great:
Task.Run(async () =>
{
    IList<MyModel> models = await GetDataAsync(id);
    foreach (var model in models)
    {
        MyModelsObservableCollection.Add(model);
    }
});
So it looks like either of these two approaches removes the deadlock just fine. Since my Init method is called from a constructor, I cannot make it async and await it, so using one of the two methods described above resolves my problem. I don't know which one is better, but in my tests they both work.
Your problem is most likely due to GetDataAsync("111").Result. You shouldn't block on async code.
This can cause deadlocks. E.g., if you're on a UI thread, the UI thread will start GetDataAsync and run it until it hits an await. At this point, GetDataAsync returns an incomplete task, and the .Result call blocks the UI thread until that task completes.
Eventually, the inner async call completes and GetDataAsync is ready to resume executing after its await. By default, await captures its context and resumes on that context, which in this example is the UI thread, and that thread is blocked because it called Result. So the UI thread is waiting for GetDataAsync to complete, and GetDataAsync is waiting for the UI thread so it can complete: deadlock.
The proper solution is to go async all the way; replace .Result with await, and make the necessary changes to other code for that to happen.
As stated in my update, going async all the way by providing an async lambda like the one below resolved the issue for me:
Task.Run(async () =>
{
    IList<MyModel> models = await GetDataAsync(id);
    foreach (var model in models)
    {
        MyModelsObservableCollection.Add(model);
    }
});
Loading an observable collection asynchronously in a ctor this way (in my case, the ctor calls Init, which then uses this Task.Run) solves the problem.
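For completeness, here is a rough sketch of the fully "async all the way" variant the answer above recommends, moving the load out of the constructor and into OnAppearing. The InitAsync name and the page/view-model split are illustrative, not taken from the posts above:
public partial class MyView : ContentPage
{
    readonly MyViewModel _myViewModel;
    readonly string _id;

    public MyView(string id)
    {
        InitializeComponent();
        _myViewModel = session.Resolve<MyViewModel>(); // Autofac, as in the question
        BindingContext = _myViewModel;
        _id = id;
    }

    protected override async void OnAppearing()
    {
        base.OnAppearing();
        // await instead of .Result, so the UI thread is never blocked
        await _myViewModel.InitAsync(_id);
    }
}

// In MyViewModel, InitAsync is an awaitable counterpart of Init:
public async Task InitAsync(string id)
{
    foreach (var model in await GetDataAsync(id))
    {
        MyModelsObservableCollection.Add(model);
    }
}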

How to perform new operation on #RetryOnFailure by jcabi

I am using jcabi-aspects to retry the connection to my URL http://xxxxxx:8080/hello until the connection comes back. As you know, @RetryOnFailure from jcabi has two fields, attempts and delay.
I want to derive attempts from an expiry time and the delay, i.e. attempts (12) = expiryTime (1 min = 60000 millis) / delay (5 sec = 5000 millis), on jcabi @RetryOnFailure. How do I do this? The code snippet is below.
@RetryOnFailure(attempts = 12, delay = 5)
public String load(URL url) {
    return url.openConnection().getContent();
}
You can combine two annotations:
@Timeable(unit = TimeUnit.MINUTES, limit = 1)
@RetryOnFailure(attempts = Integer.MAX_VALUE, delay = 5)
public String load(URL url) {
    return url.openConnection().getContent();
}
@RetryOnFailure will retry forever, but @Timeable will stop it after a minute.
The library you picked (jcabi) does not have this feature. But luckily the very handy RetryPolicies from Spring-Batch have been extracted (so you can use them alone, without the batching):
Spring-Retry
One of the many classes you could use from there is TimeoutRetryPolicy:
RetryTemplate template = new RetryTemplate();
TimeoutRetryPolicy policy = new TimeoutRetryPolicy();
policy.setTimeout(30000L);
template.setRetryPolicy(policy);
Foo result = template.execute(new RetryCallback<Foo>() {
    public Foo doWithRetry(RetryContext context) {
        // Do stuff that might fail, e.g. webservice operation
        return result;
    }
});
The whole spring-retry project is very easy to use and full of features, like backOffPolicies, listeners, etc.

Data Fetching Crashes in Xamarin Forms

I am trying to fetch customer data and parse it into Customer objects to display in a TableView. The following code sometimes works, sometimes not. Whenever it does crash, it shows that the Customer data is empty in the foreach loop, even though I run the same code every time. I have no clue what could be wrong in these circumstances. I am quite new to this platform. If I am missing anything or you need extra information, please let me know.
namespace TableViewExample
{
    public partial class MyDataServices : ContentPage
    {
        private ODataClient mODataClient;
        private IEnumerable<IDictionary<string, object>> Customers;

        public MyDataServices ()
        {
            InitializeComponent ();
            InitializeDataService ();
            GetDataFromOdataService ();
            TableView tableView = new TableView { };
            var section = new TableSection ("Customer");
            foreach (var customers in Customers) {
                //System.Diagnostics.Debug.WriteLine ((string)customers ["ContactName"]);
                var name = (string)customers ["ContactName"];
                var cell = new TextCell { Text = name };
                section.Add (cell);
            }
            tableView.Root.Add (section);
            Padding = new Thickness (10, 20, 10, 10);
            Content = new StackLayout () {
                Children = { tableView }
            };
        }

        private void InitializeDataService ()
        {
            try {
                mODataClient = new ODataClient ("myURL is here");
            }
            catch {
                System.Diagnostics.Debug.WriteLine ("ERROR!");
            }
        }

        private void GetDataFromOdataService ()
        {
            try {
                Customers = mODataClient.For ("Customers").FindEntries ();
            }
            catch {
                System.Diagnostics.Debug.WriteLine ("ERROR!");
            }
        }
    }
}
It's hard to help out here; however, here are some things to consider:
It sounds like the data service could be uncontactable or offline, too busy, or even throwing an exception itself and returning a response that you are not expecting, which then triggers an exception and a crash in your application, because you always expect an exact response without catering for abnormal responses or events.
If you are contacting an external service over the internet, it may simply be that your internet connection is slow or faulty and not returning the information fast enough, among other possibilities.
In your code you assume that you always get a response from the server, and that this response will always have the anticipated structure you are expecting to decode, without factoring in the possibility of abnormal responses returned by the data service. I have not used ODataClient personally, so I am not sure how it behaves when no data is received, on a timeout, or, in your case, how the data service responds to a bad request, etc.
I am assuming an exception would get thrown, and you do get your debug line executed indicating a failure.
You may also want to adjust this statement so that you write out the exception as well, i.e.:
private void GetDataFromOdataService ()
{
    try
    {
        Customers = mODataClient.For ("Customers").FindEntries ();
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine ("ERROR!" + ex.ToString());
    }
}
If there was a bad response, then the line at Customers = ..... would throw the exception, as there may be no Customers returned or some other information packaged in the response from the data service.
The Customers variable would also be null at this point, I am assuming, due to this failure.
So when you get back to your code at foreach (var customers in Customers) {, it will then throw a NullReferenceException, as Customers is in fact null.
As all your current code executes in the constructor without any try/catch block around it, it will also crash your application at this point.
Also, you are doing all of this work in the constructor. Try separating this out. I haven't investigated exactly where the constructor gets called in an iOS page life-cycle; however, if it is in viewDidLoad, then you have something like 10 seconds for everything to complete, otherwise it will exit automatically. I imagine in your case this isn't applicable, however.
Going forward, try keeping your layout controls in the constructor, and move your data task to maybe the OnAppearing override instead, as sketched below.
Using async would definitely be advisable as well, but remember you need to inspect the response from your data service, as the error could be embedded within the response, and you will need to detect when it is OK to process the data.
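For example, a rough sketch of that suggestion, keeping the synchronous FindEntries call but moving it off the UI thread and out of the constructor (this assumes tableView has been promoted to a field so that OnAppearing can reach it; the exact shape is illustrative):
protected override async void OnAppearing ()
{
    base.OnAppearing ();
    try {
        // run the blocking OData call on a background thread
        Customers = await Task.Run (() => mODataClient.For ("Customers").FindEntries ());
    }
    catch (Exception ex) {
        System.Diagnostics.Debug.WriteLine ("ERROR!" + ex.ToString ());
        Customers = null;
    }
    var section = new TableSection ("Customer");
    if (Customers != null) { // guard against a failed or empty response
        foreach (var customer in Customers) {
            section.Add (new TextCell { Text = (string)customer ["ContactName"] });
        }
    }
    tableView.Root.Add (section);
}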

Subscription to DTE events doesn't seem to work - Events don't get called

I've made an extension inside a package and I am calling the following code (occurs when a user presses a button in the toolbar):
DTE2 _dte = (DTE2)GetService(typeof(DTE));

_dte.Events.DebuggerEvents.OnEnterBreakMode += DebuggerEvents_OnEnterBreakMode;
_dte.Events.DebuggerEvents.OnEnterDesignMode += DebuggerEvents_OnEnterDesignMode;
_dte.Events.DebuggerEvents.OnContextChanged += DebuggerEvents_OnContextChanged;
_dte.Events.DocumentEvents.DocumentSaved += new _dispDocumentEvents_DocumentSavedEventHandler(DocumentEvents_DocumentSaved);
_dte.Events.DocumentEvents.DocumentOpened += new _dispDocumentEvents_DocumentOpenedEventHandler(DocumentEvents_DocumentOpened);

void DocumentEvents_DocumentOpened(Document Document)
{
}

void DocumentEvents_DocumentSaved(Document Document)
{
}

void DebuggerEvents_OnEnterBreakMode(dbgEventReason Reason, ref dbgExecutionAction ExecutionAction)
{
}

void DebuggerEvents_OnContextChanged(Process NewProcess, Program NewProgram, Thread NewThread, StackFrame NewStackFrame)
{
}

private void DebuggerEvents_OnEnterDesignMode(dbgEventReason reason)
{
}
The first and major problem is that the event subscriptions don't work. I've tried:
Opening new documents
Detaching from the debugger (thus supposedly triggering OnEnterDesignMode)
Saving a document
None of these seem to have any effect, and the callback functions are never called.
The second issue is that the subscription line itself USUALLY works (the subscription, that is; the callbacks don't fire, as described above), but after a while, running the unsubscribe line, e.g.:
_dte.Events.DebuggerEvents.OnEnterBreakMode -= DebuggerEvents_OnEnterBreakMode;
causes an exception:
Exception occured!
System.Runtime.InteropServices.InvalidComObjectException: COM object that has been separated from its underlying RCW cannot be used.
at System.StubHelpers.StubHelpers.StubRegisterRCW(Object pThis, IntPtr pThread)
at System.Runtime.InteropServices.UCOMIConnectionPoint.Unadvise(Int32 dwCookie)
at EnvDTE._dispDebuggerEvents_EventProvider.remove_OnEnterDesignMode(_dispDebuggerEvents_OnEnterDesignModeEventHandler A_1)
Any ideas will be welcome
Thanks!
Vitaly
Posting an answer that I got from MSDN forums, by Ryan Molden, in case it helps anyone:
I believe the problem here is how the CLR handles COM endpoints (event sinks). If I recall correctly, when you hit the _applicationObject.Events.DebuggerEvents part of your 'chain', the CLR will create a NEW DebuggerEvents object for the property access and WON'T cache it. Therefore it comes back to you, you sign up an event handler to it (which creates a strong ref between the TEMPORARY object and your object due to the delegate, but NOT from your object to the temporary object, which would prevent the GC). Then you don't store that object anywhere, so it is immediately GC eligible and will eventually be GC'ed.
I changed the code to store DebuggerEvents as a field and it all started to work fine.
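In code, that change is essentially the following (a minimal sketch; the field and method names are illustrative):
// Keep a strong reference to the COM event source so it is not garbage collected.
private DebuggerEvents _debuggerEvents;

private void SubscribeToDebuggerEvents()
{
    _debuggerEvents = _dte.Events.DebuggerEvents;
    _debuggerEvents.OnEnterBreakMode += DebuggerEvents_OnEnterBreakMode;
    _debuggerEvents.OnEnterDesignMode += DebuggerEvents_OnEnterDesignMode;
    _debuggerEvents.OnContextChanged += DebuggerEvents_OnContextChanged;
}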
Here is what @VitalyB means, using code:
// List where we will keep the event objects.
// Make sure this variable has class-level scope so that the GC does not collect the events.
List<object> events = new List<object>();

public void AddEvents(DTE dte)
{
    // get the document events object
    var docEvent = dte.Events.DocumentEvents;

    // add the event object to the list so that the GC does not remove it
    events.Add(docEvent);

    docEvent.DocumentOpened += (document) => {
        Console.Write("document was opened!");
    };

    // you may add more events the same way:
    var commandEvent = dte.Events.CommandEvents;
    events.Add(commandEvent);
    // commandEvent.AfterExecute += ...;
}

Non-Blocking Endpoint: Returning an operation ID to the caller - Would like to get your opinion on my implementation?

Boot Pros,
I recently started programming in Spring Boot and I stumbled upon a question I would like to get your opinion on.
What I try to achieve:
I created a Controller that exposes a GET endpoint, named nonBlockingEndpoint. This nonBlockingEndpoint executes a pretty long operation that is resource-heavy and can run between 20 and 40 seconds. (In the attached code, it is mocked by a Thread.sleep().)
Whenever the nonBlockingEndpoint is called, the Spring application should register that call and immediately return an operation ID to the caller.
The caller can then use this ID to query the status of that operation on another endpoint, queryOpStatus. At the beginning it will be started, and once the controller is done serving the request it will be set to a code such as SERVICE_OK. The caller then knows that his request was successfully completed on the server.
The solution that I found:
I have the following controller (note that it is explicitly not tagged with @Async).
It uses an APIOperationsManager to register that a new operation was started.
I use the CompletableFuture Java construct to run the long-running code as a new asynchronous task, using CompletableFuture.supplyAsync(() -> {}.
I immediately return a response to the caller, telling it that the operation is in progress.
Once the async task has finished, I use cf.thenRun() to update the operation status via the APIOperationsManager.
Here is the code:
@GetMapping(path = "/nonBlockingEndpoint")
public @ResponseBody ResponseOperation nonBlocking() {

    // Register a new operation
    APIOperationsManager apiOpsManager = APIOperationsManager.getInstance();
    final int operationID = apiOpsManager.registerNewOperation(Constants.OpStatus.PROCESSING);

    ResponseOperation response = new ResponseOperation();
    response.setMessage("Triggered non-blocking call, use the operation id to check status");
    response.setOperationID(operationID);
    response.setOpRes(Constants.OpStatus.PROCESSING);

    CompletableFuture<Boolean> cf = CompletableFuture.supplyAsync(() -> {
        try {
            // Here we will
            Thread.sleep(10000L);
        } catch (InterruptedException e) {}
        // whatever the return value was
        return true;
    });

    cf.thenRun(() -> {
        // We are done with the super long process, so update our Operations Manager
        APIOperationsManager a = APIOperationsManager.getInstance();
        boolean asyncSuccess = false;
        try { asyncSuccess = cf.get(); }
        catch (Exception e) {}
        if (true == asyncSuccess) {
            a.updateOperationStatus(operationID, Constants.OpStatus.OK);
            a.updateOperationMessage(operationID, "success: The long running process has finished and this is your result: SOME RESULT");
        }
        else {
            a.updateOperationStatus(operationID, Constants.OpStatus.INTERNAL_ERROR);
            a.updateOperationMessage(operationID, "error: The long running process has failed.");
        }
    });

    return response;
}
Here is also the APIOperationsManager.java for completeness:
public class APIOperationsManager {

    private static APIOperationsManager instance = null;
    private Vector<Operation> operations;
    private int currentOperationId;

    private static final Logger log = LoggerFactory.getLogger(Application.class);

    protected APIOperationsManager() {}

    public static APIOperationsManager getInstance() {
        if (instance == null) {
            synchronized (APIOperationsManager.class) {
                if (instance == null) {
                    instance = new APIOperationsManager();
                    instance.operations = new Vector<Operation>();
                    instance.currentOperationId = 1;
                }
            }
        }
        return instance;
    }

    public synchronized int registerNewOperation(OpStatus status) {
        cleanOperationsList();
        currentOperationId = currentOperationId + 1;
        Operation newOperation = new Operation(currentOperationId, status);
        operations.add(newOperation);
        log.info("Registered new Operation to watch: " + newOperation.toString());
        return newOperation.getId();
    }

    public synchronized Operation getOperation(int id) {
        for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                return op;
            }
        }
        Operation notFound = new Operation(-1, OpStatus.INTERNAL_ERROR);
        notFound.setCrated(null);
        return notFound;
    }

    public synchronized void updateOperationStatus(int id, OpStatus newStatus) {
        iteration : for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                op.setStatus(newStatus);
                log.info("Updated Operation status: " + op.toString());
                break iteration;
            }
        }
    }

    public synchronized void updateOperationMessage(int id, String message) {
        iteration : for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                op.setMessage(message);
                log.info("Updated Operation status: " + op.toString());
                break iteration;
            }
        }
    }

    private synchronized void cleanOperationsList() {
        Date now = new Date();
        for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if ((now.getTime() - op.getCrated().getTime()) >= Constants.MIN_HOLD_DURATION_OPERATIONS) {
                log.info("Removed operation from watchlist: " + op.toString());
                iterator.remove();
            }
        }
    }
}
The questions that I have:
Is that concept a valid one that also scales? What could be improved?
Will I run into concurrency issues / race conditions?
Is there a better way to achieve the same thing in Spring Boot that I just didn't find yet? (Maybe with the @Async annotation?)
I would be very happy to get your feedback.
Thank you so much,
Peter P
It is a valid pattern to submit a long-running task with one request, returning an id that allows the client to ask for the result later.
But there are some things I would suggest you reconsider:
Do not use an Integer as the id, as it allows an attacker to guess ids and get the results for those ids. Instead, use a random UUID.
If you need to restart your application, all ids and their results will be lost. You should persist them to a database.
Your solution will not work in a cluster with many instances of your application, as each instance would only know its 'own' ids and results. This could also be solved by persisting them to a database or a Redis store.
The way you are using CompletableFuture gives you no control over the number of threads used for the asynchronous operation. It is possible to do this with standard Java, but I would suggest using Spring to configure the thread pool.
Annotating the controller method with @Async is not an option; that simply does not work. Instead, put all asynchronous operations into a simple service and annotate that with @Async. This has some advantages:
You can use this service synchronously as well, which makes testing a lot easier
You can configure the thread pool with Spring
The /nonBlockingEndpoint should not return just the id, but a complete link to queryOpStatus, including the id. The client can then use this link directly without any additional information.
Additionally, there are some low-level implementation issues which you may also want to change:
Do not use Vector; it synchronizes on every operation. Use a List instead. Iterating over a List is also much easier: you can use for-loops or streams.
If you need to look up a value, do not iterate over a Vector or List; use a Map instead.
APIOperationsManager is a singleton. That makes no sense in a Spring application. Make it a normal POJO, create a bean of it, and get it autowired into the controller. Spring beans are singletons by default.
You should avoid doing complicated operations in a controller method. Instead, move everything into a service (which may be annotated with @Async). This makes testing easier, as you can test this service without a web context.
Hope this helps.
Do I need to make database access transactional?
As long as you write/update only one row, there is no need to make this transactional, as this is indeed 'atomic'.
If you write/update many rows at once, you should make it transactional to guarantee that either all rows are updated or none.
However, if two operations (maybe from two clients) update the same row, the last one will always win.
