I'm using FluentScheduler 3.1.42, which appears to not support async jobs.
We have a few tasks that execute async code by calling .Wait() on it. They work well and have never caused a task to hang, despite none of the await statements using ConfigureAwait(false).
I added a new task that's a bit more complicated: it does multiple awaits and uses Task.WhenAll together with the SemaphoreSlim throttling pattern. This task usually works, but occasionally hangs (only twice in about a week, and it's scheduled to run every 5 seconds on 2 servers) until I restart the web server.
So my question is: could the periodic hangs be caused by not calling ConfigureAwait(false) on all await statements down the pipe? If so, why doesn't it hang every time? If not, any idea what might be causing my task to hang?
UPDATE
I updated my code to dispose the SemaphoreSlim and also an SMTP object that was being used, and that seems to have cleared up the issue. I also added explicit calls to ConfigureAwait(false) for good measure, and wrapped fewer awaits in the semaphore's try/finally since I didn't need to limit concurrency for as much code as I had been. Not totally sure exactly what was happening, but it doesn't seem to be happening anymore, so that's cool.
There's probably no context in play (SynchronizationContext.Current is null and TaskScheduler.Current is just TaskScheduler.Default), and if that's correct then you probably don't need ConfigureAwait(false).
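One quick way to sanity-check that assumption (I can't verify your environment from here, so treat this as a diagnostic, not a fix):

```
// Run this inside the scheduled job body.
// SynchronizationContext is in System.Threading; TaskScheduler is in System.Threading.Tasks.
bool noCapturedContext =
    SynchronizationContext.Current == null &&
    TaskScheduler.Current == TaskScheduler.Default;
// If noCapturedContext is true, ConfigureAwait(false) is effectively a no-op in that code path.
```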
If not, any idea what might be causing my task to hang?
Of course, without code, any answers are complete guesses, but if I had to guess...
with the SemaphoreSlim pattern
My guess is that your code is not releasing the SemaphoreSlim. For example, if you forgot to wrap the Release in a finally block, then an exception would cause the semaphore to stay locked out.
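Here's a minimal sketch of the pattern I mean, with made-up names since I haven't seen your code. The point is that Release has to live in a finally; otherwise one faulted task permanently eats a slot and every later run blocks on WaitAsync forever:

```
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

static class ThrottledBatch
{
    // Hypothetical throttle allowing 4 concurrent operations.
    private static readonly SemaphoreSlim Throttle = new SemaphoreSlim(4);

    private static async Task ProcessAsync(int item)
    {
        await Throttle.WaitAsync().ConfigureAwait(false);
        try
        {
            // Stand-in for the real work; if this throws and Release is not
            // in a finally, the slot is never given back.
            await Task.Delay(100).ConfigureAwait(false);
        }
        finally
        {
            Throttle.Release();
        }
    }

    // The WhenAll fan-out described in the question.
    public static Task RunBatchAsync() =>
        Task.WhenAll(Enumerable.Range(0, 20).Select(ProcessAsync));
}
```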
Related
We have a legacy VB6 app which has started, from time to time, to hang. We thought it might be related to a shift to Citrix, but we can now replicate the behaviour on a thick client on Win10. We don't think we have seen this before on earlier Windows versions, but are still checking logs to confirm that.
We experience the behaviour when tabbing into a text box and then tabbing out. As we pass through it, we make a simple ADO call to look up/validate some data in the text box. As part of the correct program run we log:
“Opening Dataset: SELECT ... FROM ... ”
“Opened Dataset”
Between these 2 log statements is simple ADO data retrieval code with which we have had no problems previously. It is in an ActiveX DLL and is running synchronously. Most importantly, between these 2 log statements there is no DoEvents or API call which would yield control. As far as we can see, it should be a purely synchronous operation.
When the system crashes, which happens sporadically, we can see other logging statements appear between these 2. They can be either resource status (e.g. how much memory, GDI/user objects - which would usually be logged because a timer has triggered in the main form) or focus-type events - which aren't timer driven, at least in our codebase.
“Opening Dataset: SELECT ... FROM ... ”
“Resource Status: ...”
“Opened Dataset”
or
“Opening Dataset: SELECT ... FROM ... ”
“TextItem.OnLostFocus Item1 ...”
“TextItem.Validate ...”
“TextItem.OnGotFocus Item2 ...”
“Opened Dataset”
So my initial question is: in what scenario can what should be a synchronous operation be interrupted and appear to act asynchronously?
For example, and we aren't doing this, I could imagine writing some unsafe code where, by using a multimedia timer (on another thread) and supplying an AddressOf parameter pointing at a function in one of our modules, that timer initiates execution of our code separately from the correct control flow. Other than something like that, I just can't see how synchronous VB6 code could be interrupted in this way.
I'd be really grateful for any thoughts, suggestions or advice. I'm really sorry if this is so vague. It perhaps reflects how I'm struggling to get my head round this problem.
Just to say, we tracked this down to Windows 10 plus an old (out of support) socket component we are using. It looks like it is pumping the message queue "at the wrong time" and hence we are seeing UI events appear in the middle of a synchronous process. We don't see this behaviour on earlier Windows versions.
I don't know what may have changed in Win10 which would result in this, but we obviously need to upgrade.
In our case we had a few long-running timers pulling status/changes from the DB, which caused this. We are using ADO with SQL Native Client and MARS, which worked great up until Windows 10, where intermittent lock-ups occurred. Logging and WinDbg confirmed this was happening when 2 requests were hitting the ADO connection at the same time. The error from ADO was "Unable to open a logical session", error number -2147467259, and it actually caused SQL Server 2014 (running on another machine) to block all other client queries from multiple different applications and machines until the locked-up app was killed. I could not replicate this in the IDE, as apparently that forces timers to work the way they always did. The fix was to async our ADO implementation and put a connection manager on top of the SQL connections to force requestors to wait their turn (basically taking the Win10 async'd timer feature back out). The only performance impact was the additional few milliseconds of delaying the timer-fired SQL query when it collided with another query.
I have a workflow that runs when an entity is created; it creates two other entities and puts them on a queue. It then waits until each entity's status reason is set to done, after which it continues.
Basically two teams will work an order and then it will continue processing after both teams are done.
Most of the time it works. However, sometimes it waits forever. I'll re-activate and re-resolve the other tasks, but it just never wakes up.
What can I do? The workflows aren't really powerful enough for me to have it poll with a timeout (there are no loops). I'd like to avoid on-change plugins for these other entities, which would scatter the workflow behavior all over the place.
Edit:
Restarting the CRM services (not sure which did it, I restarted them all) allowed the workflow to resume. However, I'd still like to know how to make this more reliable.
I had the same problem (and a lot more) with workflows in CRM 2011 and decided not to use them (except for very special purposes).
The main reason is their very limited error handling. Another reason is that it is inconvenient to put them under source control. Other reasons: workflows cannot run offline, and user impersonation is not supported. For a comparison, look here: http://goo.gl/9ht1QJ
Use plugins instead of workflows; then you have full control.
But keep in mind that plugins (unlike workflows) are not designed for long running tasks.
So they have a default maximum execution time of 120 seconds and are not stateful/persisted. But in most cases (and I think also in your case) that is not a problem.
Just change your eventing a little bit:
Implement and register a plugin step for the first part: when the entity is created, create the two other entities and put them on a queue.
Implement and register another step on the status change of those entities: when an entity's status reason is set to done, query for the other entity and check its status; if both are done, continue processing (see the sketch after these steps).
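A rough sketch of what that second step could look like with the CRM SDK. All entity and attribute names here (new_task, new_parentorderid, statuscode, and the "done" status value) are placeholders for whatever your actual schema uses; register the step on the update of the status field of the child entities:

```
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class ChildTaskDonePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        var target = (Entity)context.InputParameters["Target"];
        var updated = service.Retrieve("new_task", target.Id,
            new ColumnSet("statuscode", "new_parentorderid"));

        // Find all sibling tasks hanging off the same parent (placeholder schema).
        var parent = updated.GetAttributeValue<EntityReference>("new_parentorderid");
        var query = new QueryExpression("new_task") { ColumnSet = new ColumnSet("statuscode") };
        query.Criteria.AddCondition("new_parentorderid", ConditionOperator.Equal, parent.Id);

        bool allDone = true;
        foreach (var sibling in service.RetrieveMultiple(query).Entities)
        {
            var status = sibling.GetAttributeValue<OptionSetValue>("statuscode");
            if (status == null || status.Value != 2 /* placeholder for your "done" status reason */)
            {
                allDone = false;
                break;
            }
        }

        if (allDone)
        {
            // Both teams are done: continue the processing the workflow used to do,
            // e.g. update the parent order here via service.Update(...).
        }
    }
}
```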
If you really do not want to use plugins for your business logic, you can consider implementing a plugin which restarts/resumes faulted workflows.
But that's not a very nice solution.
I wanted to know how I can make the io_service do something like a thread.join() and wait for all tasks to finish.
io_type->post(strand->wrap(boost::bind(&somemethod, ptr, parameter)));
In the above code, if 4 threads were initially launched, this would give work to the next available thread. However, I want to know how I could actually wait for all the threads to finish their work, like we do with thread.join().
If this really needs to be done, then you could set up a mutex or critical section to stop your io handlers from processing messages off of the socket. This would need to be activated from another thread. But, more importantly...
Perhaps you should rethink your design. The problem with having the io_service wait for other threads to finish is that it would then be unresponsive. In general, that's not a good idea; I suspect most developers working on networking software would not even consider it. If you are receiving messages that are not ready to be processed yet because of other processing that is going on, then consider storing them in a queue and processing them on a different thread once the other threads have signaled that they have completed their work.
I am currently working on an application which requires me to compute something that will take some time to complete, so I am doing this computation in the background. I implemented a solution which starts a new thread for each such request via an ExecutorService. These threads regularly report their progress back to a (volatile) IModel. Additionally, I am using an AjaxSelfUpdatingTimerBehavior which updates the website by printing the progress represented by this IModel to the screen. By doing so, the website stays responsive, the task can be interrupted by a button click, and the HTTP request which requested the long-running task does not time out.
However, Wicket does not like non-Serializable references in its WebPage or Panel instances, and I wonder what would be the best way of solving this problem. For now, I wrote a little manager class which uses a cache referenced by a static variable, which is how I am avoiding the serialization restriction. The WebPage instance which triggered the task now only holds a reference to a unique ID assigned to it by my manager class when invoking the task.
Of course, with this approach I have to clean up after myself, and I am also concerned about security, since I have not yet taken steps to prevent interference between tasks started by different users. Also, it just feels wrong to me, since I want to keep this task in the scope of the WebPage instead of letting it escape into a global environment. I am sure there is a better way to do this!
Thanks for any thoughts on this matter and for sharing your experience!
Your approach sounds perfectly reasonable: pass the task handling to a non-web instance (could be a Spring-managed singleton) and just keep an identifier in your component/model.
Normally, billings should execute in the background on a scheduled date (I haven't figured out how to do that yet, but that's another topic).
But occasionally, the user may wish to execute a billing manually. Once clicked, I would like to be sure the operation runs to completion regardless of what happens on the user side (e.g. closes browser, machine dies, network goes down, whatever).
I'm pretty sure db.SaveChanges() wraps its DB operations in a transaction, so from a server perspective I believe the whole thing will either finish or fail, with no partial effect.
But what about all the work between the POST and the db.SaveChanges()? Is there a way to be sure the user can't inadvertently or intentionally stop that from completing?
I guess a corollary to this question is what happens to a running Asynchronous Controller or a running Task or Thread if the user disconnects?
My previous project was actually doing a billing system in MVC. I distinctly remember testing out what would happen if I used Task and then quickly exited the site. It did all of the calculations just fine, ran a stored procedure in SQL Server, and sent me an e-mail when it was done.
So, to answer your question: if you wrap the operations in a Task, it should finish anyway with no problems.
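For reference, here's a minimal sketch of that pattern in an MVC controller action; the context, entity names, and action are placeholders, not the asker's code:

```
[HttpPost]
public ActionResult RunBilling(int billingId)
{
    // Kick the work onto the thread pool; the request can return while this runs to completion.
    Task.Run(() =>
    {
        using (var db = new BillingContext())   // hypothetical EF DbContext
        {
            var billing = db.Billings.Find(billingId);
            // ... compute charges, create invoice rows, send the e-mail, etc. ...
            db.SaveChanges();                   // one transaction, as noted above
        }
    });

    return RedirectToAction("Index");
}
```

One caveat worth knowing: a bare Task.Run is not tracked by ASP.NET, so on .NET 4.5.2+ you can use HostingEnvironment.QueueBackgroundWorkItem instead, which registers the work with the runtime and gives it a grace period if the app pool recycles mid-run.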