Concurrency and restriction to Serializable - ajax

I am currently writing an application which requires me to compute something that will take some time to complete. Therefore, I am doing this computation in the background. I have implemented a solution which starts a new thread for each such request via an ExecutorService. These threads regularly report their progress back to a (volatile) IModel. Additionally, I am using an AjaxSelfUpdatingTimerBehavior which updates the website by printing the progress represented by this IModel to the screen. This way, the website stays responsive, the task can be interrupted by a button click, and the HTTP request that triggered the long-running task does not time out.
However, Wicket does not like non-Serializable references in its WebPage or Panel instances, and I wonder what would be the best way of solving this problem. For now, I wrote a little manager class which uses a cache referenced by a static variable, which is how I am avoiding the serialization restriction. The WebPage instance which triggered the task now only holds a reference to a unique ID that my manager class assigned to it when the task was invoked.
Of course, with this approach I have to clean up after myself, and I am also concerned about security, since I have not yet taken measures to prevent interference between tasks started by different users. Also, it just feels wrong to me, since I want to keep this task in the scope of the WebPage instead of letting it escape into a global environment. I am sure there is a better way to do this!
Thanks for any thoughts on this matter and for sharing your experience!

Your approach sounds perfectly reasonable: pass the task handling to a non-web instance (could be a Spring-managed singleton) and just keep an identifier in your component/model.

Related

Workflow Waiting Forever

I have a workflow that runs when an entity is created; it creates two other entities and puts them on a queue. It then waits until each entity's status reason is set to done, after which it continues.
Basically two teams will work an order and then it will continue processing after both teams are done.
Most of the time it works. However, sometimes it waits forever. I'll re-activate and re-resolve the other tasks, but it just never wakes up.
What can I do? The workflows aren't really powerful enough for me to have it poll with a timeout (there are no loops). I'd like to avoid on-change plugins for these other entities, so that workflow behavior doesn't get scattered about.
Edit:
Restarting the CRM services (not sure which did it, I restarted them all) allowed the workflow to resume. However, I'd still like to know how to make this more reliable.
I had the same problem (and a lot more) with workflows in CRM 2011 and decided not to use them (except for very special purposes).
The main reason is their very limited error handling. Another reason is that it is inconvenient to put them under source control. Other reasons: workflows cannot run offline, and user impersonation is not supported. For a comparison, look here: http://goo.gl/9ht1QJ
Use plugins instead of workflows; then you have full control.
But keep in mind that plugins (unlike workflows) are not designed for long-running tasks.
They have a default maximum execution time of 120 seconds and are not stateful/persisted. But in most cases (and I think also in your case) that is not a problem.
Just change your eventing a little bit:
Implement and register a plugin step for: when the entity is created, create the two other entities and put them on a queue.
Implement and register another step for: when an entity's status reason is set to done, query for the other entity and check its status; if both are done, continue processing (a sketch follows below).
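For illustration, here is a minimal sketch of what that second step could look like. All schema names (new_task, new_parentid, the statuscode value for "done", the image alias) are hypothetical placeholders for your own customizations:

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Hypothetical sketch: register this step post-operation on update of the
// task's status reason. It checks all sibling tasks of the same parent and
// continues processing once every one of them is done.
public class TaskDonePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // Post-image registered under the (hypothetical) alias "PostImage".
        var image = context.PostEntityImages["PostImage"];
        var parentId = image.GetAttributeValue<EntityReference>("new_parentid").Id;

        // Query all sibling tasks of the same parent and check their status reason.
        var query = new QueryExpression("new_task") { ColumnSet = new ColumnSet("statuscode") };
        query.Criteria.AddCondition("new_parentid", ConditionOperator.Equal, parentId);

        bool allDone = true;
        foreach (var task in service.RetrieveMultiple(query).Entities)
        {
            // Option value 2 for "done" is a placeholder: use your own value.
            if (task.GetAttributeValue<OptionSetValue>("statuscode").Value != 2)
                allDone = false;
        }

        if (allDone)
        {
            // Both teams are done: resume processing the parent order here.
        }
    }
}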
If you really do not want to use plugins for your business logic, you can consider implementing a plugin which restarts/resumes faulted workflows.
But that's not a very nice solution.

Start a background task from a Web Api request

I have an ASP.NET Web API controller that needs to fire and forget some slow code. What would be a good way to do that? That is, I want the controller to return an HTTP response to the browser while the slow code keeps running somewhere.
Is it a good idea to grab a worker thread from the thread pool and pass in a complex object created by the controller? Or do I need to write a separate Windows service to do the work?
Your solution depends on the specifics of your situation and your workload.
You can certainly start off a new task with Task.Factory.StartNew when you receive a request.
There is nothing wrong with this technically.
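For concreteness, a minimal sketch of that approach; OrderDto and SlowWork are hypothetical placeholders for your own data and slow code:

using System.Threading.Tasks;
using System.Web.Http;

public class OrderDto { public int Id; public string Payload; }

public class OrdersController : ApiController
{
    public IHttpActionResult Post(OrderDto order)
    {
        // Capture a simple snapshot of the data the background work needs,
        // rather than the request context, which is disposed after the response.
        var data = new { order.Id, order.Payload };

        Task.Factory.StartNew(() => SlowWork(data.Id, data.Payload));

        return Ok(); // the response goes out while SlowWork keeps running
    }

    private static void SlowWork(int id, string payload)
    {
        // ... the slow processing goes here ...
    }
}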
Things you should think about though:
Do I have to return data back to the customer?
These tasks will use up web-server resources, so if they take a very long time and you get a lot of traffic, you may run into a situation where your customers are waiting in line just to start being processed. In that situation, I think a backend server with a Windows Service would be a much better idea.
All tasks above are subject to IIS resets: they may be killed in the middle of processing your background task.

Long running task in WebAPI

Here's my problem: I need to call multiple 3rd-party methods inside an ApiController. The signature for those methods is Task DoSomethingAsync(SomeClass someData, SomeOtherClass moreData). I want those calls to continue running in the background after the ApiController has sent the data back to the client. When DoSomethingAsync completes, I want to do some logging and maybe save some data to the file system. How can I do that? I'd prefer to use the async/await syntax.
Great news: there is a new solution in .NET 4.5.2 called the QueueBackgroundWorkItem API. It's really simple to use:
HostingEnvironment.QueueBackgroundWorkItem(ct => DoSomething(a, b, c));
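There is also an overload taking a Func<CancellationToken, Task>, so the async methods from the question can be queued directly. A small sketch, where someData and moreData stand in for your own arguments:

HostingEnvironment.QueueBackgroundWorkItem(async ct =>
{
    await DoSomethingAsync(someData, moreData);
    // do the logging / file-system writes after completion here
});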
Here's an article that describes it in detail.
https://blogs.msdn.microsoft.com/webdev/2014/06/04/queuebackgroundworkitem-to-reliably-schedule-and-run-background-processes-in-asp-net/
And here's another article that mentions a few other approaches not covered in this thread.
http://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
You almost never want to do this. It is almost always a big mistake.
ASP.NET (and most other servers) work on the assumption that it's safe to tear down your service once all requests have completed. So you have no guarantee that your logging will be done, or that your data will be written to disk. Particularly with the disk writes, it's entirely possible that your writes will be corrupted.
That said, if you are absolutely sure that you want to implement this extremely dangerous design, you can use the BackgroundTaskManager from my blog.
Update: I've written a blog series that goes into detail on a proper solution for request-extrinsic code. In summary, what you really want to do is move the request-extrinsic code out of ASP.NET. Introduce a durable queue and an independent processor; the ASP.NET controller action will place a request onto the queue, and the independent processor will read requests and execute them. This "processor" can be an Azure Function/WebJob, Win32 Service, etc.
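To make the shape of that design concrete, here is a rough sketch; IDurableQueue and WorkRequest are hypothetical stand-ins for whatever durable transport you choose (an Azure queue, MSMQ, a database table, ...):

using System.Web.Http;

// Hypothetical abstraction over a durable transport.
public interface IDurableQueue
{
    void Enqueue(WorkRequest request);
    WorkRequest Dequeue(); // returns null when the queue is empty
}

public class WorkRequest { public int Id; public string Payload; }

// Inside ASP.NET: the controller action only records the request and returns.
public class WorkController : ApiController
{
    private readonly IDurableQueue _queue;
    public WorkController(IDurableQueue queue) { _queue = queue; }

    public IHttpActionResult Post(WorkRequest request)
    {
        _queue.Enqueue(request); // durable: survives app-pool recycles
        return Ok();
    }
}

// Outside ASP.NET (Windows service, WebJob, ...): an independent processor
// reads requests and executes them on its own lifetime.
public class Processor
{
    private readonly IDurableQueue _queue;
    public Processor(IDurableQueue queue) { _queue = queue; }

    public void Run()
    {
        while (true)
        {
            var request = _queue.Dequeue();
            if (request != null)
                Execute(request); // the long-running work, logging, etc.
        }
    }

    private void Execute(WorkRequest request) { /* ... */ }
}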
Stephen described why starting what are essentially long-running fire-and-forget tasks inside an ApiController is a bad idea.
Perhaps you should create a separate service to execute those fire-and-forget tasks. That service could be a different ApiController, a worker behind a queue, anything that can be hosted on its own and have an independent lifetime.
This would make management of the different task lifetimes much easier and separate the concerns of the long-running tasks from the ApiController's core responsibilities.
As pointed out by others, it is not recommended. However, whenever there is a need there is a way, so take a look at IRegisteredObject
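A minimal sketch of that approach: the object registers itself with the hosting environment so ASP.NET calls Stop() and gives in-flight work a chance to wind down before the app domain is torn down (BackgroundWorker is an illustrative name, not a library type):

using System.Threading;
using System.Web.Hosting;

public class BackgroundWorker : IRegisteredObject
{
    private readonly ManualResetEvent _stopped = new ManualResetEvent(false);
    private volatile bool _stopping;

    public BackgroundWorker()
    {
        HostingEnvironment.RegisterObject(this);
    }

    public void DoWork()
    {
        ThreadPool.QueueUserWorkItem(_ =>
        {
            while (!_stopping)
            {
                // ... a slice of the long-running work ...
            }
            _stopped.Set();
        });
    }

    // Called by ASP.NET when the app domain is shutting down.
    public void Stop(bool immediate)
    {
        _stopping = true;
        if (!immediate)
            _stopped.WaitOne(5000); // give the work a chance to finish
        HostingEnvironment.UnregisterObject(this);
    }
}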
See also
http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx/
Though the question is several years old, the best solution now is to use SignalR in this case.
https://github.com/Myrmex/signalr-notify-progress
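For illustration, a minimal sketch assuming ASP.NET SignalR 2; ProgressHub and the reportProgress client-side handler are hypothetical names:

using Microsoft.AspNet.SignalR;

public class ProgressHub : Hub { }

public static class ProgressNotifier
{
    // Push a progress update to all connected clients while the
    // background task runs; clients handle "reportProgress" in JavaScript.
    public static void Report(int percent)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<ProgressHub>();
        hub.Clients.All.reportProgress(percent);
    }
}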

Can I run Android GeoFencing entirely within a background service?

I have an app which needs almost no user interaction, but requires Geofences. Can I run this entirely within a background service?
There will be an Activity when the service is first run. This Activity will start a service and register a BroadcastReceiver for BOOT_COMPLETED, so the service will start at boot. It's unlikely that this Activity will ever be run again.
The service will set an Alarm to go off periodically, which will cause an IntentService to download a list of locations from the network. This IntentService will then set up Geofences around those locations, and create PendingIntents which will fire when the locations are approached. In turn, those PendingIntents will cause another IntentService to take some action.
All this needs to happen in the background, with no user interaction apart from starting the Activity for the first time after installation. Hence, the Activity will not interact with LocationClient or any location services.
I've actually got this set up with proximityAlerts, but wish to move to the new Geofencing API for battery life reasons. However, I have heard that there can be a few problems with using LocationClient from within a service. Specifically, what I've heard (sorry, no references, just hearsay claims):
LocationClient relies on UI availability for error handling
when called from a background thread, LocationClient.connect() assumes that it is called from the main UI thread (or another thread with an event looper), so the connection callback is never called if we call this method from a service running in a background thread
When I've investigated, I can't see any reason why this would be the case, or why it would stop me from doing what I want. I was hoping it would be almost a drop-in replacement for proximityAlerts...
Can anyone shed some light on things here?
The best thing would be to just try it out, right? Your strategy seems sound.
when called from a background thread, LocationClient.connect() assumes that it is called from the main UI thread (or another thread with an event looper), so the connection callback is never called if we call this method from a service running in a background thread.
I know this to be untrue. I have a Service that is started from an Activity, and the connection callback is called.
I don't know about proximity alerts, but I can't seem to find an API to list my geofences. I am worried that my database (SQLite) and the actual fences might get out of sync. That is a design flaw, in my opinion.
The reason LocationClient needs a UI is that the device may not have Google Play Services installed. Google has devised a cunning and complex mechanism that allows your app to prompt the user to download it. The whole thing is horrible and awful, in my opinion. It's all "what-if what-if" programming.
(They rushed a lot of stuff out the door for Google I/O 2013. Not all of it is well documented, and some of it seems a bit "rough around the edges".)

What benefit does MSDN article on CoRevokeClassObject talk about?

The MSDN article on CoRevokeClassObject() says that when the COM server calls it, the class object referenced by clients is not released. Then the following comes:
If other clients still have pointers to the class object and have caused the reference count to be incremented by calls to IUnknown::AddRef, the reference count will not be zero. When this occurs, applications may benefit if subsequent calls (with the obvious exceptions of IUnknown::AddRef and IUnknown::Release) to the class object fail.
What is meant by "applications may benefit"? The class object is not released, but creation requests fail. Sounds reasonable but where's the benefit?
Yeah, it's a pretty strange turn of words...
I think what they're trying to say is that clients may end up in a tricky situation if they create objects from a server that has just called CoRevokeClassObject, because it's likely it'll disappear very soon (CoRevokeClassObject is routinely called when a server is shut down).
So, if the activation calls (IClassFactory::CreateInstance) don't fail, the client will get an interface pointer back, and as soon as they call a method on it, they'll get an error from the RPC layer that the server is gone.
I suppose that's 'beneficial' in some way :-)
That said, I'm not sure how to detect the case where IUnknown::Release is called via CoRevokeClassObject vs. some other client, but I suppose the code revoking the factories could set some global or per-factory state that it can check before letting creation requests come through.
