The MSDN article on CoRevokeClassObject() says that when the COM server calls it, the class object referenced by clients is not released. It then goes on to say:
If other clients still have pointers to the class object and have caused the reference count to be incremented by calls to IUnknown::AddRef, the reference count will not be zero. When this occurs, applications may benefit if subsequent calls (with the obvious exceptions of IUnknown::AddRef and IUnknown::Release) to the class object fail.
What is meant by "applications may benefit"? The class object is not released, but creation requests fail. Sounds reasonable but where's the benefit?
Yeah, it's a pretty strange turn of phrase...
I think what they're trying to say is that clients may end up in a tricky situation if they create objects from a server that has just called CoRevokeClassObject, because that server is likely to disappear very soon (CoRevokeClassObject is routinely called when a server shuts down).
So, if the activation calls (IClassFactory::CreateInstance) don't fail, the client will get an interface pointer back, and as soon as they call a method on it, they'll get an error from the RPC layer that the server is gone.
I suppose that's 'beneficial' in some way :-)
That said, I'm not sure how to detect whether IUnknown::Release was called as part of CoRevokeClassObject or by some other client, but I suppose the code revoking the factories could set some global or per-factory state that the factory checks before letting creation requests through.
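To make that last suggestion concrete, here is a rough sketch of a class factory that keeps per-factory state and starts failing activation requests once it has been revoked. This is only an illustration: CMyFactory, MarkRevoked() and the choice of CO_E_SERVER_STOPPING as the failure code are my own assumptions, not anything the MSDN article mandates.

// Sketch only: a class factory that refuses activation after it has been
// revoked. The real object creation is elided.
#include <windows.h>
#include <atomic>

class CMyFactory : public IClassFactory
{
    LONG m_refs = 1;
    std::atomic<bool> m_revoked{ false };

public:
    // Call this from the code that calls CoRevokeClassObject on this factory.
    void MarkRevoked() { m_revoked = true; }

    // IUnknown
    STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
    {
        if (riid == IID_IUnknown || riid == IID_IClassFactory) {
            *ppv = static_cast<IClassFactory*>(this);
            AddRef();
            return S_OK;
        }
        *ppv = nullptr;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef() { return InterlockedIncrement(&m_refs); }
    STDMETHODIMP_(ULONG) Release()
    {
        ULONG n = InterlockedDecrement(&m_refs);
        if (n == 0) delete this;
        return n;
    }

    // IClassFactory
    STDMETHODIMP CreateInstance(IUnknown* pUnkOuter, REFIID riid, void** ppv)
    {
        *ppv = nullptr;
        if (m_revoked)
            return CO_E_SERVER_STOPPING;   // fail activations after revocation
        if (pUnkOuter)
            return CLASS_E_NOAGGREGATION;
        // ... create the real object and QueryInterface for riid here ...
        return E_NOTIMPL;                  // placeholder in this sketch
    }
    STDMETHODIMP LockServer(BOOL) { return S_OK; }
};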
I have an app which needs almost no user interaction, but requires Geofences. Can I run this entirely within a background service?
There will be an Activity when the app is first run. This Activity will start a service and register a BroadcastReceiver for BOOT_COMPLETED, so the service will also start at boot. It's unlikely that this Activity will ever be run again.
The service will set an Alarm to go off periodically, which will cause an IntentService to download a list of locations from the network. This IntentService will then set up Geofences around those locations, and create PendingIntents which will fire when the locations are approached. In turn, those PendingIntents will cause another IntentService to take some action.
All this needs to happen in the background, with no user interaction apart from starting the Activity for the first time after installation. Hence, the Activity will not interact with LocationClient or any location services.
I've actually got this set up with proximityAlerts, but wish to move to the new Geofencing API for battery life reasons. However, I have heard that there can be a few problems with using LocationClient from within a service. Specifically, what I've heard (sorry, no references, just hearsay claims):
LocationClient relies on UI availability for error handling
when called from a background thread, LocationClient.connect() assumes it is being called from the main UI thread (or another thread with an event looper), so the connection callback is never invoked if the method is called from a service running on a background thread
When I've investigated, I can't see any reason why this would be the case, or why it would stop my doing what I want. I was hoping it would be almost a drop-in replacement for proximityAlerts...
Can anyone shed some light on things here?
The best thing would be to just try it out, right? Your strategy seems sound.
when called from a background thread, LocationClient.connect() assumes it is being called from the main UI thread (or another thread with an event looper), so the connection callback is never invoked if the method is called from a service running on a background thread.
I know this is not true. I have a Service that is started from an Activity, and the connection callback is called.
I don't know about proximity alerts, but I can't seem to find an API to list my geofences. I am worried that my database (SQLite) and the actual fences might get out of sync. That is a design flaw in my opinion.
The reason LocationClient needs a UI is that the device may not have Google Play Services installed. Google has devised a cunning and complex mechanism that allows your app to prompt the user to download it. The whole thing is horrible and awful in my opinion. It's all "what-if what-if" programming.
(They rushed a lot of stuff out the door for Google I/O 2013. Not all of it is well documented, and some of it seems a bit "rough around the edges".)
I am currently writing an application which requires me to compute something that will take some time to complete. Therefore, I am doing this computation in the background. I have implemented a solution which starts a new thread for each such request via an ExecutorService. These threads regularly report their progress back to a (volatile) IModel. Additionally, I am using an AjaxSelfUpdatingTimerBehavior which updates the page by printing the progress represented by this IModel to the screen. This way the page stays responsive, the task can be interrupted by a button click, and the HTTP request which triggered the long-running task does not time out.
However, Wicket does not like non-Serializable references in its WebPage or Panel instances, and I wonder what would be the best way of solving this problem. For now, I wrote a little manager class which uses a cache referenced by a static variable, which is how I am avoiding the serialization restriction. The WebPage instance which triggered the task now only holds a reference to a unique ID that was assigned to it by my manager class when the task was invoked.
Of course, with this approach I have to clean up after myself, and I am also concerned about security, since I have not yet taken measures to prevent interference between tasks started by different users. Also, it just feels wrong to me, since I want to keep this task in the scope of the WebPage instead of letting it escape into a global environment. I am sure there is a better way to do this!
Thanks for any thoughts on this matter and for sharing your experience!
Your approach sounds perfectly reasonable: pass the task handling to a non-web instance (could be a Spring-managed singleton) and just keep an identifier in your component/model.
I am trying to override the single-instance limit of an application for which I don't have the source. I know that the app is using the good ol' trick of calling CreateMutex to determine whether another instance is running (if the mutex is created successfully it proceeds; if GetLastError reports that the mutex already exists, it quits immediately). I found that out by sniffing the Win32 API calls.
I thought using Detours would do the trick, but it doesn't quite work out. I am intercepting CreateMutexW, but for some reason it doesn't catch the first four calls to it. (Again, I know what these calls are by sniffing the Win32 calls and looking at the names of the mutexes.) I do get the fifth one intercepted, but the one I actually want to intercept is the first one.
I am using Detours through the sample application withdll. I wonder if the problem is that Detours is kicking in too late, or if it's because of some kind of protection these calls may have. Is Detours the best approach? Would using something else be a better idea?
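For reference, the single-instance check described above is usually implemented along these lines (a generic sketch of the pattern, with a made-up mutex name; obviously not the actual code of the application in question):

#include <windows.h>

// Typical single-instance guard: if the named mutex already exists,
// another instance is running, so this one should exit.
bool AnotherInstanceRunning()
{
    HANDLE h = CreateMutexW(nullptr, FALSE, L"Global\\MyApp_SingleInstance");
    return (h != nullptr && GetLastError() == ERROR_ALREADY_EXISTS);
}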
There might be several reasons for the situation you describe. Here are the most probable ones:

The CreateMutexW call you need to catch occurs within the DllMain method of one of the DLLs that are imported by the process, and you are using the DetourCreateProcessWithDll() function to inject your code. Detours injects your DLL by placing it at the end of the process executable's import list, and hence all the DLLs imported by the process would be loaded and initialized prior to yours. To overcome this, try using CreateProcess(CREATE_SUSPENDED) and CreateRemoteThread()-based injection, although that method raises its own challenges.

The API that is used in the first call is different. Have you tried overriding CreateMutexExW? Are you sure the ANSI methods call the Unicode ones?
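If it does turn out that the call you need really is CreateMutexW and it simply happens before your detour is installed, the hook DLL itself can stay quite small. Here is a rough sketch of such a DLL (Detours 3.x-style APIs; the mutex name being spoofed is a made-up example, and you would add a second detour for CreateMutexExW if needed):

#include <windows.h>
#include <wchar.h>
#include <detours.h>

// Original function pointer; Detours redirects it to the real CreateMutexW.
static HANDLE (WINAPI *TrueCreateMutexW)(LPSECURITY_ATTRIBUTES, BOOL, LPCWSTR)
    = CreateMutexW;

static HANDLE WINAPI HookedCreateMutexW(LPSECURITY_ATTRIBUTES sa,
                                        BOOL initialOwner, LPCWSTR name)
{
    if (name != nullptr &&
        wcscmp(name, L"Global\\MyApp_SingleInstance") == 0) {
        // Hand back an unnamed mutex instead, so the target never sees
        // ERROR_ALREADY_EXISTS for its single-instance mutex.
        return TrueCreateMutexW(sa, initialOwner, nullptr);
    }
    return TrueCreateMutexW(sa, initialOwner, name);
}

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
{
    if (DetourIsHelperProcess())
        return TRUE;

    if (reason == DLL_PROCESS_ATTACH) {
        DetourRestoreAfterWith();
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueCreateMutexW, HookedCreateMutexW);
        DetourTransactionCommit();
    } else if (reason == DLL_PROCESS_DETACH) {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourDetach(&(PVOID&)TrueCreateMutexW, HookedCreateMutexW);
        DetourTransactionCommit();
    }
    return TRUE;
}

Also note that, as far as I remember, DLLs injected with withdll.exe / DetourCreateProcessWithDll need to export at least one function with ordinal #1 for Detours to load them.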
Hope this helps.
Before the application terminates its execution, COM must be shut down again. (Failure to shut down COM could result in execution errors when another program attempts to use COM services.)
The above quote implies that, right?
No it doesn't.
If you fail to properly release all references to an out-of-process COM server and to correctly close down COM, it could leave that instance of the service in an odd state (everything should be OK after releasing all references, but sometimes COM might cache part of the out-of-process marshalling layer).
An out-of-process COM service can be designed to have separate component instances for each client (within or across services) that are completely independent (even if hosted in the same process), in which case it is hard to see how a failure of one client would affect other instances (other than wasting memory on instances until COM finally times them out). If the instances share state, they can of course interfere even if the clients play perfectly by the rules.
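For what it's worth, the "correct" client behaviour being described is just the usual pattern; a minimal sketch (the CLSID is a placeholder, so CoCreateInstance would fail at run time unless you substitute a real registered class):

#include <windows.h>

// Placeholder CLSID for illustration only.
static const CLSID CLSID_SomeServer =
    { 0x12345678, 0x1234, 0x1234, { 0x12,0x34,0x12,0x34,0x12,0x34,0x12,0x34 } };

int main()
{
    if (FAILED(CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED)))
        return 1;

    IUnknown* pUnk = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_SomeServer, nullptr, CLSCTX_LOCAL_SERVER,
                                  IID_IUnknown, reinterpret_cast<void**>(&pUnk));
    if (SUCCEEDED(hr)) {
        // ... use the object ...
        pUnk->Release();   // release every reference before shutting down
    }

    CoUninitialize();      // let COM tear down the out-of-process plumbing
    return 0;
}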
It is rather important that you cite the source of that quote so we can get the context. As near as I can tell, you got it from a book about DirectShow programming. What it actually refers to is the need to call CoUninitialize().
Yes, that's kinda important. A thread should call CoInitializeEx() to initialize the COM infrastructure before it starts using any of the COM API functions. You really should call CoUninitialize() when that thread ends so everything is properly cleaned up, typically at the end of your program's main() function. Failure to do so may make another app fail when it finds a registered class factory that is in fact dead.
This otherwise has nothing to do with a COM out-of-process server having to restrict itself in any way. You specify sharing mode with the REGCLS argument to CoRegisterClassObject(). Of course, a server should not exit and call CoUninitialize until all its objects are released.
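To make the CoRegisterClassObject()/CoUninitialize() sequence concrete, the lifetime of a local (out-of-process) server looks roughly like this. A sketch only: CLSID_MyServer is a placeholder, the factory creation is elided, and the message loop stands in for whatever keeps the server alive until its objects are released:

#include <windows.h>

// Placeholder CLSID for illustration only.
static const CLSID CLSID_MyServer =
    { 0x11111111, 0x2222, 0x3333, { 0x44,0x44,0x44,0x44,0x44,0x44,0x44,0x44 } };

int main()
{
    if (FAILED(CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED)))
        return 1;

    IClassFactory* pFactory = nullptr;   // in a real server: your factory object
    DWORD dwCookie = 0;

    // REGCLS_MULTIPLEUSE is the "sharing mode" mentioned above: one running
    // server instance hands out objects to any number of clients.
    HRESULT hr = CoRegisterClassObject(CLSID_MyServer, pFactory,
                                       CLSCTX_LOCAL_SERVER,
                                       REGCLS_MULTIPLEUSE, &dwCookie);
    if (SUCCEEDED(hr)) {
        // ... run a message loop until all objects have been released ...

        CoRevokeClassObject(dwCookie);   // stop accepting new activations
    }

    CoUninitialize();                    // only after everything is released
    return 0;
}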
I have noticed that some of my ajax-heavy sites (ones I visit, not ones I have built) have certain auto-refresh features. For example, in GMail, if I get a new message, I see the new message without a page reload. It's the same with the Facebook browser-based IM client. From what I can tell, there aren't any Java applets handling the server-browser binding, so I'm left to assume it's being done by AJAX and perhaps some element I'm unaware of. So by my best guess, it's done in one of two ways:
The JavaScript does a steady "ping" to a server-side script, checking for any updates that might be available (which would explain why some of these pages bring other heavy-duty pages to a crawl), or
The JavaScript sits idly by and a server-side script actually "pushes" any updates to the browser. But I'm not sure if this is possible. I'd imagine there is still some kind of AJAX function that pings, but all it asks is "any updates?" and the server-side script has a simple boolean that says "nope" or "I'm glad you asked." But if this is the case, any data changes would need to call the script directly so that it has the changes ready and flips that boolean.
So is that possible/feasible/how it works? I imagine something like:
Someone sends an email/IM/DB update to the server; the server calls the script using the script's URL plus some relevant GET variable; the script notes the change and updates the "updates available" variable; the AJAX gets the response that there are in fact updates; the AJAX runs its normal "update page" functions, which execute the normal update scripts and output the results to the browser.
I ask because it seems really inefficient that the js is just doing a constant check which requires a) the server to do work every 1.5 seconds, and b) my browser to do work every 1.5 seconds just so that on my end I can say "Oh boy, I got an IM! just like a real IM client!"
Read about Comet
I've actually been working on a small .NET web app that uses the Ajax long-polling technique described.
Depending on what technology you're using, you could use thread signaling mechanisms to hold your request until an update is retrieved.
With ASP.NET I'm running my server on a single machine, so I store a reference to my Producer object (which contains a thread that processes the data). To initiate the data pull, my service's Subscribe method is called, which creates a Consumer object that's registered with the Producer. If the Consumer is in long-polling mode, it has an AutoResetEvent which is signaled whenever it receives new data, and whenever the web client makes a request for data, the Consumer first waits on the reset event and then returns the data.
But you mention something about PHP - as far as I know, persistence in PHP is maintained through serialization, not by actually keeping the object in memory, so I don't know how you could reference a Producer object using $_CACHE[] or $_SESSION[]. When I developed in PHP I never really knew anything about multithreading, so I didn't play around with it, but I guess you can look into that.
Using infinite loops is going to consume a lot of your processing power - I would exhaust all other options first.