Windows service not receiving some events, particularly SERVICE_CONTROL_POWEREVENT - winapi

tl;dr: Why don't I receive PBT_APMRESUMEAUTOMATIC, PBT_APMRESUMESUSPEND, or PBT_APMSUSPEND as the payload of service control events of type SERVICE_CONTROL_POWEREVENT (opted into via SERVICE_ACCEPT_POWEREVENT)?
I'm trying to detect when a Windows device has woken back up from sleep. I have a constellation of processes that interact via IPC, including both UI applications with a window procedure registered via RegisterClassEx and services using RegisterServiceCtrlHandlerExW.
My preference is to receive these events in a service. My understanding is that I can set SERVICE_ACCEPT_POWEREVENT in dwControlsAccepted, and can then distinguish specific kinds of power events by looking at the dwEventType parameter, as per these docs: https://learn.microsoft.com/en-us/windows/win32/api/winsvc/nc-winsvc-lphandler_function_ex. However, I only ever receive PBT_APMPOWERSTATUSCHANGE, corresponding to fiddling with the power cord on the laptop. I expected to also receive some combination of PBT_APMRESUMEAUTOMATIC, PBT_APMRESUMESUSPEND, and PBT_APMSUSPEND.
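Roughly, the shape of the setup is the following (a stripped-down sketch, not the real code; the service name and the dispatch out of the handler are placeholders):

    #include <windows.h>

    static SERVICE_STATUS_HANDLE g_status;

    /* HandlerEx callback: for SERVICE_CONTROL_POWEREVENT, dwEventType carries the
       PBT_* code (the same values a window sees in WM_POWERBROADCAST's wParam). */
    static DWORD WINAPI ServiceCtrlHandlerEx(DWORD dwControl, DWORD dwEventType,
                                             LPVOID lpEventData, LPVOID lpContext)
    {
        switch (dwControl)
        {
        case SERVICE_CONTROL_POWEREVENT:
            switch (dwEventType)
            {
            case PBT_APMSUSPEND:           /* system is suspending */
            case PBT_APMRESUMEAUTOMATIC:   /* system resumed */
            case PBT_APMRESUMESUSPEND:     /* system resumed due to user activity */
            case PBT_APMPOWERSTATUSCHANGE: /* AC/battery status changed */
                /* hand the event to the rest of the process here */
                break;
            }
            return NO_ERROR;
        case SERVICE_CONTROL_INTERROGATE:
            return NO_ERROR;
        default:
            return ERROR_CALL_NOT_IMPLEMENTED;
        }
    }

    static void WINAPI ServiceMain(DWORD argc, LPWSTR *argv)
    {
        SERVICE_STATUS status = {0};

        g_status = RegisterServiceCtrlHandlerExW(L"MyService", ServiceCtrlHandlerEx, NULL);

        status.dwServiceType      = SERVICE_WIN32_OWN_PROCESS;
        status.dwCurrentState     = SERVICE_RUNNING;
        /* opt in to power events; add SERVICE_ACCEPT_PAUSE_CONTINUE as needed */
        status.dwControlsAccepted = SERVICE_ACCEPT_STOP | SERVICE_ACCEPT_POWEREVENT;
        SetServiceStatus(g_status, &status);

        /* ... service work ... */
    }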
When testing on the UI side, I do get WM_POWERBROADCAST events of every kind, so apparently I've missed some part of the setup on the service side. Again, the process that actually needs this info is a service, so I would have to IPC the event over to a service if the UI route is what ends up working.
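On the UI side the relevant plumbing is just the window procedure; a minimal sketch (class registration and message loop omitted):

    #include <windows.h>

    /* Window procedure: for WM_POWERBROADCAST, wParam carries the PBT_* code. */
    static LRESULT CALLBACK PowerWndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        if (msg == WM_POWERBROADCAST)
        {
            switch (wParam)
            {
            case PBT_APMSUSPEND:
            case PBT_APMRESUMEAUTOMATIC:
            case PBT_APMRESUMESUSPEND:
                /* forward the event over IPC to the service here */
                break;
            }
            return TRUE;
        }
        return DefWindowProcW(hwnd, msg, wParam, lParam);
    }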
For full credit, I also experimented with SERVICE_CONTROL_CONTINUE and SERVICE_CONTROL_PAUSE (enabled via SERVICE_ACCEPT_PAUSE_CONTINUE), but I never receive those controls at all. I had expected them to correlate with the laptop sleeping, but apparently not.

Related

Where in Hexagonal Architecture do periodic background tasks fit?

I am working on a program in golang, which I am structuring based on Hexagonal Architecture. I think I have my head mostly wrapped around the idea, but there is something I just can't figure out.
The program monitors multiple IP cameras for alarm events; a receiver consumes a live stream of alarm events over an HTTP/2.0 push request. (In case that's not the right technical term: my service establishes a TCP/HTTP connection via a GET request and keeps it open, and when a camera triggers an alarm event, the camera pushes it back to the service.)
Layers of Architecture:
  Adaptors: HTTP Handler, In-memory JSON Store
  Ports: DeviceService Interface, EventService Interface, DeviceRepo Interface, EventRepo Interface
  Services: DeviceService, EventService
  Domain: DeviceDomain, EventDomain
The user adds a device to the system via the API; the request includes the desired monitoring schedule (when the receiver should start and stop each day) and the URL.
A scheduler is responsible for periodically checking whether a receiver should be running based on its schedule. If a receiver should be running for a device, the scheduler starts one for that device.
The receiver establishes a connection to the IP camera and loops over the alarm event stream, processing the alarm events and passing them to the EventService.
The EventService receives the event and is responsible for handling it based on the domain logic, deciding whether to send an email or ignore it. It also saves all events to the EventRepo.
The two pieces of code I'm not sure about are the scheduler and the receiver. Should they be:
a. Both in the same package and placed at the Adaptors layer
b. The receiver in the Adaptors layer and the scheduler in the Service layer
c. Both scheduler and receivers in the Service layer?
I am just confused because the receiver isn't started by the user directly, but by a running loop which continually checks a condition. On the other hand, I might have different receivers for different brands of cameras, which is an implementation detail and suggests the receiver should be in the Adaptors layer. That makes me think option b is best.
I'm possibly overthinking it, but let me know which option you think is best, or suggest a better one.
If it helps, my design would be as follows:
Driver actors:
Human User: Interacts with the app using a driver port: "for adding devices"
Device (IP camera): Sends alarm events to the app using another driver port: "for receiving alarm events"
Driven actors:
Device (IP camera): The app interacts with the device using the driven port "for checking device", in order to start and stop it daily, according to the schedule of the device.
Warning Recipients: The app sends an email to them when an alarm event is received and it is not ignored.
Alarm Event Store: For persisting the alarm events the app receives.
The app ("Alarm Monitor") does the following business logic:
Maintains a collection of devices it has to monitor ("for adding devices").
It has a "worker" (the scheduler) that periodically checks the devices status and starts/stops them according to the schedule of the device.
It handles alarm events received from the devices. When an alarm event is received, the app either sends an email or ignore it. And stores the event in a repository.
So for me:
The scheduler is part of the business logic.
The receiver is the adapter of a device. It deals with the HTTP stuff.
"A scheduler is responsible to periodically checking if a receiver is meant to be started based on its schedule"
Ultimately it doesn't really matter to the application whether a human presses an "autoStartReceivers" button peridically or it's done by a scheduling process. Therefore that's an infrastructure concern and the scheduler is a driver adapter. You'd probably have a ReceiverService.autoStartReceivers service command that would be invoked by the scheduler periodically.
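Very roughly (ReceiverService and autoStartReceivers being the hypothetical names above; the language doesn't matter here, only the direction of the dependency):

    /* Driver port exposed by the application core (hypothetical names). */
    typedef struct ReceiverService {
        void *self;
        void (*autoStartReceivers)(void *self); /* start/stop receivers per schedule */
    } ReceiverService;

    /* The scheduler is a driver adapter: it only knows the port, not the camera
       or HTTP details, and could be replaced by a button in a UI. */
    void scheduler_tick(ReceiverService *svc)
    {
        svc->autoStartReceivers(svc->self);
    }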
Now, for the Receiver I'd say it depends on the implementation. If the Receiver doesn't know about infrastructure/vendor-specific details, but only does coordination, then it may belong to the application/service layer.
For instance, perhaps the receiver works with an abstract EventSource (HTTP, WebSockets, etc.) and uses an EventDecoder (vendor-specific) to adapt events and then relays them to an EventProcessor; in that case it really is only doing orchestration. The EventSource and EventDecoder would be adapters. However, if the Receiver knows about specific infrastructure details, then it becomes an adapter.
Ultimately all of the above is supporting logic for your core domain of event processing. The core domain logic wouldn't really care how events were captured, and probably wouldn't care how the resulting actions are carried out either. Therefore, your core domain in its most simplistic form is probably actions = process(event) pure functions.
a. Both in the same package and placed at the Adaptors layer
b. The receiver in the Adaptors layer and the scheduler in the Service layer
c. Both scheduler and receivers in the Service layer?
The receiver and scheduler are both adapters. I don't think that they must be placed in the same package, but you can do that. So a is the best answer for me, because...
The receiver connects your application with an external device - the IP camera. Thus the receiver is an adapter for the EventService port.
The scheduler indirectly manages the lifecycle of the receiver through the DeviceService port. It enables or disables an IP camera, and this leads to the receiver connecting or disconnecting.
From the perspective of your application core, the scheduler is just another adapter that tells the DeviceService port to enable or disable some IP camera. This could also be done by a user who clicks a button in the UI. The scheduler is just technical assistance for the user, executing tasks the user wants on a schedule. Thus the scheduler is also an adapter.

How do I achieve a redelivery delay in Azure Service Bus with AMQP using rhea

I'm using rhea in a Node.js application to send messages around over Azure Service Bus using AMQP. My problem is as follows:
Sometimes a message-processing attempt can fail because of something that is out of our hands. For instance, a call to some API could fail because a service is down. At that point we unlock the message so it can be picked up at a later time or by another instance. After a certain number of retries (when the delivery count has hit a certain max) it just ends up in the DLQ.
What I want to achieve is that between each delivery attempt there is an increasing pause, so the retries don't just occur in rapid succession until the max is hit. This way I can give whatever is causing the failure some time to come back up, if it's just a matter of waiting for some service to become available again. If that doesn't work, the message can go to the DLQ anyway.
Is there some setting in Azure Service Bus that will achieve this, or will I have to program this into my own application?
If you explicitly want to delay processing, you can enqueue a new message with ScheduledEnqueueTime set for later delivery (the message.Clone() function can help in creating the cloned message). You also have the ability to call message.Defer(); the message will then not be delivered again until you call Receive(Sequenceid) for that specific message at a later time.

How to shut down external application gracefully?

I have a program that shuts down another application when certain conditions are met. Most of the time everything works out fine, but sometimes the app is writing to a file, leaves it in a half-finished state, and has no way of recovering it on restart. I thought that one could send soft close signals and escalate after certain timeouts to more aggressive close signals, going through a list like this:
1. WM_CLOSE
2. WM_QUIT
3. WM_DESTROY
4. TerminateProcess().
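For steps 1 and 4, a minimal sketch of what I mean (assuming a process handle opened with SYNCHRONIZE and PROCESS_TERMINATE access; names are illustrative):

    #include <windows.h>

    /* Post WM_CLOSE to every top-level window owned by the target process. */
    static BOOL CALLBACK PostCloseToWindow(HWND hwnd, LPARAM lParam)
    {
        DWORD pid = 0;
        GetWindowThreadProcessId(hwnd, &pid);
        if (pid == (DWORD)lParam)
            PostMessageW(hwnd, WM_CLOSE, 0, 0);
        return TRUE; /* keep enumerating */
    }

    /* Ask the process to close itself; fall back to TerminateProcess on timeout. */
    static void CloseGracefully(HANDLE hProcess, DWORD pid, DWORD timeoutMs)
    {
        EnumWindows(PostCloseToWindow, (LPARAM)pid);
        if (WaitForSingleObject(hProcess, timeoutMs) != WAIT_OBJECT_0)
            TerminateProcess(hProcess, 1); /* hard kill as a last resort */
    }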
Now, I know that the program has no code to handle any signal it receives. Is there a possibility that certain file handlers under Windows react gracefully to such soft signals, or is there no use in sending them if the app does not handle them explicitly?
This article says:
NOTE: A console application's response to WM_CLOSE depends on whether or not it has installed a control handler.
Does this mean that if no control handler is installed, sending 1-4 is just as good as sending 4 directly?
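For context, the control handler the article refers to is the one a console application installs with SetConsoleCtrlHandler; a sketch of what handling the close request would look like on the receiving side:

    #include <windows.h>
    #include <stdio.h>

    /* Runs when the console window is closed (CTRL_CLOSE_EVENT), giving the app
       a brief chance to flush and close files before the process exits. */
    static BOOL WINAPI ConsoleHandler(DWORD ctrlType)
    {
        if (ctrlType == CTRL_CLOSE_EVENT)
        {
            fflush(NULL);   /* flush all open output streams */
            return TRUE;    /* handled */
        }
        return FALSE;       /* fall through to the default handler */
    }

    int main(void)
    {
        SetConsoleCtrlHandler(ConsoleHandler, TRUE);
        /* ... normal work ... */
        return 0;
    }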

Global Event Occurs Multiple Times in BlackBerry

I am using a global event listener in my application. The event capture works perfectly in my code. The problem is that the event seems to fire multiple times. I have followed this tutorial link.
If your app is listening to global events generated by the system, then these events may fire several times according to conditions you do not know. In my experience, to get an unambiguous signal from the system I had to analyze a sequence of global events, and the signal was recognized as received only when this sequence of events occurred one after another in the expected way.
If your app is listening to global events generated by your own application, then it is fully under your control. Check the code that fires global events. Use EventLogger to log every moment when you fire and when you receive the fired event, then inspect your log to find out what is going on. It seems that your app fires the global event more times than expected.

Sending real notification after toast received

In a project I'm currently working on, we send some small info across the wire to a WP7 device when we send a raw notification.
When the application is in a tombstoned state and the user receives the toast message, we can't add the extra baggage to the toast. So we figured we need a way to resend the notification once the user enters the application again.
Does anybody have any experience with, or a possible solution for, this problem? We are currently looking at a sort of handshake between client and server, but it all seems a bit drastic to me.
Kind regards,
Tom
I would suggest you stop using raw notifications and use only toasts.
To handle the case when the app has been started using a toast notification, query the server at app startup to check if there's pending data.
For notifications sent while the app is running, you can detect them using the ShellToastNotificationReceived event of your channel. When the event is triggered, query the server to retrieve the payload.
