Trigger indefinite notification on Windows 10

I'm trying to trigger a notification that has no expiry and must be closed by pressing the top-right X close button. Is this possible?
I've been able to trigger a timed notification (which also closes when anywhere else is clicked) using this answer:
[Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
[Reflection.Assembly]::LoadWithPartialName("System.Drawing")
# Create a tray icon and show a balloon tip; the first argument is only a timeout hint
$notify = New-Object System.Windows.Forms.NotifyIcon
$notify.Icon = [System.Drawing.SystemIcons]::Information
$notify.Visible = $true
$notify.ShowBalloonTip(10, "New Chat!", "You have received New Chat!", [System.Windows.Forms.ToolTipIcon]::None)

Per Microsoft's NotifyIcon.ShowBalloonTip method documentation, the actual timeout is set by the current system settings:
Minimum and maximum timeout values are enforced by the operating system and are typically 10 and 30 seconds, respectively; however, this can vary depending on the operating system. Timeout values that are too large or too small are adjusted to the appropriate minimum or maximum value. In addition, if the user does not appear to be using the computer (no keyboard or mouse events are occurring), then the system does not count this time towards the timeout.
According to a couple more Google searches, you can set the duration for your profile through the registry (regedit: HKEY_CURRENT_USER\Control Panel\Accessibility, value MessageDuration - this didn't work for me), through Group Policy, or by using the SystemParametersInfo API, which is out of my league to explain any further. The only reference I could find was configuring the accessibility system parameter SPI_SETMESSAGEDURATION.
It's C++, though, and the only other article I could find was this one: SystemParametersInfoA function.
It seems possible, but it will definitely be a hassle to get working.
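For the adventurous, here is a rough sketch of the SystemParametersInfo route from Java via JNA. The hand-rolled interface, the constant values (copied from WinUser.h), and the reading that pvParam carries the new duration in seconds are my assumptions from the SystemParametersInfoA docs, not tested code:

import com.sun.jna.Native;
import com.sun.jna.Pointer;
import com.sun.jna.win32.StdCallLibrary;

public class MessageDuration {
    // Minimal hand-rolled binding for user32.dll (assumed signature).
    interface User32 extends StdCallLibrary {
        User32 INSTANCE = Native.load("user32", User32.class);
        boolean SystemParametersInfoA(int uiAction, int uiParam, Pointer pvParam, int fWinIni);
    }

    static final int SPI_SETMESSAGEDURATION = 0x2017; // from WinUser.h
    static final int SPIF_UPDATEINIFILE = 0x01;       // persist the change
    static final int SPIF_SENDCHANGE = 0x02;          // broadcast WM_SETTINGCHANGE

    public static void main(String[] args) {
        // Ask for a 60-second notification duration; pvParam carries the value.
        boolean ok = User32.INSTANCE.SystemParametersInfoA(
                SPI_SETMESSAGEDURATION, 0, Pointer.createConstant(60),
                SPIF_UPDATEINIFILE | SPIF_SENDCHANGE);
        System.out.println("SystemParametersInfo succeeded: " + ok);
    }
}

Note that this changes the same global accessibility setting the MessageDuration registry value controls, so it would affect every balloon tip on the machine, not just yours.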

Related

Dataflow job has high data freshness and events are dropped due to lateness

I deployed an Apache Beam pipeline to GCP Dataflow in a DEV environment and everything worked well. Then I deployed it to production in a Europe environment (to be specific - job region: europe-west1, worker location: europe-west1-d), where we get high data velocity, and things started to get complicated.
I am using a session window to group events into sessions. The session key is the tenantId/visitorId and its gap is 30 minutes. I am also using a trigger to emit events every 30 seconds, to release events sooner than the end of the session (writing them to BigQuery).
The problem appears in the EventToSession/GroupPairsByKey step. In this step there are thousands of events under the droppedDueToLateness counter, and the dataFreshness keeps increasing (it has been increasing since I deployed). All steps before this one operate well, and all steps after it are affected by it but don't seem to have any other problems.
I looked into some metrics and see that the EventToSession/GroupPairsByKey step is processing between 100K and 200K keys per second (depending on the time of day), which seems like quite a lot to me. CPU utilization doesn't go over 70%, and I am using Streaming Engine. The number of workers is 2 most of the time. Max worker memory capacity is 32GB, while max worker memory usage currently stands at 23GB. I am using the e2-standard-8 machine type.
I don't have any hot keys since each session contains at most a few dozen events.
My biggest suspicion is the huge number of keys being processed in the EventToSession/GroupPairsByKey step. But on the other hand, a session usually relates to a single customer, so Google should be expected to handle this number of keys per second, no?
I would like to get suggestions on how to solve the dataFreshness and droppedDueToLateness issues.
Adding the piece of code that generates the sessions:
input = input
    .apply("SetEventTimestamp",
        WithTimestamps.of(event -> Instant.parse(getEventTimestamp(event)))
            .withAllowedTimestampSkew(new Duration(Long.MAX_VALUE)))
    .apply("SetKeyForRow", WithKeys.of(event -> getSessionKey(event)))
    .setCoder(KvCoder.of(StringUtf8Coder.of(), input.getCoder()))
    .apply("CreatingWindow",
        Window.<KV<String, TableRow>>into(Sessions.withGapDuration(Duration.standardMinutes(30)))
            .triggering(Repeatedly.forever(
                AfterProcessingTime.pastFirstElementInPane().plusDelayOf(Duration.standardSeconds(30))))
            .discardingFiredPanes()
            .withAllowedLateness(Duration.standardDays(30)))
    .apply("GroupPairsByKey", GroupByKey.create())
    .apply("CreateCollectionOfValuesOnly", Values.create())
    .apply("FlattenTheValues", Flatten.iterables());
After doing some research I found the following:
Regarding the constantly increasing data freshness: as long as late data is allowed to arrive in a session window, that specific window persists in memory. This means that allowing data to be 30 days late keeps every session in memory for at least 30 days, which obviously can overload the system. Moreover, I found we had some everlasting sessions created by bots visiting and taking actions on the websites we monitor. These bots can hold sessions open forever, which also overloads the system. The solution was decreasing the allowed lateness to 2 days and using bounded sessions (search for "bounded sessions"); the lateness part is a one-line change, shown below.
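In the windowing code above, that first part of the fix is just:

.withAllowedLateness(Duration.standardDays(2))  // was Duration.standardDays(30)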
Regarding events dropped due to lateness: these are events that, at their time of arrival, belong to an already-expired window, i.e. a window whose end the watermark has passed (see the documentation for droppedDueToLateness here). These events are dropped at the first GroupByKey after the session window function and can't be processed later. We didn't want to drop any late data, so the solution was to check each event's timestamp before it enters the sessions part and to stream into the session part only events that won't be dropped - events that meet this condition: event_timestamp >= event_arrival_time - (gap_duration + allowed_lateness). The rest are written to BigQuery without the session data. (Apparently Apache Beam drops an event if the event's timestamp is before event_arrival_time - (gap_duration + allowed_lateness), even if there is a live session the event belongs to...)
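A minimal sketch of that pre-session check, reusing getEventTimestamp from the pipeline above and approximating arrival time with processing time (the transform name and variable names are illustrative):

// Keep only events the session GroupByKey would not drop:
// event_timestamp >= arrival_time - (gap_duration + allowed_lateness)
PCollection<TableRow> sessionable = input.apply("FilterDroppable",
    Filter.by((TableRow event) -> {
        Instant eventTs = Instant.parse(getEventTimestamp(event));
        Instant cutoff = Instant.now()
            .minus(Duration.standardMinutes(30))  // gap_duration
            .minus(Duration.standardDays(2));     // allowed_lateness
        return !eventTs.isBefore(cutoff);
    }));

The complement (events failing the check) can be routed with a second Filter, or a Partition, straight to the BigQuery write that skips the session data.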
P.S. - In the bounded-sessions article, where the author demonstrates how to implement a time-bounded session, I believe there is a bug that allows a session to grow beyond the provided max size. Once a session has exceeded the max size, one can send late data that intersects the session and precedes it, making the session's start time earlier and thereby expanding the session. Furthermore, once a session has exceeded the max size, events that belong to it but don't extend it can no longer be added.
To fix that, I swapped the order of the window-span computation and the if-statement, and edited the if-statement (the one checking the session max size) in the mergeWindows function, so that a session can't pass the max size and only data that doesn't extend it beyond the max size can be added. This is my implementation:
public void mergeWindows(MergeContext c) throws Exception {
    // Sort candidate windows by start time so merges are considered in order.
    List<IntervalWindow> sortedWindows = new ArrayList<>();
    for (IntervalWindow window : c.windows()) {
        sortedWindows.add(window);
    }
    Collections.sort(sortedWindows);
    List<MergeCandidate> merges = new ArrayList<>();
    MergeCandidate current = new MergeCandidate();
    for (IntervalWindow window : sortedWindows) {
        MergeCandidate next = new MergeCandidate(window);
        if (current.intersects(window)) {
            // Only merge while the union stays within maxSize (+ gap);
            // otherwise start a new candidate, so a session never exceeds the bound.
            if (current.union == null
                    || new Duration(current.union.start(), window.end()).getMillis()
                        <= maxSize.plus(gapDuration).getMillis()) {
                current.add(window);
                continue;
            }
        }
        merges.add(current);
        current = next;
    }
    merges.add(current);
    for (MergeCandidate merge : merges) {
        merge.apply(c);
    }
}

SAP Script Recording and Playback - Cumulative data

If I use the Script Recording and Playback feature on the same transaction, for instance ME25, multiple times, I get cumulative data across the multiple scripts rather than incremental data.
Explanation :
If I open the ME25 details, enter "100-310" as Material and "Ball Bearing" as Short Text, and stop the recording, I get the following script, which is the expected behavior.
session.findById("wnd[0]/usr/tblSAPMM06BTC_0106/ctxtEBAN-MATNR[3,0]").text = "100-310"
session.findById("wnd[0]/usr/tblSAPMM06BTC_0106/txtEBAN-TXZ01[4,0]").text = "Ball Bearing"
session.findById("wnd[0]/usr/tblSAPMM06BTC_0106/txtEBAN-TXZ01[4,0]").setFocus
session.findById("wnd[0]/usr/tblSAPMM06BTC_0106/txtEBAN-TXZ01[4,0]").caretPosition = 12
After this, I restart the recording, type "100" as Qty Requested and "21.04.2021" as the delivery date, and stop the recording. I get the following script:
session.findById("wnd[0]").maximize
session.findById("wnd[0]/usr/tblSAPMM06BTC_0106/ctxtEBAN-MATNR[3,0]").text = "100-310"
session.findById("wnd[0]/usr/tblSAPMM06BTC_0106/txtEBAN-TXZ01[4,0]").text = "Ball Bearing"
session.findById("wnd[0]/usr/tblSAPMM06BTC_0106/txtEBAN-MENGE[5,0]").text = "100"
session.findById("wnd[0]/usr/tblSAPMM06BTC_0106/ctxtRM06B-EEIND[8,0]").text = "21.04.2021"
session.findById("wnd[0]/usr/tblSAPMM06BTC_0106/ctxtEBAN-EKGRP[9,0]").setFocus
session.findById("wnd[0]/usr/tblSAPMM06BTC_0106/ctxtEBAN-EKGRP[9,0]").caretPosition = 0
Instead of getting just the incremental part that I typed during the second recording instance, I get the complete script. Is there a way to achieve incremental scripts?
I can reproduce this in my SAP GUI 7.60 (whatever the screen, whatever the kind of field - I can reproduce it even with very simple fields, like those in a selection screen).
It seems to happen all the time, even if you write your own recorder (a VBS script which mainly uses session.record = True plus event handlers). It's due to the fact that SAP GUI sends all the screen events (i.e. the user actions) since the screen was entered whenever the user presses a button or a function key, closes a window, or stops the SAP GUI Recorder.
If you write your own recorder, I guess you could save the screen contents when the recorder starts and, when "change" events occur, ignore those where the field value hasn't changed since the recorder started. But that's a lot of work.
I think it's far easier to manually append the last lines of the second script to the initial script.

Using REST, how does one set the fan duration for a one-time action?

Three fan-related variables are defined in https://developer.nest.com/documentation/api#has_fan:
has_fan (r/o, boolean)
fan_timer_active (r/w, boolean)
fan_timer_timeout (r/o, iso8601)
I suspect that fan_timer_timeout should be read-write; however, when I PUT
{"fan_timer_active": true, "fan_timer_timeout": "2014-09-30T01:07:29Z"}
I get back
No write permission(s) for field(s): fan_timer_timeout
None of the examples (on the SDK site) actually change the fan, so no guidance there.
The "non-public API" from "the early days" would have you do this:
fan_timer_duration = seconds
fan_timer_timeout = time-since-epoch-in-seconds + seconds
fan_timer_duration isn't documented on the SDK site; however, doing that yields
No write permission(s) for field(s): fan_timer_duration,fan_timer_timeout
Could someone clue me in as to what I need to send to get the fan to spin for the next 15 minutes?
Many thanks!
The user configures the fan timer duration in the Nest app or on the device; the API only provides a way to trigger a fan event.
It seems that the fan timer duration is a global setting that would also affect turning the fan on manually at the thermostat, hence why it is read-only in the API (this avoids user confusion).
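In other words, to run the fan for the next 15 minutes you would set a 15-minute duration in the app once, and then the one-time trigger is presumably just the writable flag from the list above, e.g. PUT {"fan_timer_active": true}.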

Google Calendar v3 Error "The requested minimum modification time lies too far in the past. [410]"

We are using the Google Calendar v3 API to return a list of events for a user that have been updated since a point in time.
In the v2 API there was no limitation on setting this date in the past.
If we set UpdatedMin to a date too far in the past (like 2 months ago), then this error is thrown:
"The requested minimum modification time lies too far in the past. [410]"
If we set ShowDeleted to false then we do not get the error.
I cannot find any reference to a limitation here. Does anybody know the details of this limit? Unfortunately, when synchronising calendars, this is a show-stopper when synchronisation has not run for a period of time for a given calendar (other than running a full list, which we would prefer to avoid).
EventsResource.ListRequest lr = new EventsResource.ListRequest(service, c.uc.calendar);
lr.UpdatedMin = c.primaryModTime.ToLocalTime();
lr.ShowDeleted = true;
Events el = lr.Execute();
if (el.Items.Count > 0)
{
The following also discusses this issue, but without any resolution:
https://groups.google.com/forum/#!msg/google-calendar-api/_rk9o45sXT0/3APXqxi8jvkJ
There is some explanation at:
https://developers.google.com/google-apps/calendar/v3/sync
It says that on a 410 you should wipe your local storage and perform a full sync instead.
Also consider switching to sync tokens, as recommended in the last paragraph.
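For illustration, with the v3 Java client that flow looks roughly like this (a sketch, assuming a Calendar service object; savedSyncToken and clearLocalStore are hypothetical stand-ins for your own storage, and the .NET client has equivalent calls):

try {
    // Incremental sync: ask only for events changed since the stored token.
    Events events = service.events().list("primary")
        .setSyncToken(savedSyncToken)   // or setUpdatedMin(...) as above
        .setShowDeleted(true)
        .execute();
    savedSyncToken = events.getNextSyncToken();
} catch (GoogleJsonResponseException e) {
    if (e.getStatusCode() == 410) {
        // Token/updatedMin too old: wipe local storage and do a full sync.
        clearLocalStore();              // hypothetical helper
        savedSyncToken = null;          // next run performs a full list
    } else {
        throw e;
    }
}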

ScheduledAgent and GeoCoordinateWatcher - how to make them work?

I'm trying to get a GPS position via GeoCoordinateWatcher running in a ScheduledAgent. Unfortunately, the only location I get is an old one, recorded when the application was running. How do I get the current (latest) location in a ScheduledAgent?
I have come across the same problem. Unfortunately, this is intended behaviour according to the WP7.1 APIs.
According to the documentation, "This API, used for obtaining the geographic coordinates of the device, is supported for use in background agents, but it uses a cached location value instead of real-time data. The cached location value is updated by the device every 15 minutes."
http://msdn.microsoft.com/en-us/library/hh202962(v=VS.92).aspx
My 2 cents:
It is probably because the GeoCoordinateWatcher takes some time (2 seconds or so) to get new coordinate values and to lock onto GPS, a cellular mast, Wi-Fi, or whatever. In the meantime it will give you the last recorded position.
So, try hooking into the following events:
watcher.StatusChanged += new EventHandler<GeoPositionStatusChangedEventArgs>(watcher_StatusChanged);
watcher.PositionChanged += new EventHandler<GeoPositionChangedEventArgs<GeoCoordinate>>(watcher_PositionChanged);
where watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.High);
and call NotifyComplete() in your watcher_PositionChanged event handler.
