I need to inspect events to debug a service that generates many events. Windows Event Viewer sorts events by date and time, but there is a big problem.
For example:
event 1's time is 12:32:11
event 2's time is 12:32:11
event 3's time is also 12:32:11
Event 2 occurred after event 1 and before event 3, but because they share the same time in HH:MM:SS format, Event Viewer does not show the order correctly. I think it sorts events with identical times alphabetically or by some other parameter.
Does Windows event logging save the millisecond part for Application logs?
If so, is there any way, or any third-party application, to view the event log truly ordered by date and time when the hour, minute, and second are the same?
If you check an event's details in the XML view, there is a property named TimeCreated SystemTime.
Thanks @chenjun for pushing me in the right direction.
Answering my own question to help people with the same problem:
unfortunately, Windows does not save the millisecond part of an event's time in Windows Logs > Application!
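That said, for logs where the provider does record sub-second precision in TimeCreated SystemTime, an exported log can be sorted on that attribute yourself. A minimal Python sketch, using a hypothetical two-event export (a real export from Save All Events As also carries an XML namespace, omitted here for brevity):

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of two exported events sharing the same HH:MM:SS.
xml_data = """<Events>
  <Event><System>
    <TimeCreated SystemTime="2020-01-01T12:32:11.5000000Z"/>
  </System></Event>
  <Event><System>
    <TimeCreated SystemTime="2020-01-01T12:32:11.1000000Z"/>
  </System></Event>
</Events>"""

root = ET.fromstring(xml_data)
# ISO 8601 timestamps sort correctly as plain strings, so sort on the
# full SystemTime attribute, sub-second digits included.
events = sorted(
    root.findall("Event"),
    key=lambda e: e.find("System/TimeCreated").get("SystemTime"),
)
for e in events:
    print(e.find("System/TimeCreated").get("SystemTime"))
```

This prints the .1 event before the .5 event, which is the ordering Event Viewer itself does not guarantee.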
When I use AppleScript for a repeating event, I am able to reference each recurrence of an event, even if one or more of the repeated occurrences is canceled.
What I am having trouble with is how a "moved" occurrence is stored, i.e. an occurrence that no longer happens at the original time because it was moved.
Imagine a weekly event at 9am every Monday. On the third week, I drag the event in the Calendar app to 10am rather than 9am on that Monday. I do not see anything in the event metadata that shows me the moved occurrence of the event.
I would appreciate any pointers.
I have tried looking at all the relevant event metadata. I am using Script Debugger as my tool.
Here is the code. I not only changed the time of the moved event, I also changed its name; this is looking for the name.
tell application "Calendar"
    set sourceCalID to calendar id "4AA1E22C-0472-44D1-A582-31A7310AF9B4"
    set howManyEvents to count of events of sourceCalID
    set the_events to every event of sourceCalID
    -- log the summary (title) of every event in the calendar
    repeat with current_event in the_events
        set summaryEvent to summary of current_event
        log summaryEvent
    end repeat
end tell
I need to design a URL Callback Scheduler system for an application with potentially millions of jobs per day; the scheduler will need to do the following:
Provide an API for clients to register a URL callback to be called at a specific date and time. The callback delay is between 1 minute and 1 year; in other words, a client can register a callback to be fired 1 minute in the future, or a year in the future.
My questions are:
1. Is there a design pattern that I can utilize?
2. Are you aware of an open-source application that does this?
I've been searching for days to get a clue on how to start but haven't found anything useful; your help is greatly appreciated.
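For the in-memory core, one common pattern is a delay queue: a priority queue (min-heap) ordered by fire time, drained by a worker. A minimal Python sketch under simplifying assumptions (callbacks are plain callables rather than HTTP POSTs to the registered URL, and nothing is persisted):

```python
import heapq
import time

class CallbackScheduler:
    """Minimal in-memory scheduler: a min-heap ordered by fire time."""

    def __init__(self):
        self._heap = []   # entries: (fire_at, seq, callback)
        self._seq = 0     # tie-breaker for identical fire times

    def register(self, delay_seconds, callback):
        """Register a callback to fire delay_seconds from now."""
        fire_at = time.time() + delay_seconds
        heapq.heappush(self._heap, (fire_at, self._seq, callback))
        self._seq += 1

    def run_due(self, now=None):
        """Fire every callback whose time has come; return how many fired."""
        now = time.time() if now is None else now
        fired = 0
        while self._heap and self._heap[0][0] <= now:
            _, _, callback = heapq.heappop(self._heap)
            callback()
            fired += 1
        return fired

# Usage: register two jobs, then drain the ones that are due.
sched = CallbackScheduler()
sched.register(0, lambda: print("job A fired"))
sched.register(3600, lambda: print("job B fired"))
print(sched.run_due())  # job A is due immediately; job B is not
```

In practice, with a year-long horizon and millions of jobs per day, the heap would have to be backed by a persistent store partitioned by due time, with only the near-term slice loaded into memory.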
Given a query that looks like this:
SELECT
EventDate,
system.Timestamp as test
INTO
[azuretableoutput]
FROM
[csvdata] TIMESTAMP BY EventDate
According to the documentation, EventDate should now be used as the timestamp.
However, when storing data into blob storage with this path:
sadata/Y={datetime:yyyy}/M={datetime:MM}/D={datetime:dd}
I still seem to get the ingestion time. In my case, ingestion time means nothing, and I need to use EventDate for the path. Is this possible?
When checking the data in Visual Studio, test and EventDate should be equal; however, the results look like this:
EventDate;test
2020-04-03T11:13:07.3670000Z;2020-04-09T02:16:15.5390000Z
2020-04-03T11:13:07.0460000Z;2020-04-09T02:16:15.5390000Z
2020-04-03T11:13:07.0460000Z;2020-04-09T02:16:15.5390000Z
2020-04-03T11:13:07.3670000Z;2020-04-09T02:16:15.5390000Z
2020-04-03T11:13:08.1470000Z;2020-04-09T02:16:15.5390000Z
The late arrival tolerance window is set to 99:23:59:59.
The out-of-order tolerance is set to 00:00:00:00 with the out-of-order action set to adjust.
When running the same query in Stream Analytics on Azure I get this result:
[{"eventdate":"2020-04-03T11:13:20.1060000Z","test":"2020-04-03T11:13:20.1060000Z"},
{"eventdate":"2020-04-03T11:13:20.1060000Z","test":"2020-04-03T11:13:20.1060000Z"},
{"eventdate":"2020-04-03T11:13:20.1060000Z","test":"2020-04-03T11:13:20.1060000Z"}]
So far so good. When running the query with data on Azure it produces this path:
Y=2020/M=04/D=09
It should have produced this path:
Y=2020/M=04/D=03
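For reference, the {datetime:yyyy}-style tokens in the output path are filled in from the event's System.Timestamp; a quick sketch of the substitution, using a hypothetical helper that mirrors the pattern above:

```python
from datetime import datetime

def blob_path(system_timestamp):
    # Mirrors sadata/Y={datetime:yyyy}/M={datetime:MM}/D={datetime:dd}
    return (f"sadata/Y={system_timestamp:%Y}"
            f"/M={system_timestamp:%m}/D={system_timestamp:%d}")

# With System.Timestamp equal to the EventDate, the expected path appears:
print(blob_path(datetime(2020, 4, 3, 11, 13, 20)))  # → sadata/Y=2020/M=04/D=03
```

So the path ending in D=09 means System.Timestamp held the ingestion date, not the EventDate.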
Interestingly enough, when checking the data that is actually stored in blob storage, I find this:
EventDate,test
2020-04-03T11:20:39.3100000Z,2020-04-09T19:33:35.3870000Z,
System.Timestamp seems to be altered only when testing the query on sampled data, but is not actually altered when the query runs normally and receives data.
I have tested this with the late arrival setting set to 0 and to 20 days. In reality I need to disable late arrival adjustment, as I might get events that are years old through the pipeline.
This issue has been brought up and closed on the MicrosoftDocs GitHub.
The Microsoft folks say:
The maximum number of days for late arrival is 20, so if the policy is set to 99:23:59:59 (99 days), the adjustment could be causing a discrepancy in System.Timestamp.
By definition of late arrival tolerance window, for each incoming event, Azure Stream Analytics compares the event time with the arrival time; if the event time is outside of the tolerance window, you can configure the system to either drop the event or adjust the event’s time to be within the tolerance.
Consider that after watermarks are generated, the service can potentially receive events with event time lower than the watermark. You can configure the service to either drop those events, or adjust the event’s time to the watermark value.
As a part of the adjustment, the event’s System.Timestamp is set to the new value, but the event time field itself is not changed. This adjustment is the only situation where an event’s System.Timestamp can be different from the value in the event time field, and may cause unexpected results to be generated.
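The adjustment described above can be illustrated with a small sketch (a simplification of the real policy, using plain datetimes): if the event time falls behind the arrival time by more than the tolerance, System.Timestamp is clamped to the edge of the tolerance window, while the event-time field itself is left untouched:

```python
from datetime import datetime, timedelta

def system_timestamp(event_time, arrival_time, late_tolerance):
    """Return the effective System.Timestamp under an 'adjust' policy.

    If the event arrived later than the tolerance allows, its
    System.Timestamp is moved up to the edge of the tolerance window;
    the event-time field itself is never rewritten.
    """
    earliest_allowed = arrival_time - late_tolerance
    return max(event_time, earliest_allowed)

event_time = datetime(2020, 4, 3, 11, 13, 7)    # what the payload says
arrival_time = datetime(2020, 4, 9, 2, 16, 15)  # when the service saw it
tolerance = timedelta(days=5)

# Arrival is ~5.6 days after the event, outside the 5-day tolerance,
# so the timestamp is adjusted to arrival_time - tolerance:
print(system_timestamp(event_time, arrival_time, tolerance))
# → 2020-04-04 02:16:15
```

With a tolerance larger than the gap (e.g. 20 days), the function returns the event time unchanged, which is why the late arrival setting matters here.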
For more information, please see Understand time handling in Azure Stream Analytics.
Unfortunately, testing with sample data in the Azure portal doesn't take policies into account at this time.
Potentially other helpful resources:
System.Timestamp()
TIMESTAMP BY
Event ordering policies
Time handling
Job monitoring
Currently I need my app to fire notifications twice a week for 6 or 12 weeks. I am using UNUserNotificationCenter to fire my notifications. I have got them firing twice a week using a UNTimeIntervalNotificationTrigger, which repeats on a certain day of the week and time OK, but I cannot seem to figure out how to get them to stop after a certain date.
I have researched, and the only thing I can see is to create them all at once. Is there a maximum for how many you can create at once? As I develop further, my choices will get larger, e.g. 5 times a week for 24 weeks.
Is there any way that this is possible without having to create them all at once?
Thanks
Is there a way to stop notifications firing after a certain date?
If the user doesn't even open your app during the 6 or 12 weeks, you can't stop the notifications.
If the user does use the app: after the notification fires for the first time, you can compute the specific date 6 or 12 weeks later; let's call it endDate. You can then check the current date against endDate while the app is running:
if Date() < endDate {
    // still within the window: keep firing the notifications
} else {
    // past the end date: cancel the pending notifications
    UNUserNotificationCenter.current().removeAllPendingNotificationRequests()
}
is there a max how many you can create at once
There is a limit: iOS keeps only the 64 soonest-firing pending local notifications, so you cannot rely on scheduling an unbounded number at once.
Is there any way that this is possible without having to create them all at once?
Here comes the same problem: if the user doesn't even open your app during the 6 or 12 weeks, how could you create the other notifications if you don't create them all at once?
So, I would recommend you use remote notifications; then you can control whether or not to send a notification to a certain user each day.
Refer to the documentation about user notifications.
I have a requirement, as described at https://kafka.apache.org/21/documentation/streams/developer-guide/dsl-api.html#window-final-results, to wait until a window is closed in order to handle late, out-of-order events by buffering them for the duration of the window.
My understanding of this feature is that once the window is created, it works like wall-clock processing: e.g. for a 1-hour window, the window starts ticking once the first event comes, is closed exactly one hour later, and all the events buffered so far are forwarded downstream. However, I need to be able to hold this window even longer, conditionally, for as long as required, e.g. based on state/information in an external system such as a database.
To be precise, my requirement for event forwarding is: (a window of 1 hour if the external state record says it is good) or (hold for as long as required until the external record says it's good, and resume tracking of the event until it makes the full 1 hour, disregarding the time when the external system is not good).
To elaborate on this second condition: if my window duration is 1 hour and my event starts at 00:00, and the external system goes down at 00:30 and is back to normal at 00:45, the window should extend until 01:15.
Is it possible to pause and resume the forwarding of events conditionally based on my requirement above ?
Do I have to use transformation / processor and use value store manually to track the first processing time of my event and conditionally forwarding buffered events in punctuator ?
I would appreciate any kind of workaround or suggestion for this requirement.
the window works like wall clock processing
No. Kafka Streams works on event time; hence, the timestamps returned from the TimestampExtractor (by default the embedded record timestamp) are used to advance time.
To be precise my requirement for event forwarding is (windows of 1 hour if external state record says it is good)
This would need a custom solution IMHO.
or (hold for as long as required until external record says it's good and resume tracking of the event until the event make it fully 1hr, disregarding the time when external system is not good)
Not 100% sure if I understand this part.
Is it possible to pause and resume the forwarding of events conditionally based on my requirement above ?
No.
Do I have to use transformation / processor and use value store manually to track the first processing time of my event and conditionally forwarding buffered events in punctuator ?
I think this might be required.
Check out this blog post, which explains how suppress() works in detail and when it emits based on observed event time: https://www.confluent.io/blog/kafka-streams-take-on-watermarks-and-triggers
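The processor-with-state-store approach could be sketched as follows. This is a language-neutral Python sketch of the logic only, not actual Kafka Streams code: each buffered event accumulates elapsed time only while the external system reports "good", and a punctuation forwards events once they have accumulated a full window, which reproduces the 00:00 / down at 00:30 / back at 00:45 / forward at 01:15 example:

```python
WINDOW = 3600  # seconds of "good" time an event must accumulate

class ConditionalHoldBuffer:
    """Sketch of the punctuator logic: each buffered event accumulates
    elapsed time only while the external system is 'good'; once it has
    accumulated a full window, it is forwarded downstream."""

    def __init__(self):
        self.store = {}       # key -> accumulated good seconds ("state store")
        self.last_tick = None

    def process(self, key, now):
        """Buffer an incoming event (stand-in for Processor.process)."""
        self.store.setdefault(key, 0.0)
        if self.last_tick is None:
            self.last_tick = now

    def punctuate(self, now, external_good):
        """Advance the clock; return keys ready to be forwarded."""
        forwarded = []
        delta = now - self.last_tick if self.last_tick is not None else 0.0
        self.last_tick = now
        if external_good:  # a 'bad' interval simply does not count
            for key in list(self.store):
                self.store[key] += delta
                if self.store[key] >= WINDOW:
                    forwarded.append(key)
                    del self.store[key]
        return forwarded

# Usage, mirroring the example in the question.
buf = ConditionalHoldBuffer()
buf.process("evt", 0)
print(buf.punctuate(1800, True))   # 00:30 → [] (only 30 min of good time)
print(buf.punctuate(2700, False))  # 00:45 → [] (system down: clock paused)
print(buf.punctuate(4500, True))   # 01:15 → ['evt'] (full hour accumulated)
```

In Kafka Streams terms, `store` would be a KeyValueStore, `process` a Processor, and `punctuate` a wall-clock punctuator that also polls the external system.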