Daylight Saving and postdated times - user-interface

We recently changed our application in an attempt to support displaying times in the timezone that the time was recorded in. We have run into a problem with getting times correct on Daylight Saving changes.
We implemented this by storing, for each timestamp, the UTC time together with the offset as a separate field. When displaying the time, we add the stored offset to the UTC time to show the time the user entered.
The issue we are running into is with entered times that occurred before the time change: we convert the entered time to UTC, but the conversion back is typically an hour off.
Obviously the problem is that we are storing the offset of the "current" local time rather than the offset in effect at the time the entered time represents. But most of our users do not follow through with the time change during the course of the event: if they are documenting times at 1:55 and need to document a time 10 minutes later, they will write 2:05 instead of 3:05.
This would seem to preclude using IsDaylightSavingTime and adjusting the display time with the "appropriate" offset.
So I am stuck on how to display the time that the user entered, regardless of what the offset "should be".
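One way to make the round trip self-consistent is to store the offset that was actually in effect at the entered time, and always display with that stored offset. A minimal Python sketch of this idea (using stdlib zoneinfo; America/New_York is an assumed example zone, and the helper names are hypothetical). This doesn't decide which offset a clock-ignoring user "meant", but it does guarantee the displayed time equals exactly what was entered:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

NY = ZoneInfo("America/New_York")

def record_time(entered_local: datetime, tz: ZoneInfo):
    """Store naive UTC plus the offset in effect at the *entered* time, not 'now'."""
    aware = entered_local.replace(tzinfo=tz)
    offset = aware.utcoffset()  # offset valid for this specific instant
    utc = aware.astimezone(timezone.utc).replace(tzinfo=None)
    return utc, offset

def display_time(utc: datetime, offset: timedelta) -> datetime:
    # Adding back the same offset used at write time reproduces the entry exactly.
    return utc + offset

# A time entered before the November fall-back gets the summer -4:00 offset,
# not the "current" winter -5:00, so the round trip stays exact:
utc, off = record_time(datetime(2023, 11, 4, 14, 0), NY)
assert off == timedelta(hours=-4)
assert display_time(utc, off) == datetime(2023, 11, 4, 14, 0)
```

The key design choice is that display never recomputes the offset from timezone rules; it replays whatever was stored, so a record can never drift by an hour after the fact.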

Cypress fails on gitlab but succeeds locally

I am facing a very weird issue and can't even identify its origin. Maybe someone can tell me at least where I should look.
I have an input field with a calendar, which is disabled. Its purpose is to show which date a user chose for a certain document, without letting them change it.
I am checking with .should('have.value', '01.01.2023 08:00') and locally it passes. When I push the code to GitLab, the pipeline throws an error that the time doesn't match: it sees 01.01.2023 09:00. I tried another input and the time difference there is two hours, so I assumed the problem was neither the time zone nor user settings; I also tried hardcoding those. What is GitLab CI doing here? Why does it render a different time than my localhost, with the test database being the same?
As mentioned here, GitLab runners use UTC timezone, which would explain the time shift between actual and expected.
I try another input and the time difference there is two hours, so the problem is not the time zone
It might still be related to the fixed timezone used by the runner (UTC), which makes it "render" your input in that one UTC zone.
(assuming "another input" was another different input. If the runner display the same input differently on two different execution, that would be problematic)

AnyLogic model time only from 06:00-22:00

Is there a way to make my model run only 16 hours a day instead of 24?
My goal is to skip the time between 22:00 and 06:00.
I tried changing the model time properties, but found nothing to change.
Another idea is an event that occurs every evening and moves the model time to the next morning, but I could not find a function to change the model time :(
You cannot simply exclude some time. You can:
fast-forward through time at 10 pm (but model events will still be simulated if you have any --> it is your responsibility to ensure nothing happens during that period)
switch to your own time mode where you convert time() steps into your frame of reference (i.e. your model is in HOUR time units and after the first 22 hours you implicitly assume that the next hour is 6 am again)
just model 16 hrs in 1 model run but re-run that several times via a (freerun) parameter-variation experiment (where each run is a new day) and accumulate results via the experiment

How to handle DST correctly in Airflow 2.0+ for different regions?

I am trying to get execution date using
execution_dt = f"{{{{execution_date.in_timezone('{timezone}').strftime('{date_partition_format}')}}}}"
but the issue I am facing is that polling happens for the previous hour right when DST takes effect, though it corrects itself for subsequent runs.
So, for example, when DST takes place and the clock jumps from the 7th to the 8th hour, it will still try to poll for the 7th hour; but in the subsequent run, done two hours later, it will poll for the 10th hour (consistent with the previous 8th hour).
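The hour jump around a spring-forward transition can be reproduced with stdlib zoneinfo (a sketch only; Airflow itself uses pendulum for in_timezone, and America/New_York is an assumed region, substitute your DAG's zone):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")  # assumed region for illustration

local_hours = []
for hour in (6, 7, 8):  # three consecutive hourly runs, scheduled in UTC
    run = datetime(2023, 3, 12, hour, tzinfo=timezone.utc)  # 2023-03-12: US spring-forward
    local_hours.append(run.astimezone(tz).strftime("%H"))

print(local_hours)  # ['01', '03', '04'] -- local hour 02 never exists
```

Because the local hour skips from 01 to 03, any partition key derived from the local hour has a gap at the transition, which is consistent with one run appearing to poll the "previous" hour and later runs lining up again.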

Visual Studio Load Test request completion and think time

I'm using a load test in Visual Studio to test our web API services. But to my surprise I can't seem to test what I want to. I have a single URL in my .webtest file and try to send the same URL repeatedly to see the average response time.
Here are the details
1. I use a constant load of 1 user
2. Test duration of 1 hour
3. Think time of 10 seconds (not the think time between iterations)
4. The avg. response time that I get is 1.5 seconds
5. So the avg. test time comes out to be 11.5 seconds
6. Requests/sec is 0.088
7. And I'm using Sequential Test Order among 4 different test types
These figures make me think that every time a virtual user sends a request, besides the specified think time, it waits for the request to complete before sending a new one. Thus technically the total think time becomes
Total think time = think time specified + avg. response time
But I don't want the user to wait for an already sent request to come back before sending a new one after the specified think time. I need to configure the load test so that, with a think time of 10 seconds, the user sends the next request every 10 seconds without waiting for the first one to return (instead of the 11.5-second total cycle I described above). Yet no matter which of the 4 test types I choose, Visual Studio always forces the virtual user to wait for the request to complete, then add the specified think time, and only then send a new one.
I know that what Visual Studio load test does is the more realistic approach, where the user sends a request, waits for it to come back, thinks or interacts with the website, and then sends a new one.
Any help or suggestion would be appreciated towards what I'm trying to achieve.
In the properties of the scenario, set the "Test mix type" to be "Test mix based on user pace" and set the "Tests per user per hour" as appropriate. See here.
The suggestion in the question that:
Total think time = think time specified + avg. response time
is erroneous. To my mind, adding the values does not produce a useful result. The two values on the right are as stated: think time simulates the time a user spends reading the page, deciding what to do next, and typing/clicking their response; response time is the turnaround time between sending a request and getting the response. Adding them does not increase the think time in any sense; it just gives the total duration for handling the request in this specific test. Another test might make the same request with a different think time. Note that many web pages cause more than one request and response to be issued; JavaScript and other technologies allow web pages to do many clever things.
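The arithmetic behind the two pacing models can be sketched in a few lines (the figures are the ones from the question; "360 tests per user per hour" is an assumed setting for illustration):

```python
# Closed-loop pacing (Visual Studio's default): each iteration waits for the
# response, then thinks, so one cycle = response time + think time.
response, think = 1.5, 10.0
closed_loop_rps = 1 / (response + think)
print(round(closed_loop_rps, 3))  # ~0.087, matching the observed 0.088 req/s

# User pacing ("Test mix based on user pace"): one test starts every fixed
# interval, regardless of how long the response takes.
tests_per_user_per_hour = 360     # i.e. one test every 10 seconds
paced_rps = tests_per_user_per_hour / 3600
print(paced_rps)                  # 0.1
```

This is why the observed 0.088 req/s is consistent with the closed-loop model, and why switching to user pacing gets closer to a fixed 10-second send interval.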

About web_reg_find() in loadrunner

I am trying to measure the time to go from one page to the next via the Next button. To do this I start a transaction before pressing the button, press the Next button, and end the transaction when the next page has loaded. Within this transaction I use web_reg_find() to check for specific text to verify the page.
Using the Controller, that transaction measured 5 seconds; after I removed web_reg_find() from the transaction, it measured 3 seconds. Is that normal?
Because I am doing a load test, functionality is important, so transactions are also important. Is there an alternative way to check content while preserving performance?
web_reg_find() runs logic over the response sent from the server and therefore takes time. LoadRunner is aware that this is not time that would be perceived by a real user, and therefore reports it as "wasted time" for the transaction. If you check the log for this transaction you will see something like this:
Notify: Transaction "login" ended with "Pass" status (Duration: 4.6360 Wasted Time: 0.0062).
This shows the time the transaction took and, out of that time, how much was wasted on LoadRunner's internal operations.
Note that when you open the results in Analysis, the transaction times will be reported without the wasted time (i.e. Analysis reports the time as it would be perceived by a real user).
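In other words, the user-perceived time Analysis reports is the duration minus the wasted time. A small Python sketch of that subtraction, applied to a log line of the shape shown above:

```python
import re

log = ('Notify: Transaction "login" ended with "Pass" status '
       '(Duration: 4.6360 Wasted Time: 0.0062).')

# Pull both figures out of the log line and subtract.
m = re.search(r"Duration: ([\d.]+) Wasted Time: ([\d.]+)", log)
duration, wasted = map(float, m.groups())
net = duration - wasted  # what Analysis reports as the user-perceived time

print(round(net, 4))  # 4.6298
```

If your two-second gap were really web_reg_find() overhead, you would expect it to show up in the Wasted Time figure rather than in the net transaction time.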
That said, the amount of time taken by web_reg_find() seems unusually long. As web_reg_find() is both memory- and CPU-bound (holding the page in RAM and running string comparisons), I would look at other explanations for the additional two seconds. My hypothesis is that you have a resource-constrained, or over-subscribed, load generator. Look at the performance of a control group for this type of user: 1 user running by itself on a load generator. Compare your control group to the behavior of the global group. If you see a deviation, it is due to a local resource constraint showing up as slowed virtual users. This would also affect your measurement of response time.
I deliberately underload my load generators to avoid any possibility of load-generator coloration, and employ a control generator in the group to measure any coloration that does occur.
The time taken by web_reg_find() is counted as wasted time.