I am facing a very weird issue and can't even identify its origin. Maybe someone can tell me where I should at least look.
I have an input field with a calendar, which is disabled. Its aim is basically to show which date a user chose for a certain document, without letting anyone change it.
I am checking it with .should('have.value', '01.01.2023 08:00'), and locally the test passes. But when I push the code to GitLab, the pipeline throws an error that the time doesn't match: it sees 01.01.2023 09:00. I tried another input and the time difference there was two hours, so it doesn't seem to be the time zone, nor user settings. I also tried hardcoding those. What is GitLab CI doing here? Why does it render a different time than my localhost, when the testing database is the same?
As mentioned here, GitLab runners use the UTC timezone, which would explain the shift between the actual and expected values.
I try another input and the time difference there is two hours, so the problem is not the time zone
It might still be related to the fixed timezone used by the runner (UTC), which makes it "render" your input in that one UTC zone.
(assuming "another input" means a different input. If the runner displayed the same input differently on two different executions, that would be problematic.)
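The effect is easy to reproduce outside Cypress: the same stored instant formats differently depending on the timezone of the rendering process. A minimal Python sketch (the times and the +1 offset are made up to mirror the question, not taken from the actual app):

```python
from datetime import datetime, timedelta, timezone

# One and the same instant, rendered under two different process timezones.
instant = datetime(2023, 1, 1, 8, 0, tzinfo=timezone.utc)

as_utc = instant.strftime("%d.%m.%Y %H:%M")
as_plus_one = instant.astimezone(timezone(timedelta(hours=1))).strftime("%d.%m.%Y %H:%M")

print(as_utc)       # 01.01.2023 08:00
print(as_plus_one)  # 01.01.2023 09:00
```

If this is the cause, pinning the TZ environment variable for the CI job (or mocking the clock in the test) should make local and pipeline runs agree.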
Related
I see Cypress lets us get back to the application state during a test to debug using time-travel. Is it possible to use this state snapshot as a starting point for other tests?
Imagine a UI where options in a stepper depend on previous selections in earlier steps, and many of these rely on requests to an API. To run different tests in the last step I would need to complete the earlier steps in exactly the same way each time. This can be added to the before block to make the code simpler but we still have the delay and overheads of API requests each time to get to this exact same state. Given that Cypress already stores the state at various points, can I seed future tests with the state from previous ones?
I have a simple thing to do. I have a recurrence flow that refreshes my dataset, and then the report goes out based on that.
Now my issue is that I need to run the flow only on Tuesdays, except the first Tuesday of the month.
I have set up a trigger condition for this as below, but it's not working. It should have run today but did not.
#greater(int(utcNow('dd')),7)
What I suspect is the timezone difference: my original Recurrence uses the +10 timezone, but the condition says utcNow(), so maybe it's reading from there and 11am has still not arrived in that timezone. But I need it to run in the +10 AEST timezone.
Found an answer for it in case anyone else needs it.
The above expression is correct, except that I needed to use #greaterOrEquals(int(utcNow('dd')),7)
Not sure why, but it just works...
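One plausible explanation for "why it works": at trigger time in AEST (UTC+10), utcNow() can still be on the previous UTC day, so the strict greater(..., 7) comparison fails where greaterOrEquals(..., 7) passes. A sketch of the arithmetic in plain Python (illustrative dates only, not Power Automate syntax):

```python
from datetime import datetime, timedelta, timezone

AEST = timezone(timedelta(hours=10))  # UTC+10

def day_check(now_utc: datetime, strict: bool) -> bool:
    """Mimic greater/greaterOrEquals(int(utcNow('dd')), 7)."""
    day = int(now_utc.strftime("%d"))
    return day > 7 if strict else day >= 7

# 2023-08-08 was the second Tuesday of its month. Early that morning
# in AEST, UTC is still on day 7 of the month:
local = datetime(2023, 8, 8, 9, 0, tzinfo=AEST)
utc = local.astimezone(timezone.utc)  # 2023-08-07 23:00 UTC

print(day_check(utc, strict=True))   # False -> greater(...) skips the run
print(day_check(utc, strict=False))  # True  -> greaterOrEquals(...) fires
```

Note that greaterOrEquals(..., 7) could still misfire in a month whose first Tuesday lands on the 7th; evaluating the local day with Power Automate's convertFromUtc(utcNow(), 'AUS Eastern Standard Time', 'dd') would avoid the offset issue altogether.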
I had to stop training in the middle, which set the Trains status to Aborted.
Later I continued it from the last checkpoint, but the status remained Aborted.
Furthermore, automatic training metrics stopped appearing in the dashboard (though custom metrics still do).
Can I reset the status back to Running and make Trains log training stats again?
Edit: When continuing training, I retrieved the task using Task.get_task() and not Task.init(). Maybe that's why training stats are not updated anymore?
Edit2: I also tried Task.init(reuse_last_task_id=original_task_id_string), but it just creates a new task, and doesn't reuse the given task ID.
Disclaimer: I'm a member of Allegro Trains team
When continuing training, I retrieved the task using Task.get_task() and not Task.init(). Maybe that's why training stats are not updated anymore?
Yes, that's the only way to continue the exact same Task.
You can also mark it as started with task.mark_started(); that said, the automatic logging will not kick in, as Task.get_task() is usually used for accessing previously executed tasks, not for continuing one (if you think the continue use case is important, please feel free to open a GitHub issue; I can definitely see the value there).
You can also do something a bit different, and just create a new Task that continues from the last iteration where the previous run ended. Notice that if you load the weights file (PyTorch/TF/Keras/joblib), it will automatically be connected with the model that was created in the previous run (assuming the model was stored in the same location, or you have the model on https/S3/GS/Azure and are using trains.StorageManager.get_local_copy()).
from trains import Task
import torch

# Continue from the last iteration of the aborted run
previous_run = Task.get_task(task_id='<aborted_task_id>')
task = Task.init('examples', 'continue training')
task.set_initial_iteration(previous_run.get_last_iteration())
model = torch.load('/tmp/my_previous_weights')
BTW:
I also tried Task.init(reuse_last_task_id=original_task_id_string), but it just creates a new task, and doesn't reuse the given task ID.
This is a great idea for an interface to continue a previous run, feel free to add it as GitHub issue.
Laravel is (correctly) running scheduled tasks via the App\Console\Kernel#schedule method. It does this without the need for a persistence layer: previously run scheduled tasks aren't saved to the database or stored in any way.
How is this "magic" achieved? I want to have a deeper understanding.
I have looked through the source, and I can see it is somewhat achieved by rounding down the current date and diffing it against the schedule frequency. Combined with the fact that it is required to run every minute, it can say with a certain level of confidence that it should run a task. That is my interpretation, but I still can't fully grasp how it guarantees running on schedule, and how it handles failure or things being off by a few seconds.
EDIT: due to a clarity issue pointed out in a comment.
By "a few seconds" I mean: how does the "round down" method work when the scheduler is run every minute, but not at the same second each time? Example: first run at 00:01:00, then 00:01:02, then 00:02:04.
Maybe to clarify further, and to assist in understanding how it works: are there any boundary guarantees on how it functions? If run multiple times per minute, will it execute per-minute tasks multiple times within that minute?
Cron cannot guarantee second-level precision; that is why, generally, no cron interval is less than a minute. So, in reality, it doesn't handle "things being off by a few seconds."
What happens in Laravel is this: after the scheduling command has been registered, the server asks "is there a due task?" every minute. If there is none, it doesn't do anything.
For example, take the "daily" cronjob. The scheduler doesn't need to know when it last ran the task. When it encounters the daily cronjob, it simply checks whether it is midnight; if it is, it runs the job.
Or take the "every thirty minutes" cronjob. Maybe you registered it at 10:25, but it will still run for the first time at 10:30, not 10:55. It doesn't care when you registered it or when it last ran; it only checks whether the current minute is "00" or divisible by thirty. So it will run at 10:30, again at 11:00, and so on.
Similarly, a ten-minute cronjob by default only checks whether the current minute is divisible by ten. So, regardless of when you registered the command, it will run only at XX:00, XX:10, XX:20, and so on.
That is why, by default, it doesn't need to store previously run scheduled tasks. However, you can store them in a file for monitoring purposes if you want.
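The stateless check described above can be sketched in a few lines (Python here for brevity; the hypothetical due_* helpers stand in for Laravel's cron-expression matching):

```python
from datetime import datetime

def due_every_thirty_minutes(now: datetime) -> bool:
    # Only the truncated minute matters; seconds are ignored entirely.
    return now.minute % 30 == 0

def due_daily(now: datetime) -> bool:
    # "daily" simply means the current truncated time is midnight.
    return now.hour == 0 and now.minute == 0

# 10:30:02 and 10:30:59 truncate to the same minute, so a few seconds
# of jitter between cron invocations is irrelevant.
print(due_every_thirty_minutes(datetime(2023, 1, 1, 10, 30, 2)))  # True
print(due_every_thirty_minutes(datetime(2023, 1, 1, 10, 55, 0)))  # False
print(due_daily(datetime(2023, 1, 1, 0, 0, 59)))                  # True
```

A consequence of being stateless is that invoking the scheduler twice within the same minute would match the due tasks twice, which is why it is meant to be driven by a single once-per-minute cron entry.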
Is it possible to change the resolution-time calculation so that it starts not at the issue creation time, but at the time when an issue was transferred into a certain state?
The use case is as follows: we use a kanban-ish development method, where we create most issues/features/stories in a backlog upfront; this kills the usefulness of the resolution time gadget. In our case, the lead/resolution time should instead be calculated from the time an issue is pulled into the selected issues.
As this calculation is the basis for multiple gadgets, maybe it could be changed per gadget in order to avoid unforeseen issues with other gadgets?
There is a service level management tool, SLAdiator (http://sladiator.com), which calculates resolution/reaction times based on the duration a ticket has spent in a certain status (or statuses). You can view these tickets online as well as get reports.