Run multiple tests from a previous saved state - Cypress

I see that Cypress lets us go back to the application state at any point during a test to debug using time travel. Is it possible to use such a state snapshot as the starting point for other tests?
Imagine a UI where the options in a stepper depend on selections made in earlier steps, and many of these rely on requests to an API. To run different tests against the last step, I would need to complete the earlier steps in exactly the same way each time. This can be moved into a before block to keep the code simpler, but we still pay the delay and overhead of the API requests each time just to get back to the same state. Given that Cypress already stores the state at various points, can I seed future tests with the state from previous ones?
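For what it's worth, a rough sketch of that before-block approach, with the earlier steps' API calls stubbed via cy.intercept() so the replay skips the real network round-trips. The routes, fixture files, and data-test selectors below are made up for illustration; only the Cypress commands themselves are real:

// Hypothetical spec: routes, fixtures, and selectors are illustrative only.
describe('stepper - last step', () => {
  beforeEach(() => {
    // Stub the API calls the earlier steps depend on, so replaying the
    // steps does not pay for real network round-trips each time.
    cy.intercept('GET', '/api/options/step1', { fixture: 'step1-options.json' });
    cy.intercept('GET', '/api/options/step2', { fixture: 'step2-options.json' });

    cy.visit('/stepper');

    // Replay the earlier steps identically each time to reach the shared state.
    cy.get('[data-test=step1-option-a]').click();
    cy.contains('Next').click();
    cy.get('[data-test=step2-option-b]').click();
    cy.contains('Next').click();
  });

  it('shows options derived from the earlier selections', () => {
    cy.get('[data-test=step3-options]').should('be.visible');
  });

  it('submits the final step', () => {
    cy.contains('Finish').click();
    cy.get('[data-test=confirmation]').should('exist');
  });
});

This doesn't literally reuse a time-travel snapshot, but stubbing the requests removes most of the API delay described above; whether the snapshot itself can seed later tests is the open question here.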

Related

Trains: Can I reset the status of a task? (from 'Aborted' back to 'Running')

I had to stop training in the middle, which set the Trains status to Aborted.
Later I continued it from the last checkpoint, but the status remained Aborted.
Furthermore, automatic training metrics stopped appearing in the dashboard (though custom metrics still do).
Can I reset the status back to Running and make Trains log training stats again?
Edit: When continuing training, I retrieved the task using Task.get_task() and not Task.init(). Maybe that's why training stats are not updated anymore?
Edit2: I also tried Task.init(reuse_last_task_id=original_task_id_string), but it just creates a new task, and doesn't reuse the given task ID.
Disclaimer: I'm a member of the Allegro Trains team.
When continuing training, I retrieved the task using Task.get_task() and not Task.init(). Maybe that's why training stats are not updated anymore?
Yes, that's the only way to continue the exact same Task.
You can also mark it as started with task.mark_started(). That said, the automatic logging will not kick in, as Task.get_task() is usually used for accessing previously executed tasks rather than continuing them (if you think the continue use case is important, please feel free to open a GitHub issue; I can definitely see the value there).
You can also do something a bit different and just create a new Task that continues from the last iteration where the previous run ended. Notice that if you load the weights file (PyTorch/TF/Keras/joblib), it will automatically be connected with the model that was created in the previous run (assuming the model was stored in the same location, or that you have the model on HTTP(S)/S3/GS/Azure and you are using trains.StorageManager.get_local_copy()).
from trains import Task
import torch

previous_run = Task.get_task(task_id='<previous_task_id>')  # placeholder: ID of the run to continue
task = Task.init('examples', 'continue training')
task.set_initial_iteration(previous_run.get_last_iteration())
model = torch.load('/tmp/my_previous_weights')  # loading the weights links the previous run's model
BTW:
I also tried Task.init(reuse_last_task_id=original_task_id_string), but it just creates a new task, and doesn't reuse the given task ID.
This is a great idea for an interface for continuing a previous run; feel free to add it as a GitHub issue.

If I run multiple test cases together, do I need to clear previous state or will Angular do it automatically?

I have multiple test cases covering different components in different specs. Each of them runs successfully on its own, but when I run them together, some of them randomly fail, some for weird reasons like a CSS selector not being found:
let routerElement = contentComponentElement.querySelector("router-outlet");
expect(routerElement).toBeTruthy(); // fails sometimes
Could it be that, because I am running them together, a test case is picking up residue or left-over state from the previous test case? Is it possible to clear all the previous data/HTML etc. before running a new test case?
The issue was that some of the test cases used Observables and I wasn't waiting for them to complete before moving on to the next test case. I started calling done() for such test cases and now things are in order.
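A rough sketch of that fix, assuming Jasmine-style specs and RxJS; the service name and data are made up, and the point is simply passing the done callback to the spec and only calling it once the Observable completes:

import { of } from 'rxjs';
import { delay } from 'rxjs/operators';

// Hypothetical service stand-in; getProjects() returns an Observable that
// completes asynchronously, the way an HttpClient call would.
const fakeProjectService = {
  getProjects: () => of(['a', 'b']).pipe(delay(10)),
};

describe('project list', () => {
  it('waits for the Observable before the next spec starts', (done) => {
    fakeProjectService.getProjects().subscribe({
      next: (projects) => expect(projects.length).toBe(2),
      complete: () => done(), // tells Jasmine the async work is finished
    });
  });
});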

Visual Studio Web Test - Recording background requests

I have a web test where my requirements call for a handful of different polling requests to be running in the background. I have created a WebTestPlugin that looks for a certain context parameter to be set; once it is, it kicks off a thread that just loops (every X seconds), firing off the configured request.
My issue is that this is not done in the context of the test, so the results (number of calls, duration, etc.) are not part of the final report.
Is there a way to insert this data?
Rather than starting your own thread to run the background requests, I suggest using the facilities of the load test; that way the results will be properly recorded. Another reason is that the threading regime of a load test is not specified by Microsoft, and adding your own thread may cause issues.
You could have one scenario for the main test, and another scenario with one or more simple tests for the background polling activity. These tests could be set with a "think time between iterations" or with "test mix based on user pace" to achieve the required background rate. To get the background web tests starting at the correct time, start the test with a constant load of 0 (zero) users and use a load test plugin that adjusts the number of users whenever needed. The plugin writes the required number into m_loadTest.Scenarios[N].CurrentLoad for a suitable N. This would probably be done in the plugin's Heartbeat event handler, but it could potentially be in any load test plugin event. It may be that the TestFinished event can better detect when the number of users should increase.

Sustainable solution using JMeter for a big functional flow

I have a huge flow to test using APIs. There are 3 endpoints: one starts a process (a DB migration) that can last ~2-3 days, one returns the status of the currently running process (in progress, success, fail), and the last one returns all the failed processes (as a list).
The whole flow should be:
Start the first process
Call the second endpoint until the first process ends (should get Fail or Success)
If the process failed, call the first endpoint again; if not, go on to the next process.
The problem is that one process can last around 2-3 days and we have around 20k processes to check (this will take a lot of time). I do have a dedicated VM just for this.
My question: is it worth trying to implement a solution for this using JMeter?
It is not worth implementing in JMeter unless you want to use the tool as a workload automation engine that replaces the functionality provided by UC4 AppWorkr or Control-M. Based on what you describe, it does not appear to be a load test, except perhaps the second part that continuously queries the services for success/failure. I do not know the architecture behind that implementation, so I cannot say whether even that part would count as a load test.

Gatling: How to ramp up users in the after{} hook?

I'm an absolute Gatling beginner, and have mostly written my simulations by copying bits and pieces of other simulations in my org's code. I searched around a bunch and didn't see a similar question (and couldn't find anything in the Gatling docs), so now I'm here.
I know Gatling has an after{} hook that will run code after the simulation finishes. I need to know how to multi-thread the after{} hook the same way the simulation is multi-threaded. Basically, can I ramp up users within the after{} hook?
My issue is: My simulation ramps up 100 users, logs them into a random account (from a list of 1000 possible accounts), and then creates 500 projects within that account. This is to test the performance of a project creation endpoint.
The problem is that we have another simulation that tests the performance of an endpoint that counts the number of projects in a given account. That test is starting to suffer because of the sheer volume of projects in these accounts (they're WAYYY more loaded than even our largest real-world account -- by orders of magnitude), so I want my "project creation" simulation to clean up the accounts when it's done.
Ideally, the after would do something like this:
after {
  // ramp up 1000 users
  // each user should then...
  //   log into an account
  //   delete all but N projects (where N is a # of projects close to our largest user account)
}
I have the login code, and I can write the delete code... but how do I ramp up users within the after {} hook?
Is that even doable? Every example I've seen of the after{} hook (very few) has been something simple like printing text that the test is complete or something.
Thanks in advance!
You can use a global ConcurrentHashMap to store data from exec(session => session) blocks.
Then you would be able to access this data from an after hook. But, as explained in the documentation, you won't be able to use the Gatling DSL in there. If you're dealing with REST APIs, you can directly use AsyncHttpClient, which Gatling is built upon.
Some people also log to a file instead of in memory, and do the deletion from a second simulation that uses this file as a feeder.
Anyway, I agree this is a bit cumbersome, but we don't currently consider setting up and cleaning up the system under test to be the responsibility of the load testing tool.
