I have a pretty simple flow. I have been typing "mytest" in the test channel every 30 seconds after the last one fires. As you can see, sometimes it fires and sometimes it doesn't; the keyword trigger doesn't always fire. It seems sporadic. Any ideas why? What other info would you need to assist?
It seems like I'm hitting some type of limit. I was able to run a bunch of keyword tests today and got the expected results, but after a few tests it stops working again.
I began to write a portion of a test in which an input on a form gets filled out. For some reason, after that part of the test ends, the Cypress logs show a bunch of POST requests with a little yellow box containing a number, which I am guessing is the number of requests being made. As a result, the CPU usage on my laptop is very high, the laptop overheats, and at times the Cypress browser in which I am seeing these logs breaks. Does anyone have any idea why this could be happening? The only thing this test case does so far is fill out the name input on a form. I have provided an image below to show you.
So I have maybe 10 different connections active at any one time, running a bunch of statements on different databases. Every time a single statement/query completes, my results view jumps to the latest completed statement in the console, on any one of the open running connections, which is annoying when it is something like quickly dropping a temp table while I'm reading results from another output.
Any idea if you can prevent this from happening?
Unfortunately, that is not possible right now. Please file a feature request here: https://youtrack.jetbrains.com/issues/DBE
A workaround is to use the 'In-editor results' mode, where the result appears just under your query and nothing will ever grab the focus from you!
I have been working with Cypress for 3 months now, and I have been trying to fix this problem for 2 months, but I really don't know how.
When I run all my tests, a lot of them fail, and every time it's a different (random) test.
The application that I'm testing has a button that is disabled; when the fields are filled with text, the button becomes active.
The problem is that Cypress clicks the button while it is still disabled. The button needs some time to become active, so I have put the following in the code:
cy.wait('#budgetblindsPost')
cy.wait(500)
But this is not working either. I get fewer errors, but I still get some.
Here is an example of an error I get
Here is also an example of my code
Using cy.wait() all over the place may eventually solve timeout-related issues, but it will make your test suite unnecessarily slow. Instead, you should increase the timeout(s).
One-off
The command below will only fail after 30 seconds of not being able to find the object, or, once it finds it, 30 seconds of not being able to click it.
cy.get('#model_save', {timeout: 30000}).click({timeout: 30000});
Please note that your value of 500 means half a second, which may not be enough.
Global
If you find yourself overriding the timeout with the same value in a lot of places, you may wish to increase it once for all in the config.
defaultCommandTimeout: 4000
Time, in milliseconds, to wait until most DOM based commands are considered timed out
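As a sketch, assuming you are on a Cypress version that reads cypress.json (Cypress 10+ moved this into cypress.config.js), the global override might look like the following, with 10000 chosen purely as an illustration:
{
  "defaultCommandTimeout": 10000
}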
Using Ruby/Cucumber, I know you can explicitly call fail("message"), but what are the other options?
The reason I ask is that we have 0... I repeat, absolutely NO control over our test data. We have Cucumber tests for edge cases that we may or may not have users for in our database. We (for obvious reasons) do not want to throw away the tests, because they are valuable; however, since our data set cannot exercise those edge cases, the tests fail because the SQL statement returns an empty data set. Right now we just let those tests fail, but I would like to see something along the lines of "no_data" when the SQL statement returns an empty data set, so the output would look like:
Scenarios: 100 total (80 passed, 5 no_data, 15 fail)
I am willing to use the already implemented "skipped" if there is a skip("message") function.
What are my options so we can see that, with the current data, we just don't have any test data for those tests? Making these manual tests is also not an option; they need to run every week with our automation, but somehow be separate from the failures. A failure means a defect; no_data means it's not a testable condition. It's the difference between a warning ("we have not tested this edge case") and an alert ("broken code").
You can't invoke 'skipped', but you can certainly call pending, with or without a message. I've used this in a similar situation to yours. Unless you're running in strict mode, having pending scenarios won't cause any failures. The problem I encountered was that occasionally a step would get misspelled, causing Cucumber to mark it as pending because it did not match a step definition. That then became lost in the sea of 'legitimate' pending scenarios, and it was weeks before we discovered it.
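As a rough sketch of that approach, a step definition can call pending when the query comes back empty; here run_query, the step text, and the SQL are all hypothetical placeholders for your own data access code:
Given(/^an order exists for the edge-case customer$/) do
  rows = run_query("SELECT * FROM orders WHERE status = 'edge_case'")  # hypothetical SQL helper
  # Mark the scenario pending instead of failing when the data set has no coverage.
  pending("no_data: data set does not cover this edge case") if rows.empty?
  @order = rows.first
end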
I am using VS 2010 unit tests to create a load test.
BeginTimer/EndTimer are used to measure the response time.
On success, the timer works as expected.
If an error occurs in the application, I don't want the timer to record the response time, as that would throw off the analysis reports. As an example, an OpenOrder method may take 30 seconds on success, but on failure (e.g. order not found) it might return in 2 seconds. I want the response time metrics to represent only the actions that were successful.
Alternatively, is there a way to filter out the timers/transactions that were not successful?
Is there another way to do this? Something else besides BeginTimer/EndTimer?
Code Snippet:
testContextInstance.BeginTimer("trans_01");
OpenOrder("123");// Execute some code
// If the open fails, I want to disregard/kill the timer.
// If I do a 'return' here, never calling EndTimer, the timer
// is still ended and its response time recorded.
testContextInstance.EndTimer("trans_01");
This is a deficiency in the unit testing API, and there is a similar gap in the web testing API. I too have wanted a way to disregard timers around failures, but I think we are out of luck here. I believe an enhancement request to Microsoft is needed.
There is (possibly) a (lengthy) workaround: add your own timer (using the Stopwatch class) that you can ignore/terminate at will, and also add the relevant code to insert a result row directly into the transactions table in the load test results database (asynchronously for best performance).
But that's awful. It would be much easier if the API simply offered a 'KillTimer' method.
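A minimal sketch of the Stopwatch half of that workaround might look like this; it assumes OpenOrder can report success or failure, and WriteTransactionRow is a hypothetical helper standing in for whatever code you write to insert a row into the results database:
var stopwatch = System.Diagnostics.Stopwatch.StartNew();
bool succeeded = OpenOrder("123");   // assumes OpenOrder returns success/failure
stopwatch.Stop();
if (succeeded)
{
    // Record the timing only for successful operations; failed ones are simply dropped.
    WriteTransactionRow("trans_01", stopwatch.Elapsed);   // hypothetical insert into the results DB
}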