When running an XCTest of an asynchronous operation, calling XCTFail() does not immediately fail the test, which was my expectation. Instead, whatever timeout period remains from the call to wait is exhausted first, which unnecessarily extends the test run and also produces a confusing failure message implying the test failed due to a timeout, when in fact it failed explicitly.
func testFoo() {
    let x = expectation(description: "foo")
    DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
        XCTFail("bar")
    }
    wait(for: [x], timeout: 5)
}
In the above example, although the failure occurs after roughly 2 seconds, the test does not complete until the 5-second timeout has elapsed. When I first noticed this behavior I thought I was doing something wrong, but this seems to just be the way it works, at least with the current version of Xcode (9.2).
Since I didn't find any mention of this via Google or Stack Overflow searches, I'm sharing a workaround I found.
I discovered that the XCTestExpectation can still be fulfilled after calling XCTFail(); doing so does not count as a pass, but it does end the wait immediately. So, applying this to my initial example:
DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
    XCTFail("bar")
    x.fulfill()
}
This may be what Apple intends, but it wasn't intuitive to me and I couldn't find it documented anywhere. Hopefully this saves someone else the time I spent confused.
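Putting it together, the whole test would look something like this (a sketch; FooTests is a placeholder class name, and the 2-second asyncAfter stands in for whatever asynchronous work actually fails):

import XCTest

final class FooTests: XCTestCase {
    func testFoo() {
        let x = expectation(description: "foo")
        DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
            XCTFail("bar")
            // Fulfill even on failure so wait(for:) returns right away
            // instead of burning the rest of the 5-second timeout.
            x.fulfill()
        }
        wait(for: [x], timeout: 5)
    }
}

With this change the test still fails with "bar", but it finishes after roughly 2 seconds rather than 5.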
I have a very different problem here.
I have this code:
xpath = "//div[#class='z-listheader-content'][normalize-space()='Name']/ancestor::table/../following-sibling::div/table"
array = #browser.table(xpath: xpath, visible: true).rows.map { |row| [row.cells[0], row.cells[0].text] }
col = array.filter_map { |x| x if x[1].eql?(result) }
When I execute the above code, it throws the following error:
timed out after 10 seconds, waiting for #<Watir::Row: located: false; {:xpath=>"//div[@class='z-listheader-content'][normalize-space()='Name']/ancestor::table/../following-sibling::div/table", :visible=>true, :tag_name=>"table"} --> {:index=>10}> to be located
10 is actually the last index of the table.
But it works fine if I put sleep 5 before this code segment. I was expecting Watir to wait automatically, but that is not the case. May I know why?
Here is a clearer explanation.
I am printing this line:
p @browser.table(xpath: xpath, visible: true).rows.count
With sleep, this prints 10.
Without sleep, it throws the following error, ending with {:index=>11}> to be located; you can see the full error below:
"timed out after 10 seconds, waiting for #<Watir::Row: located: false; {:xpath=>\"//div[#class='z-listheader-content'][normalize-space()='Name']/ancestor::table/../following-sibling::div/table\", :visible=>true, :tag_name=>\"table\"} --> {:index=>11}> to be located\""
Per the documentation on waiting, there are a few key points that should be noted:
The idea behind implicit waits is good, but there are two main issues with this form of implementation, so Watir does not recommend and does not provide direct access for setting them.
The wait happens during the locate instead of when trying to act on the element. This makes it impossible to immediately query the state of an element before it is there.
Implicit waits by themselves will not be sufficient to handle all of the synchronization issues in your code. The combination of delegating waiting responsibilities to the driver and leveraging polling in the code (explicit waits) can cause weirdness that is difficult to debug...
...The second and recommended approach to waiting in Selenium is to use explicit waits...
...Note that Watir does its automatic waiting when taking actions, not when attempting to locate...
In your case the wait is needed during location, and thus the "automatic" wait you were expecting is not actually how Watir works.
The Watir library does, however, provide mechanisms for explicit waits:
wait_until - waits until a specific condition is true
wait - waits until the readyState of the document is complete
Based on your posted error, it appears that the locate timeout expired before the element could be found.
It is possible that wait_until would resolve this, e.g.:
table = @browser.table(xpath: xpath).wait_until(&:visible?)
puts table.rows.count
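If the rows are still being rendered after the table itself becomes visible, you could also wait on the row count directly before mapping over the rows (a sketch; expected_rows is a hypothetical value you would supply):

# Wait until the table is visible, then until the expected number of
# rows has been rendered, before reading cell text.
table = @browser.table(xpath: xpath)
table.wait_until(&:visible?)
table.wait_until { |t| t.rows.count >= expected_rows }

array = table.rows.map { |row| [row.cells[0], row.cells[0].text] }
col = array.filter_map { |x| x if x[1].eql?(result) }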
I've been attempting to do a bit of performance review on an app I have. It's a back-end Kotlin app that just pulls in some data, does a bit of data transformation, and dumps it out, nothing too fancy. One thing that caught my eye was the final bit of execution, where we dump our final data onto a queue. At first I noticed that when we start up the app, the final network call takes a very long time, sometimes over a second. Normally we run this network call in a coroutine to stop that last call from blocking everything, but I started timing the coroutine and the network call separately and got some odd results: from what I can see, the coroutine can take forever to launch/complete compared to the network call. It's entirely possible I'm not recording things correctly, but this is the general timing approach I have:
val coroutineTime = Instant.now().toEpochMilli()
GlobalScope.launch {
    executionTime = measureTimeMillis { /* do message sending */ }
    totalTime = Instant.now().toEpochMilli() - coroutineTime
    // Log out executionTime and totalTime
}
Now, here is what I'll see, something like:
- totalTime = ~800ms
- executionTime = ~150ms
These aren't one-offs either. I have multiple of these processes going on at once (up to 10 threads, I think), and the first total times will always be significantly longer than the actual executionTime/network call. Eventually, after a few dozen messages, the overhead calms down and the two times become equivalent at about 15ms, but having nearly 700ms of overhead on coroutine start-up seems insane to me.
Is this normal/expected behavior? I've tested this in a separate app and see similar but less extreme results, where the first coroutine takes about 70ms to start up. I'm struggling to find any other examples of this type of discussion outside of Kotlin being used in Android development.
As a first note, it's almost never a good idea to use GlobalScope unless you really know what you're doing. This is why it was marked as a delicate API. You should instead use a scope that is closed appropriately (following the lifecycle of whatever component launches this work).
Now, AFAIK, GlobalScope runs on the default dispatcher, so maybe this is due to a cold start of that default thread pool. Later, it could also be a problem to use this dispatcher for network calls, depending on the number of concurrent coroutines you have. It would be more appropriate to use Dispatchers.IO instead for IO-bound work (or a custom thread pool).
It still doesn't explain the cold start, but I would first change that before investigating.
This is expected behavior if you use coroutines inappropriately ;-)
My guess is that your message sending is a blocking operation. By default, GlobalScope.launch() dispatches coroutines with Dispatchers.Default, which is designed for CPU-intensive operations; it has a limited number of threads and you should never block when using it. If you do, you may run out of threads, and coroutines will have to wait until some blocking operation finishes.
If you need to run blocking or IO code, you should use Dispatchers.IO instead:
GlobalScope.launch(Dispatchers.IO) {
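Applied to the timing snippet from the question, that would look something like the sketch below (sendMessage() is a hypothetical stand-in for the actual message-sending call):

import java.time.Instant
import kotlin.system.measureTimeMillis
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.GlobalScope
import kotlinx.coroutines.launch

// Hypothetical stand-in for the real blocking network/message call.
fun sendMessage() = Thread.sleep(150)

fun main() {
    val coroutineTime = Instant.now().toEpochMilli()
    GlobalScope.launch(Dispatchers.IO) {
        // The blocking call now runs on the IO dispatcher, so it cannot
        // starve Dispatchers.Default's small thread pool.
        val executionTime = measureTimeMillis { sendMessage() }
        val totalTime = Instant.now().toEpochMilli() - coroutineTime
        println("executionTime=${executionTime}ms totalTime=${totalTime}ms")
    }
    Thread.sleep(2000) // keep the JVM alive long enough for the coroutine to log
}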
I was facing a similar issue. I have a function that loads some data from shared prefs and does some calculations on the data (all of this on Dispatchers.Default), then returns the result on Dispatchers.Main. I measured how long it took the coroutine to actually start executing the block inside the Dispatchers.Main launch { } after the calculations were done (the time from tag2 to tag3 below), and got about 950ms (!!). Here is the function:
fun someName() {
    CoroutineScope(Dispatchers.Default).launch {
        val time = System.currentTimeMillis()
        // load data and calculations
        Log.d("tag2", "load and calculations took " + (System.currentTimeMillis() - time))
        CoroutineScope(Dispatchers.Main.immediate).launch {
            Log.d("tag3", "reached main thread code " + (System.currentTimeMillis() - time))
            // do something
            Log.d("tag4", "do something took " + (System.currentTimeMillis() - time))
        }
    }
}
But then I realized this happens during app launch, while the main thread is busy creating all the UI, so even with .immediate it takes time until the main thread gets around to executing the dispatched code. I then tried running this function after the app had already started and was idle, and found that the time from tag2 to tag3 was about 1ms (!!) (with .immediate). So it looks like when you dispatch something from a coroutine, if the target thread isn't busy it starts immediately.
I am attempting to use the Cypress (at 7.6.0) retries feature as per https://docs.cypress.io/guides/guides/test-retries#How-It-Works
but for some reason it does not seem to be working, in that a test that is guaranteed to always fail:
describe('Deliberate fail', function() {
  it('Make an assertion that will fail', function() {
    expect(true).to.be.false;
  });
});
When run from the command line with the config retries set to 1,
npx cypress run --config retries=1 --env server_url=http://localhost:3011 -s cypress/integration/tmp/deliberate_fail.js
it seems to pass, with the only hints that something is being retried being the text "Attempt 1 of 2" and the fact that a screenshot has been taken.
The stats on the run also look illogical:
1 test
0 passing
0 failing
1 skipped (but does not appear as skipped in summary)
Exactly the same behavior occurs when putting the "retries" option in cypress.json, whether as a single number or as options for runMode or openMode.
And in "open" mode, the test does not retry but just fails.
I am guessing that I'm doing something face-palmingly wrong, but what?
I think your problem is that you are not testing anything. Cypress will retry operations that involve the DOM, because the DOM takes time to render. Retrying is more efficient than a straight wait, because it might happen quicker.
So I reckon because you are just comparing 2 literal values, true and false, Cypress says, "Hey, there is nothing to retry here, these two values are never going to change, I'm outta here!"
I was going to say that if you set up a similar test with a DOM element, it might behave as you are expecting, but in fact it will also stop after the first attempt, because once it finds the DOM element, it will stop retrying. The purpose of the retry is to allow the element time to be instantiated, not to retry because the value might be different.
I will admit that I could be wrong in this, but I have definitely convinced myself - what do you think?
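For comparison, the kind of retrying Cypress definitely does is at the command/assertion level against the DOM, something like this sketch (#late-element is a hypothetical selector for an element that appears after a delay):

it('retries the DOM query until the element appears', function() {
  // cy.get() re-queries the DOM until the chained assertion passes or the
  // command times out; it stops retrying as soon as the assertion succeeds.
  cy.get('#late-element').should('be.visible');
});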
Found the cause. To fix the problem that Cypress does not abort a spec file when one of the test steps (the it() steps) fails, I had implemented the workaround for this very old issue: https://github.com/cypress-io/cypress/issues/518
//
// Prevent it just running on if a step fails
// https://github.com/cypress-io/cypress/issues/518
//
afterEach(function() {
  if (this.currentTest.state === 'failed') {
    Cypress.runner.stop()
  }
});
This means that a describe() will stop on a failure, but apparently it does not play well with retries.
My real wished-for use case is to retry at the describe() level, but that may be expensive, as the above issue just got resolved and the solution is only available to those on the Business plan at $300/mo: https://github.com/cypress-io/cypress/issues/518#issuecomment-809514077
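One way to make the workaround play better with retries might be to stop the runner only after the failing test has exhausted its allowed attempts, along these lines (a sketch; it assumes the Mocha test object exposes currentRetry() and retries() the way plain Mocha does):

afterEach(function() {
  const test = this.currentTest;
  // Only abort the spec when the test has failed on its final attempt,
  // so Cypress still gets a chance to retry it first.
  if (test.state === 'failed' && test.currentRetry() >= test.retries()) {
    Cypress.runner.stop();
  }
});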
I have an interesting example, not a real-life task, but anyway:
import { Subject, queueScheduler } from 'rxjs';
import { take, observeOn } from 'rxjs/operators';

const signal = new Subject();
let count = 0;
const somecalculations = (count) => console.log('do some calculations with ', count);

console.log('Start');
signal.pipe(take(1500)/*, observeOn(queueScheduler)*/)
  .subscribe(() => {
    somecalculations(count);
    signal.next(count++);
    console.log('check if reached ', count);
  });
signal.next(count++);
console.log('Stop');
codepen
Subject.next works in a synchronous way, so if I comment out observeOn(queueScheduler) it causes a stack overflow (I control the number of iterations with the take operator, and on my computer, if the number is bigger than 1370, it causes the overflow).
But if I put the queueScheduler there, it works fine. QueueScheduler is synchronous, yet somehow it allows the current onNext handler run to finish before starting the next scheduled run.
Can someone explain this to me in depth, with source-code details? I have tried to dig into it, but with only partial success so far. It is about how observeOn works with QueueScheduler, but the answer is escaping me.
observeOn src QueueScheduler.ts asyncScheduler
Thanks to cartant for the support. It seems I now understand why the queue scheduler works without a stack overflow.
1. When signal.next is called the first time, from observeOn's _next, queueScheduler.schedule -> AsyncScheduler.schedule -> Scheduler.schedule causes QueueAction.schedule to be called.
2. QueueAction.flush is called: this.scheduler.flush -> QueueScheduler.flush -> AsyncScheduler.flush.
3. The first time, the queue is empty and no task is executing, so this.active is false. Because of this, action.execute is called. Everything happens synchronously.
4. action.execute causes the onNext function to run again. So onNext calls signal.next, and it goes through steps 1-3 again, but now this.active is true (because we are actually still inside the previous signal.next run), so the action is just queued.
5. So the second signal.next has been handled, and we return to action.execute of the first signal.next call. The flush works in a do-while loop and shifts actions off the queue one by one. When it finishes running the first signal.next action, there is now one more in the queue from the second, recursive signal.next call, so action.execute runs for the second signal.next.
6. And the situation keeps repeating. The first flush call ends up handling all the other calls: while it is active, each recursive signal.next just adds its action to the queue, and the outer flush call then picks it up from the queue.
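In other words, the flush loop turns what would otherwise be unbounded recursion into iteration. A stripped-down model of the idea (not the actual RxJS source, just the shape of it):

// Simplified model of how queueScheduler's flush behaves: while an action
// is already executing, new actions are queued instead of run recursively,
// and the outer flush loop drains them one by one.
const queue = [];
let active = false;

function schedule(work) {
  queue.push(work);
  if (active) return;        // an outer flush is already draining the queue
  active = true;
  let action;
  while ((action = queue.shift()) !== undefined) {
    action();                // may call schedule() again; that only enqueues
  }
  active = false;
}

Each recursive signal.next therefore adds one entry to the queue and returns, keeping the call stack flat instead of growing it by one frame per emission.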
I am adding latency measurement to a communication middleware I am building. The way I have it working is that I periodically send a probe message from my publishing apps. Subscribing apps receive this probe, cache it, and send an echo back at a time of their choosing, noting how long the message was kept "on hold". The publishing app receives these echoes and calculates latency as (now() - time_sent - time_on_hold) / 2.
This kind of works, but the numbers are vastly different (3x) when "time on hold" is greater than 0. I.e., if I echo the message back immediately I get around 50us on my dev environment, and if I wait and then send the message back, the time jumps to 150us (even though I discount whatever time I was on hold). I use QueryPerformanceCounter for all measurements.
This is all inside a single Windows 7 box. What am I missing here?
TIA.
A bit more information. I am using the following to measure time:
static long long timeFreq;

static struct Init
{
    Init()
    {
        QueryPerformanceFrequency((LARGE_INTEGER*) &timeFreq);
    }
} init;

long long OS::now()
{
    long long result;
    QueryPerformanceCounter((LARGE_INTEGER*) &result);
    return result;
}

double OS::secondsDiff(long long ts1, long long ts2)
{
    return (double) (ts1 - ts2) / timeFreq;
}
On the publish side I do something like:
Probe p;
p.sentTimeStamp = OS::now();
send(p);
Response r = recv();
latency = (OS::secondsDiff(OS::now(), r.sentTimeStamp) - r.secondsOnHoldOnReceiver) / 2;
And on the receiver side:
Probe p = recv();
long long received = OS::now();
sleep();
Response r;
r.sentTimeStamp = p.sentTimeStamp;
r.secondsOnHoldOnReceiver = OS::secondsDiff(OS::now(), received);
send(r);
OK, I have edited my answer to reflect your answer. Sorry for the delay, but I didn't notice that you had elaborated on the question by creating an answer.
It seems that, functionally, you are doing nothing wrong.
I think that when you distribute your application outside of localhost conditions, the additional 100us (if it is indeed roughly constant) will pale into insignificance compared to the average latency of a functioning network.
For the purposes of answering your question, I am thinking that there is a thread/interrupt-scheduling issue on the server side that needs to be investigated, as you do not seem to be doing anything on the client that is not accounted for.
Try the following test scenario:
Send two Probes to clients A and B. (all localhost)
Send the Probe to 'Client B' one second (or X/2 seconds) after you send the probe to Client A.
Ensure that 'Client A' waits for two seconds (or X seconds) and 'Client B' waits one second (or X/2 seconds).
The idea being that, hopefully, both clients will send back their probe answers at roughly the same time, and both after a sleep/wait (performing the action that exposes the problem). The objective is to try to get one of the clients' responses to 'wake up' the publisher, to see if the next client's answer will be processed immediately.
If one of these returned probes does not show the anomaly (most likely the second response), it could point to the fact that the publisher thread is waking from a sleep cycle (on receiving the first response) and is immediately available to process the second response.
Again, if it turns out that the 100us delay is roughly constant, it will be within +-10% of 1ms, which is the timeframe appropriate for real-world network conditions.