Xcode Unit Testing, wait in `class setUp()`

I'm currently adding a lot of unit tests to my app to check whether my web services (WS) are in a running state.
I know how to wait inside a test method using expectations.
What I want is less common. I have a test case that works on a user's favorites data. To be sure of the state of this user, I want to create a completely new user before the test case (and not before each test).
So I want to use:
public class func setUp() {
    // call the WS here (async call)
}
My issue is that, since I'm in a class func, I cannot use expectation and waitForExpectation, because those are instance methods.
Does anybody have an idea how to wait for my WS call to complete before executing the tests?
Thanks!

You can use the semaphore technique to accomplish what you want.
let fakeWSCallProcessingTimeInSeconds = 2.0
let delayTime = dispatch_time(DISPATCH_TIME_NOW, Int64(fakeWSCallProcessingTimeInSeconds * Double(NSEC_PER_SEC)))
let semaphore = dispatch_semaphore_create(0)

dispatch_after(delayTime, dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0)) {
    dispatch_semaphore_signal(semaphore)
}

let timeoutInNanoSeconds = 5 * 1000000000
let timeout = dispatch_time(DISPATCH_TIME_NOW, Int64(timeoutInNanoSeconds))
if dispatch_semaphore_wait(semaphore, timeout) != 0 {
    XCTFail("WS operation timed out")
}
This code fakes a web service call with a two-second delay (dispatch_after), which in your case should be replaced with the actual call. Once the call is done and your user object is set up, you use dispatch_semaphore_signal(semaphore) to free up the semaphore. If it isn't signaled within 5 seconds (see timeoutInNanoSeconds), the test is treated as failed. Obviously, you can alter the values as you wish.
The rest of the tests will run either after the semaphore is signaled or when the timeout occurs.
More info on semaphores can be found in Apple docs.

I found it myself. I should not use expectations, because the setup part is not part of the testing itself.
I should instead use a dispatch group:
override class func setUp() {
    super.setUp()
    let group = dispatch_group_create()
    dispatch_group_enter(group)
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) {
        let account = MyAccount()
        WS.worker.createAccount(account, password: "test") { (success, error) in
            dispatch_group_leave(group)
        }
    }
    dispatch_group_wait(group, dispatch_time(DISPATCH_TIME_NOW, Int64(50 * Double(NSEC_PER_SEC))))
}

Related

Does CoroutineScope(SupervisorJob()) run in the Main scope?

I was doing this codelab:
https://developer.android.com/codelabs/android-room-with-a-view-kotlin#13
and I have a question about the following code:
class WordsApplication : Application() {
    // No need to cancel this scope as it'll be torn down with the process
    val applicationScope = CoroutineScope(SupervisorJob())

    // Using by lazy so the database and the repository are only created when they're needed
    // rather than when the application starts
    val database by lazy { WordRoomDatabase.getDatabase(this, applicationScope) }
    val repository by lazy { WordRepository(database.wordDao()) }
}
private class WordDatabaseCallback(
    private val scope: CoroutineScope
) : RoomDatabase.Callback() {
    override fun onCreate(db: SupportSQLiteDatabase) {
        super.onCreate(db)
        INSTANCE?.let { database ->
            scope.launch {
                var wordDao = database.wordDao()
                // Delete all content here.
                wordDao.deleteAll()
                // Add sample words.
                var word = Word("Hello")
                wordDao.insert(word)
                word = Word("World!")
                wordDao.insert(word)
                // TODO: Add your own words!
                word = Word("TODO!")
                wordDao.insert(word)
            }
        }
    }
}
This is the code I found; as you can see, it calls scope.launch(...) directly.
My question is:
aren't all Room operations supposed to run in a non-UI scope? Could someone help me understand this? Thanks so much!
Does CoroutineScope(SupervisorJob()) run in the Main scope?
No. By default CoroutineScope() uses Dispatchers.Default, as can be found in the documentation:
CoroutineScope() uses Dispatchers.Default for its coroutines.
Aren't all Room operations supposed to run in a non-UI scope?
I'm not very familiar with Room specifically, but generally speaking it depends on whether the operation is suspending or blocking. You can run suspend functions from any dispatcher/thread. The deleteAll() and insert() functions in the example are marked as suspend, so you can call them from both UI and non-UI threads.
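As a rough illustration (a minimal sketch, not from the codelab; the function names and the Dispatchers.IO choice are my own assumptions), a suspend function can take care of its own threading, which is why calling it from a scope on Dispatchers.Default (or even Main) is fine. Room's generated suspend DAO methods behave similarly and move the query off the calling thread themselves:

import kotlinx.coroutines.*

// Stand-in for a suspend DAO method: it switches to an IO dispatcher internally,
// so the caller's dispatcher does not matter.
suspend fun deleteAll() = withContext(Dispatchers.IO) {
    println("deleting on ${Thread.currentThread().name}")
}

fun main() = runBlocking {
    val applicationScope = CoroutineScope(SupervisorJob()) // uses Dispatchers.Default
    applicationScope.launch {
        deleteAll() // safe to call from any dispatcher
    }.join()
}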

How to wait for the next @Composable function in a Jetpack Compose test?

Suppose I have three @Composable functions: Start, Loading, and Result.
In the test, I call the Start function and click the Begin button on it, which calls the Loading function.
The Loading function displays the loading procedure, takes some time, and then calls the Result function.
The Result function renders a field with the text OK.
How do I wait in the test for the Result function to be drawn (or wait a few seconds) and check that the text OK is rendered?
composeTestRule
    .onNodeWithText("Begin")
    .performClick()

// Here you have to wait ...

composeTestRule
    .onNodeWithText("OK")
    .assertIsDisplayed()
You can use the waitUntil function, as suggested in the comments:
composeTestRule.waitUntil {
    composeTestRule
        .onAllNodesWithText("OK")
        .fetchSemanticsNodes().size == 1
}
There's a request to improve this API, but in the meantime you can get the helpers from this blog post and use them like so:
composeTestRule.waitUntilExists(hasText("OK"))
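For reference, such a helper can be written as a small extension on ComposeContentTestRule; the sketch below is an assumption based on the usage above, not the blog post's exact code:

import androidx.compose.ui.test.SemanticsMatcher
import androidx.compose.ui.test.junit4.ComposeContentTestRule

// Waits until at least one node matches, or fails when the timeout is reached.
fun ComposeContentTestRule.waitUntilExists(
    matcher: SemanticsMatcher,
    timeoutMillis: Long = 1_000L
) {
    waitUntil(timeoutMillis) {
        onAllNodes(matcher).fetchSemanticsNodes().isNotEmpty()
    }
}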
So the options are:
1. Write to a global variable which function was last called. The disadvantage is that you have to register this in every function.
2. Subscribe to the state of the screen through the viewmodel and track when the expected state arrives. The disadvantage is that you have to pull the viewmodel into the test and know the code. The plus is that the test executes quickly and does not sit idle, as is the case with a timer.
3. The one I chose: I wrote a function that starts an asynchronous timer, so the application keeps running, the test waits, and after the timer ends the test continues its checks. The disadvantage is that you have to set the timer with a margin of time for the operation to complete, so the test spends a long time idling. The advantage is that you don't have to dig into the source code.
I implemented the function like this:
fun asyncTimer(delay: Long = 1000) {
    AsyncTimer.start(delay)
    composeTestRule.waitUntil(
        condition = { AsyncTimer.expired },
        timeoutMillis = delay + 1000
    )
}

object AsyncTimer {
    var expired = false
    fun start(delay: Long = 1000) {
        expired = false
        Timer().schedule(delay) {
            expired = true
        }
    }
}
Then I created a base class for my tests; when starting to write a new test, I inherit from it and have the necessary functionality ready to use.
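A minimal sketch of what such a base class could look like (the class name and rule setup are my assumptions, not the original poster's code; it reuses the AsyncTimer object from above):

import androidx.compose.ui.test.junit4.createComposeRule
import org.junit.Rule

abstract class BaseComposeTest {
    @get:Rule
    val composeTestRule = createComposeRule()

    // Same helper as above, now available to every test that inherits from this class.
    fun asyncTimer(delay: Long = 1000) {
        AsyncTimer.start(delay)
        composeTestRule.waitUntil(
            condition = { AsyncTimer.expired },
            timeoutMillis = delay + 1000
        )
    }
}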
Based on Pitry's answer I created this extension function:
fun ComposeContentTestRule.waitUntilTimeout(
    timeoutMillis: Long
) {
    AsyncTimer.start(timeoutMillis)
    this.waitUntil(
        condition = { AsyncTimer.expired },
        timeoutMillis = timeoutMillis + 1000
    )
}

object AsyncTimer {
    var expired = false
    fun start(delay: Long = 1000) {
        expired = false
        Timer().schedule(delay) {
            expired = true
        }
    }
}
Usage in a Compose test:
composeTestRule.waitUntilTimeout(2000L)

Noticed strange behavior, not sure if I'm using coroutineScope incorrectly

I have a list of data processors. Each data processor has its own dpProcess(), which returns true if the process completed successfully.
All of the processor.dpProcess() calls run in parallel with each other (via the launch builder),
but the final onAllDataProcessed() should only be called after all data processors have completed.
This is achieved with the coroutineScope {} block.
The coroutineScope {} block is also controlled by withTimeoutOrNull(2000), so that if the data processing is not completed within 2 seconds it is aborted.
suspend fun processData(): Boolean {
    var allDone = dataProcesserList.size > 0
    val result = withTimeoutOrNull(2000) { // should timeout at 2000
        coroutineScope { // block the execution sequence until all sub coroutines are completes
            for (processor in dataProcesserList) {
                launch { // launch each individual process in parallel
                    allDone = allDone && processor.dpProcess() // update the global final allDone, if anyone returns false it should be return false
                }
            }
        }
    }

    /**
     * strange: one of the processor.dpProcess() has return false, but the allDone at here still shows true (and it is not timed out)
     */
    if (result != null && allDone) { // not timeout and all process completed successfully
        onAllDataProcessed()
    } else {
        allDone = false // timeout or some process failed
    }
    return allDone
}
I noticed strange behavior, and it happens more often on an Android 7.1.1 device than on others:
sometimes the final allDone is true even though one of the processor.dpProcess() calls returned false and nothing timed out,
but the same case sometimes returns false as expected.
Is it because of how this is coded?
What is a better way to run a few functions in parallel, wait until all of them are done, and only then advance to the next step,
with the whole process timing out and aborting if it takes too long?
The problem is here:

launch { // launch each individual process in parallel
    allDone = allDone && processor.dpProcess() // update the global final allDone
}

launch may run its block on a different thread, so there is concurrent access to the shared allDone: one coroutine may read it before another has updated it, and an update can be lost.
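A common way to avoid the shared mutable flag is to have each coroutine return its result and combine the results afterwards, e.g. with async/awaitAll. The sketch below is one possible rewrite under that assumption; it reuses dataProcesserList and onAllDataProcessed from the question:

import kotlinx.coroutines.*

suspend fun processData(): Boolean {
    if (dataProcesserList.isEmpty()) return false
    val allDone = withTimeoutOrNull(2000) {          // null if not finished within 2 s
        coroutineScope {
            dataProcesserList
                .map { processor -> async { processor.dpProcess() } } // run in parallel
                .awaitAll()                                           // wait for every result
                .all { it }                                           // true only if all succeeded
        }
    } ?: false                                       // timeout counts as failure
    if (allDone) onAllDataProcessed()
    return allDone
}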

Time-based cache for REST client using RxJs 5 in Angular2

I'm new to ReactiveX/RxJS and I'm wondering if my use case can be implemented smoothly with RxJS, preferably with a combination of built-in operators. Here's what I want to achieve:
I have an Angular 2 application that communicates with a REST API. Different parts of the application need to access the same information at different times. To avoid hammering the servers by firing the same request over and over, I'd like to add client-side caching. The caching should happen in a service layer, where the network calls are actually made. This service layer then just hands out Observables. The caching must be transparent to the rest of the application: it should only be aware of Observables, not the caching.
So initially, a particular piece of information from the REST API should be retrieved only once per, let's say, 60 seconds, even if a dozen components request this information from the service within those 60 seconds. Each subscriber must be given the (single) last value from the Observable upon subscription.
Currently, I managed to achieve exactly that with an approach like this:
public getInformation(): Observable<Information> {
    if (!this.information) {
        this.information = this.restService.get('/information/')
            .cache(1, 60000);
    }
    return this.information;
}
In this example, restService.get(...) performs the actual network call and returns an Observable, much like Angular's Http service.
The problem with this approach is refreshing the cache: while it makes sure the network call is executed exactly once, and that the cached value is no longer pushed to new subscribers after 60 seconds, it doesn't re-execute the initial request after the cache expires. So subscriptions that occur after the 60-second cache window will not be given any value from the Observable.
Would it be possible to re-execute the initial request if a new subscription happens after the cache timed out, and to re-cache the new value for 60sec again?
As a bonus: it would be even cooler if existing subscriptions (e.g. those who initiated the first network call) would get the refreshed value whose fetching had been initiated by the newer subscription, so that once the information is refreshed, it is immediately passed through the whole Observable-aware application.
I figured out a solution to achieve exactly what I was looking for. It might go against ReactiveX nomenclature and best practices, but technically, it does exactly what I want it to. That being said, if someone still finds a way to achieve the same with just built-in operators, I'll be happy to accept a better answer.
So basically since I need a way to re-trigger the network call upon subscription (no polling, no timer), I looked at how the ReplaySubject is implemented and even used it as my base class. I then created a callback-based class RefreshingReplaySubject (naming improvements welcome!). Here it is:
export class RefreshingReplaySubject<T> extends ReplaySubject<T> {

    private providerCallback: () => Observable<T>;
    private lastProviderTrigger: number;
    private windowTime;

    constructor(providerCallback: () => Observable<T>, windowTime?: number) {
        // Cache exactly 1 item forever in the ReplaySubject
        super(1);
        this.windowTime = windowTime || 60000;
        this.lastProviderTrigger = 0;
        this.providerCallback = providerCallback;
    }

    protected _subscribe(subscriber: Subscriber<T>): Subscription {
        // Hook into the subscribe method to trigger refreshing
        this._triggerProviderIfRequired();
        return super._subscribe(subscriber);
    }

    protected _triggerProviderIfRequired() {
        let now = this._getNow();
        if ((now - this.lastProviderTrigger) > this.windowTime) {
            // Data considered stale, provider triggering required...
            this.lastProviderTrigger = now;
            this.providerCallback().first().subscribe((t: T) => this.next(t));
        }
    }
}
And here is the resulting usage:
public getInformation(): Observable<Information> {
    if (!this.information) {
        this.information = new RefreshingReplaySubject(
            () => this.restService.get('/information/'),
            60000
        );
    }
    return this.information;
}
To implement this, you will need to create your own observable with custom logic on subscription:
function createTimedCache(doRequest, expireTime) {
    let lastCallTime = 0;
    let lastResult = null;
    const result$ = new Rx.Subject();
    return Rx.Observable.create(observer => {
        const time = Date.now();
        if (time - lastCallTime < expireTime) {
            return (lastResult
                // when result already received
                ? result$.startWith(lastResult)
                // still waiting for result
                : result$
            ).subscribe(observer);
        }
        const disposable = result$.subscribe(observer);
        lastCallTime = time;
        lastResult = null;
        doRequest()
            .do(result => {
                lastResult = result;
            })
            .subscribe(v => result$.next(v), e => result$.error(e));
        return disposable;
    });
}
The resulting usage would be the following:
this.information = createTimedCache(
    () => this.restService.get('/information/'),
    60000
);
Usage example: https://jsbin.com/hutikesoqa/edit?js,console

Struggling with asynchronous patterns using NSURLSession

I'm using Xcode 7 and Swift 2 but my question isn't necessarily code specific, I'll gladly take help of any variety.
In my app I have a list of favorites. Due to the API's TOS I can't store any data, so I just keep a stub I can use for lookup when the user opens the app. I also have to look up each favorite one by one, as there is no batch method. Right now I have something like this:
self.api.loadFavorite(id, completion: { (event, errorMessage) -> Void in
    if errorMessage == "" {
        if let rc = self.refreshControl {
            dispatch_async(dispatch_get_main_queue()) { () -> Void in
                rc.endRefreshing()
            }
        }
        dispatch_async(dispatch_get_main_queue()) { () -> Void in
            self.viewData.append(event)
            self.viewData.sortInPlace({ $0.eventDate.compare($1.eventDate) == NSComparisonResult.OrderedDescending })
            self.tableView.reloadData()
        }
    } else {
        // some more error handling here
    }
})
In api.loadFavorite I'm making a typical urlSession.dataTaskWithURL call, which is itself asynchronous.
You can see what happens here: the results are loaded in one by one, and after each one the view refreshes. This does work, but it's not optimal; for long lists you get noticeable flickering as the view sorts and refreshes.
I want to be able to get all the results and then refresh just once. I tried putting a dispatch group around the api.loadFavorite calls, but the async calls in dataTaskWithURL don't seem to be bound by that group. I also tried putting the dispatch group around just the dataTaskWithURL, but didn't have any better luck. The dispatch_group_notify always fires before all the data tasks are done.
Am I going at this all wrong? (Probably.) I considered switching to synchronous calls on a background thread, since the API only allows one connection per client anyway, but that just feels like the wrong approach.
I'd love to know how to get async calls that make other async calls grouped up so that I can get a single notification to update my UI.
For the record, I've read about every dispatch group thread I could find here and I haven't been able to make any of them work. Most examples on the web are very simple: a series of prints in a dispatch group with a sleep to prove the case.
Thanks in advance.
If you want to invoke your method loadFavorite asynchronously in a loop for all favorite IDs, executing them in parallel, you can achieve this with a new method as shown below:
func loadFavorites(ids: [Int], completion: ([Event], ErrorType?) -> ()) {
    var count = ids.count
    var events = [Event]()
    if count == 0 {
        dispatch_async(dispatch_get_global_queue(0, 0)) {
            completion(events, nil)
        }
        return
    }
    let sync_queue = dispatch_queue_create("sync_queue", dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL, QOS_CLASS_USER_INITIATED, 0))
    for i in ids {
        self.api.loadFavorite(i) { (event, message) in
            dispatch_async(sync_queue) {
                if message == "" {
                    events.append(event)
                    if --count == 0 {
                        dispatch_async(dispatch_get_global_queue(0, 0)) {
                            completion(events, nil)
                        }
                    }
                } else {
                    // handle error
                }
            }
        }
    }
}
Note:
- Use a sync queue in order to synchronise access to the shared array events and the counter!
- Use a global dispatch queue where you invoke the completion handler!
Then call it like below:
self.loadFavorites(favourites) { (events, error) in
    if (error == nil) {
        // sort a copy, since the closure parameter itself is immutable
        let sortedEvents = events.sort({ $0.eventDate.compare($1.eventDate) == NSComparisonResult.OrderedDescending })
        dispatch_async(dispatch_get_main_queue()) { () -> Void in
            self.viewData = sortedEvents
            self.tableView.reloadData()
        }
    }
    if let rc = self.refreshControl {
        dispatch_async(dispatch_get_main_queue()) { () -> Void in
            rc.endRefreshing()
        }
    }
}
Note also that you need a different approach if you want to ensure that your calls to loadFavorite execute sequentially.
If you need to support cancellation (well, who doesn't?), you might try to cancel the NSURLSession's tasks. However, in this case I would recommend utilising a third-party library which already supports cancellation of network tasks.
Alternatively, and in order to greatly simplify asynchronous problems like these, build your network task and any other asynchronous task around a general utility class, frequently called Future or Promise. A future represents an eventual result and is quite lightweight. Futures are also composable: you can define continuations which get invoked when the future completes, which in turn return yet another future where you can add more continuations, and so forth. See the wiki article on Futures and Promises.
There are a couple of implementations in Swift and Objective-C. Ideally, these should also support cancellation. Unfortunately, I don't know of any Swift library implementing Futures or Promises which supports cancellation at this time, except my own library, which is not yet open source.
Another library which helps to solve common and also very complex asynchronous patterns is ReactiveCocoa, though it has a very steep learning curve and adds quite a lot of code to your project.
This is what finally worked for me. It was easy once I figured it out. My problem was trying to take Objective-C examples and rework them for Swift.
func migrateFavorites(completion: (error: Bool) -> Void) {
    let migrationGroup = dispatch_group_create()
    let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

    // A lot of other code in there, fetching some core data etc

    dispatch_group_enter(migrationGroup)
    self.api.loadFavorite(id, completion: { (event, errorMessage) -> Void in
        if errorMessage == "" {
            if let rc = self.refreshControl {
                dispatch_async(dispatch_get_main_queue()) { () -> Void in
                    rc.endRefreshing()
                }
            }
            dispatch_async(dispatch_get_main_queue()) { () -> Void in
                self.viewData.append(event)
                self.viewData.sortInPlace({ $0.eventDate.compare($1.eventDate) == NSComparisonResult.OrderedDescending })
                self.tableView.reloadData()
            }
        } else {
            // some more error handling here
        }
        dispatch_group_leave(migrationGroup)
    })

    dispatch_group_notify(migrationGroup, queue) { () -> Void in
        NSLog("Migration Queue Complete")
        dispatch_async(dispatch_get_main_queue()) { () -> Void in
            completion(error: migrationError)
        }
    }
}
The key was:
- ENTER the group just before the async call.
- LEAVE the group as the last line in the completion handler.
As I mentioned, all of this is wrapped up in a function, so I put the function's completion handler inside dispatch_group_notify. That way I call this function and its completion handler only gets invoked when all the async tasks are complete. Back on my main thread I check for the error and refresh the UI.
Hopefully this helps someone with the same problem.
