How to use coroutines to remodel / replace a callback implementation - kotlin-coroutines

I am trying to replace the use of callbacks with coroutines. I have an implementation using callbacks and am not sure what the right approach is, or whether coroutines could help.
This is the callback-based implementation.
It has a Repository class that provides data from either the local database or the remote network.
class Repository {
    var callback: ICallback? = null   // the callback is provided by the caller
    var isReady = false
    var data: Data? = null
    var firstTimeCall = true          // only need to read from the database on the first call

    fun getData(cb: ICallback) {
        callback = cb
        isReady = false
        if (firstTimeCall) {
            firstTimeCall = false
            data = doGetDataFromDatabase()   // synchronous call to read from the database
            isReady = true
            callback?.onComplete(this)
        }
        isReady = false
        doGetDataFromNetwork { result ->     // async call, lambda as the callback
            data = result
            isReady = true
            saveToDatabase(result)
            callback?.onComplete(this)
        }
    }
}
repository.getData() can be called multiple times. Only on the first call will it return the data from the database first, then fetch from the network, save it, and call callback.onComplete() to return the data.
On any later call it will only fetch from the network, save, and return the data through the callback.
The use cases are:
Directly using a Repository, like
repository.getData() -- first call
repository.getData() -- called again later
There are multiple repositories, and the data from each one is aggregated into a final result.
For this case there is an Aggregator that holds the repositories and provides an onComplete() callback to process the data once all repositories are ready.
class Aggregator {
    var repos = ArrayList<Repository>()

    fun getData() {
        for (r in repos) {
            Thread {
                r.getData(callback)
            }.start()
        }
    }

    fun processData(data: ArrayList<Data>) {
        // ......
    }

    val callback = object : ICallback {
        override fun onComplete(repo: Repository) {
            val hasAllComplete = repos.all { it.isReady }
            if (hasAllComplete) {
                val finalData = ArrayList<Data>()
                for (r in repos) {
                    finalData.add(r.data!!)
                }
                processData(finalData)
            }
        }
    }
}
So in the case where there are two Repository instances, Aggregator.getData() will fetch data from both repositories.
When one Repository completes its getData() call, it calls back into the callback's onComplete(), where the Aggregator checks whether all repositories are ready so the data can be processed.
The same callback flow is used for the network call as well.
Question:
In this case, how can I change this to use coroutines, so that the network fetch only starts after the database reads have completed for both repositories, without using callbacks?

I'm not sure if it's relevant anymore, but you could have a look at callbackFlow.
More info here:
https://medium.com/@elizarov/callbacks-and-kotlin-flows-2b53aa2525cf#1aaf
I have a similar problem, and I think this might be the solution to it.
Make sure you also read more about Flow and its usage before actually using it, since there are some caveats with handling exceptions (exception transparency), etc.
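For the aggregation flow in the question itself, here is a rough sketch of what a coroutine-based version could look like, without any callbacks. It is only an illustration: Data, getFromDatabase, getFromNetwork and the stub bodies are made-up placeholders, and it assumes the real database/network calls can be exposed as suspend functions (for example by wrapping the callback-based network API with suspendCancellableCoroutine).

import kotlinx.coroutines.*

data class Data(val value: String)

class Repository(private val name: String) {
    private var firstTimeCall = true

    // Placeholder implementations standing in for the real database/network code.
    private fun doGetDataFromDatabase(): Data = Data("$name:db")
    private fun doGetDataFromNetwork(): Data = Data("$name:network")
    private fun saveToDatabase(data: Data) { /* persist */ }

    // Read from the database only on the first call; blocking work runs on Dispatchers.IO.
    suspend fun getFromDatabase(): Data? = withContext(Dispatchers.IO) {
        if (firstTimeCall) {
            firstTimeCall = false
            doGetDataFromDatabase()
        } else {
            null
        }
    }

    // Fetch from the network, persist the result, and return it.
    suspend fun getFromNetwork(): Data = withContext(Dispatchers.IO) {
        val fresh = doGetDataFromNetwork()
        saveToDatabase(fresh)
        fresh
    }
}

class Aggregator(private val repos: List<Repository>) {
    private fun processData(data: List<Data>) {
        println("processing $data")
    }

    suspend fun getData() = coroutineScope {
        // Phase 1: read all databases in parallel and wait until every repository is done.
        val cached = repos.map { async { it.getFromDatabase() } }.awaitAll().filterNotNull()
        if (cached.isNotEmpty()) processData(cached)

        // Phase 2: only after all database reads have completed, hit the network in parallel.
        val fresh = repos.map { async { it.getFromNetwork() } }.awaitAll()
        processData(fresh)
    }
}

fun main() = runBlocking {
    Aggregator(listOf(Repository("a"), Repository("b"))).getData()
}

awaitAll() is what guarantees that the network phase only starts after every repository has finished its database read, which is exactly the ordering asked for in the question.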

Related

Does CoroutineScope(SupervisorJob()) run in the Main scope?

I was doing this codelab:
https://developer.android.com/codelabs/android-room-with-a-view-kotlin#13
and I have a question.
class WordsApplication : Application() {
    // No need to cancel this scope as it'll be torn down with the process
    val applicationScope = CoroutineScope(SupervisorJob())

    // Using by lazy so the database and the repository are only created when they're needed
    // rather than when the application starts
    val database by lazy { WordRoomDatabase.getDatabase(this, applicationScope) }
    val repository by lazy { WordRepository(database.wordDao()) }
}
private class WordDatabaseCallback(
    private val scope: CoroutineScope
) : RoomDatabase.Callback() {

    override fun onCreate(db: SupportSQLiteDatabase) {
        super.onCreate(db)
        INSTANCE?.let { database ->
            scope.launch {
                var wordDao = database.wordDao()

                // Delete all content here.
                wordDao.deleteAll()

                // Add sample words.
                var word = Word("Hello")
                wordDao.insert(word)
                word = Word("World!")
                wordDao.insert(word)

                // TODO: Add your own words!
                word = Word("TODO!")
                wordDao.insert(word)
            }
        }
    }
}
This is the code I found. As you can see, it directly calls scope.launch(...).
My question is: aren't all Room operations supposed to run in a non-UI scope? Could someone help me understand this? Thanks so much!
Does CoroutineScope(SupervisorJob()) run in the Main scope?
No. By default CoroutineScope() uses Dispatchers.Default, as can be found in the documentation:
CoroutineScope() uses Dispatchers.Default for its coroutines.
Aren't all Room operations supposed to run in a non-UI scope?
I'm not very familiar with Room specifically, but generally speaking it depends on whether the operation is suspending or blocking. You can run suspend functions from any dispatcher/thread. The deleteAll() and insert() functions in the example are marked as suspend, so you can call them from both UI and non-UI threads.
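As a small illustration (not code from the codelab), you can check which thread such a scope actually uses:

import kotlinx.coroutines.*

fun main() = runBlocking {
    // CoroutineScope() without an explicit dispatcher falls back to Dispatchers.Default,
    // so this scope is NOT tied to the main/UI thread.
    val applicationScope = CoroutineScope(SupervisorJob())

    applicationScope.launch {
        // Prints a DefaultDispatcher worker thread, not "main".
        println("running on ${Thread.currentThread().name}")
        // A suspend DAO function like wordDao.deleteAll() could safely be called here.
    }.join()
}

Room runs suspend DAO queries off the calling thread anyway, which is why launching them from this scope (or even from a main-thread scope) is fine.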

RxJava2: have remote data override local data in an Observable

Currently I have a method in a repository class which fetches data from both a local cache and a remote API.
public Observable<List<Items>> getItemsForUser(String userId) {
    return Observable.concatArrayEager(
        getUserItemsLocal(userId),  // returns Observable<List<Items>>
        getUserItemsRemote(userId)  // returns Observable<List<Items>>
    );
}
Currently, the method fetches the local data first (which may be outdated) and returns it, then updates it with the fresh data from the remote API.
I want to change the implementation to use Observable.merge so that if the remote API request completes first, that data gets shown first. However, if I just use Observable.merge I'm concerned that the local database request may return stale data, which will then overwrite the fresh data from the remote.
Basically, I want something like:
public Observable<List<ShoutContent>> getItemsForUser(String userId, ErrorCallback errorCallback) {
    return Observable.merge(
        getUserItemsRemote(userId),
        getUserItemsLocal(userId)
            .useOnlyIfFirstResponse()
    );
}
So if the remote API request completes first, then that response is the only one that gets returned. But if the local request completes first, I want to return that, and then return the remote request once it is completed. Does RxJava have anything like this built in?
Edit: I would like to add that getUserItemsRemote does update the local database when the Observable emits, but I don't think that I can ensure that the database will be updated before the local request completes, which leaves the possibility that the local request will respond with stale data.
You can make use of the takeUntil operator.
takeUntil returns an Observable that emits the items emitted by the source Observable until a second ObservableSource emits an item.
In your case, you need to stop observing the local observable once the remote Observable emits. The code is demonstrated below.
public Observable<String> getUserItemsLocal() {
    return Observable.just("Local db response")
        .delay(5, TimeUnit.SECONDS); // assume the local db takes 5 seconds to emit
}

public Observable<String> getUserItemsRemote() {
    return Observable.just("Remote Data")
        .delay(1, TimeUnit.SECONDS); // remote data comes quicker, in 1 second
}
Your repository code then goes like this:
Observable<String> remoteResponse = getUserItemsRemote();

getUserItemsLocal().takeUntil(remoteResponse)
    .mergeWith(remoteResponse)
    .subscribe(new Consumer<String>() {
        @Override
        public void accept(String s) throws Exception {
            Log.d(TAG, "result: " + s);
        }
    });
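Applied to the original repository idea, a rough Kotlin sketch with the same RxJava2 operators could look like this. The share() call is my own addition (an assumption, not something the operators require) so that takeUntil and mergeWith reuse a single subscription to the remote source instead of triggering it twice; the two source functions are stand-ins mirroring the demo timings above.

import io.reactivex.Observable
import java.util.concurrent.TimeUnit

// Stand-ins for the real local/remote sources, mirroring the timings in the answer.
fun getUserItemsLocal(): Observable<String> =
    Observable.just("Local db response").delay(5, TimeUnit.SECONDS)

fun getUserItemsRemote(): Observable<String> =
    Observable.just("Remote Data").delay(1, TimeUnit.SECONDS)

fun getItemsForUser(): Observable<String> {
    // share() so takeUntil and mergeWith trigger only one remote subscription.
    val remote = getUserItemsRemote().share()

    return getUserItemsLocal()
        .takeUntil(remote)  // drop the (possibly stale) local emission once remote has emitted
        .mergeWith(remote)
}

fun main() {
    getItemsForUser().blockingSubscribe { println("result: $it") }
}

With these timings only the remote value is emitted; if the local source were faster, it would be emitted first and the remote result would follow and overwrite it.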

Time-based cache for REST client using RxJs 5 in Angular2

I'm new to ReactiveX/RxJs and I'm wondering if my use-case can be handled smoothly with RxJs, preferably with a combination of built-in operators. Here's what I want to achieve:
I have an Angular2 application that communicates with a REST API. Different parts of the application need to access the same information at different times. To avoid hammering the servers by firing the same request over and over, I'd like to add client-side caching. The caching should happen in a service layer, where the network calls are actually made. This service layer then just hands out Observables. The caching must be transparent to the rest of the application: it should only be aware of Observables, not the caching.
So initially, a particular piece of information from the REST API should be retrieved only once per, let's say, 60 seconds, even if there's a dozen components requesting this information from the service within those 60 seconds. Each subscriber must be given the (single) last value from the Observable upon subscription.
Currently, I managed to achieve exactly that with an approach like this:
public getInformation(): Observable<Information> {
    if (!this.information) {
        this.information = this.restService.get('/information/')
            .cache(1, 60000);
    }
    return this.information;
}
In this example, restService.get(...) performs the actual network call and returns an Observable, much like Angular's http Service.
The problem with this approach is refreshing the cache: while it makes sure the network call is executed exactly once, and that the cached value will no longer be pushed to new subscribers after 60 seconds, it doesn't re-execute the initial request after the cache expires. So subscriptions that occur after the 60-second cache has expired will not receive any value from the Observable.
Would it be possible to re-execute the initial request if a new subscription happens after the cache timed out, and to re-cache the new value for 60sec again?
As a bonus: it would be even cooler if existing subscriptions (e.g. those who initiated the first network call) would get the refreshed value whose fetching had been initiated by the newer subscription, so that once the information is refreshed, it is immediately passed through the whole Observable-aware application.
I figured out a solution to achieve exactly what I was looking for. It might go against ReactiveX nomenclature and best practices, but technically, it does exactly what I want it to. That being said, if someone still finds a way to achieve the same with just built-in operators, I'll be happy to accept a better answer.
So basically since I need a way to re-trigger the network call upon subscription (no polling, no timer), I looked at how the ReplaySubject is implemented and even used it as my base class. I then created a callback-based class RefreshingReplaySubject (naming improvements welcome!). Here it is:
export class RefreshingReplaySubject<T> extends ReplaySubject<T> {

    private providerCallback: () => Observable<T>;
    private lastProviderTrigger: number;
    private windowTime;

    constructor(providerCallback: () => Observable<T>, windowTime?: number) {
        // Cache exactly 1 item forever in the ReplaySubject
        super(1);
        this.windowTime = windowTime || 60000;
        this.lastProviderTrigger = 0;
        this.providerCallback = providerCallback;
    }

    protected _subscribe(subscriber: Subscriber<T>): Subscription {
        // Hook into the subscribe method to trigger refreshing
        this._triggerProviderIfRequired();
        return super._subscribe(subscriber);
    }

    protected _triggerProviderIfRequired() {
        let now = this._getNow();
        if ((now - this.lastProviderTrigger) > this.windowTime) {
            // Data considered stale, provider triggering required...
            this.lastProviderTrigger = now;
            this.providerCallback().first().subscribe((t: T) => this.next(t));
        }
    }
}
And here is the resulting usage:
public getInformation(): Observable<Information> {
    if (!this.information) {
        this.information = new RefreshingReplaySubject(
            () => this.restService.get('/information/'),
            60000
        );
    }
    return this.information;
}
To implement this, you will need to create your own observable with custom logic on subscription:
function createTimedCache(doRequest, expireTime) {
    let lastCallTime = 0;
    let lastResult = null;
    const result$ = new Rx.Subject();

    return Rx.Observable.create(observer => {
        const time = Date.now();
        if (time - lastCallTime < expireTime) {
            return (lastResult
                // when result already received
                ? result$.startWith(lastResult)
                // still waiting for result
                : result$
            ).subscribe(observer);
        }

        const disposable = result$.subscribe(observer);
        lastCallTime = time;
        lastResult = null;
        doRequest()
            .do(result => {
                lastResult = result;
            })
            .subscribe(v => result$.next(v), e => result$.error(e));
        return disposable;
    });
}
and the resulting usage would be the following:
this.information = createTimedCache(
    () => this.restService.get('/information/'),
    60000
);
usage example: https://jsbin.com/hutikesoqa/edit?js,console

Angular2: Example with multiple http calls (typeahead) with observables

So I am working on a couple of cases in my app where I need the following to happen:
When the event is triggered, do the following:
check if the data for that context is already cached; if so, serve the cached data
if there is no cache, debounce 500ms
check if other http calls are running (for the same context) and cancel them
make the http call
on success, cache and update/replace the model data
Pretty much standard when it comes to typeahead functionality.
I would like to use observables for this, in a way that lets me cancel previous calls if they are still running.
Any good tutorials on that? I was looking around and couldn't find anything remotely up to date.
OK, to give you some idea of what I have done so far:
onChartSelection(chart: any) {
    let date1: any, date2: any;
    try {
        date1 = Math.round(chart.xAxis[0].min);
        date2 = Math.round(chart.xAxis[0].max);

        let data = this.tableService.getCachedChartData(this.currentTable, date1, date2);
        if (data) {
            this.table.data = data;
        } else {
            if (this.chartTableRes) {
                this.chartTableRes.unsubscribe();
            }
            this.chartTableRes = this.tableService.getChartTable(this.currentTable, date1, date2)
                .subscribe(
                    data => {
                        console.log(data);
                        this.table.data = data;
                        this.chartTableRes = null;
                    },
                    error => {
                        console.log(error);
                    }
                );
        }
    } catch (e) {
        throw e;
    }
}
The debounce is missing here
-- I ended up implementing lodash's debounce:
import {debounce} from 'lodash';
...
onChartSelectionDebounced: Function;

constructor(...) {
    ...
    this.onChartSelectionDebounced = debounce(this.onChartSelection, 200);
}
For the debounce you can use Underscore.js. The function will look this way:
onChartSelection: Function = _.debounce((chart: any) => {
    ...
});
Regarding cancellation of the Observable, it is better to use the Observable method share(). In your case you should change the method getChartTable in your tableService by adding .share() to the Observable that you return.
This way only one call will be made to the server even if you subscribe to it multiple times (without this, every new subscription will invoke a new call).
Take a look at: What is the correct way to share the result of an Angular 2 Http network call in RxJs 5?

Non-Blocking Endpoint: Returning an operation ID to the caller - Would like to get your opinion on my implementation?

Boot Pros,
I recently started programming in Spring Boot and I stumbled upon a question on which I would like to get your opinion.
What I am trying to achieve:
I created a Controller that exposes a GET endpoint, named nonBlockingEndpoint. This nonBlockingEndpoint executes a pretty long operation that is resource heavy and can run between 20 and 40 seconds (in the attached code, it is mocked by a Thread.sleep()).
Whenever the nonBlockingEndpoint is called, the Spring application should register that call and immediately return an operation ID to the caller.
The caller can then use this ID to query the status of that operation on another endpoint, queryOpStatus. At the beginning it will be started, and once the controller is done serving the request it will be set to a code such as SERVICE_OK. The caller then knows that his request was successfully completed on the server.
The solution that I found:
I have the following controller (note that it is explicitly not annotated with @Async).
It uses an APIOperationsManager to register that a new operation was started.
I use the CompletableFuture Java construct to run the long-running code as a new async task by using CompletableFuture.supplyAsync(() -> {}).
I immediately return a response to the caller, telling them that the operation is in progress.
Once the async task has finished, I use cf.thenRun() to update the operation status via the APIOperationsManager.
Here is the code:
@GetMapping(path = "/nonBlockingEndpoint")
public @ResponseBody ResponseOperation nonBlocking() {

    // Register a new operation
    APIOperationsManager apiOpsManager = APIOperationsManager.getInstance();
    final int operationID = apiOpsManager.registerNewOperation(Constants.OpStatus.PROCESSING);

    ResponseOperation response = new ResponseOperation();
    response.setMessage("Triggered non-blocking call, use the operation id to check status");
    response.setOperationID(operationID);
    response.setOpRes(Constants.OpStatus.PROCESSING);

    CompletableFuture<Boolean> cf = CompletableFuture.supplyAsync(() -> {
        try {
            // Here the long running, resource-heavy work would run (mocked by a sleep)
            Thread.sleep(10000L);
        } catch (InterruptedException e) {}
        // whatever the return value was
        return true;
    });

    cf.thenRun(() -> {
        // We are done with the super long process, so update our Operations Manager
        APIOperationsManager a = APIOperationsManager.getInstance();
        boolean asyncSuccess = false;
        try { asyncSuccess = cf.get(); }
        catch (Exception e) {}

        if (true == asyncSuccess) {
            a.updateOperationStatus(operationID, Constants.OpStatus.OK);
            a.updateOperationMessage(operationID, "success: The long running process has finished and this is your result: SOME RESULT");
        }
        else {
            a.updateOperationStatus(operationID, Constants.OpStatus.INTERNAL_ERROR);
            a.updateOperationMessage(operationID, "error: The long running process has failed.");
        }
    });

    return response;
}
Here is also the APIOperationsManager.java for completeness:
public class APIOperationsManager {

    private static APIOperationsManager instance = null;
    private Vector<Operation> operations;
    private int currentOperationId;

    private static final Logger log = LoggerFactory.getLogger(Application.class);

    protected APIOperationsManager() {}

    public static APIOperationsManager getInstance() {
        if (instance == null) {
            synchronized (APIOperationsManager.class) {
                if (instance == null) {
                    instance = new APIOperationsManager();
                    instance.operations = new Vector<Operation>();
                    instance.currentOperationId = 1;
                }
            }
        }
        return instance;
    }

    public synchronized int registerNewOperation(OpStatus status) {
        cleanOperationsList();
        currentOperationId = currentOperationId + 1;
        Operation newOperation = new Operation(currentOperationId, status);
        operations.add(newOperation);
        log.info("Registered new Operation to watch: " + newOperation.toString());
        return newOperation.getId();
    }

    public synchronized Operation getOperation(int id) {
        for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                return op;
            }
        }
        Operation notFound = new Operation(-1, OpStatus.INTERNAL_ERROR);
        notFound.setCrated(null);
        return notFound;
    }

    public synchronized void updateOperationStatus(int id, OpStatus newStatus) {
        iteration : for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                op.setStatus(newStatus);
                log.info("Updated Operation status: " + op.toString());
                break iteration;
            }
        }
    }

    public synchronized void updateOperationMessage(int id, String message) {
        iteration : for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                op.setMessage(message);
                log.info("Updated Operation message: " + op.toString());
                break iteration;
            }
        }
    }

    private synchronized void cleanOperationsList() {
        Date now = new Date();
        for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if ((now.getTime() - op.getCrated().getTime()) >= Constants.MIN_HOLD_DURATION_OPERATIONS) {
                log.info("Removed operation from watchlist: " + op.toString());
                iterator.remove();
            }
        }
    }
}
The questions that I have:
Is this concept a valid one that also scales? What could be improved?
Will I run into concurrency issues / race conditions?
Is there a better way to achieve the same thing in Spring Boot that I just haven't found yet (maybe with the @Async annotation)?
I would be very happy to get your feedback.
Thank you so much,
Peter P
It is a valid pattern to submit a long running task with one request, returning an id that allows the client to ask for the result later.
But there are some things I would suggest you reconsider:
do not use an Integer as id, as it allows an attacker to guess ids and to get the results for those ids. Instead use a random UUID.
if you need to restart your application, all ids and their results will be lost. You should persist them to a database.
Your solution will not work in a cluster with many instances of your application, as each instance would only know its 'own' ids and results. This could also be solved by persisting them to a database or Redis store.
The way you are using CompletableFuture gives you no control over the number of threads used for the asynchronous operation. It is possible to do this with standard Java, but I would suggest using Spring to configure the thread pool.
Annotating the controller method with @Async is not an option; that simply does not work that way. Instead, put all asynchronous operations into a simple service and annotate that with @Async (see the sketch after this list). This has some advantages:
You can use this service also synchronously, which makes testing a lot easier
You can configure the thread pool with Spring
The /nonBlockingEndpoint should not return just the id, but a complete link to queryOpStatus, including the id. The client can then directly use this link without any additional information.
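To make the service idea concrete, here is a minimal sketch, written in Kotlin for brevity and purely as an illustration: the class names, the 'operationExecutor' pool name and the pool sizes are made up, and APIOperationsManager / Constants.OpStatus are reused from the question's code (assumed here to be registered as a normal bean instead of a hand-rolled singleton).

import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.scheduling.annotation.Async
import org.springframework.scheduling.annotation.EnableAsync
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor
import org.springframework.stereotype.Service
import java.util.concurrent.Executor

// Classes and methods are 'open' so Spring can proxy them
// (not needed if the kotlin-spring compiler plugin is used).
@Configuration
@EnableAsync
open class AsyncConfig {

    // Spring-managed thread pool used for the long-running operations.
    @Bean("operationExecutor")
    open fun operationExecutor(): Executor = ThreadPoolTaskExecutor().apply {
        corePoolSize = 4
        maxPoolSize = 8
        setQueueCapacity(100)
        initialize()
    }
}

@Service
open class LongRunningOperationService(
    // injected as a bean instead of APIOperationsManager.getInstance()
    private val operations: APIOperationsManager
) {
    // Runs on the configured pool; the controller registers the operation,
    // calls this method and returns immediately.
    @Async("operationExecutor")
    open fun execute(operationId: Int) {
        try {
            Thread.sleep(10000L) // stand-in for the real 20-40 second work
            operations.updateOperationStatus(operationId, Constants.OpStatus.OK)
        } catch (e: Exception) {
            operations.updateOperationStatus(operationId, Constants.OpStatus.INTERNAL_ERROR)
        }
    }
}

Such a service can also be called synchronously in tests, and the thread pool can be tuned in one place in the Spring configuration.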
Additionally there are some low level implementation issues which you may also want to change :
Do not use Vector, it synchronizes on every operation. Use a List instead. Iterating over a List is also much easier, you can use for-loops or streams.
If you need to lookup a value, do not iterate over a Vector or List, use a Map instead.
APIOperationsManager is a singleton. That makes no sense in a Spring application. Make it a normal POJO, create a bean of it, and get it autowired into the controller. Spring beans are singletons by default.
You should avoid doing complicated operations in a controller method. Instead, move everything into a service (which may be annotated with @Async). This makes testing easier, as you can test this service without a web context.
Hope this helps.
Do I need to make database access transactional?
As long as you write/update only one row, there is no need to make this transactional as this is indeed 'atomic'.
If you write/update many rows at once you should make it transactional to guarantee, that either all rows are updated or none.
However, if two operations (possibly from two clients) update the same row, the last one will always win.
