Rx programming - how to get previous messages from MessageBroker?

I use UniRx (C#), which tries to resemble RxJS and other Rx implementations.
I am trying to make sure my network-dependent objects initialize only after the data has arrived.
Some of my objects are created, and Subscribe, later than MSGPlayerDataLoaded actually fired, so they never reach OnPlayerDataLoaded:
protected virtual void Awake()
{
    MessageBroker.Default
        .Receive<BaseMessage>()
        .Where(msg => msg.id == GameController.MSGPlayerDataLoaded)
        .Subscribe(msg => OnPlayerDataLoaded());
}
Is it possible to look into the past and grab old events emitted since the creation of the MessageBroker?
From the RxJS documentation I suspect that something like withLatestFrom could be of help, but it would need a dummy auxiliary stream, which looks hacky.
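For reference, the general Rx tool for "seeing the past" is a ReplaySubject, which buffers emissions and replays them to late subscribers (UniRx also provides one), which MessageBroker's default implementation does not do. A minimal RxJS/TypeScript sketch of the idea, with a hypothetical message shape standing in for BaseMessage:

import { ReplaySubject } from 'rxjs';

// Hypothetical message shape mirroring BaseMessage / MSGPlayerDataLoaded above.
interface BaseMessage { id: string; }
const MSGPlayerDataLoaded = 'MSGPlayerDataLoaded';

// An unbounded ReplaySubject buffers every message it has seen.
const broker = new ReplaySubject<BaseMessage>();

broker.next({ id: MSGPlayerDataLoaded });   // fired before anyone subscribes

// A subscriber created later still receives the buffered message.
broker.subscribe(msg => {
    if (msg.id === MSGPlayerDataLoaded) {
        console.log('OnPlayerDataLoaded');
    }
});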

Related

Are hot, non-completing database observables an Rx use case? Side-effect writing issue

This is more of an opinion question: should this pattern, which many people use, be an Rx use case?
In apps there is usually an SQL database, which the UI queries as an observable that emits once the query has loaded and again any time the data changes (Room / SqlDelight etc.).
Reads sound okay; however, is it possible to have "pure" writes to the database?
Writing to the database might look like this:
fun sync() = Completable.fromCallable {
    // do something
    database.writeSomethingSynchronously()
}

SomeUi {
    init {
        database.someQueryObservable()
            .subscribe { show list }
    }
}
Imagine you want to display a progress bar while this Completable is in flight.
What is effectively happening here is a side effect on the database: the open database observable will re-emit when the data is written, but still before sync() returns (assuming a single thread for simplicity).
So there is a point in time where the new data is already shown in the UI while the progress bar is still visible (and the timings get worse with multithreading). This is an invalid state.
In the imperative world, sync would provide a completion callback, in which one would reload the query manually and show/hide the progress bar synchronously. (And somehow block the database change listener for the duration of the sync writes?)
Is there a way around this at all?
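One way to sidestep the race, sketched below in RxJS/TypeScript rather than RxJava and with hypothetical names throughout: model the "sync in flight" flag as a stream and gate the query emissions on it, so the list can never refresh while the progress bar is still showing.

import { BehaviorSubject, combineLatest } from 'rxjs';
import { filter, map } from 'rxjs/operators';

// Stand-ins for the Room/SqlDelight query stream and the sync state.
const queryResults$ = new BehaviorSubject<string[]>(['old row']);
const syncInFlight$ = new BehaviorSubject<boolean>(false);

// Only let query emissions through while no sync is running.
const visibleResults$ = combineLatest([queryResults$, syncInFlight$]).pipe(
    filter(([, inFlight]) => !inFlight),
    map(([rows]) => rows)
);

function sync() {
    syncInFlight$.next(true);
    // Stand-in for writeSomethingSynchronously(): the write makes the query
    // re-emit before the sync is reported as finished...
    queryResults$.next(['old row', 'new row']);
    // ...but the gated stream holds that emission back until the flag drops.
    syncInFlight$.next(false);
}

visibleResults$.subscribe(rows => console.log('show list', rows));
syncInFlight$.subscribe(inFlight => console.log('progressbar', inFlight));
sync();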

Difference between "Signals" (js-signals) and "Observables" (mobx, mobx-react)?

Could they work together for perfect state management and bidirectional data binding?
MobX implements the observable pattern in JavaScript. Using mobx and mobx-react, people can reference MobX observables in React and attach autorun, reaction, and computed routines to them. Every time an observable's reference relationships change, the autorun, reaction, and computed routines fire.
This is really helpful when you develop a rich content application, say an editor.
js-signals works differently: a signal can register callbacks along with their priority. When a component changes, the programmer can choose to dispatch the signal to fire all associated callbacks (just like events).
Which pattern is better, and could they work together smoothly?
Background
I am working on an editor which uses signals intensively. I also prefer to use the observable pattern to manage the editor's state. My personal feeling is that once the observable state grows (say, to 200 global variables), it becomes hard to maintain.
I would appreciate your thoughts. Developers who have succeeded with these techniques are welcome to share.
js-signals is just an event emitter library and mobx is just a state/observer library.
You can simply fire and handle events. As long as the handler wraps the changes to MobX state in a mobx action, the state changes are handled properly, React components are updated properly, and observer events fire properly:
class Store {
    @mobx.observable name = "test";
}

var store = new Store();

// custom object that dispatches a `started` signal
var myObject = {
    started: new signals.Signal()
};

function onStarted(name) {
    mobx.runInAction(() => {
        store.name = name;
    });
}

myObject.started.add(onStarted); // add listener

mobx.observe(store, "name", change => {
    myObject.started.dispatch(change.name + 'x'); // whoops, now we have an infinite loop!
});

myObject.started.dispatch('foo'); // dispatch signal, passing custom parameters
// myObject.started.remove(onStarted); // remove a single listener
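To avoid the feedback loop flagged in the comment above, one option (a sketch, not part of the original answer) is to guard the re-dispatch with a flag, replacing the mobx.observe call above, so a change that originated from the signal handler is not dispatched again:

// Assumes the same `mobx`, `store` and `myObject` as above.
let dispatching = false;

mobx.observe(store, "name", change => {
    if (dispatching) return;   // this change came from the signal handler; don't re-dispatch
    dispatching = true;
    try {
        myObject.started.dispatch(change.newValue + 'x');
    } finally {
        dispatching = false;
    }
});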

Using Observables to process queue messages which require a callback at end of processing?

This is a bit of a conceptual question, so let me know if it's off topic.
I'm looking at writing yet another library to process messages off a queue - in this case an Azure storage queue. It's pretty easy to create an observable and throw a message into it every time a message is available.
However, there's a snag here that I'm not sure how to handle. The issue is this: when you're done processing the message, you need to call an API on the storage queue to actually delete the message. Otherwise the visibility timeout will expire and the message will reappear to be dequeued again.
As an example, here's how this loop looks in C#:
public event EventHandler<string> OnMessage;

public void Run()
{
    while (true)
    {
        // Read message
        var message = queue.GetMessage();
        if (message != null)
        {
            // Run any handlers
            OnMessage?.Invoke(this, message.AsString);
            // Delete off queue when done
            queue.DeleteMessage(message);
        }
        else
        {
            Thread.Sleep(2500);
        }
    }
}
The important thing here is that we read the message, trigger any registered event handlers to do things, then delete the message after the handlers are done. I've omitted error handling here, but in general if the handler fails we should NOT delete the message, but instead let it return to visibility automatically and get redelivered later.
How do you handle this kind of thing using Rx? Ideally I'd like to expose the observable for anyone to subscribe to. But I need to do stuff at the end of processing for that message, whatever the "end" happens to mean here.
I can think of a couple of possible solutions, but I don't really like any of them. One would be to have the library call a function supplied by the consumer that takes in the source observable, hooks up whatever it wants, then returns a new observable that the library subscribes to in order to do the final cleanup. But that seems pretty limiting, as consumers basically only get one shot to hook up to the messages.
I guess I could put the call to delete the message after the call to onNext, but then I don't know whether the processing succeeded or failed, unless there's some sort of back channel in that API that I don't know about?
Any ideas/suggestions/previous experience here?
Try having a play with this:
IObservable<int> source =
    Observable
        .Range(0, 3)
        .Select(x =>
            Observable
                .Using(
                    () => Disposable.Create(() => Console.WriteLine($"Removing {x}")),
                    d => Observable.Return(x)))
        .Merge();

source.Subscribe(x => Console.WriteLine($"Processing {x}"));
It produces:
Processing 0
Removing 0
Processing 1
Removing 1
Processing 2
Removing 2

Observing when stream is unsubscribed

I have an RxJS observable stream that I'm sharing like the following:
var sub = Observable.create(obs => {
    // logic here
    return () => {
        // call rest service to notify server
    };
})
.publish()
.refCount();
When the last subscriber unsubscribes, I need to make a REST request. The obvious choice is to add that call to the returned cleanup function - but then you have broken out of any observable sequence, and errors etc. aren't easily handled.
I could just use a Subject, push a value onto it in the cleanup function, and observe it elsewhere with the REST call hanging off that.
Ideally I'd do something like concatenating my REST call onto the disposed stream (concat obviously wouldn't work, as the stream isn't completing).
Does anyone have any suggestions for the cleanest way of handling this? All the options above seem a bit clunky and I feel like I've missed something.
You could implement a finally(...) on your stream that does the cleanup.
The finally is executed automatically when the stream finalizes (error or complete).
Note: this will not work when you unsubscribe manually without calling complete on your stream.
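A sketch of the Subject-based idea from the question, in RxJS/TypeScript (notifyServer is a hypothetical stand-in for the REST call): the teardown only signals, while the actual request lives in an ordinary observable chain where errors can be handled.

import { Observable, Subject, EMPTY, of } from 'rxjs';
import { publish, refCount, mergeMap, catchError } from 'rxjs/operators';

// Hypothetical stand-in for the real REST call.
const notifyServer = (): Observable<void> => of(undefined);

const teardown$ = new Subject<void>();

const shared$ = new Observable<number>(obs => {
    // logic here
    return () => teardown$.next();   // runs when the last subscriber unsubscribes
}).pipe(publish(), refCount());

// The REST call and its error handling stay inside an observable sequence.
teardown$.pipe(
    mergeMap(() => notifyServer()),
    catchError(err => {
        console.error('notify failed', err);
        return EMPTY;
    })
).subscribe();

const sub = shared$.subscribe();
sub.unsubscribe();                   // last subscriber gone -> teardown -> REST call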

Closing over java.util.concurrent.ConcurrentHashMap inside a Future of Actor's receive method?

I've an actor where I want to store my mutable state inside a map.
Clients can send Get(key:String) and Put(key:String,value:String) messages to this actor.
I'm considering the following options.
Don't use futures inside the Actor's receive method. This may have a negative impact on both latency and throughput if I have a large number of gets/puts, because all operations will be performed in order.
Use java.util.concurrent.ConcurrentHashMap and then invoke the gets and puts inside a Future.
Given that java.util.concurrent.ConcurrentHashMap is thread-safe and provides a finer level of granularity, I was wondering if it is still a problem to close over the ConcurrentHashMap inside a Future created for each put and get.
I'm aware that it's generally a really bad idea to close over mutable state inside a Future inside an Actor, but I'm still interested to know whether in this particular case it is correct or not.
In general, java.util.concurrent.ConcurrentHashMap is made for concurrent use. As long as you don't try to transport the closure to another machine, and you think through the implications of it being used concurrently (e.g. if you read a value, use a function to modify it, and then put it back, do you want to use the replace(key, oldValue, newValue) method to make sure it hasn't changed while you were doing the processing?), it should be fine in Futures.
Maybe a little late, but still: in the book Reactive Web Applications, the author shows an indirect approach to this specific problem, using pipeTo as below.
def receive = {
  case ComputeReach(tweetId) =>
    fetchRetweets(tweetId, sender()) pipeTo self
  case fetchedRetweets: FetchedRetweets =>
    followerCountsByRetweet += fetchedRetweets -> List.empty
    fetchedRetweets.retweets.foreach { rt =>
      userFollowersCounter ! FetchFollowerCount(
        fetchedRetweets.tweetId, rt.user
      )
    }
  ...
}
where followerCountsByRetweet is mutable state of the actor. The result of fetchRetweets(), which is a Future, is piped back to the same actor as a FetchedRetweets message; the actor then handles that message and modifies its own state. This mitigates any concurrent operations on the state.
