Multiple publishers in a session, how to subscribe to just one? - opentok

I have an application that has multiple publishers and I want to go from one to the other, subscribe for a few seconds, and move on.
I put a unique id in the Data property when I created the tokens.
How can I select a specific stream from the session object? Or, more simply, how do I select the desired stream?

OpenTok QA staff here,
You're right. For every publisher that creates a stream, you will receive a streamCreated event. So, you can store the streamIds, and subscribe to them in a loop, for instance.

I think I have the answer but I'm not sure it's the best way.
On the streamCreated event, I capture event.stream in a hash table, with the key set to the person's id. Then I call the subscribe method on the session object, pass in the stream from the hash table, and set the target to the publishing person's video element.
Seems to work fine but I've done so much guessing, I'm not sure if it's luck or correct!
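For reference, here is roughly what that looks like as a sketch (the token data format, the element id, the parsing helper, and the 5-second window are all just illustrative):

```typescript
// Sketch: keep a hash of streams keyed by the id stored in the token data,
// then subscribe to whichever one you want. Assumes the token was created
// with data like "userId=123".
declare const OT: any; // global provided by opentok.js
declare const apiKey: string, sessionId: string, token: string;

const streamsById = new Map<string, any>(); // person id -> Stream

const session = OT.initSession(apiKey, sessionId);

session.on("streamCreated", (event: any) => {
  // connection.data carries whatever was put in the token's data field
  const personId = parseUserId(event.stream.connection.data);
  streamsById.set(personId, event.stream);
});

// Subscribe to one person's stream for a few seconds, then move on.
function watch(personId: string, targetElementId: string): void {
  const stream = streamsById.get(personId);
  if (!stream) return;
  const subscriber = session.subscribe(stream, targetElementId);
  setTimeout(() => session.unsubscribe(subscriber), 5000);
}

function parseUserId(data: string): string {
  return new URLSearchParams(data).get("userId") ?? data; // e.g. "userId=123"
}

session.connect(token);
```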

Slack API: Channel name to conversation ID

I have an old Slack app that relies on channels.join, and I need to migrate to conversations.join. The two APIs take different keys: channels.join accepts the channel name, while conversations.join requires a conversation ID. How do you convert between them? conversations.list -> loop?
You will probably have to loop. The new API is very much ID-focused, and the advice is to store the IDs of the channels you want to join, not their names, because names can change at any time. It feels like they're saying "you can join by name if you want, but we won't make it easy for you" :) It's probably good advice, tbh.
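If you do end up looping, a sketch with @slack/web-api might look like this (untested, but it paginates with next_cursor the way conversations.list expects):

```typescript
// Sketch: resolve a channel name to its ID by paging through
// conversations.list, then join with conversations.join.
import { WebClient } from "@slack/web-api";

const client = new WebClient(process.env.SLACK_TOKEN);

async function findChannelId(name: string): Promise<string | undefined> {
  let cursor: string | undefined;
  do {
    const res = await client.conversations.list({ cursor, limit: 200 });
    const match = res.channels?.find((c) => c.name === name);
    if (match?.id) return match.id;
    cursor = res.response_metadata?.next_cursor || undefined;
  } while (cursor);
  return undefined;
}

async function joinByName(name: string): Promise<void> {
  const id = await findChannelId(name);
  if (id === undefined) throw new Error(`no channel named ${name}`);
  await client.conversations.join({ channel: id });
}
```

And as above: once you've resolved an ID, store it, so the loop only ever runs once per channel.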

Is it possible to define a single saga which will process many messages?

My team is considering whether we can use MassTransit as our primary solution for sagas over RabbitMQ (vs. NServiceBus). I admit that our experience with solutions like MassTransit and NServiceBus is minimal, and we have only just started to introduce messaging into our system, so I'm sorry if my question is simple or even stupid.
However, when I reviewed the MassTransit documentation, I was left unsure whether one of our cases can be solved.
The case looks like this:
One of our components will produce up to 100 messages to be sent to a queue. These messages are the result of a single operation in the system. All of the messages will have the same correlation ID and the same internal publication ID.
1) Is it possible to define a single saga instance (by correlation ID) which will wait until it receives all the messages from the queue and then process them as a single batch?
2) Otherwise, is there any solution to ensure that all of the sent messages were processed (batch consistency?)? I assume that the correlation ID will serve as the way to find the existing saga instance (a singleton). Ideally, I would like to complete the saga instance once the system has processed every message belonging to a single group (one publication).
I looked at CompositeEvent too, but I'm not sure whether I could use it to ensure that every message was processed before completing the saga for a specific correlation ID.
Can you explain how this could be achieved, and which mechanism I should look at in order to correlate many messages with the same ID to a single saga and then complete it once all of the messages have been consumed?
Thank you in advance for any response
What you describe is how correlation by id works. It is like that out of the box.
So, in short - when you configure correlation for your messages correctly, all messages with the same correlation id will be handled by the same saga instance.
Concerning the second question - unless you publish a separate event that informs the saga how many messages it should expect, how would it know? You can definitely schedule a long timeout and assume that all the messages will be received by the saga within it, but that's not reliable.
Composite events won't help here: they exist so that messages of different types can be handled as one once all of them have arrived, but they don't count the number of messages of each type. A composite event just waits for one message of each type.
The ability to receive a series of messages and then operate on them in a batch is a common case, so much so that there is a sample showing how to do just that:
Batch Sample
Each saga instance has a unique correlation identifier, and as long as those messages can be correlated to that single instance, MassTransit will manage the concurrency (either optimistic or pessimistic, and depending upon the saga storage engine).
I'd suggest reviewing the state machine in the sample, and seeing how that compares to your scenario.
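To make the completion logic concrete without reproducing the C# sample, here is a language-neutral sketch of count-based completion. None of the names below are MassTransit's actual API; they just express the logic the sample's state machine implements (a separate event tells the saga how many messages to expect, and the saga counts arrivals until it can complete):

```typescript
// Sketch of count-based completion: a "batch announced" event carries the
// expected count, and the saga completes once that many messages (all with
// the same correlation ID) have been consumed.
interface BatchSagaState {
  correlationId: string;
  expected?: number; // unknown until the announcement arrives
  received: number;
  completed: boolean;
}

const sagas = new Map<string, BatchSagaState>();

function getSaga(correlationId: string): BatchSagaState {
  let saga = sagas.get(correlationId);
  if (saga === undefined) {
    saga = { correlationId, received: 0, completed: false };
    sagas.set(correlationId, saga);
  }
  return saga;
}

// The producer announces how many messages the operation emitted.
function onBatchAnnounced(correlationId: string, count: number): void {
  const saga = getSaga(correlationId);
  saga.expected = count;
  completeIfDone(saga);
}

// The individual messages may arrive in any order relative to the
// announcement, so completion is checked on every event.
function onBatchMessage(correlationId: string): void {
  const saga = getSaga(correlationId);
  saga.received += 1;
  completeIfDone(saga);
}

function completeIfDone(saga: BatchSagaState): void {
  if (!saga.completed && saga.expected !== undefined && saga.received >= saga.expected) {
    saga.completed = true; // process the whole batch, then finalize the saga
  }
}
```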

How to handle and update a shared hash map in an actor system?

Hi, I have metadata in a HashMap object that needs to be accessed and looked up inside my Actor class. If the key is found, I should use it in the actor's business logic. If it is not found, I need to create the key and value and update the HashMap object.
How should this be handled in an actor system? As you know, several actor instances could hit the same not-found key simultaneously and each generate it, which would result in duplicates and inconsistency. What is the industry standard for handling this scenario? Please advise on how to handle it.
If I understand you correctly, your situation is such that you have one hash map that is accessed in multiple actors, and you want to know how to keep a consistent state in the hash map across all the actors.
There should be one publisher actor and several subscriber actors. The publisher holds the canonical copy of the hash map. The publisher first sends a copy of the hash map to all the subscriber actors. These subscribers start performing business logic on the hash map.
When the business logic in a subscriber actor wants to update the hash map, it sends a message to the publisher actor. The subscriber does not update the hash map in its local actor state. Instead, it waits for the published hash map from the publisher.
The publisher actor accepts the key-value pair from the subscriber and uses it to update its canonical copy of the hash map. It then publishes that updated hash map to all the subscribers.
There are two ways for the subscriber to send its key-value pair to the publisher: one asynchronous, the other synchronous. The first uses tell, the second uses ask. Tell is lightweight; ask is heavyweight. Ask has the advantage that there is no gap between sending the update to the publisher and receiving the updated hash map back.
Of course, the subscriber that did the ask will receive two copies of the updated hash map: first as the response to the ask, and again when the publisher publishes the hash map to all the subscribers. This does not cause any issues, since Akka guarantees that messages sent first (between the same pair of actors) will be received first. For that reason, and because no local updates are occurring on the hash map, the second published version of the hash map will never cause a recent write to be temporarily lost. Just to be safe, you may want to include a flag in the published message telling the subscribers which one of them can ignore it, since that subscriber has already received the map as the response to its ask.
This solution guarantees a consistent hash map state, but this synchronization method may not be adequate for your application: the subscriber actors can overwrite each other's changes, and that may not be what you want. To prevent this, it may be necessary to partition the business logic across the various subscriber actors.
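To make the protocol concrete, here is a rough sketch of the message flow. Akka itself is JVM-based, so this is a language-neutral TypeScript rendering in which direct method calls stand in for tell, and all the names are invented:

```typescript
// All writes funnel through one publisher, which owns the canonical map;
// subscribers never mutate their local copy, they only replace it when a
// new version is published.
type UpdateRequest = { kind: "update"; key: string; value: string };
type PublishedMap = { kind: "published"; map: ReadonlyMap<string, string> };

interface Subscriber {
  receive(msg: PublishedMap): void;
}

class PublisherActor {
  private canonical = new Map<string, string>();
  private subscribers: Subscriber[] = [];

  register(sub: Subscriber): void {
    this.subscribers.push(sub);
    sub.receive({ kind: "published", map: new Map(this.canonical) });
  }

  receive(msg: UpdateRequest): void {
    // Single authority: only create the key if it does not already exist,
    // so two workers that both miss the same key cannot create duplicates.
    if (!this.canonical.has(msg.key)) {
      this.canonical.set(msg.key, msg.value);
    }
    const snapshot: ReadonlyMap<string, string> = new Map(this.canonical);
    for (const sub of this.subscribers) {
      sub.receive({ kind: "published", map: snapshot });
    }
  }
}

class WorkerActor implements Subscriber {
  private view: ReadonlyMap<string, string> = new Map();

  constructor(private publisher: PublisherActor) {}

  receive(msg: PublishedMap): void {
    this.view = msg.map; // replace, never mutate locally
  }

  lookupOrCreate(key: string): string {
    const existing = this.view.get(key);
    if (existing !== undefined) return existing;
    // Not found: request creation. In this synchronous sketch the published
    // update lands before we return; with real actors this is where you
    // would use ask, or wait for the next published message.
    this.publisher.receive({ kind: "update", key, value: `generated-${key}` });
    return this.view.get(key)!;
  }
}

// Usage: the canonical value wins even if two workers race on the same key.
const publisher = new PublisherActor();
const worker = new WorkerActor(publisher);
publisher.register(worker);
console.log(worker.lookupOrCreate("some-key")); // "generated-some-key"
```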

CQRS+ES: Client log as event

I'm developing a small CQRS+ES framework and building applications with it. In my system, I need to log certain client actions and use them for analytics and statistics, and maybe in the future do something with them in the domain. For example, a client (on the web) downloads some resource(s), and I need to save the date, time, type (download, partial, ...), region or country (maybe from the IP), etc. Afterwards, in some view, the client can see the download count or some more complex report. I'm not sure how to implement this feature.
The first solution is to create an analytics context with some aggregate: on each client action, send a command like IncreaseDownloadCounter(resourceId), handle the command, raise domain events, and update the view. But in this scenario the download has already occurred before I send the command, so it is not really a command, and on top of that version conflicts increase.
The second solution is to raise an event from the client side and update the view model based on it. But with this kind of handling, my event is not stored in the event store, because it is not raised by a command and never changes any domain context. And if I do store it in the event store, there is no aggregate to handle it when it is fetched for some other use.
The third solution is to raise an event from the client side and store it in another database, perhaps with a dedicated table for each event type. But then I have multiple event stores with different schemas, which makes it difficult to recreate view models and trace events when rebuilding context state, so if I later add a domain that uses this type of event, the events are hard to use.
What is the best approach and solution for this scenario?
First solution creates analytic context and some aggregate
Unquestionably the wrong answer; the event has already happened, so it is too late for the domain model to complain.
What you have is a stream of events. Putting them in the same event store that you use for your aggregate event streams is fine. Putting them in a separate store is also fine. So you are going to need some other constraint to make a good choice.
Typically, reads vastly outnumber writes, so one concern might be that these events are going to saturate the domain store. That might push you towards storing these events separately from your data model (prior art: we keep the business data in our persistent book of record, but the sequence of HTTP requests received by the server is written to a log instead...)
If you are supporting an operational view, push back on the requirement that the state be recovered after a restart. You might be able to get by with building your view off an in-memory model of the event counts, and use something more practical for the representation of the events.
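As a rough illustration of the separate-store option, the client-event stream can be as simple as an append-only log plus projections folded over it (all the names below, and the in-memory array standing in for real storage, are placeholders):

```typescript
// Client events go to their own append-only stream, with no aggregate and
// no domain validation; read models are projections folded over the stream.
interface ClientEvent {
  eventId: string;
  type: "download" | "partial-download";
  resourceId: string;
  occurredAt: string; // ISO timestamp
  country?: string;   // e.g. resolved from the request IP
}

const analyticsStream: ClientEvent[] = []; // stands in for a real store

function recordClientEvent(event: ClientEvent): void {
  analyticsStream.push(event); // append-only, never updated in place
}

// Projection: rebuildable at any time by re-folding over the stream,
// exactly like a view built from domain events.
function downloadCount(resourceId: string): number {
  return analyticsStream.filter(
    (e) => e.type === "download" && e.resourceId === resourceId
  ).length;
}
```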
Thanks for your complete answer. So I should create something like the ES schema minus some fields (aggregate name or type, version, etc.), collect client events in that repository, and have some offline process read them and update the read model, or create commands to do something in the domain space?
Something like that, yes. If the view for the client doesn't actually require any validation by your model at all, then building the read model from the externally provided events is fine.
Are you recommending saving some claim or authorization token of the user and sender app, for validation in another process?
Maybe, maybe not. The token describes the authority of the event; our own event handler is the authority for the command(s) that is/are derived from the events. It's an interesting question that probably requires more context -- I'd suggest you open a new question on that point.

Tokbox - don't let the same user publish twice

If a user is publishing to a tokbox session and for any reason that same user logs in on a different device or re-opens the session in another browser window I want to stop the 2nd one from publishing.
Luckily, on the metadata for the streams, I am saving the user id, so when there is a list of streams it's easy to see if an existing stream belongs to the user that is logged in.
When a publisher gets initialized the following happens:
Listen for session.on("streamCreated"); when it fires, subscribe to the new stream
Start publishing
The problem is, when the session gets initialized, there is no way to inspect the current streams of the session to see if this user is already publishing. We don't know what the streams are until the on("streamCreated") callback fires.
I have a hunch that there is an easy solution that I am missing. Any ideas?
I assume that when you said you save the user ID on the stream metadata, that means when you initialize the Publisher, you set the "name" property. That's a great technique.
My idea is slightly hacky, but it's the best I can come up with right now. I would solve this problem by essentially breaking the subscription of streams into 2 phases:
all streams created before this client connection
all streams created after
During #1, I would check each stream's "name" property to see if it belongs to the user at this client connection. If it does, then you know they are entering the session twice and you can set a flag (let's call it "userRejoining"). In order to know when #1 is complete, I would set a timer (this is why I call it a hack) for a reasonable amount of time, such as 1 second, each time a "streamCreated" event arrives, removing any previous timer.
Then, if the "userRejoining" flag is not set, the Publisher is initialized and published to the session.
During #2, you just subscribe to any stream that is created.
The downside is that you've now delayed your user experience of publishing by ~1 second everywhere. In larger group scenarios this could be a deal breaker, but in smaller (1:1) types of sessions this should be acceptable. I hope this explanation is clear, and if not I can try to write some sample code for you.
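Here is a rough sketch of what that could look like with opentok.js (untested; it assumes the Publisher's "name" is set to the user ID, and the element id and 1-second window are placeholders):

```typescript
declare const OT: any; // global provided by opentok.js
declare const apiKey: string, sessionId: string, token: string;
declare const currentUserId: string; // your app's logged-in user ID

const session = OT.initSession(apiKey, sessionId);
let userRejoining = false;
let decided = false; // becomes true once phase 1 is considered over
let timer: ReturnType<typeof setTimeout> | undefined;

// Phase 1 is "over" when no streamCreated has arrived for ~1 second.
function armTimer(): void {
  if (timer !== undefined) clearTimeout(timer);
  timer = setTimeout(() => {
    decided = true;
    if (!userRejoining) {
      const publisher = OT.initPublisher("publisher-element", {
        name: currentUserId,
      });
      session.publish(publisher);
    }
  }, 1000);
}

session.on("streamCreated", (event: any) => {
  if (event.stream.name === currentUserId) {
    userRejoining = true; // this user already has a live stream
  }
  session.subscribe(event.stream); // phases 1 and 2 both subscribe
  if (!decided) armTimer();        // each burst event resets the window
});

session.connect(token, (err: any) => {
  if (err) return console.error(err);
  armTimer(); // covers the case where the session has no streams yet
});
```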
