I am experimenting with and learning RxJava2, RxAndroid2, and Socket.IO 1.0. So far, the samples, books, and tutorials available are all related to RxJava 1, and there has been a major change in RxJava2, which was rewritten according to the Reactive Streams standard. In Socket.IO, all I need is to establish a connection and then wait for events like EVENT_CONNECT, EVENT_CONNECT_ERROR, and EVENT_CONNECTING, so I can know when the internet connection drops and Socket.IO is trying to re-establish it. Then I need another event listener, like MESSAGE, to know when a message arrives.
After going through the RxJava2 documentation, all I learned is that this can be implemented with Flowable and CompositeDisposable, but I don't know how to use them, nor what the best approach would be beyond that. Even the book I purchased last night covers only RxJava 1.
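To illustrate the direction I'm after, here is a rough sketch of how I imagine those pieces might fit together, assuming the socket.io-client Java library and RxJava 2; the observeEvent helper, the URL, and the logging are just illustrative, and I'm not sure it's the right approach:

```java
import io.reactivex.BackpressureStrategy;
import io.reactivex.Flowable;
import io.reactivex.disposables.CompositeDisposable;
import io.socket.client.IO;
import io.socket.client.Socket;
import io.socket.emitter.Emitter;

public class SocketIoRx {

    // Wraps a single Socket.IO event name as a Flowable of event payloads.
    static Flowable<Object> observeEvent(Socket socket, String event) {
        return Flowable.create(emitter -> {
            Emitter.Listener listener = args ->
                    emitter.onNext(args.length > 0 ? args[0] : event);
            socket.on(event, listener);
            // Remove the listener when the subscriber disposes.
            emitter.setCancellable(() -> socket.off(event, listener));
        }, BackpressureStrategy.BUFFER);
    }

    public static void main(String[] args) throws Exception {
        Socket socket = IO.socket("http://localhost:3000"); // placeholder URL
        CompositeDisposable disposables = new CompositeDisposable();

        disposables.add(observeEvent(socket, Socket.EVENT_CONNECT)
                .subscribe(o -> System.out.println("connected")));
        disposables.add(observeEvent(socket, Socket.EVENT_CONNECT_ERROR)
                .subscribe(o -> System.out.println("connect error: " + o)));
        disposables.add(observeEvent(socket, "message")
                .subscribe(msg -> System.out.println("message: " + msg)));

        socket.connect();

        // Later (e.g. in onDestroy on Android): drop all listeners at once.
        // disposables.clear();
        // socket.disconnect();
    }
}
```

On Android I assume one would add .observeOn(AndroidSchedulers.mainThread()) from RxAndroid 2 before subscribing, and call disposables.clear() in onDestroy so the listeners are removed along with the screen.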
I am trying, mostly for learning purposes, to implement a module similar to SignalR (I'm still a beginner with SignalR) using raw websockets. (I am already very familiar with websockets.)
Is there any guide that explains what functionality SignalR provides on top of websockets (so that I know what features I need to implement)?
From what I understood, it keeps a persistent connection and can fall back to other transports (long polling, etc.) if websockets are not supported.
I have already checked this video, but I need something more detailed.
I wrote an article about SignalR a year ago. It contains basic SignalR information and a code example.
Here is the link:
https://medium.com/#aparnagadgil/real-time-web-functionality-using-signalr-ba483efcb959
Hope this helps you!
I am new to these concepts around sockets and I am quite confused. First, I found that I can use Pusher for realtime messages, but it limits concurrent connections to 100 and caps the number of messages you can send. What about Socket.IO, does it have similar limitations? From what I have researched, I assume it has no such limits as Pusher does, but I want to be sure. Can anyone explain how Socket.IO can do this? If Socket.IO has no such limitations, why would Pusher even be used with paid plans?
You do not get the same limitations with Socket.IO; however, it can be a little daunting to set up. I found this introduction on Laracasts really useful when I was looking into it: https://laracasts.com/series/real-time-laravel-with-socket-io/episodes/1
As stated in the docs, in version 3.x of ZeroMQ messages in PUB/SUB scenarios are filtered on the publisher side (rather than on the subscriber side, which is trivial).
To me this sounds like the publisher has to hold a list of all connected sockets and their message filters to accomplish this.
Would you agree?
Based on this assumption, I'd now like to know whether or not a specific filter is active. This would let me avoid retrieving data from some (maybe very slow) other data provider when I know it isn't being used anyway.
Is there a way to see what filters are active on a given PUB socket in a recent version of ZeroMQ?
I know there has already been some work on this (see here), but that was two years ago.
So far as I know, there's no way to get this information from ZMQ. If you want the most up-to-date information on this, the best place to ask would be the ZMQ dev mailing list; the actual developers are over there.
Looking a little further back, I found this discussion on the mailing list that, while it doesn't speak specifically about subscriber topics, does address why that information isn't available - namely, that knowing a subscriber is subscribed to a topic means knowing that they're connected, and that information goes against the ZMQ abstraction design concept of letting connections/disconnections be more seamless.
There is a solution, just probably not the one you're looking for: spin up another pair of meta-sockets so each client can tell the server explicitly which topics it is interested in, moving this information out of the ZMQ abstraction and into explicit message passing. You keep track of that info on the publisher side and use it to control your information gathering. It may seem a bit of a kludge (when the information is already technically there in the publisher, as you note), but it's the ZMQ way of doing things.
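To make that concrete, here is a rough sketch of the meta-socket idea using the JeroMQ Java bindings; the port numbers, the "SUB topic" / "UNSUB topic" wire format, and fetchExpensiveData are all invented for illustration:

```java
import java.util.HashSet;
import java.util.Set;

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

// Publisher that learns which topics are wanted via an explicit control socket,
// instead of trying to inspect the PUB socket's internal filter list.
public class AnnouncingPublisher {
    public static void main(String[] args) throws InterruptedException {
        try (ZContext ctx = new ZContext()) {
            ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);
            pub.bind("tcp://*:5556"); // data channel

            ZMQ.Socket control = ctx.createSocket(SocketType.PULL);
            control.bind("tcp://*:5557"); // meta channel: "SUB <topic>" / "UNSUB <topic>"

            Set<String> wantedTopics = new HashSet<>();

            while (!Thread.currentThread().isInterrupted()) {
                // Drain any pending subscription announcements without blocking.
                String msg;
                while ((msg = control.recvStr(ZMQ.DONTWAIT)) != null) {
                    String[] parts = msg.split(" ", 2);
                    if (parts.length == 2 && parts[0].equals("SUB")) {
                        wantedTopics.add(parts[1]);
                    } else if (parts.length == 2 && parts[0].equals("UNSUB")) {
                        wantedTopics.remove(parts[1]);
                    }
                }

                // Only query the slow data provider for topics someone has asked for.
                for (String topic : wantedTopics) {
                    pub.sendMore(topic);
                    pub.send(fetchExpensiveData(topic));
                }
                Thread.sleep(1000);
            }
        }
    }

    // Stand-in for the "maybe very slow" data provider from the question.
    static String fetchExpensiveData(String topic) {
        return "data for " + topic;
    }
}
```

The subscriber side would connect a SUB socket to port 5556 and a PUSH socket to port 5557, send "SUB mytopic" before calling subscribe(), and "UNSUB mytopic" when it is done, so the publisher's wantedTopics set mirrors the real filters.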
I am looking for a realtime hosted push/socket service (paid is fine) that will handle many connections/channels from many clients (JS), plus a server API that can subscribe/publish to those channels from a PHP script.
Here is an example:
The client UI has a fleet of 100 trucks rendered. When a truck is modified, its data is pushed on a channel (e.g. /updates/truck/34) to the server (a PHP subscriber), the DB is updated, and a receipt/data is sent back on that single truck's channel.
We have a prototype working in Firebase.io, but we don't need the Firebase database; we just need the realtime transmission. One of the great features of Firebase.io is that it's light and we can subscribe to many small channels at once. This helps reduce payload, as only the object data that has changed is transmitted.
Correct me if I am wrong, but will Pusher and PubNub allow me to create 100 truck pub/subs (in this example) for each client that opens the site?
Can anyone offer a recommendation?
I can confirm that you can use Pusher to achieve this - I work for Pusher.
PubNub previously counted each channel as a connection, but they now seem to have introduced multiplexing. This FAQ states you can support 100 channels over the multiplexed connection.
So, both of these services will be able to achieve what you are looking for. There are also more options available via this Realtime Web Tech guide, which I maintain.
[I work for Firebase]
Firebase should continue to work well for you even if you don't need the persistence features. We're not aware of any case where our persistence actually makes things harder, and in many cases it actually makes your life a lot easier. For example, you probably want to be able to ask "what is the current position of a truck" without needing to wait for the next time an update is sent.
If you've encountered a situation where persistence is actually a negative for you, we'd love to hear about it. That certainly isn't our intention.
Also - we're not Firebase.io -- we're just Firebase (though we do own the firebase.io domain name).
I'm playing around with websockets and having a bit of trouble wrapping my head around some things. Specifically, how to send a whole bunch of subscribers different data without using a stupid amount of resources.
For example, if you had some sort of Twitter-like service, how would you send all of a person's followers a tweet that person has just posted (and do the same for the hundreds of other people posting at the same time)? It just seems that handling that many separate people is a bit absurd.
Can someone talk me through how you would go about treating each client individually? Please tell me if I have the whole idea of websockets wrong.
Thanks in advance!
P.S. For reference, I'm probably going to play around using either Node or Clojure (with Aleph).
Use an established messaging protocol and broker on top of websockets.
It seems you are looking at websockets as the application layer when it is really more of a network protocol. A variety of messaging APIs exist (such as JMS), along with open source message brokers that are designed to handle complex and scalable message routing.
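As a rough illustration (assuming ActiveMQ and the classic javax.jms API; the broker URL and the tweets.user42 topic name are placeholders I made up), the server publishes each new tweet once to a per-author topic and lets the broker fan it out to however many followers are subscribed. Browsers could subscribe to the same topic over the broker's STOMP-over-WebSocket transport rather than speaking JMS directly:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

// Publish a new tweet once; the broker routes it to every subscribed follower.
public class TweetPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // One topic per author; followers subscribe to the authors they follow.
        Topic topic = session.createTopic("tweets.user42");
        MessageProducer producer = session.createProducer(topic);

        producer.send(session.createTextMessage("Hello from user42"));

        session.close();
        connection.close();
    }
}
```

The key point is that your application code sends each message only once; the broker, not your websocket handler, does the per-subscriber routing.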