I am trying to get the connection quality of the video call in real time, but the session object I get back has a null connection.
How can I get the quality of the connection in real time?
let session = OT.initSession(apiKey, sessionID);
console.log(session); // logs a Session object whose connection property is null
Manik here from the Vonage Video API Team.
To look at quality data during the session, I recommend using the getStats APIs for both Publishers and Subscribers.
Publisher getStats
Subscriber getStats
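For example, here is a minimal sketch of polling subscriber stats (assuming subscriber came from session.subscribe(); the callback shape follows the Subscriber.getStats() reference above). Note also that session.connection is only populated once session.connect() succeeds, which is why it logs as null right after OT.initSession():

// Hedged sketch: poll Subscriber.getStats() once a second and derive a
// rough video packet-loss ratio. "subscriber" is assumed to exist already.
setInterval(function () {
  subscriber.getStats(function (error, stats) {
    if (error) {
      console.error(error);
      return;
    }
    var lost = stats.video.packetsLost;
    var received = stats.video.packetsReceived;
    console.log('video loss ratio:', lost / (lost + received));
  });
}, 1000);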
How to stream a response from a reactive HTTP client to the controller without having the whole response body in application memory at any time?
Practically all examples of the Project Reactor client return Mono<T>. As far as I understand, reactive streams are about streaming, not loading everything and then sending the response.
Is it possible to return something like Flux<Byte>, so that big files can be transferred from an external service to the application client without needing a huge amount of RAM to store intermediate results?
It should happen naturally by simply returning a Flux<WHATEVER>, where each WHATEVER will be flushed onto the network as soon as possible. In that case the response uses chunked HTTP encoding, and the bytes from each chunk are discarded once they've been flushed to the network.
Another possibility is to upgrade the HTTP response to SSE (Server-Sent Events), which can be achieved in WebFlux by annotating the controller method with something like @GetMapping(path = "/stream-flux", produces = MediaType.TEXT_EVENT_STREAM_VALUE) (the produces part is the important one).
I don't think your scenario needs an event stream, because event streams are mostly used to emit events in real time. I think you'd better do it like this:
@GetMapping(value = "bytes")
public Flux<Byte> getBytes() {
    return byteService.getBytes();
}
and you can send it as a stream.
If you still want it as an event stream:
@GetMapping(value = "bytes", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
public Flux<List<Byte>> getBytes() {
    return byteService.getBytes();
}
I'm trying to have clients publish an A/V stream, turn it off, and then turn it back on. The first time I tell them to publish and then unpublish, it works fine. However, the next time I tell them to publish (using the same session ID and token), I get the error "Cannot Connect, the session is already undefined".
Why is the "session" getting destroyed? Is it the unpublish? My code is pretty much taken from the tutorials:
clientSession = OT.initSession(apiKey, sessionId);
clientSession.connect(token, function (error) {
  if (error) {
    handleError(error);
  } else {
    clientPublisher = OT.initPublisher(container, {
      insertMode: 'append',
      width: '100%',
      height: '100%'
    }, handleError);
    clientSession.publish(clientPublisher, handleError);
  }
});
To unpublish:
clientSession.unpublish(clientPublisher);
There are 2 ways you could do this. You could initialise a single publisher object once and keep reusing it every time you republish, or you could destroy and reinitialise a new publisher each time. I've written up an example of both approaches for you:
Reuse same publisher: https://jsbin.com/tobabos/edit?html
Create new publisher each time: https://jsbin.com/jawuxez/edit?html
Note: Please provide your own API key, session ID and token to run the above JSbins
The key difference is that to reuse a publisher you need to do this:
pub.on('streamDestroyed', e => e.preventDefault());
This is documented here: https://tokbox.com/developer/sdks/js/reference/Publisher.html#.event:streamDestroyed
It makes sure that when you unpublish, the publisher object is not destroyed so it can be reused.
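In outline, the reuse approach looks something like this (a hedged sketch; 'publisher-div', handleError, and session are placeholders, and the JSbins above are the working versions):

// One publisher, published and unpublished repeatedly.
var pub = OT.initPublisher('publisher-div', { insertMode: 'append' }, handleError);

// Prevent the publisher from being destroyed on unpublish,
// so the same object can be published again later.
pub.on('streamDestroyed', function (e) { e.preventDefault(); });

function goLive() { session.publish(pub, handleError); }
function goDark() { session.unpublish(pub); }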
Note that if you reuse a publisher, it remains on the page and the user can still see themselves, even while the publisher is not streaming to the session. You could use CSS or DOM manipulation to hide the publisher, but the webcam light will remain on.
However, if you destroy and recreate the publisher each time, the publisher disappears from the page and the webcam light turns off while unpublished. Depending on the browser and the user's settings, they may be asked to permit access to their webcam again.
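The destroy-and-recreate variant, under the same assumptions, would look roughly like:

// A fresh publisher per publish cycle; the webcam light goes off
// while unpublished.
var pub = null;

function goLive() {
  pub = OT.initPublisher('publisher-div', { insertMode: 'append' }, handleError);
  session.publish(pub, handleError);
}

function goDark() {
  session.unpublish(pub);
  pub.destroy(); // removes the publisher element and releases the camera
  pub = null;
}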
I have to implement a chat application using websockets. Users will chat via groups; there can be thousands of groups, and a user can be in multiple groups. I'm thinking about 2 solutions:
[1] For each group chat, I create a websocket endpoint (using camel-atmosphere-websocket); users in the same group subscribe to the group's endpoint and send/receive messages over it. This means there can be thousands of websocket endpoints, and the client side (say, an iPhone) has to subscribe to multiple websocket endpoints. Is this good practice?
[2] I create just one websocket endpoint for all groups. The client side subscribes to this one endpoint, and I manage message distribution myself on the server: get the group members, pick each member's websocket from the list of connected websockets, then write the message to each member.
Which solution is better in terms of performance and ease of implementation on both client and server?
Thanks.
EDIT 2015-10-06
I chose the second approach and did a test with the Jetty websocket client; I used camel-atmosphere-websocket on the server side. On the client side, I created websocket connections to the server in threads. There was a problem with Jetty in that I could only create around 160 websocket connections (i.e., around 160 threads). The result is that I saw almost no difference as the number of clients increased from 1 to 160.
Yes, 160 is not a big number, but I'll do more testing when I actually see a performance problem; for now, I'm OK with the second approach.
If you are interested in the test code, here it is:
http://www.eclipse.org/jetty/documentation/current/jetty-websocket-client-api.html#d0e22545
I think the second approach will be better for performance. I am using the same approach for my application, but it is still in the testing phase, so I can't comment on real-world performance yet. It currently runs fine for 10-15 groups. My app has a condition similar to yours, where users chat based on groups. I handle group creation on the server side using Node.js. Here is the code to create a group; it is specific to my app (it gets homeState and userId from the front end and creates groups based on homeState), so it won't work for you as-is and is only here for reference. To improve performance, you can use clustering.
// One entry per connected user: which group (homeState) they belong to,
// who they are, and their websocket connection.
this.ConnectionObject = function(homeState, userId, ws) {
  this.homeState = homeState;
  this.userId = userId;
  this.wsConnection = ws;
},

// Register a new connection under its group, creating the group's
// entry list on first use.
this.createConnectionEntry = function(homeState, userId, ws) {
  var connObject = new ws.thisRefer.ConnectionObject(homeState, userId, ws);
  var connectionEntryList;
  if (ws.thisRefer.connectionMap[homeState] !== undefined) {
    connectionEntryList = ws.thisRefer.connectionMap[homeState];
  } else {
    connectionEntryList = [];
  }
  connectionEntryList.push(connObject);
  console.log(connectionEntryList.length);
  ws.thisRefer.connectionMap[homeState] = connectionEntryList;
  ws.thisRefer.connecteduserIdMap[userId] = "";
}
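To complete the picture, fanning a message out to a group with this structure could look roughly like the following (a hedged sketch against the connectionMap above; sendToGroup is a name I made up, and send()/readyState are the standard ws connection members):

// Deliver a chat message to every member of a group by looking up
// their connections in connectionMap (keyed by homeState here).
this.sendToGroup = function(homeState, message) {
  var entries = this.connectionMap[homeState] || [];
  entries.forEach(function(entry) {
    // Skip sockets that have already closed.
    if (entry.wsConnection.readyState === 1 /* OPEN */) {
      entry.wsConnection.send(JSON.stringify(message));
    }
  });
};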
Browsers restrict the number of websockets that can be opened by the same tab, so you can't rely on being able to create as many connections as you want. Go for solution #2.
I'm looking to develop a chat application with PubNub where I want to make sure all the chat messages that are sent are stored in the database, and I also want to send messages in the chat.
I found out that I can use Parse with PubNub to provide storage, but I'm not sure how to set the two up so that the messages and images sent in the chat are stored in the database.
Has anyone done this before with PubNub and Parse? Are there any other easy options to use with PubNub instead of Parse?
Sutha,
What you are seeking is not a trivial solution unless you are talking about a limited number of end users. So I wouldn't say there are "easy" solutions, but there are solutions.
The reason is that your server would need to listen (subscribe) to every active chat channel and store the messages being sent into your database. Imagine your app scaling to 1 million users (it doesn't even need to get that big, but that number should help you realize how tricky this gets: several server instances listening to channels in a non-overlapping manner, or with overlap but using a server-side queue and de-duping messages).
That said, yes, there are PubNub customers that have implemented such a solution - Parse not being the key to making this happen, by the way.
You have three basic options for implementing this:
Implement a solution that allows many instances of your server to subscribe to all of the channels as they become active and store the messages as they come in. There are a lot of details to making this happen, so if you are not up for that, this is probably not where you want to go.
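As a very rough sketch of this first option (assuming the classic PubNub v3 JavaScript API used later in this answer, and a hypothetical saveToDb() persistence helper):

// One server instance subscribing to a set of active channels and storing
// every message it sees. Channel-list management and de-duping are omitted.
pubnub.subscribe({
  channel: activeChannels.join(','), // multiplexed subscribe to many channels
  message: function (message, envelope, channel) {
    saveToDb(channel, message);      // hypothetical persistence helper
  }
});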
There is a way to monitor all channels that become active or inactive with PubNub Presence webhooks (enable Presence on your keys). You would use this to keep a list of all channels that your server then pulls history from (enable Storage & Playback on your keys) in an on-demand (not completely realtime) fashion.
For every channel that goes active or inactive, your server will receive these events via a REST call (an endpoint that you implement on your server - your Parse server in this case):
channel active: record a "start chat" timetoken in your Parse db
channel inactive: record an "end chat" timetoken in your Parse db
The inactive event kicks off a process that uses the start/end timetokens you recorded for that channel to fetch its history from PubNub: pubnub.history({channel: channelName, start: startTT, end: endTT})
You will need to iterate on this history call until you receive < 100 messages (100 is the max number of messages you can retrieve at a time).
As you retrieve these messages, you will save them to your Parse db.
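A hedged sketch of that paging loop (classic PubNub v3 API; the exact paging parameters may vary by SDK version, so treat this as an outline rather than a definitive implementation):

// Collect all messages between the recorded start/end timetokens
// for one channel, 100 at a time.
function fetchChannelHistory(channelName, startTT, endTT, collected, done) {
  pubnub.history({
    channel: channelName,
    start: startTT,
    end: endTT,
    count: 100,
    callback: function (response) {
      var messages = response[0]; // [messages, pageStartTT, pageEndTT]
      collected = collected.concat(messages);
      if (messages.length < 100) {
        done(collected); // fewer than 100 means the interval is drained
      } else {
        // continue from this page's end timetoken
        fetchChannelHistory(channelName, response[2], endTT, collected, done);
      }
    }
  });
}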
New Presence Webhooks have been added:
We now have webhooks for all presence events: join, leave, timeout, state-change.
Finally, you could just save each message to the Parse db on success of every pubnub.publish call. I am not a Parse expert and barely know all of its capabilities, but I believe they have some sort of store-locally-then-sync-to-cloud-db option (like StackMob when that was a product); even if not, you can save the message to the Parse cloud db directly.
The code would look something like this (not complete, likely with errors - figure it out or ask PubNub support for details) in your JavaScript client (in the browser):
var pubnub = PUBNUB({
  publish_key   : your_pub_key,
  subscribe_key : your_sub_key
});

var msg = ... // get the message from your UI text box or whatever

pubnub.publish({
  // this is some variable you set up when you enter a chat room
  channel: chat_channel,
  message: msg,
  callback: function(event) {
    // DISCLAIMER: code pulled from the Parse example,
    // but there are some object creation details
    // left out here and the msg object is not
    // fully fleshed out in this sample code
    var ChatMessage = Parse.Object.extend("ChatMessage");
    var chatMsg = new ChatMessage();
    chatMsg.set("message", msg);
    chatMsg.set("user", uuid);
    chatMsg.set("channel", chat_channel);
    chatMsg.set("timetoken", event[2]);
    // this ChatMessage object can be
    // whatever you want it to be
    chatMsg.save();
  },
  error: function (error) {
    // Handle error here, like retry until success, for example
    console.log(JSON.stringify(error));
  }
});
You might even store a whole batch of publishes (on both ends of the conversation) based on a time interval, a number of publishes, or the total data size, but be careful: either user could exit the chat and the browser without notice, and you would fail to save. So the per-publish save is probably best practice, if a bit noisy.
I hope you find one of these techniques as a means to get started in the right direction. There are details left out so I expect you will have follow up questions.
Just some other links that might be helpful:
http://blog.parse.com/learn/building-a-killer-webrtc-video-chat-app-using-pubnub-parse/
http://www.pubnub.com/blog/realtime-collaboration-sync-parse-api-pubnub/
https://www.pubnub.com/knowledge-base/discussion/293/how-do-i-publish-a-message-from-parse
And we have a PubNub Parse SDK, too. :)
I am building an extension with Firefox's Add-on SDK (v1.9) that will be able to read all HTTP requests/responses and calculate the time they took to load. This includes not only the main frame but any other loaded resource (sub-frame, script, CSS, image, etc.).
So far, I am able to use the "observer-service" module to listen for:
"http-on-modify-request" when a HTTP request is created.
"http-on-examine-response" when a HTTP response is received
"http-on-examine-cached-response" when a HTTP response is received entirely from cache
"http-on-examine-merged-response" when a HTTP response is received partially from cache
My application follows this sequence:
A request is created and registered through the observer.
I save the current time and mark it as the start_time of the request load.
A response for a request is received and registered through one of the observers.
I save the current time and use the previously saved time to calculate the load time of the request.
Problem:
I am not able to link the start and end times of the load, since I cannot find a request ID (or other unique value) that ties the request to the response.
I am currently using the URL of the request/response to tie them together, but this is not correct, since it creates a race condition when two or more identical URLs are loading at the same time. Google Chrome solves this issue by providing unique requestIds, but I have not been able to find similar functionality in Firefox.
I am aware of two ways to recognize a channel that you receive in this observer. The "old" solution is to use the nsIWritablePropertyBag interface to attach data to the channel:
var {Ci} = require("chrome");
var channelId = 0;
...
// Attach channel ID to a channel
if (channel instanceof Ci.nsIWritablePropertyBag)
channel.setProperty("myExtension-channelId", ++channelId);
...
// Read out channel ID for a channel
if (channel instanceof Ci.nsIPropertyBag)
console.log(channel.getProperty("myExtension-channelId"));
The other solution would be to use the WeakMap API (it only works properly starting with Firefox 13):
var channelMap = new WeakMap();
var channelId = 0;
...
// Attach channel ID to a channel
channelMap.set(channel, ++channelId);
...
// Read out channel ID for a channel
console.log(channelMap.get(channel));
I'm not sure whether WeakMap is available in the context of Add-on SDK modules; you might have to "steal" it from a regular JavaScript module:
var {Cu} = require("chrome");
// FileUtils.jsm is just an arbitrary module here; importing it with a null
// scope returns the module's global object, which has WeakMap defined on it.
var {WeakMap} = Cu.import("resource://gre/modules/FileUtils.jsm", null);
Obviously, in both cases you can attach more data to the channel than a simple number.
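Putting it together with the SDK's observer-service module, the start/end timing you describe could look roughly like this (a hedged sketch: it assumes the notification subject can be used directly as a WeakMap key, and that observer-service is the SDK 1.x module for these notifications):

// Tie request and response notifications together with a WeakMap keyed
// on the channel, and log the elapsed load time.
var observers = require("observer-service");
var startTimes = new WeakMap();

observers.add("http-on-modify-request", function (subject, data) {
  startTimes.set(subject, Date.now()); // subject is the nsIHttpChannel
});

function onResponse(subject, data) {
  var start = startTimes.get(subject);
  if (start !== undefined)
    console.log("load time (ms):", Date.now() - start);
}

observers.add("http-on-examine-response", onResponse);
observers.add("http-on-examine-cached-response", onResponse);
observers.add("http-on-examine-merged-response", onResponse);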
Firebug does what you're thinking of by implementing a central observer for these events:
https://github.com/firebug/firebug/blob/master/extension/modules/firebug-http-observer.js
This might be a good place to start, although eventually Firefox will ship a more complete network monitor / debugger by default. I think I read somewhere that it will be based on Firebug's.