Publisher initializes twice, one proper and one visible only to self - OpenTok

For some reason my publisher initializes twice when I create a new session. However, the second one isn't in the div where it's supposed to be. Also, if you connect to the session you'll get the same result, so it only shows for yourself.
I'm trying to find out why it's appearing. Here's some snippets:
var getApiAndToken, initializeSession;

getApiAndToken = function() {
  var apiKey, customer_id, sessionId, token;
  if (gon) {
    apiKey = gon.api_key;
  }
  if (gon) {
    sessionId = gon.session_id;
  }
  if (gon) {
    token = gon.token;
  }
  if (gon) {
    customer_id = gon.customer_id;
  }
  initializeSession();
};

initializeSession = function() {
  var publishStream, session;
  session = OT.initSession(apiKey, sessionId);
  session.connect(token, function(error) {
    if (!error) {
      session.publish(publishStream(true));
      layout();
    } else {
      console.log('There was an error connecting to the session', error.code, error.message);
    }
  });
  $('#audioInputDevices').change(function() {
    publishStream(false);
  });
  $('#videoInputDevices').change(function() {
    publishStream(false);
  });
  return publishStream = function(loadDevices) {
    var publisherOptions;
    publisherOptions = {
      audioSource: $('#audioInputDevices').val() || 0,
      videoSource: $('#videoInputDevices').val() || 0
    };
    OT.initPublisher('publisherContainer', publisherOptions, function(error) {
      if (error) {
        console.log(error);
      } else {
        if (loadDevices) {
          OT.getDevices(function(error, devices) {
            var audioInputDevices, videoInputDevices;
            audioInputDevices = devices.filter(function(element) {
              return element.kind === 'audioInput';
            });
            videoInputDevices = devices.filter(function(element) {
              return element.kind === 'videoInput';
            });
            $.each(audioInputDevices, function() {
              $('#audioInputDevices').append($('<option></option>').val(this['deviceId']).html(this['label']));
            });
            $.each(videoInputDevices, function() {
              $('#videoInputDevices').append($('<option></option>').val(this['deviceId']).html(this['label']));
            });
          });
        }
      }
    });
  };
};
It also asks me for device access twice.

I see two general problems in the code you provided:
The variables apiKey, sessionId, and token inside the getApiAndToken() function are scoped to only that function, and are therefore not visible inside initializeSession() where you try to use them.
The goal of the publishStream() function is not clear and its use is not consistent. Each time you invoke it (once the session connects and each time a dropdown value changes) this function creates a new Publisher. It also does not return anything, so when using it in the expression session.publish(publishStream(true)), you are effectively just calling session.publish(), which results in a new Publisher being appended to the end of the page because no element ID is specified. That last part is the reason why you said it's not in the <div> where it's supposed to be.
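For the first problem, a minimal sketch of one way to fix the scoping (assuming gon is available globally, as in your snippet) is to pass the values into initializeSession() instead of relying on variables local to getApiAndToken():

var getApiAndToken, initializeSession;

getApiAndToken = function() {
  // Read everything needed from gon and hand it to initializeSession
  // explicitly, so nothing depends on variables local to this function.
  if (gon) {
    initializeSession(gon.api_key, gon.session_id, gon.token);
  }
};

initializeSession = function(apiKey, sessionId, token) {
  var session = OT.initSession(apiKey, sessionId);
  session.connect(token, function(error) {
    if (error) {
      console.log('There was an error connecting to the session', error.code, error.message);
    }
    // publishing is handled separately; see the full example below
  });
};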
It sounds like what you want is a Publisher with a dropdown to select which devices it's using. I created an example of this for you: https://jsbin.com/sujufog/11/edit?html,js,output.
Briefly, here is how it works. It first initializes a dummy publisher so that the browser can prompt the user for permission to use the camera and microphone; this is necessary before the available devices can be read. Note that if the page is served over HTTPS, browsers such as Chrome will remember the permissions you allowed on that domain earlier and will not prompt the user again, so on Chrome the dummy publisher doesn't cause any prompt to be shown for a user who has already run the application. Next, the dummy publisher is thrown away, and OT.getDevices() is called to read the available devices and populate the dropdown menus. While this is happening, the session will also have connected, and on every change of the selection in either dropdown the publish() function is called. In that function, if a previous publisher exists it is first removed, then a new publisher is created with the currently selected devices, and that new publisher is passed into session.publish().
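To make that concrete, here is a compressed sketch of that flow (a sketch only, assuming the element IDs from your markup: publisherContainer, #audioInputDevices, #videoInputDevices, and that gon still carries the credentials; the linked JS Bin has the complete version):

var session, publisher;

// 1. Dummy publisher: triggers the camera/microphone permission prompt
//    so that OT.getDevices() can return device labels.
var dummy = OT.initPublisher(document.createElement('div'), {}, function(err) {
  if (err) { return console.log(err); }
  OT.getDevices(function(err, devices) {
    if (err) { return console.log(err); }
    dummy.destroy(); // 2. Throw the dummy away once the devices are known.
    devices.forEach(function(device) {
      var select = device.kind === 'audioInput' ? '#audioInputDevices' : '#videoInputDevices';
      $(select).append($('<option></option>').val(device.deviceId).html(device.label));
    });
  });
});

// 3. Destroy any previous publisher and publish with the selected devices.
function publish() {
  if (publisher) { publisher.destroy(); }
  publisher = OT.initPublisher('publisherContainer', {
    audioSource: $('#audioInputDevices').val(),
    videoSource: $('#videoInputDevices').val()
  });
  session.publish(publisher);
}

session = OT.initSession(gon.api_key, gon.session_id);
session.connect(gon.token, function(error) {
  if (error) { return console.log(error); }
  publish();
  // 4. Republish whenever either dropdown changes.
  $('#audioInputDevices, #videoInputDevices').change(publish);
});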

Related

Canceling request with browser.webRequest.onBeforeRequest also cancels previous pending tab requests

The following code is used in an add-on to cancel all main-frame requests and re-initiate them in a new tab:
browser.webRequest.onBeforeRequest.addListener(
  filterRequest,
  {urls: ['<all_urls>'], types: ['main_frame']},
  ['blocking']
);

function filterRequest(details) {
  const match = details.url.match(/\/container$/);
  if (!match) {
    return {};
  }
  browser.tabs.create({url: details.url.replace(/\/container$/, '')});
  return {cancel: true};
}
However, if the initial tab had a heavy web page loading, it stops when the new request is cancelled. I thought that since the request is cancelled, it would be as if it had never been initiated, so the previous web page would continue to load. Why is that happening, and how can I allow the web page to finish loading?
Save the created tab's id in a global list and check it at the beginning:
const tabIds = new Set();

function filterRequest(details) {
  if (tabIds.has(details.tabId)) {
    tabIds.delete(details.tabId);
  } else {
    browser.tabs.create({url: details.url})
      .then(tab => tabIds.add(tab.id));
    return {cancel: true};
  }
}
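For completeness, a sketch of the whole background script with that check folded into the original /container$/ logic (this is just one way to wire the two snippets together, not a drop-in file):

const tabIds = new Set();

browser.webRequest.onBeforeRequest.addListener(
  filterRequest,
  {urls: ['<all_urls>'], types: ['main_frame']},
  ['blocking']
);

function filterRequest(details) {
  // Requests in tabs this add-on created itself are allowed through once.
  if (tabIds.has(details.tabId)) {
    tabIds.delete(details.tabId);
    return {};
  }
  if (!/\/container$/.test(details.url)) {
    return {};
  }
  // Open the stripped URL in a new tab, remember that tab's id,
  // and cancel only this request.
  browser.tabs.create({url: details.url.replace(/\/container$/, '')})
    .then(tab => tabIds.add(tab.id));
  return {cancel: true};
}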

addStream in Firefox doesn't work - WebRTC

I am trying to use WebRTC in an app for realtime communication. It works fine in Chrome, but in Firefox I get an error in the addStream function. I am using adapter.js, which I assumed would solve all compatibility errors, but the error persists.
pc = new RTCPeerConnection(pc_config);
pc.onicecandidate = function (evt) {
  // my code here
};
pc.onnegotiationneeded = function (evt) {
  // my code here
};
if (isChromium) {
  object_user.pc.onaddstream = function (evt) {
  };
} else {
  object_user.pc.ontrack = function (evt) {
  };
}
if (isChromium) {
  object_user.pc.addStream(window.localstream); // <- get error in firefox
} else {
  object_user.pc.addTrack(window.localstream);
}
I tried to replace addStream with addTrack for Firefox, but I get "Not enough arguments to RTCPeerConnection.addTrack."
The documentation for addTrack says it requires two arguments, track and stream, which is probably why you get an error.
Syntax
rtpSender = RTCPeerConnection.addTrack(track, stream...);
Parameters
track
A MediaStreamTrack object representing the media track to add to the peer connection.
stream...
One or more MediaStream objects in which the specified track is to be contained.
https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/addTrack
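For example, with your variables, the Firefox branch could add each track of the local stream and pass the stream itself as the second argument (a sketch, assuming window.localstream is the MediaStream you obtained from getUserMedia):

// Add every track of the local stream, passing the stream as the
// second argument so the remote side can group the tracks.
window.localstream.getTracks().forEach(function (track) {
  object_user.pc.addTrack(track, window.localstream);
});

// Remote side: listen for incoming tracks instead of onaddstream.
object_user.pc.ontrack = function (evt) {
  // evt.streams[0] is the remote MediaStream the track belongs to
};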

Firefox file downloading process structure figure

For my project, I need to study some info like a "Firefox/Gecko file downloading structure overview" (if any), or a "file downloading process flow chart of Firefox/Gecko". I couldn't find anything like that on the Internet so far. Is there any info about it? Thanks a lot.
PS: It must include the paths of all file downloading through the Firefox browser, which go via the network connection APIs and file handling APIs, something like an "httpOpenRequest" or "DoFileDownload" API (if any).
What would the Firefox downloading process API paths be? Is there any figure or chart?
Please help me...
You are probably going to need to look at the code to get the information you desire. You will need to build the flowchart yourself.
There are a couple of different ways downloading is done in the code.
If you are talking about a Firefox add-on performing a download, then it is probably being done using Downloads.jsm (although there is an older method for doing so). The source code for that JavaScript module is at resource://gre/modules/Downloads.jsm (this URL is only valid in Firefox). There appear to be several related files, all located in the jsloader\resource\gre\modules directory within the zip-format file called omni.ja in the root of the Firefox distribution. You can just copy that file, rename it to omni.zip, and access it as a normal .zip file.
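For a sense of what that path looks like, the documented pattern for starting a download through Downloads.jsm from a legacy add-on or chrome code is roughly the following (a sketch; the source URL and target path are just examples):

// Minimal Downloads.jsm sketch (legacy add-on / chrome context).
Components.utils.import("resource://gre/modules/Downloads.jsm");
Components.utils.import("resource://gre/modules/osfile.jsm");
Components.utils.import("resource://gre/modules/Task.jsm");

Task.spawn(function* () {
  // Fetch a URL and save it to a file in the temp directory.
  var target = OS.Path.join(OS.Constants.Path.tmpDir, "example-download.html");
  yield Downloads.fetch("https://www.mozilla.org/", target);
}).then(null, Components.utils.reportError);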
If you want to know how Firefox saves a page when it is requested by the user: it is defined in the context menu with the oncommand value being gContextMenu.saveLink();. saveLink() is defined in chrome://browser/content/nsContextMenu.js. It does some housekeeping and then calls saveHelper(), which is in the same file.
The saveHelper() code is the following:
// Helper function to wait for appropriate MIME-type headers and
// then prompt the user with a file picker
saveHelper: function(linkURL, linkText, dialogTitle, bypassCache, doc) {
  // canonical def in nsURILoader.h
  const NS_ERROR_SAVE_LINK_AS_TIMEOUT = 0x805d0020;

  // an object to proxy the data through to
  // nsIExternalHelperAppService.doContent, which will wait for the
  // appropriate MIME-type headers and then prompt the user with a
  // file picker
  function saveAsListener() {}
  saveAsListener.prototype = {
    extListener: null,

    onStartRequest: function saveLinkAs_onStartRequest(aRequest, aContext) {
      // if the timer fired, the error status will have been caused by that,
      // and we'll be restarting in onStopRequest, so no reason to notify
      // the user
      if (aRequest.status == NS_ERROR_SAVE_LINK_AS_TIMEOUT)
        return;
      timer.cancel();

      // some other error occured; notify the user...
      if (!Components.isSuccessCode(aRequest.status)) {
        try {
          const sbs = Cc["@mozilla.org/intl/stringbundle;1"].
                      getService(Ci.nsIStringBundleService);
          const bundle = sbs.createBundle(
                  "chrome://mozapps/locale/downloads/downloads.properties");
          const title = bundle.GetStringFromName("downloadErrorAlertTitle");
          const msg = bundle.GetStringFromName("downloadErrorGeneric");
          const promptSvc = Cc["@mozilla.org/embedcomp/prompt-service;1"].
                            getService(Ci.nsIPromptService);
          promptSvc.alert(doc.defaultView, title, msg);
        } catch (ex) {}
        return;
      }

      var extHelperAppSvc =
        Cc["@mozilla.org/uriloader/external-helper-app-service;1"].
        getService(Ci.nsIExternalHelperAppService);
      var channel = aRequest.QueryInterface(Ci.nsIChannel);
      this.extListener =
        extHelperAppSvc.doContent(channel.contentType, aRequest,
                                  doc.defaultView, true);
      this.extListener.onStartRequest(aRequest, aContext);
    },

    onStopRequest: function saveLinkAs_onStopRequest(aRequest, aContext,
                                                     aStatusCode) {
      if (aStatusCode == NS_ERROR_SAVE_LINK_AS_TIMEOUT) {
        // do it the old fashioned way, which will pick the best filename
        // it can without waiting.
        saveURL(linkURL, linkText, dialogTitle, bypassCache, false,
                doc.documentURIObject, doc);
      }
      if (this.extListener)
        this.extListener.onStopRequest(aRequest, aContext, aStatusCode);
    },

    onDataAvailable: function saveLinkAs_onDataAvailable(aRequest, aContext,
                                                         aInputStream,
                                                         aOffset, aCount) {
      this.extListener.onDataAvailable(aRequest, aContext, aInputStream,
                                       aOffset, aCount);
    }
  }

  function callbacks() {}
  callbacks.prototype = {
    getInterface: function sLA_callbacks_getInterface(aIID) {
      if (aIID.equals(Ci.nsIAuthPrompt) || aIID.equals(Ci.nsIAuthPrompt2)) {
        // If the channel demands authentication prompt, we must cancel it
        // because the save-as-timer would expire and cancel the channel
        // before we get credentials from user. Both authentication dialog
        // and save as dialog would appear on the screen as we fall back to
        // the old fashioned way after the timeout.
        timer.cancel();
        channel.cancel(NS_ERROR_SAVE_LINK_AS_TIMEOUT);
      }
      throw Cr.NS_ERROR_NO_INTERFACE;
    }
  }

  // if it we don't have the headers after a short time, the user
  // won't have received any feedback from their click. that's bad. so
  // we give up waiting for the filename.
  function timerCallback() {}
  timerCallback.prototype = {
    notify: function sLA_timer_notify(aTimer) {
      channel.cancel(NS_ERROR_SAVE_LINK_AS_TIMEOUT);
      return;
    }
  }

  // set up a channel to do the saving
  var ioService = Cc["@mozilla.org/network/io-service;1"].
                  getService(Ci.nsIIOService);
  var channel = ioService.newChannelFromURI(makeURI(linkURL));
  if (channel instanceof Ci.nsIPrivateBrowsingChannel) {
    let docIsPrivate = PrivateBrowsingUtils.isWindowPrivate(doc.defaultView);
    channel.setPrivate(docIsPrivate);
  }
  channel.notificationCallbacks = new callbacks();

  let flags = Ci.nsIChannel.LOAD_CALL_CONTENT_SNIFFERS;

  if (bypassCache)
    flags |= Ci.nsIRequest.LOAD_BYPASS_CACHE;

  if (channel instanceof Ci.nsICachingChannel)
    flags |= Ci.nsICachingChannel.LOAD_BYPASS_LOCAL_CACHE_IF_BUSY;

  channel.loadFlags |= flags;

  if (channel instanceof Ci.nsIHttpChannel) {
    channel.referrer = doc.documentURIObject;
    if (channel instanceof Ci.nsIHttpChannelInternal)
      channel.forceAllowThirdPartyCookie = true;
  }

  // fallback to the old way if we don't see the headers quickly
  var timeToWait =
    gPrefService.getIntPref("browser.download.saveLinkAsFilenameTimeout");
  var timer = Cc["@mozilla.org/timer;1"].createInstance(Ci.nsITimer);
  timer.initWithCallback(new timerCallback(), timeToWait,
                         timer.TYPE_ONE_SHOT);

  // kick off the channel with our proxy object as the listener
  channel.asyncOpen(new saveAsListener(), null);
}

Filtering a loaded kml file in OpenLayers

I'm trying to create an interactive search engine (for finding event tickets), one of whose features is a visual map that shows related venues using OpenLayers. I have a plethora of venues (3000+) in a KML file, and I would like to selectively show a filtered subsection of them. Below is the code I have, but when I try to run it I get a JavaScript error. Running Firebug and Chrome developer tools makes me think the parameters I pass are not getting through, because it says the variables are null. However, I cannot figure out why they are not getting passed. Any insight is greatly appreciated.
var map, drawControls, selectControl, selectedFeature, select;

$('#kml').load('venuesComplete.kml');
kml = $('#kml').html();

function showVenues(state, city, venue) {
  filterStrategy = new OpenLayers.Strategy.Filter({});
  var kmllayer = new OpenLayers.Layer.Vector("KML", {
    strategies: [filterStrategy,
                 new OpenLayers.Strategy.Fixed()],
    protocol: new OpenLayers.Protocol.HTTP({
      url: "venuesComplete.kml",
      format: new OpenLayers.Format.KML({
        extractStyles: true,
        extractAttributes: true
      })
    })
  });
  select = new OpenLayers.Control.SelectFeature(kmllayer);
  kmllayer.events.on({
    "featureselected": onFeatureSelect,
    "featureunselected": onFeatureUnselect
  });
  map.addControl(select);
  select.activate();

  filter = new OpenLayers.Filter.Comparison({
    type: OpenLayers.Filter.Comparison.LIKE,
    property: "",
    value: ""
  });

  function clearFilter() {
    filterStrategy.setFilter(null);
  }

  function setFilter(property, value) {
    filter.value = value;
    filter.property = property;
    filterStrategy.setFilter(filter);
  }

  var vector_style = new OpenLayers.Style();

  if (venue != "") {
    setFilter('name', venue);
  } else if (city != "") {
    setFilter('description', city);
  } else if (state != "") {
    setFilter('description', state);
  }

  map.addLayer(kmllayer);

  function onPopupClose(evt) {
    select.unselectAll();
  }

  function onFeatureSelect(event) {
    var feature = event.feature;
    var selectedFeature = feature;
    var popup = new OpenLayers.Popup.FramedCloud("chicken",
      feature.geometry.getBounds().getCenterLonLat(),
      new OpenLayers.Size(100, 100),
      "<h2>" + feature.attributes.name + "</h2>" + feature.attributes.description + '<br>' + feature.attributes,
      null,
      true,
      onPopupClose
    );
    document.getElementById('venueName').value = feature.attributes.name;
    document.getElementById("output").innerHTML = event.feature.id;
    feature.popup = popup;
    map.addPopup(popup);
  }

  function onFeatureUnselect(event) {
    var feature = event.feature;
    if (feature.popup) {
      map.removePopup(feature.popup);
      feature.popup.destroy();
      delete feature.popup;
    }
  }
}

function init() {
  map = new OpenLayers.Map('map');
  var google_map_layer = new OpenLayers.Layer.Google(
    'Google Map Layer',
    {type: google.maps.MapTypeId.HYBRID}
  );
  map.addLayer(google_map_layer);

  state = "";
  state += document.getElementById('stateProvDesc').value;
  city = "";
  city += document.getElementById('cityZip').value;
  venue = "";
  venue += document.getElementById('venueName').value;

  showVenues(state, city, 'Michie Stadium');

  map.addControl(new OpenLayers.Control.LayerSwitcher({}));
  map.zoomToMaxExtent();
}
If I understand correctly, your KML does not load properly. If this is not the case, please disregard my answer.
It is very important to check whether your KML layer was properly loaded. I have a map that loads multiple dynamic (from PHP) KML layers, and it is not uncommon for a large layer simply not to load. When that happens the operation is aborted but, as far as OpenLayers is concerned, the layer was properly loaded.
So I do two things: I check whether the amount of loaded data meets the expected number of features from my original PHP KML parser (I use a jQuery/AJAX call for that) and then, in case there is a discrepancy, I try reloading (since this is a loop, I limit it to 5 attempts, so as not to loop infinitely).
Check out some of my code here
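A rough sketch of that check-and-retry idea, applied to the kmllayer above (expectedFeatureCount and the retry limit are assumptions; the expected count would come from whatever generates your KML):

var attempts = 0;
var expectedFeatureCount = 3000; // hypothetical value, e.g. fetched via a separate AJAX call

kmllayer.events.on({
  'loadend': function () {
    // If fewer features arrived than expected, force a reload (at most 5 tries).
    if (kmllayer.features.length < expectedFeatureCount && attempts < 5) {
      attempts++;
      kmllayer.refresh({force: true});
    }
  }
});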

Should I use threads when executing an action method through AJAX?

I am building a questionnaire. When a user clicks on an answer possibility for a multiple choice question (this is a radio button), I call an action method to save this answer.
The code:
<script language="javascript">
  $(document).ready(function () {
    $('.MCQRadio').click(function () {
      var question_id = $(this).attr('question-id');
      var mcq_id = $(this).attr('mcq-id');
      $.ajax({
        url: '/SaveSurveyAnswers/SaveMCQAnswer',
        data: { "mcq_id": mcq_id, "question_id": question_id },
        success: function (data) {
        }
      });
    });
  });
</script>
The code to save the answer:
public EmptyResult SaveMCQAnswer(int mcq_id, int question_id)
{
    MCQ_Answers mcqa = null;
    try
    {
        mcqa = db.MCQ_Answers.Single(x => x.question_ID == question_id);
    }
    catch (InvalidOperationException e)
    {
    }
    if (mcqa != null)
    {
        mcqa.mcq_id = mcq_id;
    }
    else
    {
        MCQ_Answers mcq_answer = new MCQ_Answers()
        {
            question_ID = question_id,
            respondent_id = 1
        };
        db.MCQ_Answers.AddObject(mcq_answer);
    }
    db.SaveChanges();
    return new EmptyResult();
}
If a question has 5 answer possibilities and I click on them randomly and fast, and then go back to the previous page, the correct answer won't be saved when I return. Should I use threading to make sure the correct answer is saved? And how?
Thanks
Rather than saving your answer with a POST every time, you can just create a JSON object and save the answers within that JSON. You can then, at the end, post all completed answers in one go.
Take a look at this: http://msdn.microsoft.com/en-us/scriptjunkie/ff962533
Basically this will allow you to store session data (JSON) on the client between page loads; you then just need an add and a delete function and away you go.
I use this to a huge extent in an application that would otherwise require the server to be updated with the location of objects on a canvas; with sessvars I just keep all the X and Y locations in there and do a final push of JSON when I am done.
If you change pages, you can then get your values from the JSON object without a server call.
As a note, you may also be better off with tabs or hidden sections of the form, and thereby reduce the need to re-populate, say, page 1, page 2, etc., as they will already be there, just hidden!
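As an illustration, a minimal sketch of collecting the answers client-side and posting them once at the end (the #finishSurvey button and the SaveAllAnswers action are hypothetical names; any action that accepts the serialized answers would do):

var answers = {}; // question_id -> mcq_id, built up on the client

$(document).ready(function () {
  $('.MCQRadio').click(function () {
    // Just record the selection locally; no server call yet.
    answers[$(this).attr('question-id')] = $(this).attr('mcq-id');
  });

  $('#finishSurvey').click(function () { // hypothetical "finish" button
    // Post everything in one go when the respondent is done.
    $.ajax({
      url: '/SaveSurveyAnswers/SaveAllAnswers', // hypothetical action method
      type: 'POST',
      contentType: 'application/json',
      data: JSON.stringify(answers)
    });
  });
});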

Resources