WebRTC: Renegotiation in Firefox

According to this article, renegotiation is implemented in Firefox 38, and we can add and remove streams from the same peer connection without creating new ones. But I am unable to find any working demos to support that claim, and when I tried it (two users chatting in video mode, with me switching one of their streams to audio), I get the error:
NotSupportedError: removeStream not yet implemented
This source says the same, but this one says renegotiation events are supported. Isn't removeStream a key part of renegotiation, though? I am using Firefox 39 on Windows 7. I am confused: renegotiation is not yet supported in Firefox, right?

Renegotiation is supported in Firefox.
Firefox just never implemented removeStream because the spec had changed to addTrack and removeTrack by the time renegotiation was implemented (some are suggesting that its removal was too hasty, so it may come back). addStream still works for backwards compatibility, because Firefox already supported it.
Note that removeTrack confusingly takes the RTCRtpSender returned from addTrack so the API is not a drop-in.
A polyfill would look something like this:
mozRTCPeerConnection.prototype.removeStream = function(stream) {
  this.getSenders().forEach(sender =>
    stream.getTracks().includes(sender.track) && this.removeTrack(sender));
};
The move to tracks was done to give users more flexibility, since tracks may belong to multiple streams, and not all tracks in a stream need be sent over a PeerConnection (or the same PeerConnection).
See this answer to a different question for a renegotiation example that works in Firefox.
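For orientation, here is a minimal renegotiation sketch (not the linked answer verbatim; it assumes pc is an existing RTCPeerConnection, sendSignal is your own signaling function, and newTrack/existingStream are tracks you obtained elsewhere). It hinges on the negotiationneeded event:

pc.onnegotiationneeded = function() {
  pc.createOffer()
    .then(offer => pc.setLocalDescription(offer))
    .then(() => {
      // ship the fresh offer to the remote peer over your signaling channel
      sendSignal({ sdp: pc.localDescription });
    })
    .catch(e => console.error(e));
};

// Adding (or removing) a track later fires negotiationneeded again,
// so the same handler covers renegotiation:
pc.addTrack(newTrack, existingStream);

The remote side applies the new offer with setRemoteDescription, answers, and the session is renegotiated without tearing down the connection.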

Try using replaceTrack for individual tracks instead of replacing the entire stream. This example assumes you have a peer connection pc1 and a new stream newStream whose tracks you want to swap in. Get the senders and replace each sender's track with the matching track from the new stream. Working sample here.
Promise.all(pc1.getSenders().map(sender =>
  sender.replaceTrack((sender.track.kind == "audio") ?
    newStream.getAudioTracks()[0] :
    newStream.getVideoTracks()[0])))
  .then(() => log("Flip!"))
  .catch(failed);
Also note this, from your first link:
function screenShare() {
  let screenConstraints = {video: {mediaSource: "screen"}};

  navigator.mediaDevices.getUserMedia(screenConstraints)
    .then(stream => {
      stream.getTracks().forEach(track => {
        screenStream = stream;
        screenSenders.push(pc1.addTrack(track, stream));
      });
    });
}
Notice that this example calls pc1.addTrack, not pc1.addStream.
And the same in reverse for removing, using pc1.removeTrack:
function stopScreenShare() {
  // stop the capture tracks (MediaStream.stop() is deprecated)
  screenStream.getTracks().forEach(track => track.stop());
  screenSenders.forEach(sender => {
    pc1.removeTrack(sender);
  });
}

Related

How and in what way do content scripts share content scoped variables across different web pages?

There are some key parts of the MDN content script documentation I am having trouble understanding regarding variable scope:
There is only one global scope per frame, per extension. This means that variables from one content script can directly be accessed by another content script, regardless of how the content script was loaded.
This paragraph may also be relevant (my italics) given the questions below:
... you can ask the browser to load a content script whenever the browser loads a page whose URL matches a given pattern.
I assume (testing all this has proved difficult, please bear with me) that this means:
The content script code will be run every time a new page is loaded that matches the URLs provided in manifest.json (in my case "matches": ["<all_urls>"]).
The content script code will run every time a page is refreshed/reloaded.
The content script code will not be run when a user switches between already open tabs (requires listening to window focus or tabs.onActivated events).
The variables initialized on one page via the content script share a global scope with those variables initialized by the same content script on another page.
Given the following code,
background-script.js:
let contentPort = null

async function connectToActiveTab () {
  let tab = await browser.tabs.query({ active: true, currentWindow: true })
  tab = tab[0]
  const tabName = `tab-${tab.id}`
  if (contentPort) {
    contentPort.disconnect()
  }
  contentPort = await browser.tabs.connect(tab.id, { name: tabName })
}

// listening to onCreated probably not needed so long as content script reliably sends 'connectToActiveTab' message
browser.tabs.onCreated.addListener(connectToActiveTab)
browser.runtime.onMessage.addListener(connectToActiveTab)
content-script.js:
let contentPort = null

function connectionHandler (port, info) {
  if (contentPort && contentPort.name !== port.name) {
    // if content port doesn't match port we have changed tabs/windows
    contentPort.disconnect()
  }
  if (!contentPort) {
    contentPort = port
    contentPort.onMessage.addListener(messageHandler)
  }
}

async function main () {
  // should always be true when a new page opens since content script code is run on each new page, testing has shown inconsistent results
  if (!contentPort) {
    await browser.runtime.sendMessage({ type: 'connectToActiveTab' })
  }
}

browser.runtime.onConnect.addListener(connectionHandler)
main()
And from assumptions 1 and 4, I also assume:
The existing contentPort (if defined) will fire a disconnect event (which I could handle in the background script) and be replaced by a new connection to the currently active tab each time a new tab is opened.
The behaviour I have seen in Firefox while testing has so far been a bit erratic and I think I may be doing some things wrong. So now, here are the specific questions:
Are all of my 5 assumptions true? If not, which ones are not?
Is calling disconnect() unnecessary? That is, should I rely on Firefox to properly clean up and close existing connections once the original contentPort variable is overwritten, without manually disconnecting? (The code here would suggest otherwise.)
Are the connect() methods synchronous, thus negating the need for await and asynchronous functions given the example code?
The tabs.connect() examples don't use await, but neither the MDN runtime nor connect docs explicitly say whether the methods are synchronous; a small sketch of the pattern the MDN examples use follows below.
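For reference, the examples treat the return value of tabs.connect() as a runtime.Port used directly rather than an awaited Promise. Roughly (names here are illustrative, not from my extension):

// background script
const port = browser.tabs.connect(someTabId, { name: 'example-port' }) // returns a runtime.Port
port.onMessage.addListener(msg => console.log('got', msg))
port.postMessage({ greeting: 'hello from background' })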
Thanks for taking the time to go through these deep-dive questions regarding content script behaviour. My hope is that clear and concise answers to these could perhaps be added to the SO extension FAQ pages/knowledge base.

Tips on solving 'DevTools was disconnected from the page' and Electron Helper dies

I have a problem with Electron where the app goes blank, i.e. it becomes a white screen. If I open the dev tools, it displays the following message.
In Activity Monitor I can see the number of Electron Helper processes drop from 3 to 2 when this happens. It also seems I'm not the only person to come across this, e.g.:
Facing "Devtools was disconnected from the page. Once page is reloaded, Devtools will automatically reconnect."
Electron dying without any information, what now?
But I've yet to find an answer that helps. In scenarios where Electron crashes are there any good approaches to identifying the problem?
For context, I'm loading an SDK into Electron. Originally I was using browserify to package it, which worked fine, but I want to move to the SDK's npm release. This version seems to have introduced the problem (though the code should be the same).
A good bit of time has passed since I originally posted this question. I'll answer it myself in case my mistake can assist anyone.
I never got a "solution" to the original problem. At a much later date I switched across to the npm release of the SDK and it worked.
But before that I'd hit this issue again. Luckily, by then, I'd added a logger that also wrote console output to file. With it I noticed that a JavaScript syntax error had caused the crash, e.g. a missing closing bracket.
I suspect that's what caused my original problem. But the Chrome dev tools do the worst thing by blanking the console rather than preserving it when the tools crash.
Code I used to set up a logger:
/*global window */
const winston = require('winston');
const prettyMs = require('pretty-ms');

/**
 * Proxy the standard 'console' object and redirect it toward a logger.
 */
class Logger {
  constructor() {
    // Retain a reference to the original console
    this.originalConsole = window.console;
    this.timers = new Map([]);

    // Configure a logger
    this.logger = winston.createLogger({
      level: 'info',
      format: winston.format.combine(
        winston.format.timestamp(),
        winston.format.printf(({ level, message, timestamp }) => {
          return `${timestamp} ${level}: ${message}`;
        })
      ),
      transports: [
        new winston.transports.File(
          {
            filename: `${require('electron').remote.app.getPath('userData')}/logs/downloader.log`, // Note: require('electron').remote is undefined when I include it in the normal imports
            handleExceptions: true, // Log unhandled exceptions
            maxsize: 1048576, // 1 MB per file
            maxFiles: 10
          }
        )
      ]
    });

    const _this = this;

    // Switch out the console with a proxied version
    window.console = new Proxy(this.originalConsole, {
      // Override the console functions
      get(target, property) {
        // Leverage the identical logger functions
        if (['debug', 'info', 'warn', 'error'].includes(property)) return (...parameters) => {
          _this.logger[property](parameters);
          // Simple approach to logging to console. Initially considered
          // using a custom logger. But this is much easier to implement.
          // Downside is that the format differs but I can live with that
          _this.originalConsole[property](...parameters);
        }
        // The log function differs in logger so map it to info
        if ('log' === property) return (...parameters) => {
          _this.logger.info(parameters);
          _this.originalConsole.info(...parameters);
        }
        // Re-implement the time and timeEnd functions
        if ('time' === property) return (label) => _this.timers.set(label, window.performance.now());
        if ('timeEnd' === property) return (label) => {
          const now = window.performance.now();
          if (!_this.timers.has(label)) {
            _this.logger.warn(`console.timeEnd('${label}') called without preceding console.time('${label}')! Or console.timeEnd('${label}') has been called more than once.`)
          }
          const timeTaken = prettyMs(now - _this.timers.get(label));
          _this.timers.delete(label);
          const message = `${label} ${timeTaken}`;
          _this.logger.info(message);
          _this.originalConsole.info(message);
        }
        // Any non-overridden functions are passed to console
        return target[property];
      }
    });
  }
}

/**
 * Calling this function switches the window.console for a proxied version.
 * The proxy allows us to redirect the call to a logger.
 */
function switchConsoleToLogger() { new Logger(); } // eslint-disable-line no-unused-vars
Then in index.html I load this script first
<script src="js/logger.js"></script>
<script>switchConsoleToLogger()</script>
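With that in place, ordinary console calls in the renderer are proxied, so something like the following (purely illustrative) ends up both in DevTools and in downloader.log:

console.log("renderer started");
console.time("load-data");
// ... do some work ...
console.timeEnd("load-data"); // handled by the re-implemented time/timeEnd proxies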
I had installed Google Chrome version 79.0.3945.130 (64 bit). My app crashed every time I was in debug mode. I tried all the solutions I found on the web, but none of them helped. I downgraded through the previous versions:
78.x Crashed
77.x Crashed
75.x Not Crashed
I had to re-install version 75.0.3770.80 (64 bit), and the problem was solved. It may be a problem with newer versions of Chrome. I sent feedback to Chrome support.
My problem was that I was not loading a page such as index.html. Once I loaded one, the problem went away.
parentWindow = new BrowserWindow({
  title: 'parent'
});
parentWindow.loadURL(`file://${__dirname}/index.html`);
parentWindow.webContents.openDevTools();
The trick to debugging a crash like this is to enable logging, which is apparently disabled by default. This is done by setting the environment variable ELECTRON_ENABLE_LOGGING=1, as mentioned in this GitHub issue.
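For example, a minimal sketch of enabling it from the main process (assuming that setting the variable before any window is created behaves the same as exporting it in the shell before launching):

// main.js - set before creating any BrowserWindow
// (alternatively, launch with the variable exported in the shell)
process.env.ELECTRON_ENABLE_LOGGING = '1';

const { app, BrowserWindow } = require('electron');

app.whenReady().then(() => {
  const win = new BrowserWindow();
  win.loadFile('index.html');
});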
With that enabled, you should see the underlying Chromium error output in the console when the crash happens.
You can download Google Chrome Canary. I was facing this problem on Google Chrome where DevTools was crashing every time on the same spot. On Chrome Canary the debugger doesn't crash.
I also faced the exact same problem.
I was trying to require the sqlite3 module from the renderer side, which was causing the problem; once I removed that require it worked just fine.
const {app , BrowserWindow , ipcMain, ipcRenderer } = require('electron')
const { event } = require('jquery')
const sqlite3 = require('sqlite3').verbose(); // <<== problem
I think the best way to solve this (if your code is really small) is to remove functions and run it over and over again; eventually you can narrow it down to the core problem.
It is a tedious and not very smart way of doing it, but hey, it worked.
I encountered this issue and couldn't figure out why the DevTools were constantly disconnecting. So on a whim I launched Firefox Developer Edition and identified the cause: an undefined variable whose length property was being read.
if ( args.length > 1 ) {
$( this ).find( "option" ).each(function () {
$( $( this ).attr( "s-group" ) ).hide();
});
$( args ).show();
}
TL;DR: Firefox Developer Edition can identify these kinds of problems when Chrome's DevTools fail.
After reading the comments above, it is clear to me that there is a problem, at least in Chrome, of not showing any indication of where the fault comes from. In Firefox, the program works but with a long delay.
But, as Shane Gannon said, the origin of the problem is certainly not in the browser but in the code: in my case, I had opened a while loop without adding the corresponding increment, which made the loop infinite. As in the example below:
var a = 0;
while (a < 10) {
  ...
  a++; // this is the part I was missing
}
Once this was corrected, the problem disappeared.
I found that upgrading to
react 17.0.2
react-dom 17.0.2
react-scripts 4.0.3
fixed it. But as react-scripts start is being used to run Electron, maybe it's just react-scripts that needs updating.
Well, I nearly went crazy, but with Electron I realized the main problem: I had commented out the code that loads index.html.
// and load the index.html of the app.
mainWindow.loadFile('index.html');
Check this and make sure you have included it. Without it the page will go blank or won't load, so check your index.js to see if there's something to load your index.html file :) Feel free to mail profnird#gmail.com if you need additional help.
Downgrading from Electron 11 to Electron 8.2 worked for me in an Angular 11 + Electron + TypeORM + sqlite3 app.
This is not a solution as such, but rather a guess at the cause of the problem.
In the Angular ngOnInit lifecycle hook I had put too many for and while loops, one inside the other. After cleaning the code and making it more compact, the problem disappeared; I think it was because the processing didn't finish within some time limit. I hope someone finds this comment helpful. :)
I stumbled upon a similar problem. My approach is to comment out lines that I just added, to see if it works; if it does, the problem is in those lines of code.
for(var i = 0;i<objLen; i+3 ){
input_data.push(jsonObj[i].reading2);
input_label.push(jsonObj[i].dateTime);
}
The console works fine after I changed the code to this:
for(var i = 0;i<objLen; i=i+space ){
input_data.push(jsonObj[i].reading2);
input_label.push(jsonObj[i].dateTime);
}
Open the Chrome dev console (Ctrl + Shift + I), then press F1 (or Fn + F1), scroll down, and click on Restore defaults and reload.

How should I test if a PDU is too big in WinSNMP?

I am building an SNMP Agent for a Windows application using the Microsoft WinSNMP API. Currently everything is working for single-item get and set-request, and also for get-next to allow walking the defined tree (albeit with some caveats that are not relevant to this question).
I am now looking at multi-item get and also get-bulk.
My current procedure is to iterate through the list of requested items (the varbindlist within the PDU), treating each one individually, effectively causing an internal get. The result is added to the VBL, set into the PDU, and then sent back to the SNMP Manager, taking into account invalid requests, etc.
My question is how should I handle "too much" data (data that cannot fit into a single transport layer message)? Or more accurately, is there a way to test whether data is "too big" without actually attempting to transmit? The only way I can see in the API is to try sending, check the error, and try again.
In the case of a get-request this isn't a problem - if you can't return all of the requested data, you fail: so attempt sending, and if the error report is SNMPAPI_TL_PDU_TOO_BIG, send a default "error" PDU.
However, it is allowable for a response to bulk-get to return partial results.
The only way I can see to handle this is a tedious (?) loop of removing an item and trying again. Something similar to the following (some detail removed for brevity):
// Create an empty varbindlist
vbl = SnmpCreateVbl(session, NULL, NULL);

// Add all items to the list
SnmpSetVb(vbl, &oid, &value); // for each OID/Value pair

// Create the PDU
pdu = SnmpCreatePdu(session, SNMP_PDU_RESPONSE, ..., vbl);

bool retry;
do {
  retry = false;
  smiINT failed = SnmpSendMsg(session, ..., pdu);
  if (failed && SNMPAPI_TL_PDU_TOO_BIG == SnmpGetLastError()) {
    // too much data, delete the last vb
    SnmpDeleteVb(vbl, SnmpCountVbl(vbl));
    SnmpSetPduData(pdu, ..., vbl);
    retry = true;
  }
} while (retry);
This doesn't seem like an optimal approach - so is there another way that I've missed?
As a side-note, I know about libraries such as net-snmp, but my question is specific to the Microsoft API.
The RFC does require you to do what you pasted,
https://www.rfc-editor.org/rfc/rfc3416
Read page 16.
There does not seem to be a function exposed by WinSNMP API that can do this for you, so you have to write your own logic to handle it.

XMLHttpRequest.send(Int8Array) POST fails in Firefox only

I'm trying to post data (a chunk of a file) using an XMLHttpRequest object with an Int8Array as the data. It fails in Firefox 18, but works perfectly in IE 10 and Chrome.
Here's my JS:
//dataObj is an Int8Array with approx. 33,000 items
var oReq = new XMLHttpRequest();
oReq.open("POST", "Ajax/PostChunk");
oReq.onload = function (oEvent) {
  //
};
oReq.send(dataObj);
I use Firebug in Firefox to debug my JS and when I watch the activity under the Net tab, nothing ever shows up for this XHR call. As if it was never called.
Also, prior to this call, I call jQuery's .ajax() method for "Ajax/PostChunkSize" and that works fine in all browsers, although it doesn't use an Int8Array for its data. I can't use .ajax() for this since, as far as I know, .ajax() doesn't support Int8Array objects.
Does anyone know why Firefox doesn't even attempt to send this? Any questions, please ask.
Thanks in advance.
The ability to send a typed array (as opposed to an arraybuffer) is a recent addition to the in-flux XMLHttpRequest2 spec. It'll be supported in Firefox 20 in April or so (see https://bugzilla.mozilla.org/show_bug.cgi?id=819741 ) but in the meantime if your Int8Array covers its entire buffer, doing send(dataObj.buffer) should work...
Note that per the old spec the code above should have sent a string that looks something like "[object Int8Array]" instead of throwing; you may want to check to make sure that other browsers really are sending the array data and not that string.
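As a concrete sketch of that workaround (assuming dataObj is the Int8Array from the question): send the underlying ArrayBuffer, copying out just the viewed region if the view doesn't span the whole buffer.

var oReq = new XMLHttpRequest();
oReq.open("POST", "Ajax/PostChunk");

var payload = (dataObj.byteOffset === 0 && dataObj.byteLength === dataObj.buffer.byteLength)
  ? dataObj.buffer // the view covers the entire buffer, send it directly
  : dataObj.buffer.slice(dataObj.byteOffset, dataObj.byteOffset + dataObj.byteLength); // copy only the viewed region

oReq.send(payload);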

Adding custom code to mootools addEvent

Even though I've been using MooTools for a while now, I haven't really gotten into playing with the natives yet. Currently I'm trying to extend events by adding a custom addEvent method beside the original. I did that using the following code (copied from the MooTools core):
Native.implement([Element, Window, Document], {
  addMyEvent: function(){ /* code here */ }
});
Now the problem is that I can't seem to figure out how to properly overwrite the existing fireEvent method in a way that lets me still call the original method after executing my own logic.
I could probably get the desired results with some ugly hacks but I'd prefer learning the elegant way :)
Update: Tried a couple of ugly hacks. None of them worked. Either I don't understand closures or I'm tweaking the wrong place. I tried saving Element.fireEvent to a temporary variable (with and without using closures), which I would then call from the overwritten fireEvent function (overwritten using Native.implement, the same as above). The result is an endless loop with fireEvent calling itself over and over again.
Update 2:
I followed the execution using firebug and it lead me to Native.genericize, which seems to act as a kind of proxy for the methods of native classes. So instead of referencing the actual fireEvent method, I referenced the proxy and that caused the infinite loop. Google didn't find any useful documentation about this and I'm a little wary about poking around under the hood when I don't completely understand how it works, so any help is much appreciated.
Update 3 - Original problem solved:
As I replied to Dimitar's comment below, I managed to solve the original problem by myself. I was trying to make a method for adding events that destroy themselves after a certain number of executions. Although the original problem is solved, my question about extending natives remains.
Here's the finished code:
Native.implement([Element, Window, Document], {
  addVolatileEvent: function(type, fn, counter, internal){
    if (!counter)
      counter = 1;
    var volatileFn = function(){
      fn.run(arguments);
      counter -= 1;
      if (counter < 1)
      {
        this.removeEvent(type, volatileFn);
      }
    }
    this.addEvent(type, volatileFn, internal);
  }
});
Is the name right? That's the best I could come up with given my limited vocabulary.
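For completeness, a quick usage sketch of the finished method (assuming it is implemented exactly as above):

document.id("clicker").addVolatileEvent("click", function(){
  alert("this handler runs at most 3 times, then removes itself");
}, 3);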
document.id("clicker").addEvents({
"boobies": function() {
console.info("nipple police");
this.store("boobies", (this.retrieve("boobies")) ? this.retrieve("boobies") + 1 : 1);
if (this.retrieve("boobies") == 5)
this.removeEvents("boobies");
},
"click": function() {
// original function can callback boobies "even"
this.fireEvent("boobies");
// do usual stuff.
}
});
This adds a simple event handler that counts the number of iterations it has gone through and then self-destroys.
Think of events as simple callbacks stored under a particular key, some of which are bound to particular events that get fired.
Using element storage is always advisable if possible: it allows you to share data on the same element between different scopes without complex workarounds or global variables.
Natives should not be modified like that; just do:
Element.implement({
  newMethod: function() {
    // this being the element
    return this;
  }
});
document.id("clicker").newMethod();
unless, of course, you need to define something that applies to window or document as well.
