How to mix / combine multiple WebRTC media streams (screen capture + webcam) into a single stream? - html5-canvas

I have a live screen capture media stream returned from getDisplayMedia(),
and a live webcam media stream returned from getUserMedia().
I currently render the webcam video on top of the screen share video, to create a picture-in-picture effect.
I want to mix / combine them into one video stream, in order to render it inside a single HTML video element.
I want to keep both streams active & live just as if they were two separate videos rendering two different media streams.
I also need to maintain the streams' positions, keeping the webcam stream small and on top of the screen share stream. It's also really important for me to keep the original resolution and bitrate.
How can I do that?

You can draw both video streams on an HTMLCanvasElement, and then create a MediaStream from this HTMLCanvasElement calling its captureStream method.
To draw the two video streams, you'd have to load them into <video> elements and drawImage() those <video> elements onto the canvas in a timed loop (e.g. requestAnimationFrame can be used for this timed loop).
async function getOverlayedVideoStreams( stream1, stream2 ) {
  // prepare both players
  const vid1 = document.createElement("video");
  const vid2 = document.createElement("video");
  vid1.muted = vid2.muted = true;
  vid1.srcObject = stream1;
  vid2.srcObject = stream2;
  await Promise.all( [
    vid1.play(),
    vid2.play()
  ] );
  // create the renderer
  const canvas = document.createElement("canvas");
  let w = canvas.width = vid1.videoWidth;
  let h = canvas.height = vid1.videoHeight;
  const ctx = canvas.getContext("2d");
  // MediaStreams can change size while streaming, so we need to handle it
  vid1.onresize = (evt) => {
    w = canvas.width = vid1.videoWidth;
    h = canvas.height = vid1.videoHeight;
  };
  // start the animation loop
  anim();
  return canvas.captureStream();

  function anim() {
    // draw the background (screen share) video at full size
    ctx.drawImage( vid1, 0, 0 );
    // calculate size and position of the small corner video (you may change it as you like)
    const cam_w = vid2.videoWidth;
    const cam_h = vid2.videoHeight;
    const cam_ratio = cam_w / cam_h;
    const out_h = h / 3;
    const out_w = out_h * cam_ratio;
    ctx.drawImage( vid2, w - out_w, h - out_h, out_w, out_h );
    // do the same thing again at next screen paint
    requestAnimationFrame( anim );
  }
}
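A usage sketch (assuming getDisplayMedia/getUserMedia permissions are granted and that there is a <video> element on the page to show the result in):

(async () => {
  // grab both live streams, mix them, and render the result in a single <video>
  const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const camStream = await navigator.mediaDevices.getUserMedia({ video: true });
  const mixedStream = await getOverlayedVideoStreams(screenStream, camStream);
  const video = document.querySelector("video");
  video.srcObject = mixedStream;
  await video.play();
})();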
Live demo as a Glitch, since StackSnippets won't allow capture APIs.

I tried rendering on canvas but the refresh rate drops whenever I change/minimize my web-app tab. I was able to fix that using audioTimerLoop.
But it still didn't work on Firefox.
As I was limited to Chrome now, I used the PictureInPicture API to display the userMedia as a PiP and then just recorded the screenMedia.
This lets the user adjust their cam video coordinates, which were fixed in the canvas method.
P.S. To get PiP mode, I displayed the userMedia on screen, set its opacity to 0%, and created a button to toggle it.
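For reference, a rough sketch of that approach (the webcamStream / screenStream variables and the toggle button are placeholders, and requestPictureInPicture() is not available in Firefox):

async function startPipRecording(webcamStream, screenStream, toggleButton) {
  // hidden (opacity 0) video element playing the webcam; PiP needs a playing <video>
  const camVideo = document.createElement("video");
  camVideo.muted = true;
  camVideo.style.opacity = "0";
  document.body.appendChild(camVideo);
  camVideo.srcObject = webcamStream;
  await camVideo.play();

  // the button toggles the floating PiP window (must be triggered by a user gesture)
  toggleButton.onclick = async () => {
    if (document.pictureInPictureElement) {
      await document.exitPictureInPicture();
    } else {
      await camVideo.requestPictureInPicture();
    }
  };

  // record only the screen share; the webcam stays in the movable PiP window
  const recorder = new MediaRecorder(screenStream);
  recorder.start();
  return recorder;
}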

Related

How to achieve high quality cropped images from canvas?

I am desperately searching for a good cropping tool. There are a bunch out there, for example:
Croppic
Cropit
Jcrop
The most important thing I am trying to find is a cropping tool that crops images without making the cropped image low in resolution. You can work around this with the canvas tag by only resizing the representation: the image itself stays at its native size, only its on-screen representation is smaller.
DarkroomJS was also something near the solution, but, unfortunately, the downloaded demo did not work. I'll try to figure out what's wrong. Does someone know some great alternatives, or how to get the cropped images in... let's say "native" resolution?
Thanks in advance!
You are relying on the cropping tool to provide an interface for the users. The problem is that the image returned is sized to the interface and not the original image. Rather than sifting through the various APIs to see if they provide some way of controlling this behaviour (I assume at least some of them would), and because it is such a simple procedure, I will show how to crop the image manually.
To use JCrop as an example
Jcrop provides various events for cropstart, cropmove, cropend... You can add a listener for these events and keep a copy of the current cropping interface state:
var currentCrop;
jQuery('#target').on('cropstart cropmove cropend', function(e, s, crop){
    currentCrop = crop;
});
I don't know where you have set the interface size, and I am assuming the events return the crop details at the interface scale:
var interfaceSize = { // you will have to work this out
    w : ?,
    h : ?
};
Your original image
var myImage = new Image(); // Assume you know how to load
So when the crop button is clicked you can create the new image by scaling the crop details back to the original image size, creating a canvas at the cropped size, drawing the image so that the cropped area is correctly positioned, and returning the canvas as is or as a new image.
// image = image to crop
// crop = the current cropping region
// interfaceSize = the size of the full image in the interface
// returns a new cropped image at full res
function myCrop(image, crop, interfaceSize){
    var scaleX = image.width / interfaceSize.w; // get x scale
    var scaleY = image.height / interfaceSize.h; // get y scale
    // get full res crop region, rounding to pixels
    var x = Math.round(crop.x * scaleX);
    var y = Math.round(crop.y * scaleY);
    var w = Math.round(crop.w * scaleX);
    var h = Math.round(crop.h * scaleY);
    // Assume crop will never pad
    // create a drawable image
    var croppedImage = document.createElement("canvas");
    croppedImage.width = w;
    croppedImage.height = h;
    var ctx = croppedImage.getContext("2d");
    // draw the image offset so that it is correctly cropped
    ctx.drawImage(image, -x, -y);
    return croppedImage;
}
You then only need to call this function when the crop button is clicked
var croppedImage;
myButtonElement.onclick = function(){
    if(currentCrop !== undefined){ // ensure that there is a selected crop
        croppedImage = myCrop(myImage, currentCrop, interfaceSize);
    }
};
You can then convert the image to a dataURL for download or upload via
imageData = croppedImage.toDataURL(mimeType,quality) // quality is optional and only for "image/jpeg" images
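For example (a minimal sketch; the "/upload" endpoint and file name are placeholders):

var imageData = croppedImage.toDataURL("image/jpeg", 0.92);

// download: point a temporary <a> at the data URL and click it
var link = document.createElement("a");
link.href = imageData;
link.download = "cropped.jpg";
link.click();

// upload: send the data URL string to your own server endpoint
jQuery.post("/upload", { image: imageData });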

How to improve display quality in pdf.js

I'm using the open source library for PDF documents from Mozilla (pdf.js).
When I try to open PDF documents of poor quality, the viewer displays them with VERY BAD quality.
But if I open the same document in a reader, or in the browser (drag/drop into a new window), it displays well.
Is it possible to change this?
Here is the library on GitHub: mozilla pdf.js
You just have to change the scaling of your PDF, i.e. when rendering a page:
pdfDoc.getPage(num).then(function(page) {
    var viewport = page.getViewport(scale);
    canvas.height = viewport.height;
    canvas.width = viewport.width;
    ...
It is the scale value you have to change. The resulting rendered image will then fit into the canvas given its dimensions, e.g. set in CSS. What this means is that you produce a bigger image, fit it into the container you had before, and so effectively improve the resolution.
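Putting it together, a minimal sketch (assuming the same classic pdf.js API as above, a loaded pdfDoc, and a <canvas id="the-canvas"> on the page):

var scale = 2; // render at 2x resolution
pdfDoc.getPage(num).then(function(page) {
    var viewport = page.getViewport(scale);
    var canvas = document.getElementById('the-canvas');
    var ctx = canvas.getContext('2d');
    // the canvas backing store gets the large, high-resolution size...
    canvas.width = viewport.width;
    canvas.height = viewport.height;
    // ...while CSS keeps the displayed size the same, so the page just looks sharper
    canvas.style.width = (viewport.width / scale) + 'px';
    canvas.style.height = (viewport.height / scale) + 'px';
    page.render({ canvasContext: ctx, viewport: viewport });
});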
There is a renderPage function in web/viewer.js, and the print resolution is hard-coded in there as 150 DPI.
function renderPage(activeServiceOnEntry, pdfDocument, pageNumber, size) {
    var scratchCanvas = activeService.scratchCanvas;
    var PRINT_RESOLUTION = 150;
    var PRINT_UNITS = PRINT_RESOLUTION / 72.0;
To change print resolution to 300 DPI, simply change the line below.
var PRINT_RESOLUTION = 300;
See How to increase print quality of PDF file with PDF.js viewer for more details.
Maybe it's an issue related to pixel ratio; it used to happen to me when the device pixel ratio is bigger than 1 (for example on iPhone, iPad, etc.). You can read this question for a better explanation.
Just try that file in the PDF.js Viewer. If it works as expected, you should check how PDF.js deals with pixel ratio > 1 here. What the library basically does is:
canvas.width = viewport.width * window.devicePixelRatio;
canvas.style.width = viewport.width + 'px'; // Note: the px unit is required here
But you should check how PDF.js does it for better performance.
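A sketch of that idea applied to a page render (page, canvas and desiredScale are assumed to be the names you already use in your own render code):

var ratio = window.devicePixelRatio || 1;
// scale the viewport up by the device pixel ratio for the canvas backing store...
var viewport = page.getViewport(desiredScale * ratio);
canvas.width = viewport.width;
canvas.height = viewport.height;
// ...and keep the CSS size at the logical dimensions
canvas.style.width = (viewport.width / ratio) + 'px';
canvas.style.height = (viewport.height / ratio) + 'px';
page.render({ canvasContext: canvas.getContext('2d'), viewport: viewport });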
I ran into the same issue and I used the intent option of the renderContext to fix that.
const renderContext = {
    intent: 'print',
    // ....
};
var renderTask = page.render(renderContext);
As per the docs, the renderContext accepts an intent which supports three values - display, print or any. The default is display. When I used print instead, the render quality was extremely good, on par with any desktop app.

How to tell if an image is 90% black?

I have never done image processing before.
I now need to go through many jpeg images from a camera to discard those very dark (almost black) images.
Are there free libraries (.NET) that I can use? Thanks.
AForge is a great image processing library, specifically the AForge.Imaging assembly.
You could try to apply a threshold filter, and use an area or blob operator and do your comparisons from there.
I needed to do the same thing. I came up with this solution to flag mostly black images. It works like a charm. You could enhance it to delete or move the file.
// set limit
const double limit = 90;

foreach (var img in Directory.EnumerateFiles(@"E:\", "*.jpg", SearchOption.AllDirectories))
{
    // load image
    var sourceImage = (Bitmap)Image.FromFile(img);
    // clone the image so the AForge filters can work on it
    var filteredImage = AForge.Imaging.Image.Clone(sourceImage);
    // free source image
    sourceImage.Dispose();
    // get grayscale image
    filteredImage = Grayscale.CommonAlgorithms.RMY.Apply(filteredImage);
    // apply threshold filter
    new Threshold().ApplyInPlace(filteredImage);
    // gather statistics
    var stat = new ImageStatistics(filteredImage);
    var percentBlack = (1 - stat.PixelsCountWithoutBlack / (double)stat.PixelsCount) * 100;
    if (percentBlack >= limit)
        Console.WriteLine(img + " (" + Math.Round(percentBlack, 2) + "% Black)");
    filteredImage.Dispose();
}
Console.WriteLine("Done.");
Console.ReadLine();

Draw Element's Contents onto a Canvas Element / Capture Website as image using (?) language

I asked a question on SO about compiling an image file from HTML. Michaël Witrant responded and told me about the canvas element and HTML5.
I've looked on the net and on SO, but I haven't found anything about drawing an arbitrary element's contents onto a canvas. Is this possible?
For example, say I have a div with a background image. Is there a way to get this element and its background image 'onto' the canvas? I ask because I found a script that allows one to save the canvas element as a PNG, but what I really want to do is save a collection of DOM elements as an image.
EDIT
It doesn't matter what language; if it could work, I'm willing to attempt it.
For the record, drawWindow only works in Firefox.
This code will only work locally and not on the internet; using drawWindow with an external element creates a security exception.
You'll have to provide us with a lot more context before we can answer anything else.
http://cutycapt.sourceforge.net/
CutyCapt is a command line utility that uses Webkit to render HTML into PNG, PDF, SVG, etc. You would need to interface with it somehow (such as a shell_exec in PHP), but it is pretty robust. Sites render exactly as they do in Webkit browsers.
I've not used CutyCapt specifically, but it came to me highly recommended. And I have used a similar product called WkHtmlToPdf, which has been awesome in my personal experience.
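For example, driving it from Node.js rather than PHP would look roughly like this (a sketch; the --url/--out flags are the ones shown on the CutyCapt project page, and the binary is assumed to be on your PATH):

const { execFile } = require("child_process");

// render a page to PNG with CutyCapt and report the result
execFile("CutyCapt", ["--url=http://www.example.org", "--out=example.png"], (err) => {
    if (err) {
        console.error("CutyCapt failed:", err);
        return;
    }
    console.log("Saved example.png");
});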
After many attempts with drawWindow parameters that kept drawing the wrong part of the element, I managed to do it with a two-step process: first capture the whole page into a canvas, then draw a part of this canvas into another one.
This was done in a XUL extension. drawWindow will not work in other browsers, and may not work in a non-privileged context for security reasons.
function nodeScreenshot(aSaveLocation, aFileName, aDocument, aCSSSelector) {
    var doc = aDocument;
    var win = doc.defaultView;
    var body = doc.body;
    var html = doc.documentElement;
    var selection = aCSSSelector
        ? Array.prototype.slice.call(doc.querySelectorAll(aCSSSelector))
        : [];
    var coords = {
        top: 0,
        left: 0,
        width: Math.max(body.scrollWidth, body.offsetWidth,
            html.clientWidth, html.scrollWidth, html.offsetWidth),
        height: Math.max(body.scrollHeight, body.offsetHeight,
            html.clientHeight, html.scrollHeight, html.offsetHeight)
    };
    var canvas = document.createElement("canvas");
    canvas.width = coords.width;
    canvas.height = coords.height;
    var context = canvas.getContext("2d");
    // Draw the whole page
    // coords.top and left are 0 here, I tried to pass the result of
    // getBoundingClientRect() here but drawWindow was drawing another part,
    // maybe because of a margin/padding/position ? Didn't solve it.
    context.drawWindow(win, coords.top, coords.left,
        coords.width, coords.height, 'rgb(255,255,255)');
    if (selection.length) {
        var nodeCoords = selection[0].getBoundingClientRect();
        var tempCanvas = document.createElement("canvas");
        var tempContext = tempCanvas.getContext("2d");
        tempCanvas.width = nodeCoords.width;
        tempCanvas.height = nodeCoords.height;
        // Draw the node part from the whole page canvas into another canvas
        // void ctx.drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight)
        tempContext.drawImage(canvas,
            nodeCoords.left, nodeCoords.top, nodeCoords.width, nodeCoords.height,
            0, 0, nodeCoords.width, nodeCoords.height);
        canvas = tempCanvas;
        context = tempContext;
    }
    var dataURL = canvas.toDataURL('image/jpeg', 0.95);
    return dataURL;
}

Resizing and saving an image in WinMobile and .NET CF throws OutOfMemoryException

I have a WinMobile app which allows the user to snap a photo with the camera and then use it for various things. The photo can be snapped at 1600x1200, 800x600 or 640x480, but it must always be resized to 400px on the longest side (the other is proportional of course). Here's the code:
private void LoadImage(string path)
{
    Image tmpPhoto = new Bitmap(path);

    // calculate new bitmap size...
    double width = ...
    double height = ...

    // draw new bitmap
    Image photo = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    using (Graphics g = Graphics.FromImage(photo))
    {
        g.FillRectangle(new SolidBrush(Color.White), new Rectangle(0, 0, photo.Width, photo.Height));
        int srcX = (int)((double)(tmpPhoto.Width - width) / 2d);
        int srcY = (int)((double)(tmpPhoto.Height - height) / 2d);
        g.DrawImage(tmpPhoto, new Rectangle(0, 0, photo.Width, photo.Height), new Rectangle(srcX, srcY, photo.Width, photo.Height), GraphicsUnit.Pixel);
    }
    tmpPhoto.Dispose();

    // save new image and dispose
    photo.Save(Path.Combine(config.TempPath, config.TempPhotoFileName), System.Drawing.Imaging.ImageFormat.Jpeg);
    photo.Dispose();
}
Now the problem is that the app breaks in the photo.Save call with an OutOfMemoryException. And I don't know why, since I dispose of tmpPhoto (the original photo from the camera) as soon as I can, and I also dispose of the Graphics object. Why does this happen? It seems impossible to me that one can't take a photo with the camera and resize/save it without making it crash :( Should I resort to C++ for such a simple thing?
Thanks.
Have you looked at memory usage at each step to see exactly where you're using the most? You omitted your calculations for width and height, but assuming they are right you would end up with photo requiring 400 x 300 x 3 bytes (24 bits per pixel) == 360 KB for the bitmap data itself, which is not inordinately large.
My guess is that even though you're calling Dispose, the resources aren't getting released, especially if you're calling this method multiple times. The CF behaves in an unexpected way with Bitmaps. I call it a bug. The CF team doesn't.
