Loading and displaying an image on the canvas using scala.js

I just started messing with Scala.js and got stuck pretty early. I have no Scala experience, so I probably overlooked something simple. I'm trying to display an image on the canvas. I tried:
var image: HTMLImageElement = new HTMLImageElement()
image.src = "pathToSource"
image.onload = { () =>
  ctx.drawImage(image, 0, 0, 200, 200) // ctx is the canvas rendering context
}
The problem with this code is that onload doesn't seem to exist, even though I can find it here: https://github.com/scala-js/scala-js-dom/blob/master/src/main/scala/org/scalajs/dom/raw/Html.scala#L1333
I also tried a few other methods, like onloadeddata, but I can't figure out what the compiler wants from me:
var image: dom.raw.HTMLImageElement = new HTMLImageElement()
image.src = "/img/image.png"
image.onloadeddata = new js.Function1[Event, _] {
  def apply(v1: Event): Unit = {
    ctx.drawImage(image, 0, 0, 200, 200)
  }
}
Does anyone know how to load and display an image with Scala.js?
Thanks in advance!

You have to create the image with dom.document.createElement("img")
and then use the onload event on the created image. Here is a working example:
val ctx = canvas.getContext("2d")
  .asInstanceOf[dom.CanvasRenderingContext2D]
val image = dom.document.createElement("img").asInstanceOf[HTMLImageElement]
image.src = "/img/image.gif"
image.onload = (e: dom.Event) => {
  ctx.drawImage(image, 0, 0, 525, 600, 0, 0, 525, 600)
}

Which version of org.scalajs.dom are you using? It looks like onload was added quite recently, in version 0.8.2, so bumping the scalajs-dom dependency in your build.sbt to 0.8.2 or later should make it available. That might be the source of your confusion. (I'm on version 0.8.0, and get the same error.)
For reference, the idiomatic Scala syntax for something like this would usually be:
image.onloadeddata = { evt: Event =>
  ctx.drawImage(image, 0, 0, 200, 200) // ctx is the canvas rendering context
}
or possibly (in some cases, if there is ambiguity):
image.onloadeddata = { evt: Event =>
  ctx.drawImage(image, 0, 0, 200, 200) // ctx is the canvas rendering context
}: js.Function1[Event, _]

You're missing an = operator right after .onload.

Related

Updating Texture2D frequently causes process to crash (UpdateSubresource)

I am using SharpDX to render the browser (Chromium) output buffer in a DirectX process.
The process is relatively simple: I intercept the CEF buffer (by overriding the OnPaint method) and write it to a Texture2D. The relevant code follows.
Texture creation:
public void BuildTextureWrap() {
    var oldTexture = texture;
    texture = new D3D11.Texture2D(DxHandler.Device, new D3D11.Texture2DDescription() {
        Width = overlay.Size.Width,
        Height = overlay.Size.Height,
        MipLevels = 1,
        ArraySize = 1,
        Format = DXGI.Format.B8G8R8A8_UNorm,
        SampleDescription = new DXGI.SampleDescription(1, 0),
        Usage = D3D11.ResourceUsage.Default,
        BindFlags = D3D11.BindFlags.ShaderResource,
        CpuAccessFlags = D3D11.CpuAccessFlags.None,
        OptionFlags = D3D11.ResourceOptionFlags.None,
    });
    var view = new D3D11.ShaderResourceView(
        DxHandler.Device,
        texture,
        new D3D11.ShaderResourceViewDescription {
            Format = texture.Description.Format,
            Dimension = D3D.ShaderResourceViewDimension.Texture2D,
            Texture2D = { MipLevels = texture.Description.MipLevels },
        }
    );
    textureWrap = new D3DTextureWrap(view, texture.Description.Width, texture.Description.Height);
    if (oldTexture != null) {
        obsoleteTextures.Add(oldTexture);
    }
}
That code runs at startup and whenever a resize happens.
Now, when CEF's OnDraw happens, I copy its buffer into the texture:
var destinationRegion = new D3D11.ResourceRegion {
    Top = Math.Min(r.dirtyRect.y, texDesc.Height),
    Bottom = Math.Min(r.dirtyRect.y + r.dirtyRect.height, texDesc.Height),
    Left = Math.Min(r.dirtyRect.x, texDesc.Width),
    Right = Math.Min(r.dirtyRect.x + r.dirtyRect.width, texDesc.Width),
    Front = 0,
    Back = 1,
};

// Draw to the target
var context = targetTexture.Device.ImmediateContext;
context.UpdateSubresource(targetTexture, 0, destinationRegion, sourceRegionPtr, rowPitch, depthPitch);
There is some more code, but this is the only relevant piece. The whole thing works until OnDraw starts happening frequently: if I force CEF to paint frequently, the whole host process dies, and the crash points at UpdateSubresource.
So my question is: is there another, safer way to update the texture frequently?
The solution to this problem was relatively simple, yet not so obvious at the beginning.
I moved the code responsible for updating the texture into the render loop and just kept the internal buffer pointer cached. That way UpdateSubresource is only ever called from the render thread; the D3D11 immediate context is not thread-safe, so updating the texture from CEF's paint callback while the render thread is also using the context can bring down the whole process.

Embed logo in antmedia live stream via canvas

I am following the blog at https://antmedia.io/how-to-merge-live-stream-and-canvas-in-webrtc-easily/ which explains how to embed a logo in an Ant Media live stream. However, I couldn't quite figure out how to initialise a localStream with the JavaScript SDK as illustrated in the blog. Specifically, where is the implementation of initWebRTCAdaptor()?
//initialize the webRTCAdaptor with the localStream created.
//initWebRTCAdaptor method is implemented below
initWebRTCAdaptor(localStream);
A complete working sample would be very helpful.
It seems that blog post is not fully up to date, so let me share what to do to get this feature.
First, add a localStream parameter to the WebRTCAdaptor constructor (a sketch of that call follows the code below).
Secondly, use the code below in place of initWebRTCAdaptor.
For the full code, please take a look at this gist:
https://gist.github.com/mekya/d7d21f78e7ecb2c34d89bd6ec5bf5799
Make sure that you use your own image in image.src (use local images).
var canvas = document.getElementById('canvas');
var vid = document.getElementById('localVideo');
var image = new Image();
image.src = "images/play.png";

function draw() {
    if (canvas.getContext) {
        var ctx = canvas.getContext('2d');
        ctx.drawImage(vid, 0, 0, 200, 150);
        ctx.drawImage(image, 50, 10, 100, 30);
    }
}
setInterval(function() { draw(); }, 50);

// capture stream from canvas
var localStream = canvas.captureStream(25);

navigator.mediaDevices.getUserMedia({video: true, audio: true}).then(function (stream) {
    var video = document.querySelector('video#localVideo');
    video.srcObject = stream;
    video.onloadedmetadata = function(e) {
        video.play();
    };
    // add the camera's audio track to the canvas stream, then start the adaptor
    localStream.addTrack(stream.getAudioTracks()[0]);
    initWebRTCAdaptor(false, autoRepublishEnabled);
});
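For reference, the constructor call that initWebRTCAdaptor wraps might look roughly like the sketch below. Only the localStream option comes from this answer; the other option names (websocket_url, mediaConstraints, localVideoId, callback, callbackError) are the ones used in typical Ant Media samples and may differ between SDK versions, so treat the gist above as authoritative:
var webRTCAdaptor = new WebRTCAdaptor({
    websocket_url: "wss://your-server:5443/WebRTCAppEE/websocket", // assumption: your own server URL
    mediaConstraints: {video: true, audio: true},
    localVideoId: "localVideo",
    localStream: localStream, // the canvas.captureStream(25) stream created above
    callback: function(info, obj) { console.log(info); },
    callbackError: function(error, message) { console.error(error, message); }
});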

Nokia Imaging SDK customize BlendFilter

I have created this code:
Uri _blendImageUri = new Uri(@"Assets/1.png", UriKind.Relative);
var _blendImageProvider = new StreamImageSource((System.Windows.Application.GetResourceStream(_blendImageUri).Stream));
var bf = new BlendFilter(_blendImageProvider);
The filter works nicely. But I want to change the image size for the ForegroundSource property. How can I load an image at my chosen size?
If I understood you correctly, you are trying to blend the ForegroundSource with only a part of the original image? That is called local blending, and it is currently not supported on the BlendFilter itself.
You can however use a ReframingFilter to reframe the ForegroundSource and then blend it. Your chain will look something like this:
using (var mainImage = new StreamImageSource(...))
using (var filterEffect = new FilterEffect(mainImage))
{
    using (var secondaryImage = new StreamImageSource(...))
    using (var secondaryFilterEffect = new FilterEffect(secondaryImage))
    using (var reframing = new ReframingFilter(new Rect(0, 0, 500, 500), 0)) // reframe your image, thus "setting" the location and size of the content when blending
    {
        secondaryFilterEffect.Filters = new[] { reframing };
        using (var blendFilter = new BlendFilter(secondaryFilterEffect))
        using (var renderer = new JpegRenderer(filterEffect))
        {
            filterEffect.Filters = new[] { blendFilter };
            await renderer.RenderAsync();
        }
    }
}
As you can see, you can use the reframing filter to position the content of your ForegroundSource so that it will only blend locally. Note that when reframing you can set the borders outside of the image location (for example new Rect(-100, -100, 500, 500)), and the areas outside of the image will appear as transparent black areas, which is exactly what you need in BlendFilter.

How do I convert blob to imagedata?

I want to stream an image to a webpage via WebSocket. The data is in RGBA. How do I change the blob into image data?
This is my current code; it doesn't work, and it will be slow. Is there a direct way of assigning event.data to the canvas's image data?
void onMessage(MessageEvent event)
{
  print("received!");
  var imgData = canvas.getImageData(0, 0, 100, 100);
  var j = 0;
  for (var i = 0; i < imgData.data.length; i += 4)
  {
    imgData.data[i + 0] = event.data[j];
    imgData.data[i + 1] = event.data[j + 1];
    imgData.data[i + 2] = event.data[j + 2];
    imgData.data[i + 3] = 255;
    j += 3;
  }
  canvas.putImageData(imgData, 0, 0);
}
On Firefox you can use the toBlob method: put the image data on a temporary canvas and call toBlob on it. Proof-of-concept example:
var canvas = document.createElement('canvas');
canvas.width = imageData.width;
canvas.height = imageData.height;
canvas.getContext('2d').putImageData(imageData, 0, 0);
canvas.toBlob(function(blob) {
    // blob is your file
}, 'image/png', 1);
For more, have a look at the MDN docs:
https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/toBlob
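For the direction you originally asked about (raw RGBA bytes from the WebSocket straight onto the canvas), here is a minimal sketch. It assumes the socket is switched to binary ArrayBuffer mode, each message is exactly width * height * 4 RGBA bytes, and ctx is the canvas's 2d context; the socket variable and the 100x100 frame size are placeholders:
socket.binaryType = 'arraybuffer';
socket.onmessage = function(event) {
    // Wrap the received bytes directly; no per-pixel copy loop is needed.
    var pixels = new Uint8ClampedArray(event.data);
    var imgData = new ImageData(pixels, 100, 100); // assumed 100x100 RGBA frame
    ctx.putImageData(imgData, 0, 0);
};
Note that the ImageData constructor throws if the byte length does not match width * height * 4, so validate the frame size first.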

nodejs - How to add image data from file into canvas

The following code is supposed to read an image file and then add the file data into a canvas with the help of the Canvas module.
When I run this code I receive the error message Image is not defined. Is the Image object that I'm trying to initialise something from a module that I simply have to import?
var http = require('http'), fs = require('fs'),
    Canvas = require('canvas');

http.createServer(function (req, res) {
  fs.readFile(__dirname + '/image.jpg', function(err, data) {
    if (err) throw err;
    img = new Image();
    img.src = data;
    ctx.drawImage(img, 0, 0, img.width / 4, img.height / 4);
    res.write('<html><body>');
    res.write('<img src="' + canvas.toDataURL() + '" />');
    res.write('</body></html>');
    res.end();
  });
}).listen(8124, "127.0.0.1");
console.log('Server running at http://127.0.0.1:8124/');
I apologize if I'm wrong here, but it looks like you've found this code somewhere and tried to use it without actually understanding what's happening under the covers. Even if you were to fix the Image is not defined error, there are many others.
I have the fixed code at the end of this post, but I'd recommend thinking more deeply about these issues in the code from your question:
What is Image? Where does it come from? You've imported http, fs, and Canvas, so those things are obviously defined. However, Image has not been defined anywhere, and it is not a built-in.
As it turns out, Image is from the node-canvas module, which you've imported with Canvas = require('canvas'). This means that Image is available as Canvas.Image.
It's important to understand that this is because of the imports you've set up. You could just as easily have done abc = require('canvas'), and then Image would be available as abc.Image.
What is ctx? Where is that coming from?
Again, this is another variable that just hasn't been defined anywhere. Unlike Image, it isn't available as Canvas.ctx. It's just a random variable name that doesn't correspond to anything at this point, so trying to call drawImage on it is going to throw an exception.
What about canvas (lowercase)? What is that?
You are using canvas.toDataURL, but there is no variable called canvas anywhere. What are you expecting this piece of code to do? Right now it's just going to throw an exception saying that canvas is undefined.
I'd recommend reading documentation more closely and looking more closely at any example code you copy into your own applications in the future.
Here is the fixed code, with some comments to explain my changes. I figured this out by taking a quick look at the documentation at https://github.com/learnboost/node-canvas.
var http = require('http'), fs = require('fs'),
    Canvas = require('canvas');

http.createServer(function (req, res) {
  fs.readFile(__dirname + '/image.jpg', function(err, data) {
    if (err) throw err;
    var img = new Canvas.Image; // Create a new Image
    img.src = data;
    // Initialize a new Canvas with the same dimensions
    // as the image, and get a 2D drawing context for it.
    var canvas = new Canvas(img.width, img.height);
    var ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0, img.width / 4, img.height / 4);
    res.write('<html><body>');
    res.write('<img src="' + canvas.toDataURL() + '" />');
    res.write('</body></html>');
    res.end();
  });
}).listen(8124, "127.0.0.1");
console.log('Server running at http://127.0.0.1:8124/');
node-canvas now has a helper function, loadImage, that returns a Promise resolving to a loaded Image object. This saves having to mess around with onload handlers like in the accepted answer. (In node-canvas 2.x, createCanvas and loadImage are both exported from the module.)
const http = require('http');
const fs = require('fs');
const { createCanvas, loadImage } = require('canvas');

http.createServer(function (req, res) {
  fs.readFile(__dirname + '/image.jpg', async function(err, data) {
    if (err) throw err;
    const img = await loadImage(data);
    const canvas = createCanvas(img.width, img.height);
    const ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0, img.width / 4, img.height / 4);
    res.write('<html><body>');
    res.write('<img src="' + canvas.toDataURL() + '" />');
    res.write('</body></html>');
    res.end();
  });
}).listen(8124, "127.0.0.1");
console.log('Server running at http://127.0.0.1:8124/');
