openseadragon get selection dataurl/blob - image

I retrieve a rect from openSeadragonSelection:
viewer:
this.viewer = OpenSeadragon(this.config);
this.selection = this.viewer.selection({
    showConfirmDenyButtons: true,
    styleConfirmDenyButtons: true,
    returnPixelCoordinates: true,
    onSelection: rect => console.log(rect)
});
this.selection.enable();
The rect delivered by onSelection:
t.SelectionRect {x: 3502, y: 2265, width: 1122, height: 887, rotation: 0, degrees: 0, …}
I have no idea how to get the canvas for this rect from my viewer instance.
this.viewer.open(new OpenSeadragon.ImageTileSource(this.getTile(this.src)));
A self-implemented imageViewer used to return the canvas of the selected area, so I could get the blob and post it to the server:
onSave(canvas){
    let source = canvas.toDataURL();
    this.setState({source: source, crop: false, angle: 0});
    save(this.dataURItoBlob(source), source.match(new RegExp("\/(.*);"))[1]);
}
dataURItoBlob(dataURI) {
    // convert the base64/URL-encoded data component to raw binary data held in a string
    var byteString;
    if (dataURI.split(',')[0].indexOf('base64') >= 0)
        byteString = atob(dataURI.split(',')[1]);
    else
        byteString = unescape(dataURI.split(',')[1]);
    // separate out the MIME component
    var mimeString = dataURI.split(',')[0].split(':')[1].split(';')[0];
    // write the bytes of the string to a typed array
    var ia = new Uint8Array(byteString.length);
    for (var i = 0; i < byteString.length; i++) {
        ia[i] = byteString.charCodeAt(i);
    }
    return new Blob([ia], {type: mimeString});
}
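As an aside, if a canvas reference is at hand anyway, canvas.toBlob() produces the Blob asynchronously and skips the data-URL round trip entirely. A minimal sketch, reusing the save() call from onSave above:
// Sketch: let the browser build the Blob directly instead of decoding a data URL.
canvas.toBlob(function (blob) {
    save(blob, blob.type); // save() as used in onSave above
}, 'image/png');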
How can I get the image of the viewer for a given rect? Rotation should be considered as well.
#iangilman:
Thanks a lot for your advice. I created another canvas, which I crop and then put back into the viewer. I wasn't sure whether something similar was already supported by your library:
const viewportRect = self.viewer.viewport.imageToViewportRectangle(rect);
const webRect = self.viewer.viewport.viewportToViewerElementRectangle(viewportRect);
const { x, y, width, height } = webRect || {};
const { canvas } = self.viewer.drawer;
let source = canvas.toDataURL();
const img = new Image();
img.onload = function () {
    let croppedCanvas = document.createElement('canvas');
    let ctx = croppedCanvas.getContext('2d');
    croppedCanvas.width = width;
    croppedCanvas.height = height;
    ctx.drawImage(img, x, y, width, height, 0, 0, width, height);
    let croppedSrc = croppedCanvas.toDataURL();
    // update viewer with the cropped image
    self.tile = self.getTile(croppedSrc);
    self.ImageTileSource = new OpenSeadragon.ImageTileSource(self.tile);
    self.viewer.open(self.ImageTileSource);
};
img.src = source;
Rotation hasn't been considered yet.
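An untested sketch of how rotation might be folded in (assuming rect.degrees from the selection plugin and the same img/x/y/width/height as above): rotate the destination context around the crop center before copying:
// Untested sketch: compensate for a rotated selection by rotating the
// destination context around the crop center before drawing.
let croppedCanvas = document.createElement('canvas');
let ctx = croppedCanvas.getContext('2d');
croppedCanvas.width = width;
croppedCanvas.height = height;
ctx.translate(width / 2, height / 2);
ctx.rotate(-rect.degrees * Math.PI / 180); // degrees comes from onSelection
ctx.drawImage(img, x, y, width, height, -width / 2, -height / 2, width, height);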

I imagine you'll need to convert the rectangle into the proper coordinates, then create a second canvas and copy the appropriate bit out of the OSD canvas into the second one.
Looks like maybe the selection rectangle is in image coordinates? The OSD canvas will be in web coordinates, or maybe double that on an HDPI display. OSD has a number of conversion functions, for instance:
var viewportRect = viewer.viewport.imageToViewportRectangle(imageRect);
var webRect = viewer.viewport.viewportToViewerElementRectangle(viewportRect);
You can find out the pixel density via OpenSeadragon.pixelDensityRatio.
Once you have the appropriate rectangle it should be easy to copy out of the one canvas into another. I'm not sure how you incorporate rotation, but it might be as simple as adding a rotation call to one of the canvas contexts.
Sorry this is kind of vague, but I hope it helps!
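To make the HDPI adjustment concrete, a sketch (assuming the webRect from the conversions above; secondCanvas is a hypothetical destination canvas):
// Sketch: scale the element-space rectangle up to canvas pixels on HDPI displays.
var ratio = OpenSeadragon.pixelDensityRatio; // 1 on standard displays, 2 on typical HDPI
var ctx2 = secondCanvas.getContext('2d');    // secondCanvas: hypothetical destination
ctx2.drawImage(viewer.drawer.canvas,
    webRect.x * ratio, webRect.y * ratio,
    webRect.width * ratio, webRect.height * ratio,
    0, 0, webRect.width, webRect.height);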

Related

Three.js / React-three-fiber | How to spread PointsMaterial evenly on a BufferGeometry (custom SphereGeometry) and compare vertices with image data

I am trying to recreate the GitHub landing page globe (shown when you're not logged in: https://github.com/) with three.js and react-three-fiber, following the example they provided (https://github.blog/2020-12-21-how-we-built-the-github-globe/). My goal is to achieve the same result, with the only difference being that I use three.js PointsMaterial and hide those points which would have the same coordinates as water on an image of the earth.
Sorry for any bad English or misspellings (English is not my native language), and this is my first Stack Overflow question too. If something is unclear or I wasn't specific enough, let me know and I'll try my best to correct it. Thanks in advance for any help!
My questions are:
How do you spread those three.js points along the different latitudes, in this case from the south pole to the north pole?
How would you compare, for example, image color/alpha values with the points and decide whether a point should be visible (land) or not (water)?
I played around with the code I found in the GitHub explanation above for a few days, but I can't figure out how to translate it so I can use it:
for (let lat = -90; lat <= 90; lat += 180 / rows) {
    const radius = Math.cos(Math.abs(lat) * DEG2RAD) * GLOBE_RADIUS; // especially this part: what would DEG2RAD and GLOBE_RADIUS mean?
    const circumference = radius * Math.PI * 2;
    const dotsForLat = circumference * dotDensity;
    for (let x = 0; x < dotsForLat; x++) {
        const long = -180 + x * 360 / dotsForLat;
        if (!this.visibilityForCoordinate(long, lat)) continue;
        // Setup and save circle matrix data
    }
}
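As far as I can tell, DEG2RAD and GLOBE_RADIUS are just constants, and a common way to map a lat/long pair to a 3D position would be something like this (my sketch, not the actual GitHub implementation):
// DEG2RAD is the degrees-to-radians factor (three.js ships it as THREE.MathUtils.DEG2RAD);
// GLOBE_RADIUS is simply the sphere radius in world units.
const DEG2RAD = Math.PI / 180;
const GLOBE_RADIUS = 1;

// Sketch: spherical coordinates -> xyz on a sphere of radius r.
function latLongToVector3(lat, long, r) {
    const phi = (90 - lat) * DEG2RAD;     // polar angle measured from the north pole
    const theta = (long + 180) * DEG2RAD; // azimuth
    return new THREE.Vector3(
        -r * Math.sin(phi) * Math.cos(theta),
        r * Math.cos(phi),
        r * Math.sin(phi) * Math.sin(theta)
    );
}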
Currently I manage to get some data from an image of our planet by creating a non-rendered canvas, creating a context for it, painting the image into it, and reading the values out with .getImageData(). I was able to create a halo by using a for loop over the vertex positions of the BufferGeometry, but I guess you'll see that in the image/code provided below.
[Image: current progress]
import { useEffect } from "react";
import * as THREE from "three";

// Had some trouble setting up loaders in Next.js, so I used three.js via react-three-fiber
const Box = () => {
    // Load the image and read its values from a non-rendered canvas
    useEffect(() => {
        const textureLoader = new THREE.TextureLoader();
        textureLoader.load("/globe/earth.png", (texture) => {
            const width = texture.image.width;
            const height = texture.image.height;
            const img = texture.image;
            const canvas = document.createElement("canvas");
            const ctx = canvas.getContext("2d");
            canvas.width = width;
            canvas.height = height;
            ctx.scale(1, -1);
            ctx.drawImage(img, 0, 0, width, height * -1);
            const imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
            console.log(imgData);
        });
    }, []);

    // Creating geometry / material
    const count = 500;
    const vertices = new Float32Array(count * 3);
    // Looping to get values for the vertices
    for (let x = 0; x < count * 3; x++) {
        const value = Math.cos(Math.abs(x));
        vertices[x] = value;
    }
    const material = new THREE.PointsMaterial({ color: "white", size: 0.005 });
    const geometry = new THREE.BufferGeometry();
    geometry.setAttribute("position", new THREE.BufferAttribute(vertices, 3));

    // Returning to render on the canvas in the index file
    return <points material={material} geometry={geometry}></points>;
};
export default Box;
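For the land/water test, my best guess at a sketch (assuming an equirectangular earth texture and the imgData captured in the effect above; the alpha threshold is a guess):
// Sketch: map long/lat onto an equirectangular image and test the alpha channel.
function visibilityForCoordinate(long, lat, imgData) {
    const u = (long + 180) / 360;   // 0..1 across the image (west to east)
    const v = 1 - (lat + 90) / 180; // 0..1 down the image (north to south)
    const x = Math.floor(u * (imgData.width - 1));
    const y = Math.floor(v * (imgData.height - 1));
    const alpha = imgData.data[(y * imgData.width + x) * 4 + 3];
    return alpha > 90;              // treat opaque pixels as land
}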

Get the color of a pixel by its xyz coordinates

I need to get the color of an image texture on a mesh at a given xyz point (mouse click + ray cast). How can I achieve this in three.js?
I know I could use gl.readPixels from plain webgl, but it's not a valid option for me.
Thanks!
So I ended up using a separate canvas, into which I load the image texture, translate the three.js coordinates into canvas coordinates, and read the pixel. Something like this:
// point is a THREE.Vector3
var getColor = function(point, callback) {
    var img = new Image();
    img.src = 'assets/img/myImage.png';
    img.onload = function() {
        // get the xy coords from the point
        var xyCoords = convertVector3ToXY(point);
        // create a canvas to manipulate the image
        var canvas = document.createElement('canvas');
        canvas.width = img.width;
        canvas.height = img.height;
        var ctx = canvas.getContext('2d');
        ctx.drawImage(img, 0, 0, img.width, img.height);
        // get the pixel data and callback
        var pixelData = ctx.getImageData(xyCoords.x, xyCoords.y, 1, 1).data;
        callback(pixelData);
    };
};
Thanks
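convertVector3ToXY is left undefined above; if the point comes from a THREE.Raycaster hit, the intersection's uv (present when the geometry has UVs) makes the conversion straightforward. A sketch:
// Sketch: derive canvas pixel coordinates from the raycast hit's UV.
// `intersection` is an entry from raycaster.intersectObject(mesh).
function uvToXY(intersection, img) {
    var x = Math.floor(intersection.uv.x * img.width);
    var y = Math.floor((1 - intersection.uv.y) * img.height); // canvas y grows downward
    return { x: x, y: y };
}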

drag-drop, resize images and then drawing features in 1 canvas

I am working on an application where the user sees a blank area (a div or canvas or whatever; let's call it mycanvas hereafter). He then drags some images from outside (a div) and drops them on mycanvas. He can also resize them, and he can draw something in mycanvas with pencils and colors, with an erasing feature. As per my research so far, the drawing part is pure HTML5 canvas stuff, so no problem there. But I'm not sure whether he can drop images from an outside div/canvas onto mycanvas. Please tell me how to achieve all three features (drag-drop from outside, draw with pencil, resize images) in a single area.
I have created an online drag-and-drop editor with HTML5 canvas.
First I create a loop:
var loop = function(){
    // operations go here
};
self.setInterval(loop, 1000 / 60);
Then I create the data model, for example an image:
var DndImage = function(x, y, width, height, image){
    this.type = "image";
    this.image = image;
    this.x = x;
    this.y = y;
    this.width = width;
    this.height = height;
}
Then we draw the image in the loop:
var ObjectArray = new Array();
var WIDTH = 800;
var HEIGHT = 600;
var loop = function(){
    var canvas = document.getElementById("canvas");
    var context = canvas.getContext("2d");
    context.clearRect(0, 0, WIDTH, HEIGHT);
    for(var x = 0; x < ObjectArray.length; x++){
        if(ObjectArray[x].type == "image")
            context.drawImage(ObjectArray[x].image, ObjectArray[x].x, ObjectArray[x].y, ObjectArray[x].width, ObjectArray[x].height);
    }
}
A function to add a new image object:
function addImage(src, x, y, width, height){
    var img = new Image();
    img.src = src;
    img.onload = function(){
        ObjectArray.push(new DndImage(x, y, width, height, img));
    }
}
Now, to do the drag-and-drop, what you need to do is set up listeners for the mouse events and set the DndImage object's x and y to follow the mouse position on the canvas. You can scale the image or change its size too (see the sketch after the listeners below):
document.addEventListener("mousedown", function(){ });
document.addEventListener("mouseup", function(){ });
document.addEventListener("mousemove", function(){ });
document.addEventListener("click", function(){ });
Hope I can help you :D
You can achieve all the required features using KineticJS.
To drag, drop and resize:
http://www.html5canvastutorials.com/labs/html5-canvas-drag-and-drop-resize-and-invert-images/
To paint using different shapes, say a line:
http://www.html5canvastutorials.com/kineticjs/html5-canvas-kineticjs-line-tutorial/
And dropping from outside the canvas is probably the simplest thing:
http://www.w3schools.com/html/html5_draganddrop.asp
Just check these and let me know if there is any problem with the integration.

WebGL single frame "screenshot" of webGL

I tried searching for something like this, but I've had no luck. I'm trying to open a new tab with a screenshot of the current state of my WebGL image. Basically, it's a 3D model with the ability to change which objects are displayed, the color of those objects, and the background color. Currently, I am using the following:
var screenShot = window.open(renderer.domElement.toDataURL("image/png"), 'DNA_Screen');
This line succeeds in opening a new tab with a current image of my model, but does not display the current background color. It also does not properly display the tab name. Instead, the tab name is always "PNG 1024x768".
Is there a way to change my window.open such that the background color is shown? The proper tab name would be great as well, but the background color is my biggest concern.
If you open the window with no URL, you can access its entire DOM directly from the JavaScript that opened the window.
var w = window.open('', '');
You can then set or add anything you want:
w.document.title = "DNA_screen";
w.document.body.style.backgroundColor = "red";
And add the screenshot
var img = new Image();
img.src = someCanvas.toDataURL();
w.document.body.appendChild(img);
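One caveat worth adding (standard WebGL behavior, not something stated in the answer above): toDataURL on a WebGL canvas returns a blank image unless the drawing buffer is preserved or the capture happens immediately after a render:
// Either create the renderer with the buffer preserved...
var renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
// ...or render right before capturing, in the same frame:
renderer.render(scene, camera);
var dataUrl = renderer.domElement.toDataURL("image/png");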
Well, it is much longer than your one-liner, but you can change the background color of the rectangle in the context.
printCanvas(renderer.domElement.toDataURL("image/png"), width, height,
    function (url) { window.open(url, '_blank'); });

// from THREEx.screenshot.js
function printCanvas(srcUrl, dstW, dstH, callback)
{
    // compute the width/height while keeping the aspect ratio
    var cpuScaleAspect = function (maxW, maxH, curW, curH)
    {
        var ratio = curH / curW;
        if (curW >= maxW && ratio <= 1)
        {
            curW = maxW;
            curH = maxW * ratio;
        }
        else if (curH >= maxH)
        {
            curH = maxH;
            curW = maxH / ratio;
        }
        return { width: curW, height: curH };
    }
    // callback once the image is loaded
    var onLoad = function ()
    {
        // init the canvas
        var canvas = document.createElement('canvas');
        canvas.width = dstW;
        canvas.height = dstH;
        var context = canvas.getContext('2d');
        context.fillStyle = "black";
        context.fillRect(0, 0, canvas.width, canvas.height);
        // scale the image while preserving the aspect ratio
        var scaled = cpuScaleAspect(canvas.width, canvas.height, image.width, image.height);
        // actually draw the image on the canvas
        var offsetX = (canvas.width - scaled.width) / 2;
        var offsetY = (canvas.height - scaled.height) / 2;
        context.drawImage(image, offsetX, offsetY, scaled.width, scaled.height);
        // pass the resulting URL to the caller
        callback && callback(canvas.toDataURL("image/png")); // dump the canvas to a URL
    }
    // create a new Image object
    var image = new Image();
    image.onload = onLoad;
    image.src = srcUrl;
}

Kinetic JS - Change image resolution using filter when mouse is down

I am changing the image on left-mouse-down (drag); it works, but the refresh is very slow. I am using the following to display the image:
function makeKineticImage() {
    dImage1 = new Kinetic.Image({
        drawFunc: function(canvas) {
            var context2 = canvas.getContext("2d");
            var x = 0;
            var y = 0;
            context2.drawImage(dicom1, x, y);
            imageData = context2.getImageData(x, y, dicom1.width, dicom1.height).data;
        }
    });
    layer1.add(dImage1);
}
Then changing the image using Ajax:
...
}).done(function(d) {
    dImage1.applyFilter(Kinetic.Filters.Grayscale, null, function() {
        image.src = '/Home/changeImage?udm=' + (++udm);
        layer1.draw();
    });
});
I tried the Grayscale filter; the refresh improved, but not enough. Is there a way to lower the resolution (down-sampling)? I would appreciate your suggestions. Thanks in advance.
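A common down-sampling trick, independent of KineticJS (a sketch assuming the dicom1 image and context2 from the drawFunc above): draw through a smaller offscreen canvas while the mouse is down, and redraw at full resolution on mouseup:
// Sketch: cache a half-resolution copy once...
var scale = 0.5;
var small = document.createElement('canvas');
small.width = dicom1.width * scale;
small.height = dicom1.height * scale;
small.getContext('2d').drawImage(dicom1, 0, 0, small.width, small.height);
// ...then, inside drawFunc while dragging, stretch the small copy back up:
context2.drawImage(small, 0, 0, dicom1.width, dicom1.height);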