I'm using a neat script I found online (the Teal 3D dice roller).
http://a.teall.info/dice/
The dice numbers are hardcoded as standard fonts in the script (no image textures applied).
I would like to get rid of those numbers and apply pictogram textures instead, to customize the dice to fit my needs.
Right now I'm just trying to apply a single texture to all faces (I obviously plan to have six different textures eventually, but first things first).
Here is the original script function:
this.create_dice_materials = function(face_labels, size, margin) {
function create_text_texture(text, color, back_color) {
/* --- start of the part I planned to modify --- */
if (text == undefined) return null;
var canvas = document.createElement("canvas");
var context = canvas.getContext("2d");
var ts = calc_texture_size(size + size * 2 * margin) * 2;
canvas.width = canvas.height = ts;
context.font = ts / (1 + 2 * margin) + "pt Arial";
context.fillStyle = back_color;
context.fillRect(0, 0, canvas.width, canvas.height);
context.textAlign = "center";
context.textBaseline = "middle";
context.fillStyle = color;
context.fillText(text, canvas.width / 2, canvas.height / 2);
/* --- End of the part I planned to modify --- */
var texture = new THREE.Texture(canvas);
texture.needsUpdate = true;
return texture;
}
var materials = [];
for (var i = 0; i < face_labels.length; ++i)
materials.push(new THREE.MeshPhongMaterial($t.copyto(this.material_options,
{ map: create_text_texture(face_labels[i], this.label_color, this.dice_color) })));
return materials;
}
And here is my attempt to apply a texture instead:
this.create_dice_materials = function(face_labels, size, margin) {
function create_text_texture(text, color, back_color) {
/* --- start of the modified part --- */
var img = document.getElementById("image_name");
var canvas = document.createElement("canvas");
var cs = img.height;
canvas.width = img.height;
canvas.height = img.height;
var context = canvas.getContext("2d");
context.drawImage(img, 0, 0, cs, cs, 0, 0, cs, cs);
/* --- End of the modified part --- */
var texture = new THREE.Texture(canvas);
texture.needsUpdate = true;
return texture;
}
var materials = [];
for (var i = 0; i < face_labels.length; ++i)
materials.push(new THREE.MeshPhongMaterial($t.copyto(this.material_options,
{ map: create_text_texture(face_labels[i], this.label_color, this.dice_color) })));
return materials;
}
Note: the texture picture is embedded within the HTML file as an img tag. It shows up fine as a flat HTML picture, and it has the proper id, "image_name", so that shouldn't be part of the problem.
Anyway, those changes don't break the script (no exception appears in the console while executing it), but nothing shows up on the dice either. No numbers, no texture.
Any idea what is wrong and how I should proceed to make it work?
So far I suspect two things:
1) the "drawImage" parameters
2) the "map" parameter within the materials array
Thanks.
For whatever reason, it worked on a remote server but not locally, so I guess it is solved. (The likely culprit: when the page is opened from file://, the browser treats the image as cross-origin, the canvas becomes tainted, and WebGL silently refuses to upload it as a texture.)
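For anyone hitting the same wall, here is a minimal sketch of the modified helper, assuming the page is served over HTTP; "image_name" is the id used in the question:
function create_image_texture() {
    var img = document.getElementById("image_name");
    var canvas = document.createElement("canvas");
    // Square canvas sized from the image; scale the whole image into it
    // instead of cropping a sub-rectangle with the 9-argument drawImage form.
    canvas.width = canvas.height = img.height;
    canvas.getContext("2d").drawImage(img, 0, 0, canvas.width, canvas.height);
    var texture = new THREE.Texture(canvas);
    texture.needsUpdate = true; // tells three.js to (re)upload the canvas to the GPU
    return texture;
}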
I'm trying to recreate the GitHub landing-page globe (visible at https://github.com/ if you're not logged in) with three.js and react-three-fiber, following the write-up they provided (https://github.blog/2020-12-21-how-we-built-the-github-globe/). My goal is the same result, with the only difference being that I use the three.js PointsMaterial and hide the points whose coordinates correspond to water on an image of the Earth.
Sorry for any bad English or misspellings (English is not my native language), and this is my first Stack Overflow question too, so if something is unclear or I wasn't specific enough, let me know and I'll do my best to correct it. Thanks in advance for any help!
My questions are:
How do you spread three.js points along the different latitudes, in this case from the south pole to the north pole?
How would you compare, for example, the image's color/alpha values against each point and decide whether it should be visible (land) or not (water)?
I played around with the code from the GitHub write-up above for a few days, but can't figure out how to translate it so I can use it:
for (let lat = -90; lat <= 90; lat += 180/rows) {
const radius = Math.cos(Math.abs(lat) * DEG2RAD) * GLOBE_RADIUS; // especially this part: what do DEG2RAD and GLOBE_RADIUS mean?
const circumference = radius * Math.PI * 2;
const dotsForLat = circumference * dotDensity;
for (let x = 0; x < dotsForLat; x++) {
const long = -180 + x*360/dotsForLat;
if (!this.visibilityForCoordinate(long, lat)) continue;
// Setup and save circle matrix data
}
}
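(For reference: DEG2RAD is just the degrees-to-radians factor, Math.PI / 180, and GLOBE_RADIUS is the radius of the globe in scene units.) Here is a sketch of the two missing pieces, assuming imgData comes from getImageData() as in the code further down; the lat/long-to-Cartesian convention is the common one from three.js globe examples, and the alpha threshold is an assumption to tune for your map:
const DEG2RAD = Math.PI / 180; // degrees-to-radians factor
const GLOBE_RADIUS = 1; // sphere radius in scene units (assumed value)
// Convert latitude/longitude to a point on the sphere surface.
function latLongToVector3(lat, long, radius) {
  const phi = (90 - lat) * DEG2RAD; // polar angle measured from the north pole
  const theta = (long + 180) * DEG2RAD; // azimuthal angle
  return new THREE.Vector3(
    -radius * Math.sin(phi) * Math.cos(theta),
    radius * Math.cos(phi),
    radius * Math.sin(phi) * Math.sin(theta)
  );
}
// Sample the equirectangular map; here transparent pixels are assumed to be
// water. If your map is opaque, test a color channel instead of alpha.
function visibilityForCoordinate(long, lat, imgData) {
  const x = Math.min(imgData.width - 1, Math.floor(((long + 180) / 360) * imgData.width));
  const y = Math.min(imgData.height - 1, Math.floor(((90 - lat) / 180) * imgData.height));
  const alpha = imgData.data[(y * imgData.width + x) * 4 + 3];
  return alpha > 90; // threshold is an assumption; tune for your map
}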
Currently I manage to get some data out of an image of our planet by creating a non-rendered canvas, getting a 2D context for it, painting the image into it, and reading the values back with .getImageData(). I was able to create a halo by using a for loop for the vertex positions of the BufferGeometry, but I guess you'll see that in the image/code provided below.
[Image: current progress]
import { useEffect } from "react";
import * as THREE from "three";
// Had some trouble setting up loaders in Next.js, so I used three.js via react-three-fiber
const Box = () => {
// Load the image and read its pixel values from a non-rendered canvas
useEffect(() => {
const textureLoader = new THREE.TextureLoader();
textureLoader.load("/globe/earth.png", (texture) => {
const width = texture.image.width;
const height = texture.image.height;
const img = texture.image;
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d");
canvas.width = width;
canvas.height = height;
ctx.scale(1, -1); // flip vertically so the pixel rows come out in the expected orientation
ctx.drawImage(img, 0, 0, width, height * -1);
const imgData = ctx.getImageData(0, 0, canvas.width, canvas.height);
console.log(imgData);
});
}, []);
// Creating Geometry / Material
const count = 500;
const vertices = new Float32Array(count * 3);
// Looping to get values for vertices
for (let x = 0; x < count * 3; x++) {
const value = Math.cos(Math.abs(x));
vertices[x] = value;
}
const material = new THREE.PointsMaterial({ color: "white", size: 0.005 });
const geometry = new THREE.BufferGeometry();
geometry.setAttribute("position", new THREE.BufferAttribute(vertices, 3));
// Returning to render on canvas in index file
return <points material={material} geometry={geometry}></points>;
};
export default Box;
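Building on the question's code, one way to wire this up (a sketch, not a drop-in component): once imgData is available inside the textureLoader.load callback, generate the dot positions with the helpers sketched earlier and feed them into the BufferGeometry. The rows and dotDensity values are illustrative:
// Inside the textureLoader.load callback, after getImageData():
const positions = [];
const rows = 180, dotDensity = 12; // illustrative; higher density = more dots per latitude ring
for (let lat = -90; lat <= 90; lat += 180 / rows) {
  const radius = Math.cos(Math.abs(lat) * DEG2RAD) * GLOBE_RADIUS;
  const dotsForLat = radius * Math.PI * 2 * dotDensity; // circumference * density
  for (let x = 0; x < dotsForLat; x++) {
    const long = -180 + (x * 360) / dotsForLat;
    if (!visibilityForCoordinate(long, lat, imgData)) continue; // skip water
    const p = latLongToVector3(lat, long, GLOBE_RADIUS);
    positions.push(p.x, p.y, p.z);
  }
}
geometry.setAttribute("position", new THREE.BufferAttribute(new Float32Array(positions), 3));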
There have been a lot of questions around this, but none of them have fixed my problem. Any image that I upload onto the object becomes pixelated regardless of the minFilter or magFilter that I use, and I've used all of them:
THREE.NearestFilter
THREE.NearestMipMapNearestFilter
THREE.NearestMipMapLinearFilter
THREE.LinearFilter
THREE.LinearMipMapNearestFilter
THREE.LinearMipMapLinearFilter
Here's the object with a pixelated image:
And here's a snapshot of how I'm loading the image on:
// Build a canvas object and add the image to it
var imageCanvas = this.getCanvas(imageLayer.guid, 'image');
var imageLoader = new THREE.ImageLoader();
imageLoader.load(imageUrl, img => {
// this.drawImage(img, gr, imageCanvas.canvas, imageCanvas.ctx);
var canvas = imageCanvas.canvas;
var ctx = imageCanvas.ctx;
canvas.width = 1024;
canvas.height = 1024;
var imgAspectRatioAdjustedWidth, imgAspectRatioAdjustedHeight;
var pushDownValueOnDy = 0;
var grWidth = canvas.width / 1.618;
if(img.width > img.height) {
grWidth = canvas.width - grWidth;
}
var subtractFromDx = (canvas.width - grWidth) / 2;
var grHeight = canvas.height / 1.618;
if(img.height > img.width) { // was img.height > img.height, which is always false
grHeight = canvas.height - grHeight;
}
var subtractFromDy = (canvas.height - grHeight) / 2;
var dx = (canvas.width / 2);
dx -= subtractFromDx;
var dy = (canvas.height / 2);
dy -= (subtractFromDy + pushDownValueOnDy);
imgAspectRatioAdjustedWidth = (canvas.width - grWidth) + 50;
imgAspectRatioAdjustedHeight = (canvas.height - grHeight) + 50;
ctx.globalAlpha = 0.5;
ctx.fillStyle = 'blue'; // the semicolon was inside the string, producing an invalid color
ctx.fillRect(0, 0, canvas.width, canvas.height);
ctx.globalAlpha = 1.0;
ctx.drawImage(img, dx, dy, imgAspectRatioAdjustedWidth, imgAspectRatioAdjustedHeight);
});
After this, the canvas data is added to an array to be painted onto the object; it is at this point that the CanvasTexture gets the mapped canvas:
var canvasTexture = new THREE.CanvasTexture(mainCanvas.canvas);
canvasTexture.magFilter = THREE.LinearFilter;
canvasTexture.minFilter = THREE.LinearMipMapLinearFilter;
// Flip the canvas
if(this.currentSide === 'front' || this.currentSide === 'back'){
canvasTexture.wrapS = THREE.RepeatWrapping;
canvasTexture.repeat.x = -1;
}
canvasTexture.needsUpdate = true;
// { ...overdraw: true... } seems to allow the other sides to be transparent so we can see inside
var material = new THREE.MeshBasicMaterial({map: canvasTexture, side: THREE.FrontSide, transparent: false});
for(var i = 0; i < this.layers[this.currentSide].length; i++) {
mainCanvas.ctx.drawImage( this.layers[this.currentSide][i].canvas, 0, 0, this.canvasWidth, this.canvasHeight);
}
Thanks to @2pha for the help, as his suggestions led me to the correct answer: it turns out the pixelated effect was caused by the canvases having different dimensions.
For example, the main canvas itself was 1024x1024 whereas the text and image canvases were only 512x512 pixels, meaning they had to be stretched to cover the main canvas.
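In other words, every layer canvas should be created at the main canvas's resolution so nothing gets stretched. A minimal sketch (the names here are illustrative, not the app's actual API):
var MAIN_SIZE = 1024; // must match the main canvas
function makeLayerCanvas() {
  var canvas = document.createElement('canvas');
  canvas.width = canvas.height = MAIN_SIZE; // same resolution as the main canvas
  return { canvas: canvas, ctx: canvas.getContext('2d') };
}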
UPDATE Cause of problem has been found - see Update section end of question.
I have a complex app using THREE.js (r60) which adds a special plane object to the main scene. The plane geometry is determined by heightmapping from an internally-supplied base64 uri image (size 16x16, 32x32 or 64x64 pixels). The scene has two static lights (ambient and directional) and one moveable point light which switches on and off.
In the complex app the point light is not reflected by the plane object. (Point light is toggled by pressing "R" key or button).
I have made a first JSFiddle example using THREE.js latest version (r70) where the lights work fine.
[Update] I have now made a second JSFiddle example using the older THREE.js library (r60) it also works OK.
I suspect the problem in the complex app (r60) may have something to do with system capacity and or timing/sequencing. Capacity is definitely an issue because other simpler scene objects (boxes and cylinders) show individual responses or non-responses to the point light which vary from one run of the app to the next, seemingly depending on the overall level of system activity (cpu, memory usage). These simpler objects may reflect in one run but not in the next. But the heightmapped plane object is consistently non-reflective to the point light. These behaviors are observed on (i) a Win7 laptop and (ii) an Android Kitkat tablet.
The heightmapping process may be part of the cause. I say this because when I comment out the heightmapped plane and activate a simple similar plane object (with randomly assigned z-levels) the latter plane behaves as expected (i.e. it reflects point light).
I guess that the usual approach now would be to upgrade my complex app to r70 (not a trivial step) and then start disabling chunks of the app to narrow down the cause. However it may be that the way in which heightmapping is implemented (e.g. with a callback) is a factor in explaining the failure of the heightmapped plane to reflect point light.
[RE-WRITTEN] So I would be grateful if anyone could take a look at the code in the correctly-working, previously-cited (r70) JSFiddle example and point out any glaring design faults which (if applied in more complex, heavily loaded apps) might lead to failure of the height-mapped plane to reflect point light.
Full code (javascript, not html or css) of the (r70) JSFiddle:-
//... Heightmap from Image file
//... see http://danni-three.blogspot.co.uk/2013/09/threejs-heightmaps.html
var camera, scene, renderer;
var lpos_x = -60,lpos_y = 20,lpos_z = 100;
var mz = 1;
var time = 0, dt = 0;
var MyPlane, HPlane;
base64_imgData = "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEAeAB4AAD/4QBoRXhpZgAATU0AKgAAAAgABAEaAAUAAAABAAAAPgEbAAUAAAABAAAARgEoAAMAAAABAAIAAAExAAIAAAASAAAATgAAAAAAAAB4AAAAAQAAAHgAAAABUGFpbnQuTkVUIHYzLjUuMTAA/9sAQwANCQoLCggNCwsLDw4NEBQhFRQSEhQoHR4YITAqMjEvKi4tNDtLQDQ4RzktLkJZQkdOUFRVVDM/XWNcUmJLU1RR/9sAQwEODw8UERQnFRUnUTYuNlFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFRUVFR/8AAEQgAIAAgAwEiAAIRAQMRAf/EAB8AAAEFAQEBAQEBAAAAAAAAAAABAgMEBQYHCAkKC//EALUQAAIBAwMCBAMFBQQEAAABfQECAwAEEQUSITFBBhNRYQcicRQygZGhCCNCscEVUtHwJDNicoIJChYXGBkaJSYnKCkqNDU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6g4SFhoeIiYqSk5SVlpeYmZqio6Slpqeoqaqys7S1tre4ubrCw8TFxsfIycrS09TV1tfY2drh4uPk5ebn6Onq8fLz9PX29/j5+v/EAB8BAAMBAQEBAQEBAQEAAAAAAAABAgMEBQYHCAkKC//EALURAAIBAgQEAwQHBQQEAAECdwABAgMRBAUhMQYSQVEHYXETIjKBCBRCkaGxwQkjM1LwFWJy0QoWJDThJfEXGBkaJicoKSo1Njc4OTpDREVGR0hJSlNUVVZXWFlaY2RlZmdoaWpzdHV2d3h5eoKDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uLj5OXm5+jp6vLz9PX29/j5+v/aAAwDAQACEQMRAD8A19Z8SXdu5KOMKxBAFOi1uTUdNJguxFcAchv6Vz2so/mzKc8sc8VX8MyQjUVWYNweCO9AEsOuX8s+xrqWQh8DJ4rsJCphSN3Czsm7ArG1bT7fSFe7EZJzuX3J6VQsdRnvryJ2+/wooA6O501JY7yRh0U8Vyg1WzsghhsAkqnBO4nd713t8NsEqIhJfqRXEahotxPJlISDnOaANzWvL1rR4JiTG6ryorG0C2aDUI02lhu6kVZ02wvLVSpYtu65yRXQaZYvDL5rhRx2oA//2Q==";
init();
animate();
//==================================================================
function init() {
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 10); // (fov, aspect, near); far is left at its default
camera.position.x = 1300;
camera.position.y = 400;
camera.position.z = 0;
camera.lookAt(new THREE.Vector3(0, 0, 0)); // r60/r70 expect a Vector3 here, not three numbers
scene.add(camera);
scene.add(new THREE.AmbientLight(0x001900));
SunLight = new THREE.DirectionalLight(0xff0000, .3); //...color, intensity (DirectionalLight has no range parameter; the original third argument was ignored)
SunLight.position.set(0, 3000, -8000);
scene.add(SunLight);
//POINT LIGHT
PL_color = 0x0000ff;
PL_intensity = 10;
PL_range_to_zero_intensity = 1200;
PL = new THREE.PointLight(PL_color, PL_intensity, PL_range_to_zero_intensity);
scene.add(PL);
PL_pos_x = -100;
PL_pos_y = -100;
PL_pos_z = 120;
PL.position.set(PL_pos_x, PL_pos_y, PL_pos_z);
//INDICATOR SPHERE
var s_Geometry = new THREE.SphereGeometry(5, 20, 20);
var s_Material = new THREE.MeshBasicMaterial({
color: 0xaaaaff
});
i_Sphere = new THREE.Mesh(s_Geometry, s_Material);
i_Sphere.position.set(PL_pos_x, PL_pos_y, PL_pos_z);
scene.add(i_Sphere);
//Plane02
var Plane02Geo = new THREE.PlaneGeometry(50, 50); //...
var Plane02Material = new THREE.MeshPhongMaterial({
    color: 0xaaaaaa,
    side: THREE.DoubleSide
}); // the original passed two separate option objects; the second (color) was silently ignored
Plane02 = new THREE.Mesh(Plane02Geo, Plane02Material);
Plane02.position.set(0, 0, -120);
scene.add(Plane02);
//PEAS
xxx = SOW_F_Make_peas();
//RENDERER
renderer = new THREE.WebGLRenderer({
antialias: true
});
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.shadowMapEnabled = true;
renderer.shadowMapSoft = false;
document.body.appendChild(renderer.domElement);
// controls
controls = new THREE.OrbitControls(camera, renderer.domElement);
xxx = SOW_F_Make_Heightmap_Object_from_Image_File(scene, camera);
} //...EOFunction Init
//==================================================================
function animate() {
dt = 0.1;
time += dt;
if (time < 10000) {
requestAnimationFrame(animate);
//move point light & indicator sphere
speed = 16;
if (Math.abs(PL_pos_z) > 400) mz = (-1)* mz;
PL_pos_x += 0.01 * speed * mz;
PL_pos_y += 0.05 * speed * mz;
PL_pos_z -= 0.2 * speed * mz;
PL.position.set(PL_pos_x, PL_pos_y, PL_pos_z);
i_Sphere.position.set(PL_pos_x, PL_pos_y, PL_pos_z);
renderer.render(scene, camera);
} else alert("Time=" + time + "Finished");
}
//==================================================================
function SOW_F_Make_Heightmap_Object_from_Image_File(givenScene, givenCamera) {
//... Read a Heightmap from a coloured image file
//... into a (pre-defined global) plane object called HPlane
MyImage = new Image();
MyImage.onload = function () {
var MyPlane_width = 1000;//6000; //...MyPlane width or height are in scene units and do not have to match image width or height
var MyPlane_height = 1000;//6000;
var MyPlane_w_segs = MyImage.naturalWidth - 1; //... important that this mapping is correct for texture 1 pixel :: 1 segment.
var MyPlane_h_segs = MyImage.naturalHeight - 1; //... important that this mapping is correct for texture 1 pixel :: 1 segment.
var Hgeometry = new THREE.PlaneGeometry(MyPlane_width, MyPlane_height, MyPlane_w_segs, MyPlane_h_segs);
//var texture = THREE.ImageUtils.loadTexture( '/images/Tri_VP_Texturemap.jpg' );
var texture = THREE.ImageUtils.loadTexture( base64_imgData );
//... Choose texture or color
//var Hmaterial = new THREE.MeshLambertMaterial( { map: texture, side: THREE.DoubleSide} );//....fails
var Hmaterial = new THREE.MeshPhongMaterial( {
color: 0x111111 , side: THREE.DoubleSide } ); //... works OK
HPlane = new THREE.Mesh(Hgeometry, Hmaterial);
//...get Height Data from Image
var scale = 0.6;//1//6; //0.25;
var Height_data = DA_getHeightData(MyImage, scale);
//... set height of vertices
X_offset = 0;
Y_offset = 0;
Z_offset = -100; //...this will (after rotation) add to the vertical height dimension (+ => up).
for (var iii = 0; iii < HPlane.geometry.vertices.length; iii++) {
//HPlane.geometry.vertices[iii].x = X_offset;
//HPlane.geometry.vertices[iii].y = Y_offset;
HPlane.geometry.vertices[iii].z = Z_offset + Height_data[iii];
}
//----------------------------------------------------------------------
//... Must do it in this order...Faces before Vertices
//... see WestLangley's response in http://stackoverflow.com/questions/13943907/my-object-isnt-reflects-the-light-in-three-js
HPlane.rotation.x = (-(Math.PI) / 2); //... rotate MyPlane -90 degrees on X
//alert("Rotated");
HPlane.geometry.computeFaceNormals(); //... for Lambert & Phong materials
HPlane.geometry.computeVertexNormals(); //... for Lambert & Phong materials
/*
HPlane.updateMatrixWorld();
HPlane.matrixAutoUpdate = false;
HPlane.geometry.verticesNeedUpdate = true;
*/
givenScene.add(HPlane);
HPlane.position.set(0, -150, 0);//... cosmetic
//return HPlane; //... not necessary, given that HPlane is global.
} ; //... End of MyImage.onload = function ()
//===============================================================
//... *** IMPORTANT ***
//... Only NOW do we command the script to actually load the image source
//... This .src statement will load the image from file into MyImage object
//... and invoke the pre-associated MyImage.OnLoad function
//... cause cross-origin problem: MyImage.src = '/images/Tri_VP_Heightmap_64x64.jpg'; //...if image file is local to this html file.
MyImage.src = base64_imgData;//... uses image data provided in the script to avoid Cross-origin file source restrictions.
} //... End of function SOW_F_Make_Heightmap_Object_from_Image_File
//===========================================================================
function DA_getHeightData(d_img, scale) {
//... This is used by function SOW_F_Make_Heightmap_Object_from_Image_File.
//if (scale == undefined) scale=1;
var canvas = document.createElement('canvas');
canvas.width = d_img.width; //OK
canvas.height = d_img.height;
var context = canvas.getContext('2d');
var size = d_img.width * d_img.height;
var data = new Float32Array(size);
context.drawImage(d_img, 0, 0);
for (var ii = 0; ii < size; ii++) {
data[ii] = 0;
}
var imgData = context.getImageData(0, 0, d_img.width, d_img.height);
var pix = imgData.data; //... Uint(8) UnClamped Array[1024] for a 16x16 = 256 pixel image = 4 slots per pixel.
var jjj = 0;
//... each pix cell can have a value from 0 to 255
for (var iii = 0; iii < pix.length; iii += 4) {
    var all = pix[iii] + pix[iii + 1] + pix[iii + 2];
    //... RGBA; we don't use the fourth cell (A, the alpha channel)
    data[jjj] = all * scale / 3; //...original code used 12, not 3, and divided by scale.
    jjj++; //... increment AFTER writing; the original incremented first, skipping data[0] and writing one slot past the end
    //console.log(iii, all / (3 * scale), data[jjj - 1]);
}
return data;
} //... end of function DA_getHeightData(d_img,scale)
//==================================================================================================
function SOW_F_Get_A_Plane(givenScene, givenCamera) {
//...MyPlane width or height are in scene units and do not have to match image width or height
var MyPlane_width = 1000;
var MyPlane_height = 1000;
var MyPlane_w_segs = 64; //...
var MyPlane_h_segs = 64; //...
geometry = new THREE.PlaneGeometry(MyPlane_width, MyPlane_height, MyPlane_w_segs, MyPlane_h_segs);
//var material = new THREE.MeshLambertMaterial( { color: 0xeeee00, side: THREE.DoubleSide} );
var material = new THREE.MeshPhongMaterial({
color: 0xeeee00,side: THREE.DoubleSide
}); //... OK
MyPlane = new THREE.Mesh(geometry, material);
givenScene.add(MyPlane);
MyPlane.rotation.x = (-(Math.PI) / 2); // rotate it -90 degrees on X
MyPlane.position.set(0, 100, 0);
MyPlane.geometry.computeFaceNormals(); //...for Lambert & Phong materials
MyPlane.geometry.computeVertexNormals(); //...for Lambert & Phong materials
/*
MyPlane.geometry.verticesNeedUpdate = true;
MyPlane.updateMatrixWorld();
MyPlane.matrixAutoUpdate = false;
*/
} //... EOF SOW_F_Get_A_Plane
//====================================================================
function SOW_F_Make_peas()
{
//----------------- Make an array of spheres -----------------------
Pea_geometry = new THREE.SphereGeometry(5,16,16);
//Pea_material = new THREE.MeshNormalMaterial({ shading: THREE.SmoothShading});
Pea_material = new THREE.MeshPhongMaterial({ color: 0xaa5522});
// global...
num_peas = 1200;
for (var iii = 0; iii < num_peas; iii++)
{
//...now global
ob_Pea = new THREE.Mesh(Pea_geometry, Pea_material);
ob_Pea.position.set(
400 * Math.random() - 150,
300 * Math.random() - 150,
1200 * Math.random() - 150);
scene.add(ob_Pea);//TEST
}
}
UPDATE
It appears the problem is a result of phasing. See this new JSFiddle (r70). The point light is created in function init() but not added to the scene, or is immediately removed from the scene after being added. Then the various graphical mesh objects are created. When the point light is added back to the scene (in the animate loop) it is too late: the mesh objects will not be illuminated by it. (This matches how the old WebGLRenderer compiled materials against the lights present at first render; a light added later has no effect unless the affected materials are flagged with material.needsUpdate = true.)
A procedural solution is simply not to remove point lights from the scene if they are to be used later. If they need to be "extinguished" temporarily, just turn the intensity down and turn it back up later, e.g.
myPointLight.intensity = 0.00
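A sketch of that pattern (the "on" intensity is whatever your scene uses):
function setPointLightOn(light, on) {
    light.intensity = on ? 10 : 0; // "extinguish" without scene.remove(light)
}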
I tried searching for something like this, but I've had no luck. I'm trying to open a new tab with a screenshot of the current state of my WebGL image. Basically, it's a 3D model with the ability to change which objects are displayed, the color of those objects, and the background color. Currently, I am using the following:
var screenShot = window.open(renderer.domElement.toDataURL("image/png"), 'DNA_Screen');
This line succeeds in opening a new tab with a current image of my model, but it does not display the current background color, nor does it properly display the tab name; the tab name is always "PNG 1024x768".
Is there a way to change my window.open such that the background color is shown? The proper tab name would be great as well, but the background color is my biggest concern.
If you open the window with no URL you can access its entire DOM directly from the JavaScript that opened the window.
var w = window.open('', '');
You can then set or add anything you want
w.document.title = "DNA_screen";
w.document.body.style.backgroundColor = "red";
And add the screenshot
var img = new Image();
img.src = someCanvas.toDataURL();
w.document.body.appendChild(img);
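Put together, a minimal sketch (assuming your renderer was created with alpha, so the PNG itself is transparent and the body background shows through):
var w = window.open('', '');
w.document.title = "DNA_Screen";
w.document.body.style.backgroundColor = "#000000"; // use your scene's current background color here
var img = new Image();
img.src = renderer.domElement.toDataURL("image/png");
w.document.body.appendChild(img);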
Well, it is much longer than your one-liner, but you can change the background color by filling the destination canvas rectangle before the screenshot is drawn onto it (the context.fillStyle line below).
printCanvas (renderer.domElement.toDataURL ("image/png"), width, height,
function (url) { window.open (url, '_blank'); });
// from THREEx.screenshot.js
function printCanvas (srcUrl, dstW, dstH, callback)
{
// to compute the width/height while keeping aspect
var cpuScaleAspect = function (maxW, maxH, curW, curH)
{
var ratio = curH / curW;
if (curW >= maxW && ratio <= 1)
{
curW = maxW;
curH = maxW * ratio;
}
else if (curH >= maxH)
{
curH = maxH;
curW = maxH / ratio;
}
return { width: curW, height: curH };
}
// callback once the image is loaded
var onLoad = function ()
{
// init the canvas
var canvas = document.createElement ('canvas');
canvas.width = dstW;
canvas.height = dstH;
var context = canvas.getContext ('2d');
context.fillStyle = "black";
context.fillRect (0, 0, canvas.width, canvas.height);
// scale the image while preserving the aspect
var scaled = cpuScaleAspect (canvas.width, canvas.height, image.width, image.height);
// actually draw the image on canvas
var offsetX = (canvas.width - scaled.width ) / 2;
var offsetY = (canvas.height - scaled.height) / 2;
context.drawImage (image, offsetX, offsetY, scaled.width, scaled.height);
// notify the url to the caller
callback && callback (canvas.toDataURL ("image/png")); // dump the canvas to an URL
}
// Create new Image object
var image = new Image();
image.onload = onLoad;
image.src = srcUrl;
}
Here's a noodle scratcher.
Bearing in mind we have HTML5 local storage and XHR v2 and whatnot, I was wondering if anyone could find a working example, or even just give me a yes or no, for this question:
Is it possible to pre-size an image using the new local storage (or whatever), so that a user who does not have a clue about resizing an image can drag their 10 MB image into my website, have it resized using the new localStorage, and THEN upload it at the smaller size?
I know full well you can do it with Flash, Java applets, ActiveX... The question is whether you can do it with JavaScript + HTML5.
Looking forward to the response on this one.
Ta for now.
Yes, use the File API; then you can process the images with the canvas element.
This Mozilla Hacks blog post walks you through most of the process. For reference, here's the assembled source code from the blog post (note that, as assembled, it draws the image before the FileReader has finished loading it, and it never creates the canvas; the corrections further down address this):
// from an input element
var filesToUpload = input.files;
var file = filesToUpload[0];
var img = document.createElement("img");
var reader = new FileReader();
reader.onload = function(e) {img.src = e.target.result}
reader.readAsDataURL(file);
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
var MAX_WIDTH = 800;
var MAX_HEIGHT = 600;
var width = img.width;
var height = img.height;
if (width > height) {
if (width > MAX_WIDTH) {
height *= MAX_WIDTH / width;
width = MAX_WIDTH;
}
} else {
if (height > MAX_HEIGHT) {
width *= MAX_HEIGHT / height;
height = MAX_HEIGHT;
}
}
canvas.width = width;
canvas.height = height;
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0, width, height);
var dataurl = canvas.toDataURL("image/png");
//Post dataurl to the server with AJAX
I tackled this problem a few years ago and uploaded my solution to github as https://github.com/rossturner/HTML5-ImageUploader
robertc's answer uses the solution proposed in the Mozilla Hacks blog post; however, I found this gave really poor image quality when resizing to a scale that was not 2:1 (or a multiple thereof). I started experimenting with different image-resizing algorithms, although most ended up being quite slow or else were not great in quality either.
Finally I came up with a solution which I believe executes quickly and has pretty good quality too. The Mozilla approach of copying from one canvas to another works quickly and without loss of image quality at a 2:1 ratio, so, given a target of x pixels wide and y pixels tall, I use that canvas-halving method until the image is between x and 2x wide, and between y and 2y tall. At that point I turn to algorithmic image resizing for the final "step" down to the target size. After trying several different algorithms I settled on bilinear interpolation, taken from a blog which is no longer online but accessible via the Internet Archive, which gives good results. Here's the applicable code:
ImageUploader.prototype.scaleImage = function(img, completionCallback) {
var canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
while (canvas.width >= (2 * this.config.maxWidth)) {
canvas = this.getHalfScaleCanvas(canvas);
}
if (canvas.width > this.config.maxWidth) {
canvas = this.scaleCanvasWithAlgorithm(canvas);
}
var imageData = canvas.toDataURL('image/jpeg', this.config.quality);
this.performUpload(imageData, completionCallback);
};
ImageUploader.prototype.scaleCanvasWithAlgorithm = function(canvas) {
var scaledCanvas = document.createElement('canvas');
var scale = this.config.maxWidth / canvas.width;
scaledCanvas.width = canvas.width * scale;
scaledCanvas.height = canvas.height * scale;
var srcImgData = canvas.getContext('2d').getImageData(0, 0, canvas.width, canvas.height);
var destImgData = scaledCanvas.getContext('2d').createImageData(scaledCanvas.width, scaledCanvas.height);
this.applyBilinearInterpolation(srcImgData, destImgData, scale);
scaledCanvas.getContext('2d').putImageData(destImgData, 0, 0);
return scaledCanvas;
};
ImageUploader.prototype.getHalfScaleCanvas = function(canvas) {
var halfCanvas = document.createElement('canvas');
halfCanvas.width = canvas.width / 2;
halfCanvas.height = canvas.height / 2;
halfCanvas.getContext('2d').drawImage(canvas, 0, 0, halfCanvas.width, halfCanvas.height);
return halfCanvas;
};
ImageUploader.prototype.applyBilinearInterpolation = function(srcCanvasData, destCanvasData, scale) {
function inner(f00, f10, f01, f11, x, y) {
var un_x = 1.0 - x;
var un_y = 1.0 - y;
return (f00 * un_x * un_y + f10 * x * un_y + f01 * un_x * y + f11 * x * y);
}
var i, j;
var iyv, iy0, iy1, ixv, ix0, ix1;
var idxD, idxS00, idxS10, idxS01, idxS11;
var dx, dy;
var r, g, b, a;
for (i = 0; i < destCanvasData.height; ++i) {
iyv = i / scale;
iy0 = Math.floor(iyv);
// Math.ceil can go over bounds
iy1 = (Math.ceil(iyv) > (srcCanvasData.height - 1) ? (srcCanvasData.height - 1) : Math.ceil(iyv));
for (j = 0; j < destCanvasData.width; ++j) {
ixv = j / scale;
ix0 = Math.floor(ixv);
// Math.ceil can go over bounds
ix1 = (Math.ceil(ixv) > (srcCanvasData.width - 1) ? (srcCanvasData.width - 1) : Math.ceil(ixv));
idxD = (j + destCanvasData.width * i) * 4;
// matrix to vector indices
idxS00 = (ix0 + srcCanvasData.width * iy0) * 4;
idxS10 = (ix1 + srcCanvasData.width * iy0) * 4;
idxS01 = (ix0 + srcCanvasData.width * iy1) * 4;
idxS11 = (ix1 + srcCanvasData.width * iy1) * 4;
// overall coordinates to unit square
dx = ixv - ix0;
dy = iyv - iy0;
// I let the r, g, b, a on purpose for debugging
r = inner(srcCanvasData.data[idxS00], srcCanvasData.data[idxS10], srcCanvasData.data[idxS01], srcCanvasData.data[idxS11], dx, dy);
destCanvasData.data[idxD] = r;
g = inner(srcCanvasData.data[idxS00 + 1], srcCanvasData.data[idxS10 + 1], srcCanvasData.data[idxS01 + 1], srcCanvasData.data[idxS11 + 1], dx, dy);
destCanvasData.data[idxD + 1] = g;
b = inner(srcCanvasData.data[idxS00 + 2], srcCanvasData.data[idxS10 + 2], srcCanvasData.data[idxS01 + 2], srcCanvasData.data[idxS11 + 2], dx, dy);
destCanvasData.data[idxD + 2] = b;
a = inner(srcCanvasData.data[idxS00 + 3], srcCanvasData.data[idxS10 + 3], srcCanvasData.data[idxS01 + 3], srcCanvasData.data[idxS11 + 3], dx, dy);
destCanvasData.data[idxD + 3] = a;
}
}
};
This scales an image down to a width of config.maxWidth, maintaining the original aspect ratio. At the time of development this worked on iPad/iPhone Safari in addition to major desktop browsers (IE9+, Firefox, Chrome), so I expect it will still be compatible given the broader uptake of HTML5 today. Note that the canvas.toDataURL() call takes a MIME type and an image quality, which lets you control the quality and the output file format (potentially different from the input if you wish).
The only point this doesn't cover is maintaining the orientation information: without that metadata, the image is resized and saved as-is, so images taken "upside down" on a tablet are rendered as such, even though they were flipped in the device's camera viewfinder. If this is a concern, this blog post has a good guide and code examples on how to accomplish this, which I'm sure could be integrated into the above code.
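For completeness, a hedged usage sketch; the constructor options mirror the config fields referenced above (maxWidth, quality), but check the project README for the real signature:
var uploader = new ImageUploader({ maxWidth: 600, quality: 0.8 });
var img = document.getElementById('photo'); // an already-loaded <img>
uploader.scaleImage(img, function () {
    console.log('scaled and uploaded');
});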
Correction to above:
<img src="" id="image">
<input id="input" type="file" onchange="handleFiles()">
<script>
function handleFiles()
{
var filesToUpload = document.getElementById('input').files;
var file = filesToUpload[0];
// Create an image
var img = document.createElement("img");
// Create a file reader
var reader = new FileReader();
// Set the image once loaded into file reader
reader.onload = function(e)
{
img.src = e.target.result;
var canvas = document.createElement("canvas");
//var canvas = $("<canvas>", {"id":"testing"})[0];
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
var MAX_WIDTH = 400;
var MAX_HEIGHT = 300;
var width = img.width;
var height = img.height;
if (width > height) {
if (width > MAX_WIDTH) {
height *= MAX_WIDTH / width;
width = MAX_WIDTH;
}
} else {
if (height > MAX_HEIGHT) {
width *= MAX_HEIGHT / height;
height = MAX_HEIGHT;
}
}
canvas.width = width;
canvas.height = height;
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0, width, height);
var dataurl = canvas.toDataURL("image/png");
document.getElementById('image').src = dataurl;
}
// Load files into file reader
reader.readAsDataURL(file);
// Post the data
/*
var fd = new FormData();
fd.append("name", "some_filename.jpg");
fd.append("image", dataurl);
fd.append("info", "lah_de_dah");
*/
}</script>
Modification to the answer by Justin that works for me:
Added img.onload
Expand the POST request with a real example
function handleFiles()
{
var dataurl = null;
var filesToUpload = document.getElementById('photo').files;
var file = filesToUpload[0];
// Create an image
var img = document.createElement("img");
// Create a file reader
var reader = new FileReader();
// Set the image once loaded into file reader
reader.onload = function(e)
{
img.src = e.target.result;
img.onload = function () {
var canvas = document.createElement("canvas");
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
var MAX_WIDTH = 800;
var MAX_HEIGHT = 600;
var width = img.width;
var height = img.height;
if (width > height) {
if (width > MAX_WIDTH) {
height *= MAX_WIDTH / width;
width = MAX_WIDTH;
}
} else {
if (height > MAX_HEIGHT) {
width *= MAX_HEIGHT / height;
height = MAX_HEIGHT;
}
}
canvas.width = width;
canvas.height = height;
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0, width, height);
dataurl = canvas.toDataURL("image/jpeg");
// Post the data
var fd = new FormData();
fd.append("name", "some_filename.jpg");
fd.append("image", dataurl);
fd.append("info", "lah_de_dah");
$.ajax({
url: '/ajax_photo',
data: fd,
cache: false,
contentType: false,
processData: false,
type: 'POST',
success: function(data){
$('#form_photo')[0].reset();
location.reload();
}
});
} // img.onload
}
// Load files into file reader
reader.readAsDataURL(file);
}
If you don't want to reinvent the wheel, you may try plupload.com.
TypeScript
async resizeImg(file: Blob): Promise<Blob> {
let img = document.createElement("img");
img.src = await new Promise<any>(resolve => {
let reader = new FileReader();
reader.onload = (e: any) => resolve(e.target.result);
reader.readAsDataURL(file);
});
await new Promise(resolve => img.onload = resolve)
let canvas = document.createElement("canvas");
let ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
let MAX_WIDTH = 1000;
let MAX_HEIGHT = 1000;
let width = img.naturalWidth;
let height = img.naturalHeight;
if (width > height) {
if (width > MAX_WIDTH) {
height *= MAX_WIDTH / width;
width = MAX_WIDTH;
}
} else {
if (height > MAX_HEIGHT) {
width *= MAX_HEIGHT / height;
height = MAX_HEIGHT;
}
}
canvas.width = width;
canvas.height = height;
ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0, width, height);
let result = await new Promise<Blob>(resolve => { canvas.toBlob(resolve, 'image/jpeg', 0.95); });
return result;
}
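Usage sketch, inside an async function, given a file input on the page:
const small = await resizeImg(fileInput.files[0]); // a Blob of at most 1000x1000
const fd = new FormData();
fd.append("image", small, "photo.jpg");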
The accepted answer works great, but the resize logic ignores the case in which the image is larger than the maximum on only one of the axes (for example, height > maxHeight but width <= maxWidth).
I think the following code takes care of all cases in a more straightforward and functional way (ignore the TypeScript type annotations if you're using plain JavaScript):
private scaleDownSize(width: number, height: number, maxWidth: number, maxHeight: number): {width: number, height: number} {
if (width <= maxWidth && height <= maxHeight)
return { width, height };
else if (width / maxWidth > height / maxHeight)
return { width: maxWidth, height: height * maxWidth / width};
else
return { width: width * maxHeight / height, height: maxHeight };
}
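A quick worked example of the logic: for a 4000x1000 image with maxWidth 800 and maxHeight 600, width/maxWidth = 5 exceeds height/maxHeight ≈ 1.67, so the width axis wins:
scaleDownSize(4000, 1000, 800, 600); // → { width: 800, height: 200 }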
fd.append("image", dataurl);
This will not work: on the PHP side the data URL arrives as a plain string, not as an uploaded file, so you cannot save it as a file directly.
Use this code instead:
var blobBin = atob(dataurl.split(',')[1]);
var array = [];
for(var i = 0; i < blobBin.length; i++) {
array.push(blobBin.charCodeAt(i));
}
var file = new Blob([new Uint8Array(array)], {type: 'image/png'}); // Blob options have no "name" field
fd.append("image", file, "avatar.png"); // pass the filename as the third argument instead
Resizing images in a canvas element is generally a bad idea, since it uses the cheapest box interpolation and the resulting image noticeably degrades in quality. I'd recommend using http://nodeca.github.io/pica/demo/, which can perform a Lanczos transformation instead. The demo page above shows the difference between the canvas and Lanczos approaches.
It also uses web workers to resize images in parallel. There is also a WebGL implementation.
There are some online image resizers that use pica for the job, like https://myimageresizer.com.
You can use dropzone.js if you want a simple, easy upload manager with resize-before-upload functionality.
It has built-in resize functions, but you can provide your own if you want.