I have a Xamarin project where I am using SkiaSharp. I am relatively new to the drawing utility and I've spent a few days trying to figure out this issue with no luck. After scaling and transforming the canvas, when I touch the SKCanvasView on the phone screen and look at the 'location' point in the touch event, it's not the same location where the canvas drew something. I need the exact location at which I drew the rectangle.
It's a lot of code below, and granted it's not all the code, but it's the important parts. I am absolutely baffled why I draw at one (X, Y) location, yet when I touch the screen the touch event for the canvas gives me a completely different location than the (X, Y) the widget was drawn at.
'''
public static void DrawLayout(SKImageInfo info, SKCanvas canvas, SKSvg svg,
SetupViewModel vm)
{
var layout = vm.SelectedReticleLayout;
float yRatio;
float xRatio;
float widgetHeight = 75;
float widgetWidth = 170;
float availableWidth = 720;
float availableHeight = 1280;
var currentZoomScale = getScale();
canvas.Translate(info.Width / 2f, info.Height / 2f);
SKRect bounds = svg.ViewBox;
xRatio = (info.Width / bounds.Width) + ((info.Width / bounds.Width) * currentZoomScale);
yRatio = (info.Height / bounds.Height) + ((info.Height / bounds.Height) *
currentZoomScale);
float ratio = Math.Min(xRatio, yRatio);
canvas.Scale(ratio);
canvas.Translate(-bounds.MidX, -bounds.MidY);
canvas.DrawPicture(svg.Picture, new SKPaint { Color = SKColors.White, Style =
SKPaintStyle.Fill });
// now set the X,Y and Width and Height of the large Red Rectangle
float imageCenter = canvas.LocalClipBounds.Width / 2;
layout.RedBorderXOffSet = imageCenter - (imageCenter / 2.0f) +
canvas.LocalClipBounds.Left;
float redBorderYOffSet = (float)(svg.Picture.CullRect.Top +
Math.Ceiling(.0654450261780105f * svg.Picture.CullRect.Bottom));
layout.RedBorderYOffSet = (float)(canvas.LocalClipBounds.Top +
Math.Ceiling(.0654450261780105f * canvas.LocalClipBounds.Bottom));
layout.RedBorderWidth = canvas.LocalClipBounds.Width / 2.0f;
layout.RedBorderWidthXOffSet = layout.RedBorderWidth + layout.RedBorderXOffSet;
layout.RedBorderHeight = (float)(canvas.LocalClipBounds.Bottom -
Math.Ceiling(.0654450261780105f * canvas.LocalClipBounds.Bottom * 2)) -
canvas.LocalClipBounds.Top;
layout.RedBorderHeightYOffSet = layout.RedBorderYOffSet + layout.RedBorderHeight;
// draw the large red rectangle
canvas.DrawRect(layout.RedBorderXOffSet, layout.RedBorderYOffSet, layout.RedBorderWidth,
layout.RedBorderHeight, RedBorderPaint);
// clear the tracked widgets, tracked widgets are updated every time we draw the widgets
// base widgets contain the default size and location relative to the scope. baseline widgets
// will need to be multiplied by the node scale height and width
layout.TrackedWidgets.Clear();
var widget = new widget
{
    X = layout.RedBorderXOffSet + 5,
    Y = layout.RedBorderYOffSet + layout.TrackedReticleWidgets[0].Height + 15,
    Height = layout.RedBorderHeight * (widgetHeight / availableHeight),
    Width = layout.RedBorderWidth * (widgetWidth / availableWidth)
};
// define colors for text and border colors for small rectangles (widgets)
public static SKPaint SelectedWidgetColor => new SKPaint { Color = SKColors.LightPink,
Style = SKPaintStyle.StrokeAndFill, StrokeWidth = 3 };
public static SKPaint EmptyWidgetBorder => new SKPaint { Color = SKColors.DarkGray,
Style = SKPaintStyle.Stroke, StrokeWidth = 3 };
public static SKPaint EmptyWidgetText => new SKPaint { Color = SKColors.Black, TextSize
= 10, FakeBoldText = false, Style = SKPaintStyle.Stroke, Typeface =
SKTypeface.FromFamilyName("Arial") };
public static SKPaint DefinedWidgetText => new SKPaint { Color = SKColors.DarkRed,
FakeBoldText = false, Style = SKPaintStyle.Stroke };
// create small rectangle (widget) and draw the widget
var widgetRectangle = SKRect.Create(widget.X, widget.Y, widget.Width, widget.Height);
canvas.DrawRect(widgetRectangle, widget.IsSelected ? SelectedWidgetColor :
EmptyWidgetBorder);
// now lets create the text to draw in the widget
string text = EnumUtility.GetDescription(widget.WidgetDataType);
float textWidth = EmptyWidgetText.MeasureText(text);
EmptyWidgetText.TextSize = widget.Width * GetUnscaledWidgetWith(widget) *
EmptyWidgetText.TextSize / textWidth;
SKRect textBounds = new SKRect();
EmptyWidgetText.MeasureText(text, ref textBounds);
float xText = widgetRectangle.MidX - textBounds.MidX;
float yText = widgetRectangle.MidY - textBounds.MidY;
canvas.DrawText(text, xText, yText, EmptyWidgetText);
'''
I have been trying to make an app that takes models from Google Poly and puts them onto the scene in an iframe.
The initial problem was that models were too large or too small, so an approach was suggested to me by the A-Frame community which worked fine for a while, but it gave errors when the scale and rotation were changed.
Here is the component I am using to make sure that models are properly scaled.
AFRAME.registerComponent('autoscale', {
  schema: {type: 'number', default: 1},

  init: function () {
    this.scale();
    this.el.addEventListener('object3dset', () => this.scale());
  },

  scale: function () {
    const el = this.el;
    const span = this.data;
    const mesh = el.getObject3D('mesh');
    if (!mesh) return;
    const bbox = new THREE.Box3().setFromObject(mesh);
    const scale = span / bbox.getSize().length();
    var sx = this.el.getAttribute('scale').x;
    var sy = this.el.getAttribute('scale').y;
    var sz = this.el.getAttribute('scale').z;
    var rx = this.el.getAttribute('rotation').x * (Math.PI / 180);
    var ry = this.el.getAttribute('rotation').y * (Math.PI / 180);
    var rz = this.el.getAttribute('rotation').z * (Math.PI / 180);
    mesh.rotation.set(rx, ry, rz);
    mesh.scale.set(scale * sx, scale * sy, scale * sz);
    var a = new THREE.Box3().setFromObject(this.el.object3D);
    var cx = (a.min.x + a.max.x) / 2;
    var cy = (a.min.y + a.max.y) / 2;
    var cz = (a.min.z + a.max.z) / 2;
    var posx = this.el.object3D.position.x;
    var posy = this.el.object3D.position.y;
    var posz = this.el.object3D.position.z;
    console.log("boundingBox xyz: x: " + cx + ", y: " + cy + " z: " + cz);
    console.log("box position xyz: x: " + posx + ", y: " + posy + " z: " + posz);
    var translateX = posx - cx;
    var translateY = posy - cy;
    var translateZ = posz - cz;
    this.el.object3DMap.mesh.translateX(translateX / sx);
    this.el.object3DMap.mesh.translateY(translateY / sy);
    this.el.object3DMap.mesh.translateZ(translateZ / sz);
  }
});
There are two issues with the above approach:
When I scale the model via its scale attribute, e.g. scale="2 10 2" with one axis much larger than the others, the center shown in the A-Frame inspector gets messed up.
When I rotate the model via its rotation attribute, the pivot goes off. I tried setting the rotation, but no luck.
Any help with the code above will be appreciated.
I think you may want to build a transformation matrix yourself, with the transformation order you want (perhaps scale, then rotate, then translate?). See: THREE.js: Can you force a different order of operations for three.js?
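A minimal sketch of that idea, assuming THREE is available globally (as it is inside A-Frame) and reusing the mesh, sx/sy/sz and rx/ry/rz names from the component above; pos stands for whatever position you want and is an assumption, so treat this as an illustration rather than a drop-in fix:

// Compose the matrix explicitly so the order is scale -> rotate -> translate.
var scaleM = new THREE.Matrix4().makeScale(sx, sy, sz);
var rotM = new THREE.Matrix4().makeRotationFromEuler(new THREE.Euler(rx, ry, rz));
var transM = new THREE.Matrix4().makeTranslation(pos.x, pos.y, pos.z); // pos is assumed

// With column vectors the right-most matrix is applied first: translate * rotate * scale.
var m = new THREE.Matrix4().multiplyMatrices(transM, rotM).multiply(scaleM);

mesh.matrixAutoUpdate = false; // stop three.js from rebuilding the matrix from position/rotation/scale
mesh.matrix.copy(m);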
This is the code I wrote to resize an image while keeping its aspect ratio. It works in Chrome, but nothing is displayed in Firefox. Does anyone know what is wrong?
var image = new Image();
image.src = data;
//$(image).load(function () {
var aspectRatio = getAspectRatio(parseFloat($(image).prop('naturalWidth')),
parseFloat($(image).prop('naturalHeight')),
dstWidth,
dstHeight);
var canvas = document.createElement('canvas');
canvas.width = dstWidth;
canvas.height = dstHeight;
var x = (dstWidth - aspectRatio[0]) / 2;
var y = (dstHeight - aspectRatio[1]) / 2;
var ctx = canvas.getContext("2d");
ctx.drawImage(image, x, y, aspectRatio[0], aspectRatio[1]);
return canvas.toDataURL("image/png");
This is what it generated via canvas.toDataURL:
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAQEAAADACAYAAAAEL9ZYAAAA1klEQVR4nO3BAQ0AAADCoPdPbQ8HFAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADwYD7QAB/UrDfgAAAABJRU5ErkJggg==
To make it work you will need to handle the asynchronous nature of image loading; you will have to use a callback mechanism here. The reason it "works" in Chrome is accidental: the image happens to be in the cache when you try, and/or the browser is able to deliver the uncompressed/decoded image before you use it in the drawImage call.
This will probably not work online for most users, so to handle loading properly you can do the following.
Example:
function getImageUri(url, callback) {
var image = new Image();
image.onload = function () { // handle onload
var image = this; // make sure we're using a valid image
var aspectRatio = getAspectRatio(parseFloat($(image).prop('naturalWidth')),
parseFloat($(image).prop('naturalHeight')),
dstWidth,
dstHeight);
var canvas = document.createElement('canvas');
canvas.width = dstWidth;
canvas.height = dstHeight;
var x = (dstWidth - aspectRatio[0]) / 2;
var y = (dstHeight - aspectRatio[1]) / 2;
var ctx = canvas.getContext("2d");
ctx.drawImage(image, x, y, aspectRatio[0], aspectRatio[1]);
// use callback to provide the finished data-uri
callback(canvas.toDataURL());
}
image.src = url; // set src last
}
Then use it this way:
getImageUri(myURL, function (uri) {
console.log(uri); // contains the image as data-uri
});
For the second day in a row I cannot solve a problem with a scroll area in Starling. Here is my code:
protected function onTouch(event:TouchEvent):void
{
var touches:Vector.<Touch> = event.getTouches(this, TouchPhase.MOVED);
if (touches.length == 1)
{
// one finger touching -> move
var delta:Point = touches[0].getMovement(parent);
x += delta.x;
y += delta.y;
}
else if (touches.length == 2)
{
// two fingers touching -> rotate and scale
var touchA:Touch = touches[0];
var touchB:Touch = touches[1];
var currentPosA:Point = touchA.getLocation(parent);
var previousPosA:Point = touchA.getPreviousLocation(parent);
var currentPosB:Point = touchB.getLocation(parent);
var previousPosB:Point = touchB.getPreviousLocation(parent);
var currentVector:Point = currentPosA.subtract(currentPosB);
var previousVector:Point = previousPosA.subtract(previousPosB);
if (_enableRotation)
{
var currentAngle:Number = Math.atan2(currentVector.y, currentVector.x);
var previousAngle:Number = Math.atan2(previousVector.y, previousVector.x);
var deltaAngle:Number = currentAngle - previousAngle;
}
// update pivot point based on previous center
var previousLocalA:Point = touchA.getPreviousLocation(this);
var previousLocalB:Point = touchB.getPreviousLocation(this);
pivotX = (previousLocalA.x + previousLocalB.x) * 0.5;
pivotY = (previousLocalA.y + previousLocalB.y) * 0.5;
// update location based on the current center
x = (currentPosA.x + currentPosB.x) * 0.5;
y = (currentPosA.y + currentPosB.y) * 0.5;
if (_enableRotation)
{
// rotate
rotation += deltaAngle;
}
// scale
var sizeDiff:Number = currentVector.length / previousVector.length;
var diffSize:Number = scaleX * sizeDiff;
if ((diffSize >= _minScale) && (diffSize <= _maxScale))
{
scaleX *= sizeDiff;
scaleY *= sizeDiff;
}
}
var touch:Touch = event.getTouch(this, TouchPhase.ENDED);
if (touch)
{
//here is my problem!
}
}
In the last if I want to check where my child object (an image) is, and if it is off screen I want to move the scroll area (a sprite) back to the center of the screen. But I don't know how to do this.
You can use ContainerLayout from the Gazman SDK to do this job. It works with every Starling DisplayObject.
Something like this:
var layout = new ContainerLayout();
layout.horizontalCenter = 0;
layout.applyLayoutOn(myScroller);
Here's a noodle scratcher.
Bearing in mind we have HTML5 local storage, XHR v2 and whatnot, I was wondering if anyone could find a working example, or even just give me a yes or no, for this question:
Is it possible to pre-size an image using the new local storage (or whatever), so that a user who does not have a clue about resizing an image can drag their 10 MB image into my website, have it resized in the browser, and THEN upload it at the smaller size?
I know full well you can do it with Flash, Java applets, ActiveX... The question is whether you can do it with JavaScript + HTML5.
Looking forward to the response on this one.
Ta for now.
Yes, use the File API, then you can process the images with the canvas element.
This Mozilla Hacks blog post walks you through most of the process. For reference here's the assembled source code from the blog post:
// from an input element
var filesToUpload = input.files;
var file = filesToUpload[0];
var img = document.createElement("img");
var reader = new FileReader();
reader.onload = function(e) {img.src = e.target.result}
reader.readAsDataURL(file);
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
var MAX_WIDTH = 800;
var MAX_HEIGHT = 600;
var width = img.width;
var height = img.height;
if (width > height) {
if (width > MAX_WIDTH) {
height *= MAX_WIDTH / width;
width = MAX_WIDTH;
}
} else {
if (height > MAX_HEIGHT) {
width *= MAX_HEIGHT / height;
height = MAX_HEIGHT;
}
}
canvas.width = width;
canvas.height = height;
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0, width, height);
var dataurl = canvas.toDataURL("image/png");
//Post dataurl to the server with AJAX
I tackled this problem a few years ago and uploaded my solution to github as https://github.com/rossturner/HTML5-ImageUploader
robertc's answer uses the solution proposed in the Mozilla Hacks blog post; however, I found this gave really poor image quality when resizing to a scale that was not 2:1 (or a multiple thereof). I started experimenting with different image resizing algorithms, although most ended up being quite slow or else were not great in quality either.
Finally I came up with a solution which I believe executes quickly and also gives pretty good results. Since the Mozilla approach of copying from one canvas to another works quickly and without loss of image quality at a 2:1 ratio, given a target of x pixels wide and y pixels tall, I use this canvas-halving method until the image is between x and 2x wide, and y and 2y tall. At that point I switch to algorithmic image resizing for the final "step" down to the target size. After trying several different algorithms I settled on bilinear interpolation, taken from a blog which is no longer online but accessible via the Internet Archive, which gives good results. Here's the applicable code:
ImageUploader.prototype.scaleImage = function(img, completionCallback) {
var canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
while (canvas.width >= (2 * this.config.maxWidth)) {
canvas = this.getHalfScaleCanvas(canvas);
}
if (canvas.width > this.config.maxWidth) {
canvas = this.scaleCanvasWithAlgorithm(canvas);
}
var imageData = canvas.toDataURL('image/jpeg', this.config.quality);
this.performUpload(imageData, completionCallback);
};
ImageUploader.prototype.scaleCanvasWithAlgorithm = function(canvas) {
var scaledCanvas = document.createElement('canvas');
var scale = this.config.maxWidth / canvas.width;
scaledCanvas.width = canvas.width * scale;
scaledCanvas.height = canvas.height * scale;
var srcImgData = canvas.getContext('2d').getImageData(0, 0, canvas.width, canvas.height);
var destImgData = scaledCanvas.getContext('2d').createImageData(scaledCanvas.width, scaledCanvas.height);
this.applyBilinearInterpolation(srcImgData, destImgData, scale);
scaledCanvas.getContext('2d').putImageData(destImgData, 0, 0);
return scaledCanvas;
};
ImageUploader.prototype.getHalfScaleCanvas = function(canvas) {
var halfCanvas = document.createElement('canvas');
halfCanvas.width = canvas.width / 2;
halfCanvas.height = canvas.height / 2;
halfCanvas.getContext('2d').drawImage(canvas, 0, 0, halfCanvas.width, halfCanvas.height);
return halfCanvas;
};
ImageUploader.prototype.applyBilinearInterpolation = function(srcCanvasData, destCanvasData, scale) {
function inner(f00, f10, f01, f11, x, y) {
var un_x = 1.0 - x;
var un_y = 1.0 - y;
return (f00 * un_x * un_y + f10 * x * un_y + f01 * un_x * y + f11 * x * y);
}
var i, j;
var iyv, iy0, iy1, ixv, ix0, ix1;
var idxD, idxS00, idxS10, idxS01, idxS11;
var dx, dy;
var r, g, b, a;
for (i = 0; i < destCanvasData.height; ++i) {
iyv = i / scale;
iy0 = Math.floor(iyv);
// Math.ceil can go over bounds
iy1 = (Math.ceil(iyv) > (srcCanvasData.height - 1) ? (srcCanvasData.height - 1) : Math.ceil(iyv));
for (j = 0; j < destCanvasData.width; ++j) {
ixv = j / scale;
ix0 = Math.floor(ixv);
// Math.ceil can go over bounds
ix1 = (Math.ceil(ixv) > (srcCanvasData.width - 1) ? (srcCanvasData.width - 1) : Math.ceil(ixv));
idxD = (j + destCanvasData.width * i) * 4;
// matrix to vector indices
idxS00 = (ix0 + srcCanvasData.width * iy0) * 4;
idxS10 = (ix1 + srcCanvasData.width * iy0) * 4;
idxS01 = (ix0 + srcCanvasData.width * iy1) * 4;
idxS11 = (ix1 + srcCanvasData.width * iy1) * 4;
// overall coordinates to unit square
dx = ixv - ix0;
dy = iyv - iy0;
// r, g, b, a are kept as separate variables on purpose, for debugging
r = inner(srcCanvasData.data[idxS00], srcCanvasData.data[idxS10], srcCanvasData.data[idxS01], srcCanvasData.data[idxS11], dx, dy);
destCanvasData.data[idxD] = r;
g = inner(srcCanvasData.data[idxS00 + 1], srcCanvasData.data[idxS10 + 1], srcCanvasData.data[idxS01 + 1], srcCanvasData.data[idxS11 + 1], dx, dy);
destCanvasData.data[idxD + 1] = g;
b = inner(srcCanvasData.data[idxS00 + 2], srcCanvasData.data[idxS10 + 2], srcCanvasData.data[idxS01 + 2], srcCanvasData.data[idxS11 + 2], dx, dy);
destCanvasData.data[idxD + 2] = b;
a = inner(srcCanvasData.data[idxS00 + 3], srcCanvasData.data[idxS10 + 3], srcCanvasData.data[idxS01 + 3], srcCanvasData.data[idxS11 + 3], dx, dy);
destCanvasData.data[idxD + 3] = a;
}
}
};
This scales an image down to a width of config.maxWidth, maintaining the original aspect ratio. At the time of development this worked on iPad/iPhone Safari in addition to major desktop browsers (IE9+, Firefox, Chrome) so I expect it will still be compatible given the broader uptake of HTML5 today. Note that the canvas.toDataURL() call takes a mime type and image quality which will allow you to control the quality and output file format (potentially different to input if you wish).
The only point this doesn't cover is maintaining the orientation information. Without that metadata the image is resized and saved as-is, losing any orientation metadata within the image. This means images taken on a tablet device "upside down" were rendered as such, even though they would have appeared the right way up in the device's camera viewfinder. If this is a concern, this blog post has a good guide and code examples on how to accomplish this, which I'm sure could be integrated into the above code.
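If all you need is the pixels drawn the right way up, a more recent option (which did not exist when this answer was written, so browser support is an assumption to verify) is createImageBitmap, which can apply the EXIF orientation for you before you draw to the canvas:

// Hedged sketch: draw a File/Blob upright by letting the browser honour the EXIF orientation tag.
async function drawUpright(file, canvas) {
    var bitmap = await createImageBitmap(file, { imageOrientation: "from-image" });
    canvas.width = bitmap.width;
    canvas.height = bitmap.height;
    canvas.getContext("2d").drawImage(bitmap, 0, 0);
}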
Correction to above:
<img src="" id="image">
<input id="input" type="file" onchange="handleFiles()">
<script>
function handleFiles()
{
var filesToUpload = document.getElementById('input').files;
var file = filesToUpload[0];
// Create an image
var img = document.createElement("img");
// Create a file reader
var reader = new FileReader();
// Set the image once loaded into file reader
reader.onload = function(e)
{
img.src = e.target.result;
var canvas = document.createElement("canvas");
//var canvas = $("<canvas>", {"id":"testing"})[0];
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
var MAX_WIDTH = 400;
var MAX_HEIGHT = 300;
var width = img.width;
var height = img.height;
if (width > height) {
if (width > MAX_WIDTH) {
height *= MAX_WIDTH / width;
width = MAX_WIDTH;
}
} else {
if (height > MAX_HEIGHT) {
width *= MAX_HEIGHT / height;
height = MAX_HEIGHT;
}
}
canvas.width = width;
canvas.height = height;
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0, width, height);
var dataurl = canvas.toDataURL("image/png");
document.getElementById('image').src = dataurl;
}
// Load files into file reader
reader.readAsDataURL(file);
// Post the data
/*
var fd = new FormData();
fd.append("name", "some_filename.jpg");
fd.append("image", dataurl);
fd.append("info", "lah_de_dah");
*/
}</script>
Modification to the answer by Justin that works for me:
Added img.onload
Expanded the POST request with a real example
function handleFiles()
{
var dataurl = null;
var filesToUpload = document.getElementById('photo').files;
var file = filesToUpload[0];
// Create an image
var img = document.createElement("img");
// Create a file reader
var reader = new FileReader();
// Set the image once loaded into file reader
reader.onload = function(e)
{
img.src = e.target.result;
img.onload = function () {
var canvas = document.createElement("canvas");
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
var MAX_WIDTH = 800;
var MAX_HEIGHT = 600;
var width = img.width;
var height = img.height;
if (width > height) {
if (width > MAX_WIDTH) {
height *= MAX_WIDTH / width;
width = MAX_WIDTH;
}
} else {
if (height > MAX_HEIGHT) {
width *= MAX_HEIGHT / height;
height = MAX_HEIGHT;
}
}
canvas.width = width;
canvas.height = height;
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0, width, height);
dataurl = canvas.toDataURL("image/jpeg");
// Post the data
var fd = new FormData();
fd.append("name", "some_filename.jpg");
fd.append("image", dataurl);
fd.append("info", "lah_de_dah");
$.ajax({
url: '/ajax_photo',
data: fd,
cache: false,
contentType: false,
processData: false,
type: 'POST',
success: function(data){
$('#form_photo')[0].reset();
location.reload();
}
});
} // img.onload
}
// Load files into file reader
reader.readAsDataURL(file);
}
If you don't want to reinvent the wheel you may try plupload.com
Typescript
async resizeImg(file: Blob): Promise<Blob> {
let img = document.createElement("img");
img.src = await new Promise<any>(resolve => {
let reader = new FileReader();
reader.onload = (e: any) => resolve(e.target.result);
reader.readAsDataURL(file);
});
await new Promise(resolve => img.onload = resolve)
let canvas = document.createElement("canvas");
let ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
let MAX_WIDTH = 1000;
let MAX_HEIGHT = 1000;
let width = img.naturalWidth;
let height = img.naturalHeight;
if (width > height) {
if (width > MAX_WIDTH) {
height *= MAX_WIDTH / width;
width = MAX_WIDTH;
}
} else {
if (height > MAX_HEIGHT) {
width *= MAX_HEIGHT / height;
height = MAX_HEIGHT;
}
}
canvas.width = width;
canvas.height = height;
ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0, width, height);
let result = await new Promise<Blob>(resolve => { canvas.toBlob(resolve, 'image/jpeg', 0.95); });
return result;
}
The accepted answer works great, but the resize logic ignores the case in which the image is larger than the maximum in only one of the axes (for example, height > maxHeight but width <= maxWidth).
I think the following code takes care of all cases in a more straightforward and functional way (ignore the TypeScript type annotations if using plain JavaScript):
private scaleDownSize(width: number, height: number, maxWidth: number, maxHeight: number): {width: number, height: number} {
if (width <= maxWidth && height <= maxHeight)
return { width, height };
else if (width / maxWidth > height / maxHeight)
return { width: maxWidth, height: height * maxWidth / width};
else
return { width: width * maxHeight / height, height: maxHeight };
}
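A possible way to wire this into the canvas code from the earlier answers, treating scaleDownSize as a free function and reusing the img, canvas and ctx names from those snippets (the 800x600 limits are arbitrary), so take it as an illustrative sketch:

// Compute the capped size once, then draw the image at that size.
var size = scaleDownSize(img.naturalWidth, img.naturalHeight, 800, 600);
canvas.width = size.width;
canvas.height = size.height;
ctx.drawImage(img, 0, 0, size.width, size.height);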
fd.append("image", dataurl);
This will not work. On the PHP side you cannot save a file from this.
Use this code instead:
var blobBin = atob(dataurl.split(',')[1]);
var array = [];
for(var i = 0; i < blobBin.length; i++) {
array.push(blobBin.charCodeAt(i));
}
// Blob options do not include a file name; pass the name as the third argument to FormData.append instead
var file = new Blob([new Uint8Array(array)], {type: 'image/png'});
fd.append("image", file, "avatar.png"); // blob file with a filename
Resizing images in a canvas element is generally a bad idea, since it uses the cheapest box interpolation and the resulting image noticeably degrades in quality. I'd recommend using http://nodeca.github.io/pica/demo/ which can perform a Lanczos transformation instead. The demo page above shows the difference between the canvas and Lanczos approaches.
It also uses web workers for resizing images in parallel. There is also a WebGL implementation.
There are some online image resizers that use pica for doing the job, like https://myimageresizer.com
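For reference, a hedged sketch of the pica route; the factory call and option values below are assumptions based on pica's documented API, so check the current README before relying on them:

// Resize srcCanvas down to a target canvas with pica, then get a Blob for upload.
var resizer = window.pica(); // pica exposes a factory when loaded via a script tag
var target = document.createElement('canvas');
target.width = 800;
target.height = 600;
resizer.resize(srcCanvas, target)
    .then(function (result) { return resizer.toBlob(result, 'image/jpeg', 0.9); })
    .then(function (blob) {
        var fd = new FormData();
        fd.append('image', blob, 'resized.jpg');
        // POST fd with XHR/fetch as in the other answers
    });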
You can use dropzone.js if you want a simple, easy upload manager with resize-before-upload functionality.
It has built-in resize functions, but you can provide your own if you want.
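A hedged configuration sketch; the option names follow Dropzone 5.x and may differ in other versions, so treat them as assumptions:

// Client-side resize before upload using Dropzone's built-in options.
var dz = new Dropzone("#upload-form", {
    url: "/upload",          // assumed endpoint
    resizeWidth: 800,        // target width; height is derived to keep the aspect ratio
    resizeQuality: 0.9,      // quality used when re-encoding the resized image
    resizeMethod: "contain"  // scale to fit rather than crop
});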