Sizing an image inside a Word document using the OpenXML SDK (.NET, C#)

I added an image to my generated Word document using the OpenXML SDK, but I am not able to size that image to the document's full width while respecting the image's aspect ratio.
How can I get the Word document width? And what is the best approach to achieve this?

I solved it this way and it works fine:
// Scale the image to the page width
SectionProperties? sectionProperties = null;
PageSize? pageSize = null;
long pageWidth = 0;
if (main.Document.Body is { })
    sectionProperties = main.Document.Body.GetFirstChild<SectionProperties>();
if (sectionProperties is { })
    pageSize = sectionProperties.GetFirstChild<PageSize>();
if (pageSize is { Width: { } })
    pageWidth = pageSize.Width;
if (pageWidth > width)
{
    // use floating-point division so the scale factor isn't truncated
    double sizeFactor = (double)pageWidth / width;
    width = (int)pageWidth;
    height = (int)(sizeFactor * height);
}
double englishMetricUnitsPerInch = 914400;
double pixelsPerInch = 96;
double factor = englishMetricUnitsPerInch / pixelsPerInch;
/*
 * MS-Docs (OpenXML SDK, PageSize class):
 * "All pages in this section must be 11907 twentieths of a point wide
 * (11907 twentieths of a point = 8.269") and 16839 twentieths of a point
 * high (16839 twentieths of a point = 11.694")."
 */
double emuWidth = (width * factor) / 20;
double emuHeight = (height * factor) / 20;
Then, inside the drawing element:
Extent = new() {
    Cx = (Int64Value)(long)emuWidth,
    Cy = (Int64Value)(long)emuHeight
},
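Note that PageSize.Width is the full physical page width. If the image should span only the usable text area, the left and right page margins have to be subtracted as well. A minimal sketch under that assumption (PageMargin is part of the OpenXML SDK; the variable names are mine):

// Sketch: usable width = page width minus left/right margins, in twentieths of a point.
PageMargin? pageMargin = sectionProperties?.GetFirstChild<PageMargin>();
long usableWidth = pageWidth;
if (pageMargin is { Left: { }, Right: { } })
    usableWidth = pageWidth - pageMargin.Left - pageMargin.Right;

// Twentieths of a point -> EMU: 1 pt = 12700 EMU, so 1/20 pt = 635 EMU.
long emuUsableWidth = usableWidth * 635;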

Related

SkiaSharp: how to get coordinates from touch location after scaling and transform

I have a Xamarin project where I am using SkiaSharp. I am relatively new to the drawing utility. I've spent a few days trying to figure out this issue with no luck. After scaling and transforming the canvas, when I touch the SKCanvasView on the phone screen and look at the 'location' point in the touch event, it's not the same location where the canvas drew something. I need the exact location where I drew the rectangle.
There is a lot of code below, and granted it's not all the code, but it is the important parts. I am absolutely baffled why I draw at one (X,Y) location, yet when I touch the screen the touch event for the canvas gives me a completely different location than the (X,Y) the widget was drawn at.
'''
public static void DrawLayout(SKImageInfo info, SKCanvas canvas, SKSvg svg,
SetupViewModel vm)
{
var layout = vm.SelectedReticleLayout;
float yRatio;
float xRatio;
float widgetHeight = 75;
float widgetWidth = 170;
float availableWidth = 720;
float availableHeight = 1280;
var currentZoomScale = getScale();
canvas.Translate(info.Width / 2f, info.Height / 2f);
SKRect bounds = svg.ViewBox;
xRatio = (info.Width / bounds.Width) + ((info.Width / bounds.Width) * currentZoomScale);
yRatio = (info.Height / bounds.Height) + ((info.Height / bounds.Height) *
currentZoomScale);
float ratio = Math.Min(xRatio, yRatio);
canvas.Scale(ratio);
canvas.Translate(-bounds.MidX, -bounds.MidY);
canvas.DrawPicture(svg.Picture, new SKPaint { Color = SKColors.White, Style =
SKPaintStyle.Fill });
// now set the X,Y and Width and Height of the large Red Rectangle
float imageCenter = canvas.LocalClipBounds.Width / 2;
layout.RedBorderXOffSet = imageCenter - (imageCenter / 2.0f) +
canvas.LocalClipBounds.Left;
float redBorderYOffSet = (float)(svg.Picture.CullRect.Top +
Math.Ceiling(.0654450261780105f * svg.Picture.CullRect.Bottom));
layout.RedBorderYOffSet = (float)(canvas.LocalClipBounds.Top +
Math.Ceiling(.0654450261780105f * canvas.LocalClipBounds.Bottom));
layout.RedBorderWidth = canvas.LocalClipBounds.Width / 2.0f;
layout.RedBorderWidthXOffSet = layout.RedBorderWidth + layout.RedBorderXOffSet;
layout.RedBorderHeight = (float)(canvas.LocalClipBounds.Bottom -
Math.Ceiling(.0654450261780105f * canvas.LocalClipBounds.Bottom * 2)) -
canvas.LocalClipBounds.Top;
layout.RedBorderHeightYOffSet = layout.RedBorderYOffSet + layout.RedBorderHeight;
// draw the large red rectangle
canvas.DrawRect(layout.RedBorderXOffSet, layout.RedBorderYOffSet, layout.RedBorderWidth,
layout.RedBorderHeight, RedBorderPaint);
// clear the tracked widgets; tracked widgets are updated every time we draw the widgets.
// base widgets contain the default size and location relative to the scope. base line
// widgets will need to be multiplied by the node scale height and width
layout.TrackedWidgets.Clear();
var widget = new widget
{
X = layout.RedBorderXOffSet + 5,
Y = layout.RedBorderYOffSet + layout.TrackedReticleWidgets[0].Height + 15,
Height = layout.RedBorderHeight * (widgetHeight / availableHeight),
Width = layout.RedBorderWidth * (widgetWidth / availableWidth)
};
// define colors for text and border colors for small rectangles (widgets)
public static SKPaint SelectedWidgetColor => new SKPaint { Color = SKColors.LightPink,
Style = SKPaintStyle.StrokeAndFill, StrokeWidth = 3 };
public static SKPaint EmptyWidgetBorder => new SKPaint { Color = SKColors.DarkGray,
Style = SKPaintStyle.Stroke, StrokeWidth = 3 };
public static SKPaint EmptyWidgetText => new SKPaint { Color = SKColors.Black, TextSize
= 10, FakeBoldText = false, Style = SKPaintStyle.Stroke, Typeface =
SKTypeface.FromFamilyName("Arial") };
public static SKPaint DefinedWidgetText => new SKPaint { Color = SKColors.DarkRed,
FakeBoldText = false, Style = SKPaintStyle.Stroke };
// create small rectangle (widget) and draw the widget
var widgetRectangle = SKRect.Create(widget.X, widget.Y, widget.Width, widget.Height);
canvas.DrawRect(widgetRectangle, widget.IsSelected ? SelectedWidgetColor :
EmptyWidgetBorder);
// now lets create the text to draw in the widget
string text = EnumUtility.GetDescription(widget.WidgetDataType);
float textWidth = EmptyWidgetText.MeasureText(text);
EmptyWidgetText.TextSize = widget.Width * GetUnscaledWidgetWith(widget) *
EmptyWidgetText.TextSize / textWidth;
SKRect textBounds = new SKRect();
EmptyWidgetText.MeasureText(text, ref textBounds);
float xText = widgetRectangle.MidX - textBounds.MidX;
float yText = widgetRectangle.MidY - textBounds.MidY;
canvas.DrawText(text, xText, yText, EmptyWidgetText);
'''
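A common way to solve this (not shown in the question) is to capture the canvas's TotalMatrix after all the Translate/Scale calls in DrawLayout, then invert it in the touch handler to map the screen point back into drawing coordinates. A minimal sketch under that assumption (savedTotalMatrix is a hypothetical field you would set at the end of DrawLayout):

// At the end of DrawLayout, after all Translate/Scale calls:
savedTotalMatrix = canvas.TotalMatrix; // hypothetical SKMatrix field

// In the SKCanvasView touch handler:
void OnTouch(object sender, SKTouchEventArgs e)
{
    if (savedTotalMatrix.TryInvert(out SKMatrix inverse))
    {
        // Maps the touch point into the same coordinate space the
        // rectangles were drawn in, so it can be hit-tested directly.
        SKPoint canvasPoint = inverse.MapPoint(e.Location);
        bool hit = widgetRectangle.Contains(canvasPoint);
    }
    e.Handled = true;
}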

How can I resize and compress an image file in Xamarin [duplicate]

I am working on Xamarin.Forms. I have to select images from the gallery, resize them, and then upload them to a server. How can I resize a selected image to a given size?
Please advise on how I can do this.
This can be used with a stream (if you're using the Media Plugin https://github.com/jamesmontemagno/MediaPlugin) or standard byte arrays.
// If you already have the byte[]
byte[] resizedImage = await CrossImageResizer.Current.ResizeImageWithAspectRatioAsync(originalImageBytes, 500, 1000);
// If you have a stream, such as:
// var file = await CrossMedia.Current.PickPhotoAsync(options);
// var originalImageStream = file.GetStream();
byte[] resizedImage = await CrossImageResizer.Current.ResizeImageWithAspectRatioAsync(originalImageStream, 500, 1000);
I tried to use CrossImageResizer.Current..., but I did not find it in the Media Plugin. Instead I found an option called MaxWidthHeight, which worked only if I also added the PhotoSize = PhotoSize.MaxWidthHeight option.
For example:
var file = await CrossMedia.Current.PickPhotoAsync(new PickMediaOptions() { PhotoSize = PhotoSize.MaxWidthHeight, MaxWidthHeight = 600 });
var file = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions { PhotoSize = PhotoSize.MaxWidthHeight, MaxWidthHeight = 600 });
Sadly, there isn't a good cross-platform image resizer (that I've found at the time of this post). Image processing wasn't really designed to take place in a cross-platform environment for iOS and Android. It's much faster and cleaner to perform this on each platform using platform-specific code. You can do this using dependency injection and the DependencyService (or any other service or IoC container); a sketch of the shared wiring follows.
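For context, a minimal sketch of the shared interface this pattern needs (the shape matches the platform implementations below; the registration attribute is standard Xamarin.Forms):

// Shared project: the contract each platform implements.
public interface IMediaService
{
    byte[] ResizeImage(byte[] imageData, float width, float height);
}

// Each platform project registers its implementation, e.g.:
// [assembly: Dependency(typeof(MediaService))]

// Shared code then resolves it at runtime:
byte[] resized = DependencyService.Get<IMediaService>().ResizeImage(originalBytes, 800, 600);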
AdamP gives a great response on how to do this: Platform Specific Image Resizing.
Here is the code taken from the link above.
iOS
public class MediaService : IMediaService
{
public byte[] ResizeImage(byte[] imageData, float width, float height)
{
UIImage originalImage = ImageFromByteArray(imageData);
var originalHeight = originalImage.Size.Height;
var originalWidth = originalImage.Size.Width;
nfloat newHeight = 0;
nfloat newWidth = 0;
if (originalHeight > originalWidth)
{
newHeight = height;
nfloat ratio = originalHeight / height;
newWidth = originalWidth / ratio;
}
else
{
newWidth = width;
nfloat ratio = originalWidth / width;
newHeight = originalHeight / ratio;
}
width = (float)newWidth;
height = (float)newHeight;
UIGraphics.BeginImageContext(new SizeF(width, height));
originalImage.Draw(new RectangleF(0, 0, width, height));
var resizedImage = UIGraphics.GetImageFromCurrentImageContext();
UIGraphics.EndImageContext();
var bytesImagen = resizedImage.AsJPEG().ToArray();
resizedImage.Dispose();
return bytesImagen;
}
}
Android
public class MediaService : IMediaService
{
public byte[] ResizeImage(byte[] imageData, float width, float height)
{
// Load the bitmap
BitmapFactory.Options options = new BitmapFactory.Options(); // decoding options for BitmapFactory
options.InPurgeable = true; // allow the system to reclaim the pixel memory if needed
Bitmap originalImage = BitmapFactory.DecodeByteArray(imageData, 0, imageData.Length, options);
float newHeight = 0;
float newWidth = 0;
var originalHeight = originalImage.Height;
var originalWidth = originalImage.Width;
if (originalHeight > originalWidth)
{
newHeight = height;
float ratio = originalHeight / height;
newWidth = originalWidth / ratio;
}
else
{
newWidth = width;
float ratio = originalWidth / width;
newHeight = originalHeight / ratio;
}
Bitmap resizedImage = Bitmap.CreateScaledBitmap(originalImage, (int)newWidth, (int)newHeight, true);
originalImage.Recycle();
using (MemoryStream ms = new MemoryStream())
{
resizedImage.Compress(Bitmap.CompressFormat.Png, 100, ms);
resizedImage.Recycle();
return ms.ToArray();
}
}
}
WinPhone
public class MediaService : IMediaService
{
private MediaImplementation mi = new MediaImplementation();
public byte[] ResizeImage(byte[] imageData, float width, float height)
{
byte[] resizedData;
using (MemoryStream streamIn = new MemoryStream(imageData))
{
WriteableBitmap bitmap = PictureDecoder.DecodeJpeg(streamIn, (int)width, (int)height);
float Height = 0;
float Width = 0;
float originalHeight = bitmap.PixelHeight;
float originalWidth = bitmap.PixelWidth;
if (originalHeight > originalWidth)
{
Height = height;
float ratio = originalHeight / height;
Width = originalWidth / ratio;
}
else
{
Width = width;
float ratio = originalWidth / width;
Height = originalHeight / ratio;
}
using (MemoryStream streamOut = new MemoryStream())
{
bitmap.SaveJpeg(streamOut, (int)Width, (int)Height, 0, 100);
resizedData = streamOut.ToArray();
}
}
return resizedData;
}
}
EDIT: If you are already using FFImageLoading in your project then you can just use that for your platform.
https://github.com/luberda-molinet/FFImageLoading
I fixed this in my project, and this was the best way for me.
When taking a photo or picking an image from the gallery, you can change the size with these properties:
var file = await CrossMedia.Current.TakePhotoAsync(new StoreCameraMediaOptions
{
PhotoSize = PhotoSize.Custom,
CustomPhotoSize = 90 //Resize to 90% of original
});
For more information: https://github.com/jamesmontemagno/MediaPlugin

Dynamic videos from user inputs - How does it work?

Is there a way to generate a video frame's content by code?
For example:
I want to make a program that takes some string variable as input and then outputs a video of the same text, but with special effects applied to the text itself.
The idea came to mind after seeing some projects Facebook has done on their website. They have a database of pictures, comments, friends, events, likes and so on that are related to a user. With all this information they make videos, for example the Friends Day video, which is completely personalized to the user who posts it.
How do these things work?
Is there some sort of software I can use? Can someone give me a place to start?
I want to make a program that takes some string variable as input and then outputs a video of the same text, but with special effects applied to the text itself.
You can render the text using a canvas and record the result using the MediaRecorder API.
The effects can be made in various ways, of course, and the text content itself can be checked against a database, or used in conjunction with machine learning, to structure or obtain some data related to the text. That has a wide scope though, wider than what fits here.
In any case, the generation of the video itself is shown below. It is a simple example where you can enter some text; it gets animated and recorded as a video. When you click stop, the actual video shows instead (which you could let the user download). You can extend it with other effects and storyboard features so you can record scenes and develop it into an entire movie.
Notice that all animation happens on the canvas in real time. Typical video is recorded at 30 frames per second, not the 60 frames per second that is typical for canvas animation; this is also something to consider with regard to smoothness. You can use this to your advantage, though, by targeting 30 FPS to give your code a little more time to process. The API is supposed to support step-recording when no frame rate argument is given, but I have not been able to make that work as of yet.
There are currently limits to browser support for the API, as not all browsers support it yet; a quick feature check is sketched below.
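A minimal feature check, as a sketch (codec support varies by browser; video/webm is the most widely supported container for canvas capture):

// Feature-detect canvas capture + MediaRecorder before wiring up the demo below.
if (typeof HTMLCanvasElement.prototype.captureStream !== "function" ||
    typeof MediaRecorder === "undefined") {
  console.warn("Canvas recording is not supported in this browser.");
} else if (!MediaRecorder.isTypeSupported("video/webm")) {
  console.warn("video/webm is not supported; try another container/codec.");
}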
Example
// un-refined code for demo use only
// set up globals, get key elements
var div = document.querySelector("div");
var btn = document.querySelector("button");
var txt = document.querySelector("input");
var c = document.querySelector("canvas");
// make 2d context without alpha channel
var ctx = c.getContext("2d", {alpha: false});
var isRecording = false;
// stream vars
var rec, stream, tref;
// if clicked, lets go
btn.onclick = setup;
function setup() {
// toggle button text/status for demo
if (isRecording) {
this.disabled = true;
this.innerHTML = "Making video...";
stop();
return
}
else {
isRecording = true;
this.innerHTML = "Click to stop & show video";
}
// Setup canvas for text rendering
var ct1 = document.createElement("canvas");
var ct2 = document.createElement("canvas");
var ctxText1 = ct1.getContext("2d");
var ctxText2 = ct2.getContext("2d");
var w, h;
w = ct1.width = ct2.width = c.width;
h = ct1.height = ct2.height = c.height;
setupCtx(ctxText1, "#333", "#FF9F05");
setupCtx(ctxText2, "#FF9F05", "#000");
function setupCtx(ctxText, bg, fg) {
ctxText.textAlign = "center";
ctxText.textBaseline = "middle";
ctxText.font = "92px sans-serif";
ctxText.fillStyle = bg;
ctxText.fillRect(0, 0, w, h);
ctxText.fillStyle = fg;
ctxText.translate(w/2, h/2);
ctxText.rotate(-0.5);
ctxText.fillText(txt.value.toUpperCase(), 0, 0);
}
// populate grid (see Square object below which handles the tiles)
var cols = 18,rows = 11,
cellWidth = (c.width / cols)|0,
cellHeight = (c.height / rows)|0,
grid = [],
len = cols * rows, y = 0, x,
index, hasActive = true;
for (; y < rows; y++) {
for (x = 0; x < cols; x++) {
grid.push(new Square(ctx, x * cellWidth, y * cellHeight, cellWidth, cellHeight, ct1, ct2, 0.01));
}
}
x = 0;
// start recording canvas to video
record();
//animation loop (refactor at will)
function loop() {
ctx.setTransform(1, 0, 0, 1, 0, 0);
ctx.globalAlpha = 1;
ctx.clearRect(0, 0, w, h);
// trigger cells
for (y = 0; y < rows; y++) {
var gx = (x | 0) - y;
if (gx >= 0 && gx < cols) {
index = y * cols + gx;
grid[index].trigger();
}
}
x += 0.25;
// update all
for (var i = 0; i < grid.length; i++) grid[i].update();
tref = requestAnimationFrame(loop);
}
// set up MediaRecorder to record the canvas @ 30 FPS (canvas animates at 60 FPS by default)
function record() {
stream = c.captureStream(30);
rec = new MediaRecorder(stream);
rec.addEventListener('dataavailable', done);
rec.start();
requestAnimationFrame(loop);
}
}
// stop recorder and trigger dataavailable event
function stop() {
rec.stop();
cancelAnimationFrame(tref); // stop loop as well
}
// finish up, show new shiny video instead of canvas
function done(e) {
var blob = new Blob([e.data], {"type": "video/webm"});
var video = document.createElement("video");
document.body.removeChild(c);
document.body.appendChild(video);
// play, Don
video.autoplay = video.loop = video.controls = true;
video.src = URL.createObjectURL(blob);
video.onplay = function() {
div.innerHTML = "Playing resulting video below:";
};
}
// stolen from a previous example I made (CC3.0-attr btw as always)
// this is included for the sole purpose of some animation.
function Square(ctx, x, y, w, h, image, image2, speed) {
this.ctx = ctx;
this.x = x;
this.y = y;
this.height = h;
this.width = w;
this.image = image;
this.image2 = image2;
this.first = true;
this.alpha = 0; // current alpha for this instance
this.speed = speed; // increment for alpha per frame
this.triggered = false; // is running
this.done = false; // has finished
}
Square.prototype = {
trigger: function () { // start this rectangle
this.triggered = true
},
update: function () {
if (this.triggered && !this.done) { // only if active
this.alpha += this.speed; // update alpha
if (this.alpha <= 0 || this.alpha >= 1)
this.speed = -this.speed;
}
var t = Math.sin(Math.PI * this.alpha);
if (t <= 0) this.first = !this.first;
this.ctx.fillStyle = this.color; // render this instance
this.ctx.globalAlpha = Math.max(0, Math.min(1, t));
var cx = this.x + this.width * 0.5, // center position
cy = this.y + this.width * 0.5;
this.ctx.setTransform(t*0.5+0.5, 0, 0, t*0.5+0.5, cx, cy); // scale and translate
this.ctx.rotate(-Math.PI * (1 - t)); // rotate, 90° <- alpha
this.ctx.translate(-cx, -cy); // translate back
this.ctx.drawImage(this.first ? this.image : this.image2, this.x, this.y, this.width, this.height, this.x, this.y, this.width, this.height);
}
};
body {background: #555;color:#ddd;font:16px sans-serif}
<div>
<label>Enter text to animate: <input value="U R AWESOME"></label>
<button>Click to start animation + recording</button>
</div>
<canvas width=640 height=400></canvas>

What did Google Play Music use to create the "particles" visualiser on their website?

What does Google Play Music use to create the "particles" visualiser on their website?
What 3D graphics software was used? My original guess was Unity3D exported via WebGL, or perhaps three.js or UE4?
I don't understand how they get the web audio player to stream the audio while the 3D visualiser reacts to the audio frequencies.
I wanted to re-create the same thing, but I am not sure where to start and lack the knowledge of how it is done.
I couldn't find any answers on the web.
Most importantly, are there different methods of doing what Google has done? What are the main differences?
Link to a demonstration of the visualiser: https://www.youtube.com/watch?v=mjfKCSPFdGI
Thanks.
It would be nice if you'd post a gif or something to show what you're referring to.
Making something audio-reactive is pretty simple, though. Here's an open source site with lots of audio-reactive examples.
As for how to do it: you basically use the Web Audio API to stream the music and use its AnalyserNode to get audio data out.
"use strict";
const ctx = document.querySelector("canvas").getContext("2d");
ctx.fillText("click to start", 100, 75);
ctx.canvas.addEventListener('click', start);
function start() {
ctx.canvas.removeEventListener('click', start);
// make a Web Audio Context
const context = new AudioContext();
const analyser = context.createAnalyser();
// Make a buffer to receive the audio data
const numPoints = analyser.frequencyBinCount;
const audioDataArray = new Uint8Array(numPoints);
function render() {
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
// get the current audio data
analyser.getByteFrequencyData(audioDataArray);
const width = ctx.canvas.width;
const height = ctx.canvas.height;
const size = 5;
// draw a point every size pixels
for (let x = 0; x < width; x += size) {
// compute the audio data for this point
const ndx = x * numPoints / width | 0;
// get the audio data and make it go from 0 to 1
const audioValue = audioDataArray[ndx] / 255;
// draw a rect size by size big
const y = audioValue * height;
ctx.fillRect(x, y, size, size);
}
requestAnimationFrame(render);
}
requestAnimationFrame(render);
// Make a audio node
var audio = new Audio();
audio.loop = true;
audio.autoplay = true;
// this line is only needed if the music you are trying to play is on a
// different server than the page trying to play it.
// It asks the server for permission to use the music. If the server says "no"
// then you will not be able to play the music
audio.crossOrigin = "anonymous";
// call `handleCanplay` when the music can be played
audio.addEventListener('canplay', handleCanplay);
audio.src = "https://twgljs.org/examples/sounds/DOCTOR%20VOX%20-%20Level%20Up.mp3";
audio.load();
function handleCanplay() {
// connect the audio element to the analyser node and the analyser node
// to the main Web Audio context
const source = context.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(context.destination);
}
}
canvas { border: 1px solid black; display: block; }
<canvas></canvas>
Then it's just up to you to do something creative. For example, instead of drawing a bunch of black dots across the screen like the first example, we could scale randomly colored circles and adjust their color and velocity, something like this:
"use strict";
var context = new AudioContext();
var analyser = context.createAnalyser();
var numPoints = analyser.frequencyBinCount;
var audioDataArray = new Uint8Array(numPoints);
var ctx = document.querySelector("canvas").getContext("2d");
var ctx2 = document.createElement("canvas").getContext("2d");
var numSpots = 5;
var spots = [];
for (var ii = 0; ii < numSpots; ++ii) {
spots.push({
x: Math.random(),
y: Math.random(),
velocity: 0.01,
direction: Math.random(),
hue: Math.random() * 360 | 0,
});
}
function rnd(min, max) {
if (max === undefined) {
max = min;
min = 0;
}
return Math.random() * (max - min) + min;
}
function render() {
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
ctx.save();
ctx.globalAlpha = .97;
ctx.globalCompositeOperation = "source-out";
ctx.translate(ctx.canvas.width / 2, ctx.canvas.height / 2);
ctx.scale(1.001, 1.001);
ctx.rotate(0.003);
ctx.translate(-ctx.canvas.width / 2, -ctx.canvas.height / 2);
ctx.drawImage(ctx2.canvas, 0, 0, ctx.canvas.width, ctx.canvas.height);
ctx.restore();
analyser.getByteFrequencyData(audioDataArray);
const width = ctx.canvas.width;
const height = ctx.canvas.height;
spots.forEach((spot, n) => {
const ndx = n * numPoints / numSpots | 0;
const audioValue = audioDataArray[ndx] / 255;
const sat = Math.pow(audioValue, 2) * 100;
spot.velocity = audioValue * 0.02;
spot.direction = (spot.direction + 1 + rnd(-.01, 0.01)) % 1;
const angle = spot.direction * Math.PI * 2;
spot.x = (spot.x + Math.cos(angle) * spot.velocity + 1) % 1;
spot.y = (spot.y + Math.sin(angle) * spot.velocity + 1) % 1;
ctx.fillStyle = "hsl(" + spot.hue + "," + sat + "%,50%)";
ctx.beginPath();
ctx.arc(spot.x * width, spot.y * height, 50 * audioValue, 0, Math.PI * 2, false);
ctx.fill();
});
var temp = ctx;
ctx = ctx2;
ctx2 = temp;
requestAnimationFrame(render);
}
requestAnimationFrame(render);
var audio = new Audio();
audio.loop = true;
audio.autoplay = true;
// this line is only needed if the music you are trying to play is on a
// different server than the page trying to play it.
// It asks the server for permission to use the music. If the server says "no"
// then you will not be able to play the music
audio.crossOrigin = "anonymous";
audio.addEventListener('canplay', handleCanplay);
audio.loop = true;
audio.src = "https://twgljs.org/examples/sounds/DOCTOR%20VOX%20-%20Level%20Up.mp3";
audio.load();
function handleCanplay() {
const source = context.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(context.destination);
}
canvas { border: 1px solid black; display: block; }
<canvas></canvas>
music: DOCTOR VOX - Level Up

HTML5 Pre-resize images before uploading

Here's a noodle scratcher.
Bearing in mind we have HTML5 local storage and XHR v2 and whatnot, I was wondering if anyone could find a working example or even just give me a yes or no for this question:
Is it possible to pre-size an image using the new local storage (or whatever), so that a user who does not have a clue about resizing an image can drag their 10 MB image into my website, have it resized client-side, and THEN have it uploaded at the smaller size?
I know full well you can do it with Flash, Java applets, ActiveX... The question is whether you can do it with JavaScript + HTML5.
Looking forward to the response on this one.
Ta for now.
Yes: use the File API, then you can process the images with the canvas element.
This Mozilla Hacks blog post walks you through most of the process. For reference, here's the assembled source code from the blog post:
// from an input element
var filesToUpload = input.files;
var file = filesToUpload[0];
var img = document.createElement("img");
var reader = new FileReader();
reader.onload = function(e) {img.src = e.target.result}
reader.readAsDataURL(file);
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
var MAX_WIDTH = 800;
var MAX_HEIGHT = 600;
var width = img.width;
var height = img.height;
if (width > height) {
if (width > MAX_WIDTH) {
height *= MAX_WIDTH / width;
width = MAX_WIDTH;
}
} else {
if (height > MAX_HEIGHT) {
width *= MAX_HEIGHT / height;
height = MAX_HEIGHT;
}
}
canvas.width = width;
canvas.height = height;
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0, width, height);
var dataurl = canvas.toDataURL("image/png");
//Post dataurl to the server with AJAX
I tackled this problem a few years ago and uploaded my solution to GitHub as https://github.com/rossturner/HTML5-ImageUploader
robertc's answer uses the solution proposed in the Mozilla Hacks blog post; however, I found this gave really poor image quality when resizing to a scale that was not 2:1 (or a multiple thereof). I started experimenting with different image resizing algorithms, although most ended up being quite slow or were not great in quality either.
Finally I came up with a solution which I believe executes quickly and has pretty good quality too. As the Mozilla approach of copying from one canvas to another works quickly and without loss of image quality at a 2:1 ratio, given a target of x pixels wide and y pixels tall, I use this canvas halving method until the image is between x and 2x wide, and between y and 2y tall. At that point I turn to algorithmic image resizing for the final "step" down to the target size. After trying several different algorithms I settled on bilinear interpolation, taken from a blog which is no longer online but is accessible via the Internet Archive, and which gives good results. Here's the applicable code:
ImageUploader.prototype.scaleImage = function(img, completionCallback) {
var canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
while (canvas.width >= (2 * this.config.maxWidth)) {
canvas = this.getHalfScaleCanvas(canvas);
}
if (canvas.width > this.config.maxWidth) {
canvas = this.scaleCanvasWithAlgorithm(canvas);
}
var imageData = canvas.toDataURL('image/jpeg', this.config.quality);
this.performUpload(imageData, completionCallback);
};
ImageUploader.prototype.scaleCanvasWithAlgorithm = function(canvas) {
var scaledCanvas = document.createElement('canvas');
var scale = this.config.maxWidth / canvas.width;
scaledCanvas.width = canvas.width * scale;
scaledCanvas.height = canvas.height * scale;
var srcImgData = canvas.getContext('2d').getImageData(0, 0, canvas.width, canvas.height);
var destImgData = scaledCanvas.getContext('2d').createImageData(scaledCanvas.width, scaledCanvas.height);
this.applyBilinearInterpolation(srcImgData, destImgData, scale);
scaledCanvas.getContext('2d').putImageData(destImgData, 0, 0);
return scaledCanvas;
};
ImageUploader.prototype.getHalfScaleCanvas = function(canvas) {
var halfCanvas = document.createElement('canvas');
halfCanvas.width = canvas.width / 2;
halfCanvas.height = canvas.height / 2;
halfCanvas.getContext('2d').drawImage(canvas, 0, 0, halfCanvas.width, halfCanvas.height);
return halfCanvas;
};
ImageUploader.prototype.applyBilinearInterpolation = function(srcCanvasData, destCanvasData, scale) {
function inner(f00, f10, f01, f11, x, y) {
var un_x = 1.0 - x;
var un_y = 1.0 - y;
return (f00 * un_x * un_y + f10 * x * un_y + f01 * un_x * y + f11 * x * y);
}
var i, j;
var iyv, iy0, iy1, ixv, ix0, ix1;
var idxD, idxS00, idxS10, idxS01, idxS11;
var dx, dy;
var r, g, b, a;
for (i = 0; i < destCanvasData.height; ++i) {
iyv = i / scale;
iy0 = Math.floor(iyv);
// Math.ceil can go over bounds
iy1 = (Math.ceil(iyv) > (srcCanvasData.height - 1) ? (srcCanvasData.height - 1) : Math.ceil(iyv));
for (j = 0; j < destCanvasData.width; ++j) {
ixv = j / scale;
ix0 = Math.floor(ixv);
// Math.ceil can go over bounds
ix1 = (Math.ceil(ixv) > (srcCanvasData.width - 1) ? (srcCanvasData.width - 1) : Math.ceil(ixv));
idxD = (j + destCanvasData.width * i) * 4;
// matrix to vector indices
idxS00 = (ix0 + srcCanvasData.width * iy0) * 4;
idxS10 = (ix1 + srcCanvasData.width * iy0) * 4;
idxS01 = (ix0 + srcCanvasData.width * iy1) * 4;
idxS11 = (ix1 + srcCanvasData.width * iy1) * 4;
// overall coordinates to unit square
dx = ixv - ix0;
dy = iyv - iy0;
// r, g, b, a are kept as separate named variables on purpose, for debugging
r = inner(srcCanvasData.data[idxS00], srcCanvasData.data[idxS10], srcCanvasData.data[idxS01], srcCanvasData.data[idxS11], dx, dy);
destCanvasData.data[idxD] = r;
g = inner(srcCanvasData.data[idxS00 + 1], srcCanvasData.data[idxS10 + 1], srcCanvasData.data[idxS01 + 1], srcCanvasData.data[idxS11 + 1], dx, dy);
destCanvasData.data[idxD + 1] = g;
b = inner(srcCanvasData.data[idxS00 + 2], srcCanvasData.data[idxS10 + 2], srcCanvasData.data[idxS01 + 2], srcCanvasData.data[idxS11 + 2], dx, dy);
destCanvasData.data[idxD + 2] = b;
a = inner(srcCanvasData.data[idxS00 + 3], srcCanvasData.data[idxS10 + 3], srcCanvasData.data[idxS01 + 3], srcCanvasData.data[idxS11 + 3], dx, dy);
destCanvasData.data[idxD + 3] = a;
}
}
};
This scales an image down to a width of config.maxWidth, maintaining the original aspect ratio. At the time of development this worked on iPad/iPhone Safari in addition to major desktop browsers (IE9+, Firefox, Chrome) so I expect it will still be compatible given the broader uptake of HTML5 today. Note that the canvas.toDataURL() call takes a mime type and image quality which will allow you to control the quality and output file format (potentially different to input if you wish).
The only point this doesn't cover is maintaining the orientation information: without that metadata the image is resized and saved as-is, losing any orientation metadata, so images taken "upside down" on a tablet were rendered as such, even though the device's camera viewfinder showed them flipped. If this is a concern, this blog post has a good guide and code examples on how to accomplish this, which I'm sure could be integrated into the above code; a sketch of reading the orientation tag follows.
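For reference, a widely used sketch for reading the EXIF orientation tag (0x0112) straight from a JPEG's bytes, so the canvas can be rotated or flipped before resizing. It returns -2 for non-JPEGs and -1 when no orientation tag is present:

function getOrientation(file, callback) {
  var reader = new FileReader();
  reader.onload = function (e) {
    var view = new DataView(e.target.result);
    if (view.getUint16(0, false) != 0xFFD8) return callback(-2); // not a JPEG
    var length = view.byteLength, offset = 2;
    while (offset < length) {
      var marker = view.getUint16(offset, false);
      offset += 2;
      if (marker == 0xFFE1) { // APP1 segment, where EXIF lives
        if (view.getUint32(offset += 2, false) != 0x45786966) return callback(-1); // "Exif"
        var little = view.getUint16(offset += 6, false) == 0x4949; // byte order
        offset += view.getUint32(offset + 4, little); // jump to IFD0
        var tags = view.getUint16(offset, little);
        offset += 2;
        for (var i = 0; i < tags; i++)
          if (view.getUint16(offset + (i * 12), little) == 0x0112)
            return callback(view.getUint16(offset + (i * 12) + 8, little)); // 1..8
      } else if ((marker & 0xFF00) != 0xFF00) {
        break; // not a valid marker, stop scanning
      } else {
        offset += view.getUint16(offset, false); // skip over this segment
      }
    }
    return callback(-1); // no orientation tag found
  };
  reader.readAsArrayBuffer(file);
}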
Correction to above:
<img src="" id="image">
<input id="input" type="file" onchange="handleFiles()">
<script>
function handleFiles()
{
var filesToUpload = document.getElementById('input').files;
var file = filesToUpload[0];
// Create an image
var img = document.createElement("img");
// Create a file reader
var reader = new FileReader();
// Set the image once loaded into file reader
reader.onload = function(e)
{
img.src = e.target.result;
var canvas = document.createElement("canvas");
//var canvas = $("<canvas>", {"id":"testing"})[0];
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
var MAX_WIDTH = 400;
var MAX_HEIGHT = 300;
var width = img.width;
var height = img.height;
if (width > height) {
if (width > MAX_WIDTH) {
height *= MAX_WIDTH / width;
width = MAX_WIDTH;
}
} else {
if (height > MAX_HEIGHT) {
width *= MAX_HEIGHT / height;
height = MAX_HEIGHT;
}
}
canvas.width = width;
canvas.height = height;
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0, width, height);
var dataurl = canvas.toDataURL("image/png");
document.getElementById('image').src = dataurl;
}
// Load files into file reader
reader.readAsDataURL(file);
// Post the data
/*
var fd = new FormData();
fd.append("name", "some_filename.jpg");
fd.append("image", dataurl);
fd.append("info", "lah_de_dah");
*/
}</script>
Modification to the answer by Justin that works for me:
Added img.onload
Expanded the POST request with a real example
function handleFiles()
{
var dataurl = null;
var filesToUpload = document.getElementById('photo').files;
var file = filesToUpload[0];
// Create an image
var img = document.createElement("img");
// Create a file reader
var reader = new FileReader();
// Set the image once loaded into file reader
reader.onload = function(e)
{
img.src = e.target.result;
img.onload = function () {
var canvas = document.createElement("canvas");
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
var MAX_WIDTH = 800;
var MAX_HEIGHT = 600;
var width = img.width;
var height = img.height;
if (width > height) {
if (width > MAX_WIDTH) {
height *= MAX_WIDTH / width;
width = MAX_WIDTH;
}
} else {
if (height > MAX_HEIGHT) {
width *= MAX_HEIGHT / height;
height = MAX_HEIGHT;
}
}
canvas.width = width;
canvas.height = height;
var ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0, width, height);
dataurl = canvas.toDataURL("image/jpeg");
// Post the data
var fd = new FormData();
fd.append("name", "some_filename.jpg");
fd.append("image", dataurl);
fd.append("info", "lah_de_dah");
$.ajax({
url: '/ajax_photo',
data: fd,
cache: false,
contentType: false,
processData: false,
type: 'POST',
success: function(data){
$('#form_photo')[0].reset();
location.reload();
}
});
} // img.onload
}
// Load files into file reader
reader.readAsDataURL(file);
}
If you don't want to reinvent the wheel, you may try plupload.com
TypeScript
async resizeImg(file: Blob): Promise<Blob> {
let img = document.createElement("img");
img.src = await new Promise<any>(resolve => {
let reader = new FileReader();
reader.onload = (e: any) => resolve(e.target.result);
reader.readAsDataURL(file);
});
await new Promise(resolve => img.onload = resolve)
let canvas = document.createElement("canvas");
let ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0);
let MAX_WIDTH = 1000;
let MAX_HEIGHT = 1000;
let width = img.naturalWidth;
let height = img.naturalHeight;
if (width > height) {
if (width > MAX_WIDTH) {
height *= MAX_WIDTH / width;
width = MAX_WIDTH;
}
} else {
if (height > MAX_HEIGHT) {
width *= MAX_HEIGHT / height;
height = MAX_HEIGHT;
}
}
canvas.width = width;
canvas.height = height;
ctx = canvas.getContext("2d");
ctx.drawImage(img, 0, 0, width, height);
let result = await new Promise<Blob>(resolve => { canvas.toBlob(resolve, 'image/jpeg', 0.95); });
return result;
}
The accepted answer works great, but the resize logic ignores the case in which the image is larger than the maximum on only one axis (for example, height > maxHeight but width <= maxWidth).
I think the following code handles all cases in a more straightforward and functional way (ignore the TypeScript type annotations if you are using plain JavaScript):
private scaleDownSize(width: number, height: number, maxWidth: number, maxHeight: number): {width: number, height: number} {
if (width <= maxWidth && height <= maxHeight)
return { width, height };
else if (width / maxWidth > height / maxHeight)
return { width: maxWidth, height: height * maxWidth / width};
else
return { width: width * maxHeight / height, height: maxHeight };
}
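A hypothetical usage, slotting it into the canvas flow above (the img/canvas/ctx names match the earlier TypeScript snippet):

// Compute the clamped size once, then draw at that size.
const { width, height } = this.scaleDownSize(img.naturalWidth, img.naturalHeight, 1000, 1000);
canvas.width = width;
canvas.height = height;
ctx.drawImage(img, 0, 0, width, height);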
fd.append("image", dataurl);
This will not work. On PHP side you can not save file with this.
Use this code instead:
var blobBin = atob(dataurl.split(',')[1]); // strip the data-URL header, decode base64
var array = [];
for (var i = 0; i < blobBin.length; i++) {
    array.push(blobBin.charCodeAt(i));
}
var file = new Blob([new Uint8Array(array)], { type: 'image/png' });
fd.append("image", file, "avatar.png"); // the third argument supplies the filename
Resizing images in a canvas element is generally a bad idea, since canvas uses the cheapest box interpolation and the resulting image noticeably degrades in quality. I'd recommend using http://nodeca.github.io/pica/demo/, which can perform a Lanczos transformation instead. The demo page above shows the difference between the canvas and Lanczos approaches.
It also uses web workers to resize images in parallel. There is also a WebGL implementation.
There are some online image resizers that use pica for doing the job, like https://myimageresizer.com. A usage sketch follows.
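A minimal sketch of pica usage, assuming pica 5+ loaded via its UMD build and that sourceImage is your <img> or <canvas> (check the pica README for current options):

// Resize an <img> or <canvas> into a target canvas, then get a JPEG Blob.
const resizer = window.pica();
const target = document.createElement('canvas');
target.width = 800;
target.height = 600;

resizer.resize(sourceImage, target)
  .then(result => resizer.toBlob(result, 'image/jpeg', 0.90))
  .then(blob => console.log('resized to', blob.size, 'bytes'));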
You can use dropzone.js if you want a simple, easy upload manager with resize-before-upload functionality.
It has built-in resize functions, but you can provide your own if you want.
