Pulling images from different canvases and exporting to a single PDF - html5-canvas

I have a situation where I am generating several graphs in a web page and showing them in canvases, and my requirement is that on click of a download button I should be able to export all the canvas images to a single PDF.
I have successfully done this for a single canvas element using html2canvas and jsPDF but cannot figure out how to do the same for all of them.
I followed this JSFiddle code for generating a PDF from html2canvas and jsPDF.
$(document).ready(function() {
    var d_canvas = document.getElementById('canvas');
    var context = d_canvas.getContext('2d');
    context.moveTo(20, 20);
    context.lineTo(100, 20);
    context.fillStyle = "#999";
    context.beginPath();
    context.arc(100, 100, 75, 0, 2 * Math.PI);
    context.fill();
    context.fillStyle = "orange";
    context.fillRect(20, 20, 50, 50);
    context.font = "24px Helvetica";
    context.fillStyle = "#000";
    context.fillText("Canvas", 50, 130);
    $('#ballon').draggable();
    $('#download').click(function() {
        html2canvas($("#canvas"), {
            onrendered: function(canvas) {
                var imgData = canvas.toDataURL('image/png');
                var doc = new jsPDF('p', 'mm');
                doc.addImage(imgData, 'PNG', 10, 10);
                doc.save('sample-file.pdf');
            }
        });
    });
});
Kindly help, thanks in advance.

It was very simple, I just changed the argument to this line:
html2canvas($("#canvas"), {
Instead of passing separate canvases and then trying to export them to a single PDF, I kept the different canvases in one div and passed that div's id to the line above, and both canvases were exported to a single PDF file.
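A minimal sketch of that approach, assuming a wrapper div with id "charts" around the canvases (the onrendered callback matches the html2canvas version used in the question; newer releases return a Promise instead):
// HTML: <div id="charts"> <canvas></canvas> <canvas></canvas> </div>
$('#download').click(function() {
    html2canvas($("#charts"), { // pass the container, not a single canvas
        onrendered: function(canvas) {
            var imgData = canvas.toDataURL('image/png');
            var doc = new jsPDF('p', 'mm');
            doc.addImage(imgData, 'PNG', 10, 10);
            doc.save('all-charts.pdf');
        }
    });
});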

There should be no need to use html2canvas for this. It will only deliver you another canvas element, at a cost. You can use the original canvas elements and toDataURL() directly with jsPDF.
Example (partly pseudo)
This will collect all canvases in the page and put them in a PDF. The pseudo part is the missing variables for width, deltas, factor etc., but you should get the gist of it.
Note: the size for images must be given in the same unit you're using for the document, so you need to convert pixel positions and sizes into a millimeter representation using a pre-calculated factor based on the document DPI (not shown here, but this may help; a sketch of the factor follows the code below).
var x = someX,
    y = someY,
    dx = someDeltaForX,
    dy = someDeltaForY,
    i,
    canvases = document.querySelectorAll("canvas"),
    pdf = new jsPDF('p', 'mm'),
    f = conversionFactorFromPixelsToMM;

for (i = 0; i < canvases.length; i++) {
    var url = canvases[i].toDataURL("image/jpeg", 0.75);
    pdf.addImage(url, "JPEG", x * f, y * f, canvases[i].width * f, canvases[i].height * f);
    x += dx; // tip: dx could also be based on the previous canvas width for non-uniform sizes
    if (x > widthOfPage) {
        x = 0;
        y += dy;
    }
}
pdf.save("canvases.pdf"); // finally write the document
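A hedged example of that conversion factor, assuming the canvases are rendered at the CSS default of 96 DPI (one inch is 25.4 mm); a different document DPI changes the divisor:
var PX_PER_INCH = 96;  // assumed rendering DPI
var MM_PER_INCH = 25.4;
var conversionFactorFromPixelsToMM = MM_PER_INCH / PX_PER_INCH; // ~0.265 mm per pixel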

Related

How do I draw horizontal bars with a label using either ChartJS or D3?

What's the best way of drawing multiple horizontal lines and labels for a simple line graph in either ChartJS or D3? I know that I could draw these as individual lines and then do a text overlay, but I'm wondering if there is a simpler solution. Ideally I'd be able to create each of the labels below as one unit and move it anywhere.
If this is simpler in another JS graph library then feel free to suggest it.
Example below
To do it with Chart.js you have to extend the line chart
Chart.types.Line.extend({
    name: "LineAlt",
    initialize: function (data) {
        // it's easier to programmatically update if you store the raw data in the object (vs. storing the geometric data)
        this.marks = data.marks;
        this.marks.xStart = Number(data.labels[0]);
        this.marks.xStep = data.labels[1] - data.labels[0];
        // make sure all our x labels are uniformly apart
        if (!data.labels.every(function (e, i, arr) { return !i || ((e - arr[i - 1]) === this.marks.xStep); }, this))
            throw "labels must be uniformly spaced";
        Chart.types.Line.prototype.initialize.apply(this, arguments);
    },
    draw: function () {
        Chart.types.Line.prototype.draw.apply(this, arguments);
        // save existing context properties
        var self = this;
        var ctx = self.chart.ctx;
        var scale = self.scale;
        ctx.save();
        // line properties
        ctx.lineWidth = 1;
        ctx.fillStyle = "#666";
        ctx.strokeStyle = "#666";
        ctx.textAlign = "center";
        ctx.textBaseline = "bottom";
        ctx.font = scale.font;
        // draw marks
        self.marks.forEach(function (mark) {
            // assuming that the marks are always within the data range
            var y = scale.calculateY(mark.y);
            var x1 = scale.calculateX((mark.x1 - self.marks.xStart) / self.marks.xStep);
            var x2 = scale.calculateX((mark.x2 - self.marks.xStart) / self.marks.xStep);
            // draw line
            ctx.beginPath();
            ctx.moveTo(x1, y);
            ctx.lineTo(x2, y);
            // draw edges
            ctx.moveTo(x1, y + 10);
            ctx.lineTo(x1, y - 10);
            ctx.moveTo(x2, y + 10);
            ctx.lineTo(x2, y - 10);
            ctx.stroke();
            // draw text
            ctx.fillText(mark.label, (x1 + x2) / 2, y + scale.fontSize * 1.5);
        });
        ctx.restore();
    }
});
You pass in the data for drawing the lines like so:
var data = {
    ...
    marks: [
        {
            x1: 1.5,
            x2: 3.5,
            y: 50,
            label: 'Label1'
        },
        {
            x1: 5,
            x2: 7,
            y: 60,
            label: 'Label2'
        }
    ]
};
and you create the chart using this extended chart type:
var myLineChart = new Chart(ctx).LineAlt(data);
You can update the lines like this:
myLineChart.marks[0].y = 80;
myLineChart.marks[0].x1 = 9;
myLineChart.marks[0].x2 = 10;
and then call
myLineChart.update();
to reflect those changes on the canvas.
Caveats
The (x axis) labels should be numeric and uniformly spaced.
The lines should be within the scale range of the y axis (alternatively you can do a scaleOverride to set the scale parameters so that the lines are within the y scale range, as sketched below).
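A hedged sketch of that scaleOverride route, using the Chart.js 1.x global options (the step values are arbitrary assumptions and must be chosen to cover your marks' y values):
var myLineChart = new Chart(ctx).LineAlt(data, {
    scaleOverride: true,  // take manual control of the y scale
    scaleSteps: 10,       // number of steps on the axis
    scaleStepWidth: 10,   // value per step
    scaleStartValue: 0    // axis runs 0..100, covering y = 50 and y = 60
});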
Fiddle - http://jsfiddle.net/en92k763/2/

Calculate the vertex while creating terrain from heightmap using ThreeJs

I'm reading the "create terrain from heightmap" example from the ThreeJs Cookbook.
This example load GrandCanyon: http://lh5.ggpht.com/_-B0hFoGrn-w/SvHiYk39yAI/AAAAAAAABOQ/6IGZwifUYGA/GrandCanyon.png
And create a 3D terrain: http://www.smartjava.org/tjscb/02-geometries-meshes/02.06-create-terrain-from-heightmap.html
There are some pieces of code I cannot understand:
// draw on canvas
ctx.drawImage(img, 0, 0);
var pixel = ctx.getImageData(0, 0, width, depth);
var geom = new THREE.Geometry();
var output = [];
for (var x = 0; x < depth; x++) {
    for (var z = 0; z < width; z++) {
        // get pixel
        // since we're grayscale, we only need one element
        var yValue = pixel.data[z * 4 + (depth * x * 4)] / heightOffset;
        var vertex = new THREE.Vector3(x * spacingX, yValue, z * spacingZ);
        geom.vertices.push(vertex);
    }
}
Why is yValue calculated with that index? Why don't we use var yValue = pixel.data[z * 4 + (depth * x)] or something like that?
And do we really need spacingX and spacingZ?
Source code is here: https://github.com/josdirksen/threejs-cookbook/blob/master/02-geometries-meshes/02.06-create-terrain-from-heightmap.html
Could you please help me ?
Thank you very much!
You don't NEED spacingX and spacingZ, no. You could adjust the scale in other ways, like applying a scale matrix to the entire THREE.Geometry after you've populated the vertices (see the sketch below). Up to you, really.
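For instance, a minimal sketch assuming the Geometry-based API of the Three.js versions the cookbook targets (later releases renamed applyMatrix to applyMatrix4):
// in the loop, push new THREE.Vector3(x, yValue, z) without the spacing factors,
// then scale the whole geometry once afterwards:
geom.applyMatrix(new THREE.Matrix4().makeScale(spacingX, 1, spacingZ));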
As for the yValue, the indexing adjusts for the way the texture data is laid out: there are four channels per pixel, usually RGBA, but in this case we only need one of them as a height.
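Spelled out (assuming, like the example image, a square heightmap where width === depth):
// getImageData returns 4 bytes per pixel: R, G, B, A.
// The pixel at row x, column z therefore starts at byte (x * width + z) * 4.
var index = (x * width + z) * 4;               // offset of the red channel
var yValue = pixel.data[index] / heightOffset; // grayscale, so red alone is enough
// The cookbook's z * 4 + (depth * x * 4) expands to the same expression,
// which is also why it only works when width === depth.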

Resizing a DXGI Resource or Texture2D in SharpDX

I want to resize a screen captured using the Desktop Duplication API in SharpDX. I am using the Screen Capture sample code from the SharpDX Samples repository; the relevant portion follows:
SharpDX.DXGI.Resource screenResource;
OutputDuplicateFrameInformation duplicateFrameInformation;

// Try to get duplicated frame within given time
duplicatedOutput.AcquireNextFrame(10000, out duplicateFrameInformation, out screenResource);

if (i > 0)
{
    // copy resource into memory that can be accessed by the CPU
    using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
        device.ImmediateContext.CopyResource(screenTexture2D, screenTexture);

    // Get the desktop capture texture
    var mapSource = device.ImmediateContext.MapSubresource(screenTexture, 0, MapMode.Read, MapFlags.None);
    System.Diagnostics.Debug.WriteLine(watch.Elapsed);

    // Create Drawing.Bitmap
    var bitmap = new System.Drawing.Bitmap(width, height, PixelFormat.Format32bppArgb);
    var boundsRect = new System.Drawing.Rectangle(0, 0, width, height);

    // Copy pixels from screen capture Texture to GDI bitmap
    var mapDest = bitmap.LockBits(boundsRect, ImageLockMode.WriteOnly, bitmap.PixelFormat);
    var sourcePtr = mapSource.DataPointer;
    var destPtr = mapDest.Scan0;
    for (int y = 0; y < height; y++)
    {
        // Iterate and write to bitmap...
I would like to resize the image much smaller than the actual screen size before processing it as a byte array. I do not need to save the image, just get at the bytes. I would like to do this relatively quickly and efficiently (e.g. leveraging GPU if possible).
I'm not able to scale during CopyResource, as the output dimensions are required to be the same as the input dimensions. Can I perform another copy from my screenTexture2D to scale? How exactly do I scale the resource - do I use a Swap Chain, Matrix transform, or something else?
If you are fine resizing by a power of two from the screen size, you can do it like this:
1.) Create a smaller texture with RenderTarget/ShaderResource usage and the GenerateMipMaps option, same size as the screen, mip count > 1 (2 to get size/2, 3 to get size/4, etc.)
2.) Copy the first mipmap of the screen texture to the smaller texture
3.) Call DeviceContext.GenerateMipMaps on the smaller texture
4.) Copy the selected mipmap of the smaller texture (1: /2, 2: /4, etc.) to the staging texture (which should also be declared smaller, i.e. the same size as the mipmap that is going to be used)
A quick hack on the original code to generate a /2 texture would be like this:
[STAThread]
private static void Main()
{
    // # of graphics card adapter
    const int numAdapter = 0;
    // # of output device (i.e. monitor)
    const int numOutput = 0;
    const string outputFileName = "ScreenCapture.bmp";

    // Create DXGI Factory1
    var factory = new Factory1();
    var adapter = factory.GetAdapter1(numAdapter);

    // Create device from Adapter
    var device = new Device(adapter);

    // Get DXGI.Output
    var output = adapter.GetOutput(numOutput);
    var output1 = output.QueryInterface<Output1>();

    // Width/Height of desktop to capture
    int width = output.Description.DesktopBounds.Width;
    int height = output.Description.DesktopBounds.Height;

    // Create Staging texture CPU-accessible
    var textureDesc = new Texture2DDescription
    {
        CpuAccessFlags = CpuAccessFlags.Read,
        BindFlags = BindFlags.None,
        Format = Format.B8G8R8A8_UNorm,
        Width = width / 2,
        Height = height / 2,
        OptionFlags = ResourceOptionFlags.None,
        MipLevels = 1,
        ArraySize = 1,
        SampleDescription = { Count = 1, Quality = 0 },
        Usage = ResourceUsage.Staging
    };
    var stagingTexture = new Texture2D(device, textureDesc);

    // Create mip-mapped render-target texture used to downscale the capture on the GPU
    var smallerTextureDesc = new Texture2DDescription
    {
        CpuAccessFlags = CpuAccessFlags.None,
        BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
        Format = Format.B8G8R8A8_UNorm,
        Width = width,
        Height = height,
        OptionFlags = ResourceOptionFlags.GenerateMipMaps,
        MipLevels = 4,
        ArraySize = 1,
        SampleDescription = { Count = 1, Quality = 0 },
        Usage = ResourceUsage.Default
    };
    var smallerTexture = new Texture2D(device, smallerTextureDesc);
    var smallerTextureView = new ShaderResourceView(device, smallerTexture);

    // Duplicate the output
    var duplicatedOutput = output1.DuplicateOutput(device);

    bool captureDone = false;
    for (int i = 0; !captureDone; i++)
    {
        try
        {
            SharpDX.DXGI.Resource screenResource;
            OutputDuplicateFrameInformation duplicateFrameInformation;

            // Try to get duplicated frame within given time
            duplicatedOutput.AcquireNextFrame(10000, out duplicateFrameInformation, out screenResource);

            if (i > 0)
            {
                // copy resource into memory that can be accessed by the CPU
                using (var screenTexture2D = screenResource.QueryInterface<Texture2D>())
                    device.ImmediateContext.CopySubresourceRegion(screenTexture2D, 0, null, smallerTexture, 0);

                // Generate the mipmaps of the screen
                device.ImmediateContext.GenerateMips(smallerTextureView);

                // Copy mip level 1 of smallerTexture (size / 2) to the staging texture
                device.ImmediateContext.CopySubresourceRegion(smallerTexture, 1, null, stagingTexture, 0);

                // Get the desktop capture texture
                var mapSource = device.ImmediateContext.MapSubresource(stagingTexture, 0, MapMode.Read, MapFlags.None);

                // Create Drawing.Bitmap
                var bitmap = new System.Drawing.Bitmap(width / 2, height / 2, PixelFormat.Format32bppArgb);
                var boundsRect = new System.Drawing.Rectangle(0, 0, width / 2, height / 2);

                // Copy pixels from screen capture Texture to GDI bitmap
                var mapDest = bitmap.LockBits(boundsRect, ImageLockMode.WriteOnly, bitmap.PixelFormat);
                var sourcePtr = mapSource.DataPointer;
                var destPtr = mapDest.Scan0;
                for (int y = 0; y < height / 2; y++)
                {
                    // Copy a single line
                    Utilities.CopyMemory(destPtr, sourcePtr, width / 2 * 4);

                    // Advance pointers
                    sourcePtr = IntPtr.Add(sourcePtr, mapSource.RowPitch);
                    destPtr = IntPtr.Add(destPtr, mapDest.Stride);
                }

                // Release source and dest locks
                bitmap.UnlockBits(mapDest);
                device.ImmediateContext.UnmapSubresource(stagingTexture, 0);

                // Save the output
                bitmap.Save(outputFileName);

                // Capture done
                captureDone = true;
            }
            screenResource.Dispose();
            duplicatedOutput.ReleaseFrame();
        }
        catch (SharpDXException e)
        {
            if (e.ResultCode.Code != SharpDX.DXGI.ResultCode.WaitTimeout.Result.Code)
            {
                throw; // rethrow without resetting the stack trace
            }
        }
    }

    // Display the texture using system associated viewer
    System.Diagnostics.Process.Start(Path.GetFullPath(Path.Combine(Environment.CurrentDirectory, outputFileName)));

    // TODO: We should clean up all allocated COM objects here
}
You need to take your original source surface in GPU memory and Draw() it onto a smaller surface. This involves simple vertex/pixel shaders, which some folks with simple needs would rather bypass.
I would look to see if someone has made a sprite lib for SharpDX; it should be a common "thing". Or use Direct2D (which is much more fun). Since D2D is just a user-mode library over D3D, it interops with D3D very easily.
I've never used SharpDX, but from memory you would do something like this:
1.) Create an ID2D1Device, wrapping your existing DXGI Device (make sure your dxgi device creation flag has D3D11_CREATE_DEVICE_BGRA_SUPPORT)
2.) Get the ID2D1DeviceContext from your ID2D1Device
3.) Wrap your source and destination DXGI surfaces into D2D bitmaps with ID2D1DeviceContext::CreateBitmapFromDxgiSurface
4.) ID2D1DeviceContext::SetTarget of your destination surface
5.) BeginDraw, ID2D1DeviceContext::DrawBitmap, passing your source D2D bitmap. EndDraw
6.) Save your destination
Here is a pixelate example...
d2d_device_context_h()->BeginDraw();
d2d_device_context_h()->SetTarget(mp_ppBitmap1.Get());

D2D1_SIZE_F rtSize = mp_ppBitmap1->GetSize();
rtSize.height *= (1.0f / cbpx.iPixelsize.y);
rtSize.width *= (1.0f / cbpx.iPixelsize.x);
D2D1_RECT_F rtRect = { 0.0f, 0.0f, rtSize.width, rtSize.height };

D2D1_SIZE_F rsSize = mp_ppBitmap0->GetSize();
D2D1_RECT_F rsRect = { 0.0f, 0.0f, rsSize.width, rsSize.height };

d2d_device_context_h()->DrawBitmap(mp_ppBitmap0.Get(), &rtRect, 1.0f,
    D2D1_BITMAP_INTERPOLATION_MODE_LINEAR, &rsRect);

d2d_device_context_h()->SetTarget(mp_ppBitmap0.Get());
d2d_device_context_h()->DrawBitmap(mp_ppBitmap1.Get(), &rsRect, 1.0f,
    D2D1_BITMAP_INTERPOLATION_MODE_NEAREST_NEIGHBOR, &rtRect);

d2d_device_context_h()->EndDraw();
Here iPixelsize.xy is the size of the "pixelated pixel". Note that I just use linear interpolation when shrinking the bitmap and NOT when I re-enlarge it; this produces the pixelation effect.

Raphael playground effect and fill opacity

I've written the following code:
var w = 800;
var h = 600;
var paper = Raphael(0, 0, w, h);
paper.image("http://static.pourfemme.it/pfmoda/fotogallery/625X0/63617/borsa-alviero-martini-rodeo-drive.jpg", 0, 0, w, h);
var c = paper.circle(400, 300, 1);
c.attr({stroke: "#999", "stroke-width": w * 2});
var anim = Raphael.animation({r: w * 2}, 6000);
c.animate(anim.delay(100));
( http://jsfiddle.net/qAgy7/ )
I need to reveal the background image as the circle enlarges its radius, but I have the following problems:
I don't understand why the animation runs slowly.
Chrome renders the effect in a different way (I see a polygon, not a circle as in Firefox and IE).
Can someone help me?

HTML5 Cropped Canvas Image Not Showing

I am learning HTML5 and JavaScript and am attempting to draw an animated image. I thought the easiest way to do this would be to create an image with the frames in a row, as below.
Image http://html5stuff.x10.mx/HTML5%20Test/alien_green_strip8.png
Then only part of the image would be drawn at a time. I followed this tutorial.
This is a link to what I have made:
html5stuff.x10.mx/HTML5%20Test/page.html
The problem is that the image isn't being drawn. It's something within the drawSprite function, because when I change it to a simple "ctx.drawImage(sprite.source, x, y)", it does draw the image (just as a whole, without the animation, obviously). Please note that though there is an option for rotating the image, I have not yet added support for that. Also, keys.js is not being used yet, though it is included.
The reason is that sprite.imagenum is not defined when drawSprite is called.
This is because in some places you use imagenum and in others imgnum, so correct that typo and you're good to go!
TOTALLY OPTIONAL:
But now that that's answered, let's take a look at your JS to get a better idea of how to structure this. You have:
function Sprite() {
    var imagenum = null; // The number of images
    var width = null;    // The width of each image
    var height = null;   // The height of each image
    var xoffset = null;  // The origin of each image
    var yoffset = null;
    var source = null;   // The location of each image
}

function drawSprite(sprite, subimg, x, y, w, h, angle) {
    ctx.drawImage(sprite.source, Math.floor(subimg) * sprite.width, 0, w * sprite.imagenum, h, x - sprite.xoffset * (w / sprite.width), y - sprite.yoffset * (h / sprite.height), w, h);
}
All those var statements are actually doing nothing: they declare locals that vanish when the constructor returns, instead of setting properties. It should be:
function Sprite() {
    this.imagenum = null; // The number of images
    this.width = null;    // The width of each image
    this.height = null;   // The height of each image
    this.xoffset = null;  // The origin of each image
    this.yoffset = null;
    this.source = null;   // The location of each image
}
in order to correctly set them as you were envisioning. Also, you can rewrite drawSprite so that the sprites draw themselves, so you don't need to pass them as an argument:
// now we can use "this" instead of "sprite"
Sprite.prototype.draw = function (subimg, x, y, w, h, angle) {
    // put this on several lines just so we can see it easier
    ctx.drawImage(this.source,
        Math.floor(subimg) * this.width,
        0,
        w * this.imagenum, h,
        x - this.xoffset * (w / this.width),
        y - this.yoffset * (h / this.height),
        w, h);
};
Then instead of:
drawSprite(img, index, 128, 128, 32, 32, 0); // img is a sprite
We can write:
img.draw(index, 128, 128, 32, 32, 0); // img is a sprite
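A minimal usage sketch of the fixed sprite, assuming an already-loaded 8-frame strip (the names, sizes, and step value are illustrative, not from the original page):
var alien = new Sprite();
alien.source = img;   // an Image that has already fired its load event
alien.imagenum = 8;   // frames in the strip
alien.width = 32;     // size of one frame
alien.height = 32;
alien.xoffset = 0;
alien.yoffset = 0;

var index = 0;
(function loop() {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    alien.draw(index % alien.imagenum, 128, 128, 32, 32, 0);
    index += 0.2; // fractional step: Math.floor inside draw picks the frame
    requestAnimationFrame(loop);
})();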
