Snap SVG animate from Illustrator file

I've got an SVG file from Illustrator that I pasted into an HTML file, and I am trying to animate it with Snap.svg.
The following causes no errors, but it doesn't move anything either. I tried changing the value and animating other attributes, but nothing worked. I also tried adding a callback function, as some suggest.
var svg = Snap("#svg");
var ball = svg.select("#ball");
var mov = ball.animate({x: 10}, 1000);
console.log(mov);
The ball object looks like this:
<path id="ball" d="M100.5,123A38.5,38.5,0,1,0,139,161.5,38.5,38.5,0,0,0,100.5,123Zm17,28a4.5,4.5,0,1,1,4.5-4.5A4.49,4.49,0,0,1,117.5,151Z" transform="translate(0 557)" style="fill: red"/>
The log of the mov object in the console looks like this:
Object { node: path#ball, paper: {…}, type: "path", id: "pathSjdy9zk8k1", anims: {…}, _: {…} }
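One likely explanation, not stated in the thread: a &lt;path&gt; has no x attribute, so animating {x: 10} has nothing to tween. A minimal sketch of animating the path's transform instead (the offsets and easing are illustrative only):
var svg = Snap("#svg");
var ball = svg.select("#ball");

// A <path> has no x/y attribute, so animate its transform instead.
// The path already carries transform="translate(0 557)", so the target
// transform keeps that vertical offset and adds 10px horizontally.
ball.animate({ transform: "t10,557" }, 1000, mina.easeinout, function () {
  console.log("animation finished");
});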

Alternative to unescape() and saving images as base64

I am trying to convert svg to png, using the code here as the basis of my conversion.
Please note the following code (it has been shortened to only what's relevant to the error that I will describe later):
let svgData = new XMLSerializer().serializeToString(target); // 'target' is an svg element that is passed to us
let canvas = document.createElement('canvas');
let ctxt = canvas.getContext('2d');
let img = document.createElement('img');
img.setAttribute(
  'src',
  'data:image/svg+xml;base64,' + window.btoa(unescape(encodeURIComponent(svgData)))
);
ctxt.drawImage(img, 0, 0);
The above code works fine. However, there is a problem with it. Notice this line of code:
'data:image/svg+xml;base64,' + window.btoa(unescape(encodeURIComponent(svgData)))
In this line of code, the unescape() function is used; however, it is deprecated, and I have to use an alternative. According to the documentation, I should use decodeURIComponent(), but this is not a viable solution, because when I update the code above with decodeURIComponent(), the result is:
'data:image/svg+xml;base64,' + window.btoa(decodeURIComponent(encodeURIComponent(svgData)))
So basically I would be encoding then decoding. Which is the same as doing this:
'data:image/svg+xml;base64,' + window.btoa(svgData)
Now, with the new updated code, if the source svg contains only English characters, everything is OK. However, if an svg contains characters other than English (e.g. the Japanese word トマト), then the next line of code, ctxt.drawImage(img, 0, 0), throws an error. The error message is DOMException: String contains an invalid character.
So, to summarise so far:
the original code works fine with both English and non-English characters, but uses the unescape() function, which is deprecated. So I should use something else.
The updated code, which does not encode or decode, causes an exception when using non-English characters.
Now, question 1, what do I do to be able to create an image from an svg that contains non-English characters using base64?
As an alternative solution, I did this
'data:image/svg+xml;utf8,' + svgData
Basically, I did not use base64, but rather utf8. This worked just fine, but I am not sure if it is the correct solution.
So, question 2, is using utf8 a common practice? Is there any issue with it as opposed to base64?
Thank you.
Following the comment below from @RobertLongson, I updated the code to:
'data:image/svg+xml;utf8,' + encodeURIComponent(svgData)
This worked for svg elements containing either English or non-English characters.
Now, question 3, is this correct?
Apologies about the basic question. I am not familiar with this. Thanks.
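For reference, one commonly suggested drop-in replacement for the unescape()/encodeURIComponent() pair is to build the byte string with TextEncoder before calling btoa(); this is a sketch, not code from the original post, and it reuses the svgData and img variables from the snippet above:
// Sketch: base64-encode an SVG string that may contain non-Latin-1 characters.
// TextEncoder produces UTF-8 bytes; each byte is mapped to a char code so that
// btoa() (which only accepts Latin-1 strings) can encode it safely.
const utf8Bytes = new TextEncoder().encode(svgData);
const byteString = Array.from(utf8Bytes, b => String.fromCharCode(b)).join('');
img.setAttribute('src', 'data:image/svg+xml;base64,' + window.btoa(byteString));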
With this example I'm trying to avoid the mentioned functions for converting to base64 and instead use a FileReader, in particular its readAsDataURL function, for returning the data URI. I know the code is a bit more complicated with the callbacks, but it works.
As I understand it, Google Translate says トマト translates to 🍅 (tomato).
let svg01 = document.getElementById('svg01');
let canvas = document.getElementById('canvas01');
let img = document.getElementById('img01');
let ctx = canvas.getContext('2d');
canvas.width = svg01.getAttribute('width');
canvas.height = svg01.getAttribute('height');

let svgData = new XMLSerializer().serializeToString(svg01);

// create a File object
let file = new File([svgData], 'svg.svg', {
  type: "image/svg+xml"
});

// and a reader
let reader = new FileReader();
reader.addEventListener('load', e => {
  let tmpImg = new Image();
  // wait for the image to load
  tmpImg.addEventListener('load', e => {
    // update the canvas with the new image
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.drawImage(e.target, 0, 0);
    // create a PNG image based on the canvas
    img.src = canvas.toDataURL("image/png");
  });
  tmpImg.src = e.target.result;
});

// read the file as a data URL
reader.readAsDataURL(file);
<p>SVG embedded:</p>
<svg id="svg01" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 20" width="300" height="60">
<rect width="100" height="20" fill="silver" rx="5"/>
<text font-size="7" x="50" y="10" text-anchor="middle"
dominant-baseline="middle">Japanese word: トマト 🍅</text>
</svg>
<p>Canvas image:</p>
<canvas id="canvas01"></canvas>
<p>PNG image:</p>
<p><img id="img01" /></p>

Using L.esri.DynamicMapLayer, is it possible to bind a mouseover event rather than a pop-up on a dynamic map?

I'm aware of binding a pop-up to ESRI's L.esri.DynamicMapLayer here. The following code is successful.
$.ajax({
  type: 'GET',
  url: url + '?f=json',
  data: { layer: fooType },
  dataType: 'json',
  success: function(json) {
    var foo_layer = fooLayers[fooType].layers;
    foo = L.esri.dynamicMapLayer({
      url: url,
      layers: [foo_layer],
      transparent: true
    }).addTo(map).bringToFront();
    foo.bindPopup(function(error, featureCollection) {
      if (error || featureCollection.features.length === 0) {
        return false;
      } else {
        var obj = featureCollection.features[0].properties;
        var val = obj['Pixel Value'];
        var lat = featureCollection.features[0].geometry.coordinates[1];
        var lon = featureCollection.features[0].geometry.coordinates[0];
        new L.responsivePopup({
          autoPanPadding: [10, 10],
          closeButton: true,
          autoPan: false
        }).setContent(parseFloat(val).toFixed(2)).setLatLng([lat, lon]).openOn(map);
      }
    });
  }
});
But rather than a click response, I am wondering whether you can use a mouseover with bindTooltip instead on a dynamic map. I've looked at the documentation for L.esri.DynamicMapLayer, which says it is an extension of L.ImageOverlay. But perhaps there is an issue outlined here that I'm not fully understanding. Maybe it is not even related.
As an aside, I've been testing multiple variations of even the simplest code below to get things to work, but have been unsuccessful. Perhaps because this is asynchronous behavior it isn't possible. I'm looking for any guidance and/or explanation; I'm a very novice programmer and much obliged for your expertise.
$.ajax({
  type: 'GET',
  url: url + '?f=json',
  data: { layer: fooType },
  dataType: 'json',
  success: function(json) {
    var foo_layer = fooLayers[fooType].layers;
    foo = L.esri.dynamicMapLayer({
      url: url,
      layers: [foo_layer],
      transparent: true
    }).addTo(map).bringToFront();
    foo.bindTooltip(function(error, featureCollection) {
      if (error || featureCollection.features.length === 0) {
        return false;
      } else {
        new L.tooltip({
          sticky: true
        }).setContent('blah').setLatLng([lat, lng]).openOn(map);
      }
    });
  }
});
Serendipitously, I have been working on a different problem, and one of the byproducts of that problem may come in handy for you.
Your primary issue is the asynchronous nature of the click event. If you open up your map (the first jsfiddle in your comment), open your dev tools network tab, and start clicking around, you will see a new network request made for every click. That's how a lot of esri query functions work - they need to query the server and check the database for the value you want at the given latlng. If you tried to attach that same behavior to a mousemove event, you'd trigger a huge number of network requests and overload the browser - bad news.
One solution, and it's a lot more work, is to read the pixel data under the cursor from the image returned by the esri image service. If you know the exact rgb value of the pixel under the cursor, and you know what value that rgb value corresponds to in the map legend, you can achieve your result.
Here is a working example
And here is the codesandbox source code. Don't be afraid to hit refresh; CSB is a little wonky in the way it transpiles the modules.
What is happening here? Let's look step by step:
On map events like load, zoomend, moveend, a specialized function is fetching the same image that L.esri.dynamicMapLayer does, using something called EsriImageRequest, which is a class I wrote that reuses a lot of esri-leaflet's internal logic:
map.on("load moveend zoomend resize", applyImage);
const flashFloodImageRequest = new EsriImageRequest({
  url: layer_url,
  f: "image",
  sublayer: "3",
});

function applyImage() {
  flashFloodImageRequest
    .fetchImage([map.getBounds()], map.getZoom())
    .then((image) => {
      // do something with the image
    });
}
An instance of EsriImageRequest has the fetchImage method, which takes an array of L.LatLngBounds and a map zoom level, and returns an image - the same image that your dynamicMapLayer displays on the map.
EsriImageRequest is probably extra code that you don't need, but I happen to have just run into this issue. I wrote this because my app runs on a nodejs server, and I don't have a map instance with an L.esri.dynamicMapLayer. As a simpler alternative, you can target the leaflet DOM <img> element that shows your dynamicMapLayer and use that as the image source we'll need in step 2. You will have to set up a listener on the src attribute of that element and run applyImage in that listener. If you're not familiar with how leaflet manages the DOM, look in the Elements tab of your inspector, where you can find the <img> element inside the map's panes.
I'd recommend doing it that way, and not the way my example shows. Like I said, I happened to have just been working on a sort-of related issue.
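If you go the simpler route, a sketch of such a listener might look like the following; the CSS selector and the reuse of the ctx, width and height variables from the next step are assumptions on my part, not code from the answer:
// Hypothetical sketch: watch the dynamicMapLayer's <img> element for src changes
// and redraw the hidden canvas (set up in the next step) whenever it updates.
const layerImg = document.querySelector("#leafletMapid .leaflet-image-layer");

const observer = new MutationObserver(() => {
  const image = new Image();
  image.crossOrigin = "*";
  image.onload = () => ctx.drawImage(image, 0, 0, width, height);
  image.src = layerImg.src;
});

observer.observe(layerImg, { attributes: true, attributeFilter: ["src"] });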
Earlier in the code, I had set up a canvas, and using the css position, pointer-events, and opacity properties, it lays exactly over the map, but is set to take no interaction (I gave it a small amount of opacity in the example, but you'd probably want to set opacity to 0). In the applyImage function, the image we got is written to that canvas:
// earlier...
const mapContainer = document.getElementById("leafletMapid");
const canvas = document.getElementById("mycanvas");
const height = mapContainer.getBoundingClientRect().height;
const width = mapContainer.getBoundingClientRect().width;
canvas.height = height;
canvas.width = width;
const ctx = canvas.getContext("2d");
// inside applyImage .then:
.then((image) => {
  image.crossOrigin = "*";
  ctx.drawImage(image, 0, 0, width, height);
});
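The overlay styling itself isn't shown in the excerpt; a rough equivalent in inline styles (the exact values are assumptions) would be:
// Assumed styling for the overlay canvas: pinned over the map, invisible,
// and transparent to the mouse so the map keeps receiving events.
canvas.style.position = "absolute";
canvas.style.top = "0";
canvas.style.left = "0";
canvas.style.pointerEvents = "none";
canvas.style.opacity = "0"; // the demo uses a small opacity so you can see it
canvas.style.zIndex = "500"; // above the tile panes; the exact value is a guess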
Now we have an invisible canvas whose pixel content is exactly the same as the dynamicMapLayer's.
Now we can listen to the map's mousemove event, and get the mouse's rgba pixel value from the canvas we created. If you read into my other question, you can see how I got the array of legend values, and how I'm using that array to map the pixel's rgba value back to the legend's value for that color. We can use the legend's value for that pixel, and set the popup content to that value.
map.on("mousemove", (e) => {
// get xy position on cavnas of the latlng
const { x, y } = map.latLngToContainerPoint(e.latlng);
// get the pixeldata for that xy position
const pixelData = ctx.getImageData(x, y, 1, 1);
const [R, G, B, A] = pixelData.data;
const rgbvalue = { R, G, B, A };
// get the value of that pixel according to the layer's legend
const value = legend.find((symbol) =>
compareObjectWithTolerance(symbol.rgbvalue, rgbvalue, 5)
);
// open the popup if its not already open
if (!popup.isOpen()) {
popup.setLatLng(e.latlng);
popup.openOn(map);
}
// set the position of the popup to the mouse cursor
popup.setLatLng(e.latlng);
// set the value of the popup content to the value you got from the legend
popup.setContent(`Value: ${value?.label || "unknown"}`);
});
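The compareObjectWithTolerance helper isn't included in the excerpt; a plausible implementation (my assumption, not the author's code) compares each colour channel within the given tolerance:
// Hypothetical helper: true if every channel in `a` is within `tolerance`
// of the corresponding channel in `b`.
function compareObjectWithTolerance(a, b, tolerance) {
  return Object.keys(a).every(
    (key) => Math.abs(a[key] - b[key]) <= tolerance
  );
}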
As you can see, I'm also setting the latlng of the popup to wherever the mouse is. With closeButton: false in the popup options, it behaves much like a tooltip. I tried getting it to work with a proper L.tooltip, but I was having some trouble myself. This seems to create the same effect.
Sorry if this was a long answer. There are many ways to adapt / improve my code sample, but this should get you started.

How to save a real time flot chart as image?

I know that this type of question has already been asked. I have seen answers on this topic, but I didn't actually understand how to save a flot chart as an image (png or jpeg). Below is a screenshot of my real-time graph.
When I click "Save Image As..." the image that is saved is completely black. I tried many ways but none of them worked for me. So how could I save my graph as an image?
Here is an updated version of the fiddle you posted in the comments. Instead of generating an image and a PDF document after clicking a link, it creates the image directly after plotting the chart and hides the original canvas. Using "Save Image As ..." on the new chart works fine for me. The code:
$.plot($("#placeholder"), [{
  label: 'Test',
  data: [
    [0, 0],
    [1, 1]
  ]
}], {
  yaxis: {
    max: 1
  }
});

html2canvas($("#placeholder").get(0), {
  onrendered: function(canvas) {
    document.body.appendChild(canvas);
    $('#placeholder').hide();
  }
});
When drawing a new chart (after changing an option or so) you have to show() the original container and remove the copy before calling html2canvas(...) again.
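A rough sketch of that redraw flow (the marker class used to find the copy is my own convention, not part of the original answer):
// Sketch: restore the original placeholder, drop the previous snapshot,
// re-plot, and render a fresh snapshot with html2canvas.
function redrawAsImage(data, options) {
  $('#placeholder').show();
  $('canvas.chart-snapshot').remove(); // marker class added below; an assumption
  $.plot($('#placeholder'), data, options);
  html2canvas($('#placeholder').get(0), {
    onrendered: function (canvas) {
      $(canvas).addClass('chart-snapshot');
      document.body.appendChild(canvas);
      $('#placeholder').hide();
    }
  });
}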

how to use html content inside a canvas element

Can anyone tell me how to place my HTML content on a canvas? And if we can do that, will the properties and events of those elements work or not? I also have animations drawn on that canvas.
From this article on MDN:
You can't just draw HTML into a canvas. Instead, you need to use an
SVG image containing the content you want to render. To draw HTML
content, you'd use an <svg> element containing the HTML, then
draw that SVG image into your canvas.
It then suggests you follow these steps:
The only really tricky thing here—and that's probably an
overstatement—is creating the SVG for your image. All you need to do
is create a string containing the XML for the SVG and construct a Blob
with the following parts.
The MIME media type of the blob should be "image/svg+xml".
The <svg> element.
Inside that, the <foreignObject> element.
The (well-formed) HTML itself, nested inside the <foreignObject>.
By using an object URL as described above, we can inline our HTML
instead of having to load it from an external source. You can, of
course, use an external source if you prefer, as long as the origin is
the same as the originating document.
The following example is provided (you can see more information about this in this blog by Robert O'Callahan):
DEMO
const ctx = document.getElementById("canvas").getContext("2d");

const data = `
  <svg xmlns='http://www.w3.org/2000/svg' width='200' height='200'>
    <foreignObject width='100%' height='100%'>
      <div xmlns='http://www.w3.org/1999/xhtml' style='font-size:40px'>
        <em>I</em> like <span style='color:white; text-shadow:0 0 2px blue;'>CANVAS</span>
      </div>
    </foreignObject>
  </svg>
`;

const img = new Image();
const svg = new Blob([data], {type: "image/svg+xml;charset=utf-8"});
const url = URL.createObjectURL(svg);

img.onload = function() {
  ctx.drawImage(img, 0, 0);
  URL.revokeObjectURL(url);
};

img.src = url;
<canvas id="canvas" style="border:2px solid black;" width="200" height="200"></canvas>
This example results in the HTML being rendered into the canvas.
Will the properties and events of those elements work or not?
No, everything drawn to a canvas ends up as passive pixels - it simply becomes an image.
You will need to provide custom logic yourself in order to handle things such as clicks, objects, events, etc. That logic needs to define the areas, the objects, and anything else.
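As an illustration of that custom logic, a minimal hit-testing sketch (the region list and handler are hypothetical, built on the canvas from the example above) could look like this:
// Hypothetical sketch: keep your own list of clickable regions and hit-test
// mouse clicks against it, because the canvas itself has no child elements.
const canvasEl = document.getElementById("canvas");
const regions = [
  { x: 0, y: 0, w: 200, h: 200, onClick: () => console.log("HTML area clicked") }
];

canvasEl.addEventListener("click", (e) => {
  const rect = canvasEl.getBoundingClientRect();
  const x = e.clientX - rect.left;
  const y = e.clientY - rect.top;
  regions
    .filter((r) => x >= r.x && x <= r.x + r.w && y >= r.y && y <= r.y + r.h)
    .forEach((r) => r.onClick());
});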

Kineticjs - Help uploading images to stage from input file

I am trying to allow users to upload their own images to the KineticJS stage through an input in the HTML. I prefer to keep all my code in a separate js file; here is what I have so far:
$(document).ready(function() {
  var stage = new Kinetic.Stage({
    container: 'container',
    width: 900,
    height: 500
  });
  var layer = new Kinetic.Layer();
});

function addImage() {
  var imageObj = new Image();
  imageObj.onload = function() {
    var myImage = new Kinetic.Image({
      x: 140,
      y: stage.getHeight() / 2 - 59,
      image: imageObj,
      width: 106,
      height: 118
    });
    layer.add(myImage);
    stage.add(layer);
  };
  var f = document.getElementById('uploadimage').files[0];
  var name = f.name;
  var url = window.URL;
  var src = url.createObjectURL(f);
  imageObj.src = src;
}
How do I expose the stage to the addImage() method? It is out of scope at the moment, and I haven't been able to figure out how to solve the problem, as the canvas doesn't show in the html until something is added to it. I need these images to be added as layers for future manipulation, so I want to use KineticJS. Any suggestions would be much appreciated!
http://jsfiddle.net/8XKBM/12/
I managed to get your addImage function working by attaching an event to it. If you use the Firebug console in Firefox, or just press Ctrl+Shift+J, you can see JavaScript errors. It turns out your function was being read as undefined. Now the alert is working, but your image isn't added because the files aren't stored anywhere yet, e.g. on a server (they must be uploaded somewhere first).
I used jQuery to attach the event, as you should use that instead of onclick='function()':
$('#addImg').on('click', function() {
  addImage();
});
and changed
<div>
<input type="file" name="img" size="5" id="uploadimage" />
<button id='addImg' value="Upload" >Upload</button>
</div>
What you would really want to do is have the user upload the photos (to the server) on the fly using AJAX (available with jQuery; it doesn't interfere with KineticJS). Then, on success, you can draw the photo onto the canvas using your function. Make sure to use:
layer.draw()
or
stage.draw()
at the end of the addImage() function so that the photo is drawn on your canvas, as the browser does not draw the image until after the page loads and img.src is defined at the end. So, this will basically just require things to be in the correct order rather than being difficult.
So, step 1: upload using AJAX (to server), step 2: add to stage, step 3: redraw stage
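On the scoping question itself, which the answer doesn't spell out: one straightforward option (a sketch built on the question's own setup, not the answerer's code) is to declare stage and layer where addImage can reach them and redraw the layer once the image has loaded:
// Sketch: stage and layer declared in a scope addImage can see.
var stage, layer;

$(document).ready(function () {
  stage = new Kinetic.Stage({
    container: 'container',
    width: 900,
    height: 500
  });
  layer = new Kinetic.Layer();
  stage.add(layer);

  // attach the click handler once the DOM is ready
  $('#addImg').on('click', addImage);
});

function addImage() {
  var imageObj = new Image();
  imageObj.onload = function () {
    var myImage = new Kinetic.Image({
      x: 140,
      y: stage.getHeight() / 2 - 59,
      image: imageObj,
      width: 106,
      height: 118
    });
    layer.add(myImage);
    layer.draw(); // redraw so the new image actually shows up
  };
  var f = document.getElementById('uploadimage').files[0];
  imageObj.src = window.URL.createObjectURL(f);
}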
