Product demo using Canvas animation

I love the way Sublime Text shows its product demo on its home page:
http://www.sublimetext.com/
How can I create a similar demo? All I can tell is that it uses a canvas element. Sorry if this sounds like a basic question; any leads or help in this regard would be highly appreciated.

They are using delays and parts of larger images (the frames and patches are packed together into single sprite images), and they specify which (rectangular) part of each image renders when, making it look like an animation.
It's a typical texture atlas.
This is the list of the images:
"anim/rename2_packed.png",
"anim/days_169_packed.png",
"anim/command_palette_packed.png",
"anim/goto_anything_packed.png",
"anim/goto_anything2_packed.png",
"anim/regex_packed.png"
And this is how they specify the delay and the image pieces:
{"delay":1811,"blit":[[0,0,800,450,0,0]]},
{"delay":48,"blit":[[0,450,400,344,200,36],[66,982,63,15,0,36]]},
{"delay":798,"blit":[]}, etc...
As you see, delay is the time in milliseconds, and blit holds the parameters for drawImage: srcX, srcY, width, height, destX, destY.
Each of the "screens" is kept as a separate variable, such as command_palette_timeline, days_169_timeline, goto_anything_timeline, etc., each containing a delay/blit array of objects like the ones above.
The actual render code is pretty straightforward, they follow each step in each timeline, with delays between them, and each step is rendered like this:
for (var j = 0; j < blits.length; ++j) {
    var blit = blits[j];
    var sx = blit[0];
    var sy = blit[1];
    var w = blit[2];
    var h = blit[3];
    var dx = blit[4];
    var dy = blit[5];
    ctx.drawImage(img, sx, sy, w, h, dx, dy, w, h);
}
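Putting it together, a minimal player for one of those timelines might look like this (a sketch under assumptions: playTimeline is my own name, timeline is one of the delay/blit arrays above, img is the matching atlas image, and I'm assuming each step's delay is how long that step stays on screen):

function playTimeline(ctx, img, timeline, i) {
    if (i >= timeline.length) i = 0; // loop the animation
    var step = timeline[i];
    for (var j = 0; j < step.blit.length; ++j) {
        var b = step.blit[j]; // [srcX, srcY, width, height, destX, destY]
        ctx.drawImage(img, b[0], b[1], b[2], b[3], b[4], b[5], b[2], b[3]);
    }
    setTimeout(function () { playTimeline(ctx, img, timeline, i + 1); }, step.delay);
}
// e.g. playTimeline(canvas.getContext('2d'), atlasImage, goto_anything_timeline, 0);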

How to determine the visible objects on the screen?

I need to find the objects that are fully or partly visible on the rendered screen. I know this can be done by coloring each object uniquely, rendering the scene, and detecting the colors that end up on the screen. This is a screen-space operation that would involve fiddling with the framebuffer. Are there any special functions/helpers within three.js that do this more easily?
You can check whether an object is in the camera's view frustum. See Frustum in the three.js documentation.
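A minimal sketch of that check (method names as in recent three.js releases; older versions use setFromMatrix instead of setFromProjectionMatrix):

const frustum = new THREE.Frustum();
const projScreenMatrix = new THREE.Matrix4();
camera.updateMatrixWorld();
projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
frustum.setFromProjectionMatrix(projScreenMatrix);
const inView = frustum.intersectsObject(mesh); // bounding-sphere test

Note this only tests against the frustum; it says nothing about whether the object is occluded by other geometry, which is what the colour-coding approach below addresses.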
One way to achieve this is to render your scene once with constant shading, colour-coding your objects as you need, with any anti-aliasing and other effects turned off, so that you can easily map a read pixel back to its object by its colour.
Then, you can read pixels from your render target, for which you can use three.js' WebGLRenderer.readRenderTargetPixels() (see docs). You can then read the colours out of the buffer you pass to it.
Something like this:
// Render your scene first, into a renderTarget. Then:
const buffer = new Uint8Array(width * height * 4);
this.renderer.readRenderTargetPixels(renderTarget, 0, 0, width, height, buffer);
for (let i = 0; i < buffer.length / 4; ++i) {
    const r = buffer[i * 4];
    const g = buffer[i * 4 + 1];
    const b = buffer[i * 4 + 2];
    const rgb = (r << 16) | (g << 8) | b;
    // Do your mapping
}
This is very much just plain WebGL though, and I don't know whether there is a better way to do this within three.js.
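For the colour-coding pass itself, one possible approach (my own sketch, not a built-in three.js facility) is to temporarily swap every mesh's material for a flat MeshBasicMaterial whose colour encodes the object's id:

const savedMaterials = new Map();
const meshByColor = new Map();
scene.traverse((obj) => {
    if (obj.isMesh) {
        const key = obj.id & 0xFFFFFF; // assumes ids fit in 24 bits
        meshByColor.set(key, obj);
        savedMaterials.set(obj, obj.material);
        obj.material = new THREE.MeshBasicMaterial({ color: key });
    }
});
// Render into renderTarget, read the pixels as above, then meshByColor.get(rgb)
// yields the object for each decoded pixel; finally restore savedMaterials.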

AWS Rekognition - create image from detect-faces bounding box

I am currently trying to figure out how to make face crops from the bounding boxes in a detect-faces response and use those crops to search an existing collection using the SearchFacesByImage API.
This is mentioned on the SearchFacesByImage documentation.
You can also call the DetectFaces operation and use the bounding boxes in the response to make face crops, which you can then pass in to the SearchFacesByImage operation.
I am trying to do this in Python or Node.js in a Lambda function. The input image is an s3 object.
All help greatly appreciated.
I have faced the exact same problem. Refer to this link from the AWS documentation, where you will find sample code for both Python and Java. It will give you the Top, Left, Width and Height of the bounding box. Remember, the upper-left corner of the image is treated as (0,0).
Then, if you use Python, you can crop the image with cv2 or PIL.
Here is an example with PIL:
from PIL import Image

img = Image.open('my_image.png')
cropped = img.crop((Left, Top, Left + Width, Top + Height))
cropped.show()
In this code, Top, Left, Width and Height come from the response of the code given in the link.
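Since the question also mentions Node.js, here is a rough sketch of the same idea in a Lambda-style function, using the aws-sdk together with the sharp image library (sharp is my choice here, not something the Rekognition docs prescribe). Note that Rekognition returns BoundingBox values as ratios of the image dimensions, so they have to be scaled to pixels:

const AWS = require('aws-sdk');
const sharp = require('sharp');
const rekognition = new AWS.Rekognition();
const s3 = new AWS.S3();

async function cropFaces(bucket, key) {
    const { FaceDetails } = await rekognition
        .detectFaces({ Image: { S3Object: { Bucket: bucket, Name: key } } })
        .promise();
    const { Body } = await s3.getObject({ Bucket: bucket, Key: key }).promise();
    const { width, height } = await sharp(Body).metadata();
    return Promise.all(FaceDetails.map((face) => {
        const box = face.BoundingBox; // Top/Left/Width/Height are ratios in [0, 1]
        return sharp(Body).extract({
            left: Math.round(box.Left * width),
            top: Math.round(box.Top * height),
            width: Math.round(box.Width * width),
            height: Math.round(box.Height * height)
        }).toBuffer();
    }));
}
// Each returned buffer can then be passed as Image.Bytes to searchFacesByImage.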
I did this in Java; maybe it helps:
java.awt.image.BufferedImage image = ...
com.amazonaws.services.rekognition.model.BoundingBox target = ...
// BoundingBox values are ratios of the image size, so scale to pixels:
int x = (int) Math.abs(image.getWidth() * target.getLeft());
int y = (int) Math.abs(image.getHeight() * target.getTop());
int w = (int) Math.abs(image.getWidth() * target.getWidth());
int h = (int) Math.abs(image.getHeight() * target.getHeight());
// Clamp the crop so it stays inside the image bounds:
int finalX = x + w;
int finalY = y + h;
if (finalX > image.getWidth())
    w = image.getWidth() - x;
if (finalY > image.getHeight())
    h = image.getHeight() - y;
BufferedImage subImage = image.getSubimage(x, y, w, h);
String base64 = ImageUtils.imgToBase64String(subImage, "jpg");

D3 Donut chart projected to sphere/globe

I want to use d3 for the following task:
display a rotating globe with a donut chart in the center of every country. It should be possible to interact with the globe (select a country, zoom, rotate).
It seems d3 provides an easy way to implement every part of this, but I cannot get the donuts part working as I need.
There is an easy way to draw a donut chart with the help of d3.arc:
var arc = d3.arc();
var data = [3, 23, 17, 35, 4];
var radius = 15 / scale;
var _arc = arc.innerRadius(radius - 7 / scale)
    .outerRadius(radius)
    .context(donutsContext);
var pieData = pie(data);
for (var i = 0; i < pieData.length; i++) {
    donutsContext.beginPath();
    donutsContext.fillStyle = color(i);
    _arc(pieData[i]);
    donutsContext.fill(); // without this the slice path is never painted
}
but with this code as it is, the donuts are displayed on a plane on top of the globe, like:
[image: a flat donut drawn on top of the globe]
while I want them to be 'wrapped' around the globe.
There is the d3.geoCircle method, which can be projected to the globe correctly. I got a 'ring' projected correctly onto the globe with the help of two circles:
var circle = d3.geoCircle()
    .center(centroid)
    .radius(2);
var outerCircle = circle();
circle = d3.geoCircle()
    .center(centroid)
    .radius(1);
var innerCircle = circle();
// Reverse the inner circle's coordinates and add them as a second ring,
// so the inner circle cuts a hole in the outer one:
var interCircleCoordinates = [];
for (var i = innerCircle.coordinates[0].length - 1; i >= 0; i--) {
    interCircleCoordinates.push(innerCircle.coordinates[0][i]);
}
outerCircle.coordinates.push(interCircleCoordinates);
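For reference, drawing the resulting ring feature is then an ordinary geoPath call (assuming projection is the orthographic projection and donutsContext the canvas context from earlier):

var geoPathGenerator = d3.geoPath()
    .projection(projection)
    .context(donutsContext);
donutsContext.beginPath();
geoPathGenerator(outerCircle);
donutsContext.fill();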
[image: rings projected onto the globe]
but I really need to get a donut.
The other way I tried is rendering the donuts to an image and wrapping that image around the globe with pixel manipulation:
var image = new Image;
image.onload = onload;
image.src = img;

function onload() {
    window.dx = image.width;
    window.dy = image.height;
    context.drawImage(image, 0, 0, dx, dy);
    sourceData = context.getImageData(0, 0, dx, dy).data;
    target = context.createImageData(width, height);
    targetData = target.data;
    // For every pixel on screen, invert the projection to find the
    // corresponding long/lat, then sample the source image there:
    for (var y = 0, i = -1; y < height; ++y) {
        for (var x = 0; x < width; ++x) {
            var p = projection.invert([x, y]), λ = p[0], φ = p[1];
            if (λ > 180 || λ < -180 || φ > 90 || φ < -90) { i += 4; continue; }
            var q = ((90 - φ) / 180 * dy | 0) * dx + ((180 + λ) / 360 * dx | 0) << 2;
            var r = sourceData[q];
            var g = sourceData[++q];
            var b = sourceData[++q];
            targetData[++i] = r;
            targetData[++i] = g;
            targetData[++i] = b;
            targetData[++i] = 125; // alpha
        }
    }
    context.clearRect(0, 0, width, height);
    context.putImageData(target, 0, 0);
}
but this way, rotating and interacting with the globe is extremely slow at the globe size I need (1000px).
So my questions are:
Is there some way to project donuts generated with the help of d3.arc onto a sphere (globe, orthographic projection)?
Is there some way to get a donut from geoCircle?
Or maybe there is some other way to achieve my goal that I do not see?
There is one way that comes to mind to display donuts on a globe. The key challenge is that d3 doesn't project three-dimensional objects very well - with one exception: geographic features. Consequently, an "easy" solution is to convert your pie charts into geographic features and project them with the rest of your features.
To do this you need to:
Use a pie/donut generator as you normally would
Go along the paths generated to get points approximating the pie shape.
Convert the points to long/lat points
Assemble those points into geojson
Project them onto the map.
The first point is easy enough, just make a pie chart with an inner radius.
Now you have to select each path and find points along its perimeter using path.getPointAtLength(). This will be dependent on path length, so path.getTotalLength() will be handy (and corners are important, so you might want to add a little extra logic to make sure you capture them).
Once you have the points, you need a second projection; azimuthal equidistant would be best. If the pie chart is centered on [0,0] in svg coordinate space, rotate the azimuthal projection (don't center it) so that the centroid coordinate is located at [0,0] in svg space (you can use translates on the pies to position them, but that just adds extra steps). Take each point and run it through projection.invert() using the second projection. You will need to update the projection for each donut chart, as each one will have a different geographic centroid.
Once you have lat long points, it's easy - you've already done it with the geo circle function - convert to geojson and project with the orthographic projection.
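A rough sketch of steps 2-4 (the names here are mine: sliceD is the "d" string of one donut slice generated by d3.arc, centroid is the chart's [longitude, latitude] position, and the sample count is arbitrary):

// A detached path element gives access to getTotalLength()/getPointAtLength()
// without putting anything in the DOM:
var path = document.createElementNS(d3.namespaces.svg, 'path');
path.setAttribute('d', sliceD);

// Azimuthal equidistant projection rotated (not centered) so the donut's
// geographic centroid lands at svg [0,0]:
var azimuthal = d3.geoAzimuthalEquidistant()
    .translate([0, 0])
    .rotate([-centroid[0], -centroid[1]]);

var samples = 50;
var total = path.getTotalLength();
var ring = [];
for (var i = 0; i <= samples; i++) {
    var pt = path.getPointAtLength(total * i / samples);
    ring.push(azimuthal.invert([pt.x, pt.y])); // svg point -> [long, lat]
}
ring.push(ring[0]); // close the ring
var feature = { type: "Polygon", coordinates: [ring] };
// feature can now be projected and drawn like any other geojson geometry.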
This approach gave me something like this: [image: donut charts wrapped onto the globe]
Notes: Depending on your data, it might be easiest to preprocess your data into geojson and store that, as opposed to calculating the geojson on each page load.
You are using canvas; while you don't need to actually display an svg, you do still need access to svg functions like getPointAtLength. You don't need to have an svg or display svg elements for this - you can use a detached element replicating a path:
document.createElementNS(d3.namespaces.svg, 'path');
Oh, and make sure the second projection's translate is set - the default is [480,250] for all (most?) d3 projections, and that will throw things off if unaccounted for.

Drawing image (PGraphics) gives an unwanted double image mirrored about the x-axis (Processing 3)

The code is supposed to fade and copy the window's image to a buffer f, then draw f back onto the window but translated, rotated, and scaled. I am trying to create an effect like a feedback loop when you point a camera plugged into a TV at the TV.
I have tried everything I can think of, logged every variable I could think of, and still it just seems like image(f,0,0) is doing something wrong or unexpected.
What am I missing?
Picture of the double image mirrored about the x-axis:
PGraphics f;
int rect_size;
int midX;
int midY;

void setup() {
    size(1000, 1000, P2D);
    f = createGraphics(width, height, P2D);
    midX = width / 2;
    midY = height / 2;
    rect_size = 300;
    imageMode(CENTER);
    rectMode(CENTER);
    smooth();
    background(0, 0, 0);
    fill(0, 0);
    stroke(255, 255);
}

void draw() {
    fade_and_copy_pixels(f); // fades window pixels and then copies pixels to f
    background(0, 0, 0); // without this the corners don't get repainted

    // transform the display window (instead of f)
    pushMatrix();
    float scaling = 0.90; // >1 makes the image bigger
    float rot = 5; // angle in degrees
    translate(midX, midY); // makes it so rotations are always around the center
    rotate(radians(rot));
    scale(scaling);
    imageMode(CENTER);
    image(f, 0, 0); // weird double image must have something not working around here
    popMatrix(); // returns the window matrix to normal

    int x = mouseX;
    int y = mouseY;
    rectMode(CENTER);
    rect(x, y, rect_size, rect_size);
}
// fades window pixels and then copies pixels to f
void fade_and_copy_pixels(PGraphics f) {
    loadPixels(); // load window pixels. Don't need this because I am only reading pixels?
    f.loadPixels(); // loads the feedback loop's pixels
    // Loop through every pixel in the window.
    // It is faster to grab data from the pixels[] array, so don't use get() and set().
    for (int i = 0; i < pixels.length; i++) {
        // fade pixels in window and copy to f:
        color p = pixels[i];
        // get color values: mask, then shift
        int r = (p & 0x00FF0000) >> 16;
        int g = (p & 0x0000FF00) >> 8;
        int b = p & 0x000000FF; // no need for shifting
        // Reduce the value of each color proportionally. fade_percent is between
        // 0 and 1: 0 fades nothing away, 1 fades everything at once. The minimum
        // that makes a difference is about 0.0039, slightly more than 1/255, when
        // using floor() and 255 as the colorMode maximum.
        float fade_percent = 0.005; // 0.05 = 5%
        int r_new = floor(float(r) - (float(r) * fade_percent));
        int g_new = floor(float(g) - (float(g) * fade_percent));
        int b_new = floor(float(b) - (float(b) * fade_percent));
        // Maybe later rewrite this to track the remaining difference and round it
        // differently, e.g. faster at first and slower later. round() doesn't work
        // because it never subtracts the first 1 to get the ball rolling, while
        // floor() always subtracts at least 1 from each value each pass; keeping a
        // float per pixel might cost too much memory. I'll stick with floor for now.
        // shift back and OR together into AARRGGBB
        p = 0xFF000000 | (r_new << 16) | (g_new << 8) | b_new;
        f.pixels[i] = p;
        // pixels now copied
    }
    f.updatePixels();
}
This is a weird one. But let's start with a simpler MCVE that isolates the problem:
PGraphics f;

void setup() {
    size(500, 500, P2D);
    f = createGraphics(width, height, P2D);
}

void draw() {
    background(0);
    rect(mouseX, mouseY, 100, 100);
    copyPixels(f);
    image(f, 0, 0);
}

void copyPixels(PGraphics f) {
    loadPixels();
    f.loadPixels();
    for (int i = 0; i < pixels.length; i++) {
        color p = pixels[i];
        f.pixels[i] = p;
    }
    f.updatePixels();
}
This code exhibits the same problem as your code, without any of the extra logic. I would expect this code to show a rectangle wherever the mouse is, but instead it shows a rectangle at a position reflected over the X axis. If the mouse is on the top of the window, the rectangle is at the bottom of the window, and vice-versa.
I think this is caused by the P2D renderer being OpenGL, which has an inverted Y axis (0 is at the bottom instead of the top). So it seems like when you copy the pixels over, it's going from screen space to OpenGL space... or something. That definitely seems buggy though.
For now, there are two things that seem to fix the problem. First, you could just use the default renderer instead of P2D. That seems to fix the problem.
Or you could get rid of the for loop inside the copyPixels() function and just do f.pixels = pixels; for now. That also seems to fix the problem, but again it feels pretty buggy.
If somebody else (paging George) doesn't come along with a better explanation by tomorrow, I'd file a bug on Processing's GitHub. (I can do that for you if you want.)
Edit: I've filed an issue here, so hopefully we'll hear back from a developer in the next few days.
Edit Two: Looks like a fix has been implemented and should be available in the next release of Processing. If you need it now, you can always build Processing from source.
An easier fix, and it works like a charm:
add f.beginDraw(); before and f.endDraw(); after using f:
loadPixels();
f.loadPixels(); // loads the feedback loop's pixels
f.beginDraw();
and
f.updatePixels();
f.endDraw();
Processing must know when it's drawing in a buffer and when not.
In the resulting image you can see that it works.

Storing motion vectors from calculated optical flow in a practical way which enables reconstruction of subsequent frames from initial keyframes

I am trying to store the motion detected from optical flow for frames in a video sequence and then use these stored motion vectors in order to predict the already known frames, using just the first frame as a reference. I am currently using two Processing sketches. The first sketch draws a motion vector for every pixel grid (each 10 pixels wide and 10 pixels high). This is done for every frame in the video sequence, and the vector is only drawn in a grid if there is sufficient motion detected. The second sketch aims to reconstruct the video frames crudely from just the initial frame of the video sequence, combined with the information about the motion vectors obtained from the first sketch.
My approach so far is as follows. I am able to determine the size, position and direction of each motion vector drawn in the first sketch from four variables. By creating four arrays (two for the motion vector's x and y coordinates and another two for its length in the x and y directions), every time a motion vector is drawn I can append each of the four variables to these arrays. This is done for each pixel grid throughout an entire frame where the vector is drawn, and for each frame in the sequence, via for loops. Once the arrays are full, I save them to a text file as a list of strings.

I then load these strings from the text file into the second sketch, along with the first frame of the video sequence. I load the strings into variables within a while loop in the draw function and convert them back into floats. I increment a variable by one each time the draw function is called, which moves on to the next frame (I used a specific number as a separator in my text files, which appears at the end of every frame; the loop searches for this number and then increments the variable by one, breaking the while loop so the draw function is called again for the subsequent frame). For each frame, I can draw 10 by 10 pixel boxes and move them by the parameters from the text files produced by the first sketch.

My problem is simply this: how do I draw the motion of a particular frame without letting what I've blitted to the screen in the previous frame affect what will be drawn for the next frame? My only way of getting my 10 by 10 pixel box is by using the get() function, which gets pixels that are already drawn to the screen.
Apologies for the length and complexity of my question. Any tips would be very much appreciated! I will add the code for the second sketch. I can also add the first sketch if required, but it's rather long and a lot of it is not my own. Here is the second sketch:
import processing.video.*;

Movie video;
PImage[] naturalMovie = new PImage[0];
String[] xlengths;
String[] ylengths;
String[] xpositions;
String[] ypositions;
int a = 0;
int c = 0;
int d = 0;
int p;
int gs = 10;

void setup() {
    size(640, 480, JAVA2D);
    xlengths = loadStrings("xlengths.txt");
    ylengths = loadStrings("ylengths.txt");
    xpositions = loadStrings("xpositions.txt");
    ypositions = loadStrings("ypositions.txt");
    video = new Movie(this, "sample1.mov");
    video.play();
    rectMode(CENTER);
}

void movieEvent(Movie m) {
    m.read();
    PImage f = createImage(m.width, m.height, ARGB);
    f.set(0, 0, m);
    f.resize(width, height);
    naturalMovie = (PImage[]) append(naturalMovie, f);
    println("naturalMovie length: " + naturalMovie.length);
    p = naturalMovie.length - 1;
}

void draw() {
    if (naturalMovie.length >= p && p > 0) {
        if (c == 0) {
            image(naturalMovie[0], 0, 0);
        }
        d = c;
        while (c == d && c < xlengths.length) {
            float u, v, x0, y0;
            u = float(xlengths[a]);
            v = float(ylengths[a]);
            x0 = float(xpositions[a]);
            y0 = float(ypositions[a]);
            if (u != 1.0E-19) {
                //stroke(255, 255, 255);
                //line(x0, y0, x0 + u, y0 + v);
                PImage box;
                box = get(int(x0 - gs/2), int(y0 - gs/2), gs, gs);
                image(box, x0 - gs/2 + u, y0 - gs/2 + v, gs, gs);
                if (a < xlengths.length - 1) {
                    a += 1;
                }
            } else if (u == 1.0E-19) {
                if (a < xlengths.length - 1) {
                    c += 1;
                    a += 1;
                }
            }
        }
    }
}
Word to the wise: most people aren't going to read that wall of text. Try to "dumb down" your posts so they get to the details right away, without any extra information. You'll also be better off if you post an MCVE instead of only giving us half your code. Note that this does not mean posting your entire project. Instead, start over with a blank sketch and only create the most basic code required to show the problem. Don't include any of your movie logic, and hardcode as much as possible. We should be able to copy and paste your code onto our own machines to run it and see the problem.
All of that being said, I think I understand what you're asking.
How do I draw the motion of a particular frame without letting what I've blitted to the screen in the previous frame affect what will be drawn for the next frame? My only way of getting my 10 by 10 pixel box is by using the get() function, which gets pixels that are already drawn to the screen.
Separate your program into a view and a model. Right now you're using the screen (the view) to store all of your information, which is going to cause you headaches. Instead, store the state of your program into a set of variables (the model). For you, this might just be a bunch of PVector instances.
Let's say I have an ArrayList<PVector> that holds the current position of all of my vectors:
ArrayList<PVector> currentPositions = new ArrayList<PVector>();

void setup() {
    size(500, 500);
    for (int i = 0; i < 100; i++) {
        currentPositions.add(new PVector(random(width), random(height)));
    }
}

void draw() {
    background(0);
    for (PVector vector : currentPositions) {
        ellipse(vector.x, vector.y, 10, 10);
    }
}
Notice that I'm just hardcoding their positions to be random. This is what your MCVE should do as well. And then in the draw() function, I'm simply drawing each vector. This is like drawing a single frame for you.
Now that we have that, we can create a nextFrame() function that moves the vectors based on the ArrayList (our model) and not what's drawn on the screen!
void nextFrame() {
    for (PVector vector : currentPositions) {
        vector.x += random(-2, 2);
        vector.y += random(-2, 2);
    }
}
Again, I'm just hardcoding random movement, but you would read these values from your file. Then we just call the nextFrame() function as the last line in the draw() function:
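For completeness, the draw() function from the sketch above would then be:

void draw() {
    background(0);
    for (PVector vector : currentPositions) {
        ellipse(vector.x, vector.y, 10, 10);
    }
    nextFrame();
}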
If you're still having trouble, I highly recommend posting an MCVE similar to mine and posting a new question. Good luck.
