3D SVG animation in Processing: flickering problems, SVG loading questions

I'm trying to make a time-lapse geographic Twitter visualization inspired by Jer Thorp's "Just Landed". I am using the latest version of Processing.
I'm using an SVG image for my map because I want to be able to zoom into the map at an arbitrary angle to focus on certain localities, then show the Twitter connections on a global scale. I'm running into several problems, the first of which is flickering of the countries' path boundaries when I rotate my map. Here's a screenshot of my problem:
Here is my code which is causing the problem:
import processing.opengl.*;
import java.awt.event.*;

PShape map;
PShape test1;
PShape test2;

// camera position/movement initialization
PVector position = new PVector(450, 450);
PVector movement = new PVector();
PVector rotation = new PVector();
PVector velocity = new PVector();
float rotationSpeed = 0.035;
float panningSpeed = 0.035;
float movementSpeed = 0.05;
float scaleSpeed = 0.25;
float fScale = 2;

void setup() {
  map = loadShape("blank_merc.svg"); // swap out for whatever file
  size(900, 900, OPENGL);
  smooth();
  fill(150, 200, 250);
  addMouseWheelListener(new MouseWheelListener() {
    public void mouseWheelMoved(MouseWheelEvent mwe) {
      mouseWheel(mwe.getWheelRotation());
    }
  });
}

void draw() {
  // left drag rotates, right drag pans
  if (mousePressed) {
    if (mouseButton == LEFT)  velocity.add((pmouseY - mouseY) * 0.01, (mouseX - pmouseX) * 0.01, 0);
    if (mouseButton == RIGHT) movement.add((mouseX - pmouseX) * movementSpeed, (mouseY - pmouseY) * movementSpeed, 0);
  }

  // reset functionality ('r' key)
  if (keyPressed) {
    if (key == 'r') {
      position.set(450, 450);
      rotation.sub(rotation.get());
      velocity.sub(velocity.get());
      movement.sub(movement.get());
    }
  }

  // inertia / damping
  velocity.mult(0.95);
  rotation.add(velocity);
  movement.mult(0.95);
  position.add(movement);

  background(255);
  //lights();
  translate(position.x, position.y, position.z);
  rotateX(rotation.x * rotationSpeed);
  rotateY(rotation.y * rotationSpeed);
  scale(fScale);
  shape(map, -250, -250, 1000, 1000);
}

void mouseWheel(int delta) {
  fScale -= delta * scaleSpeed;
  fScale = max(0.5, fScale);
}
I was told it might be z-fighting among the paths, and I think this could be the problem, because the flickering gets worse while the map is mid-rotation, especially at angles that are not orthogonal to the viewing plane. I tried to remedy this by translating a PShape child of the file a small amount in the z direction with test1.translate(0, 0, 0.1);, but I get an IllegalArgumentException telling me I cannot use translate(x, y, z) on a PMatrix2D.
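One workaround I am considering instead (untested, and assuming the flicker really is depth-test fighting between coplanar paths): since the map itself is flat, the depth test could be switched off just while it is drawn, for example:
// Untested idea: the SVG paths are all coplanar, so painter's order is enough
// for the flat map. Disable the depth test around the shape() call and
// re-enable it afterwards for any real 3D geometry (e.g. the arcs later on).
hint(DISABLE_DEPTH_TEST);
shape(map, -250, -250, 1000, 1000);
hint(ENABLE_DEPTH_TEST);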
I've also had trouble testing my code with other SVG map files and generally getting the SVG to look like what I think it should look like. There are a bunch of cities and other odd markers on my SVG map, even when I download the completely "blank" SVG world map Mercator projection from Wikimedia Commons. These city marker/region elements show up in the Processing render but do not show up in the browser view. I'm trying to figure out how to clean up my SVG file in Inkscape, but I'm unsure what specifically to look for.
For example, I've run it with this map: http://commons.wikimedia.org/wiki/File:Mercator_Projection.svg
but I have no use for the dots and lines, and I'm having to resort to manually selecting and deleting the paths, which is not a very thorough process.
And when I use the map that is supposed to be the "blank" version of the above, without all the markers, I see not only a bunch of markers (presumably hidden with some style attribute in the SVG XML?) but also weird vertical banding, and my camera controls are extremely slow. The applet behaves as if the file were far too large, but it's only about 2 MB. Here's a screenshot of what this looks like:
I'm really just looking for a way to get a "clean" SVG world map into Processing so I can spin it around and zoom in on it, and if I can get that to work I can start the Arc-Drawing part. I would sincerely appreciate any assistance anyone could give me.
Thanks

If I understand your question correctly, the flickering is only on the edges, presumably where they overlap. That would suggest z-fighting to me. I usually find that a simple test outside your main sketch is best, just as a quick way to see what's happening and how you might fix it.
If you make a simple SVG with two overlapping shapes that share just one edge, does the same thing happen? (A throwaway sketch like the one below is enough for that check.)
If so, I think the easiest fix (though not that easy) would be to:
Select all the countries in Illustrator
Use Object > Transform > Scale... and shrink by a tiny amount
Then share your fixed map for everyone else!
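For the quick isolated test suggested above, a throwaway sketch can be tiny (two_shapes.svg being a hypothetical file containing just two rectangles that share one edge):
// Minimal flicker test: load a two-shape SVG sharing one edge and rotate it
// in 3D. If the shared edge flickers here too, it is z-fighting in the
// renderer rather than something specific to the world map file.
PShape test;

void setup() {
  size(400, 400, OPENGL);
  test = loadShape("two_shapes.svg");  // hypothetical two-rectangle test file
}

void draw() {
  background(255);
  translate(width/2, height/2);
  rotateY(frameCount * 0.01);
  shape(test, -100, -100, 200, 200);
}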

Related

p5js only drawing what's needed

I'm looking for a way to limit what gets done in the draw loop.
I have a system where, when I click, it adds a rect.
This rect then starts spawning circles that move.
Since the rect does not change location, it isn't ideal to redraw it every frame.
Is there a way to put the rects on a different layer of sorts, or is there another mechanism I can use to limit the rect drawing without impeding the circle drawing?
I've tried using createGraphics to make a background with the rects, but I can't make the 'foreground', where the circles live, transparent.
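For reference, the kind of layering I have in mind, sketched in Processing's Java mode rather than p5.js (p5's createGraphics()/image() follow the same idea; this is just an illustration of the approach, not tested p5 code):
// Sketch of the layered setup: draw each new rect once into an off-screen
// buffer, then blit that buffer every frame and draw the moving circles on
// top. The off-screen layer is transparent wherever nothing was drawn.
PGraphics staticLayer;

void setup() {
  size(600, 400);
  staticLayer = createGraphics(width, height);
}

void mousePressed() {
  // Add a rect to the static layer exactly once, at click time.
  staticLayer.beginDraw();
  staticLayer.fill(200);
  staticLayer.rect(mouseX - 10, mouseY - 10, 20, 20);
  staticLayer.endDraw();
}

void draw() {
  background(0);
  image(staticLayer, 0, 0);  // one cheap blit instead of redrawing every rect
  // ...update and draw the moving circles here...
}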
Curious about this, I tried it myself. My idea was simply to grab the canvas and interact with it directly, regardless of p5.js.
My result was that the drawing call, in this case ctx.fillRect, did not render on screen.
However, the fillStyle was changed.
Canvas (like WebGL) is surprisingly efficient and can usually handle the load, unless you are rendering hundreds (mobile) to thousands (laptop/desktop) of objects.
I would have liked a better outcome, but I think it was worthwhile posting what I tried and what happened nonetheless.
//P5 Setup
function setup() {
  createCanvas(1500, 750);
  background('rgba(0, 0, 0, 0.3)');
  stroke(255);
  fill(255);
  doNonP5Drawing();
}

//Render
function draw() {
  background(0);
  frame(); // placeholder for the sketch's own per-frame drawing
}

function doNonP5Drawing() {
  // Grab the canvas element directly and draw with the raw 2D context,
  // bypassing p5 entirely.
  let canvas = document.querySelector('canvas'),
      ctx = canvas.getContext('2d');
  ctx.fillStyle = "red";
  // Note: a canvas element exposes width/height (not innerWidth/innerHeight).
  ctx.fillRect(canvas.width / 2 - 100, canvas.height / 2 - 100, 200, 200);
}

Change facing direction of CSS3DObject

I have a 3D scene with a bunch of CSS objects that I want to rotate so that they are all pointing towards a point in space.
My CSS objects are simple rectangles that are a lot wider than they are high:
var element = document.createElement('div');
element.innerHTML = "test";
element.style.width = "75px";
element.style.height = "10px";
var object = new THREE.CSS3DObject(element);
object.position.x = x;
object.position.y = y;
object.position.z = z;
By default, the created objects are defined as if they are "facing" the z-axis. This means that if I use the lookAt() function, the objects will rotate so that the "test" text faces the point.
My problem is that I would rather rotate them so that the right edge of the div points towards the desired point. I've tried fiddling with the up-vector, but I feel like that won't work because I still want the up-vector to point up. I also tried rotating the object by Math.PI/2 around the y-axis first, but lookAt() seems to ignore any previously set rotation.
It seems like I need to redefine the object's local z-axis instead, so that it runs along the global x-axis. That way the object's "looking at" direction would be to the right in the scene, and then lookAt() would orient it properly.
Sorry for probably mangling terminology, newbie 3D programmer here.
Object.lookAt( point ) will orient the object so that the object's internal positive z-axis points in the direction of the desired point.
If you want the object's internal positive x-axis to point in the direction of the desired point, you can use this pattern:
object.lookAt( point );
object.rotateY( - Math.PI / 2 );
three.js r.84

libGDX- Exact collision detection - Polygon creation?

I've got a question about libGDX collision detection. Because it's a rather specific question, I have not found a good solution on the internet yet.
So, I have already created "humans" that consist of different body parts, each with rectangle-shaped collision detection.
Now I want to implement weapons and skills, which for example look like this:
Skill example image
Problem
Working with rectangles for collision detection would be really frustrating for players with skills like this: they would dodge a skill successfully, but the collision detector would still damage them.
Approach 1:
Before I started working with libGDX, I created an Android game with a custom engine and similar skills. There I solved the problem in the following way:
Detect rectangle collision
Calculate overlapping rectangle section
Check every single pixel of the overlapping part of the skill for transparency
If there is any non-transparent pixel found -> Collision
That's a fairly heavy approach, but since only overlapping pixels are checked and the rest of the game is really light, it works completely fine.
At the moment my skill images are loaded as TextureRegions, which do not give access to single pixels.
I have found out that libGDX has a Pixmap class, which would allow such pixel checks. The problem is that loading the images as Pixmaps as well would 1. be even heavier and 2. defeat the whole purpose of the texture system.
An alternative could be to load all skills as Pixmaps only. What do you think: would this be a good way? Is it possible to draw many Pixmaps on the screen without issues and lag?
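For reference, the per-pixel check I mean would look roughly like this (a hypothetical helper, assuming both sprites' images are also available as Pixmaps and that the sprite rectangles map 1:1 onto pixel coordinates, ignoring the fact that Pixmap rows run top-down):
// Hypothetical pixel-accurate overlap test between two axis-aligned sprites.
// Rectangle and Intersector come from com.badlogic.gdx.math, Pixmap from
// com.badlogic.gdx.graphics.
boolean pixelsOverlap(Rectangle a, Pixmap pa, Rectangle b, Pixmap pb) {
    Rectangle overlap = new Rectangle();
    if (!Intersector.intersectRectangles(a, b, overlap)) return false;

    for (int x = 0; x < (int) overlap.width; x++) {
        for (int y = 0; y < (int) overlap.height; y++) {
            int ax = (int) (overlap.x - a.x) + x;
            int ay = (int) (overlap.y - a.y) + y;
            int bx = (int) (overlap.x - b.x) + x;
            int by = (int) (overlap.y - b.y) + y;
            // getPixel returns RGBA8888, so the lowest byte is the alpha value.
            if ((pa.getPixel(ax, ay) & 0xff) != 0 && (pb.getPixel(bx, by) & 0xff) != 0) {
                return true;
            }
        }
    }
    return false;
}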
Approach 2:
Another way would be to create polygons with the shape of the skills and use them for the collision detection.
a)
But how would I define a polygon shape for every single skill (there are over 150 of them)? Well, after browsing for a while, I found this useful tool: http://www.aurelienribon.com/blog/projects/physics-body-editor/
It allows you to create polygon shapes by hand and then save them as JSON files readable by a libGDX application. Now here come the difficulties:
The Physics Body Editor is tied to Box2D (which I am not using). I would either have to add the whole Box2D physics engine (which I do not need at all) just for one small piece of collision detection, or I would have to write a custom BodyEditorLoader, which would be a tough, complicated and time-intensive task.
Some images of the same skill sprite differ a lot in shape (like the second skill sprite example). When working with the body editor tool, I would have to define not just one shape per skill but the shape of several images (up to 12) of every single skill. That would be extremely time-intensive and a huge mess once dozens of polygon shapes have to be implemented.
b)
If there were a smooth way to automatically generate polygons out of images, that could be the solution. I could simply connect every sprite region to a generated polygon and check for collisions that way. There are a few problems, though:
Is there any tool which can generate polygon shapes out of an image (and does not take too much time to do so)?
I don't think that such a tool (if one exists) can work directly with Textures. It would probably need Pixmaps. The Pixmaps would not need to stay loaded after the polygon creation, though. Still an extremely heavy task!
My current thoughts
I'm stuck at this point because there are several possible approaches, but all of them have their difficulties. Before I choose one path and continue coding, it would be great if you could share some of your ideas and knowledge.
There might be helpful classes and code included in libGDX that solve my problems within seconds; as I am really new to libGDX, I just don't know a lot about it yet.
Currently I am leaning towards approach 1 and working with pixel detection. That way I achieved exact collision detection in my previous Android game.
What do you think?
Greetings
Felix
Personally, I feel pixel-to-pixel collision would be overkill performance-wise and would still leave cases where I would feel cheated (I got hit by the handle of the axe?).
If it were me, I would add a "hitbox" to each skill. Street Fighter is a popular game which uses this technique (newer versions are in 3D, but hitbox collision is still 2D). Hitboxes can change frame by frame along with the animation.
Empty spot here to add example images - google "Streetfighter hitbox" in the meantime
For your axe, there could be a defined rectangular hitbox along the edge of one or both ends, or even over the entire metal head of the axe.
This keeps it fairly simple, without having to mess with exact polygons, but it also isn't overly performance-heavy like having every single pixel be its own hitbox.
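A rough sketch of the idea (hypothetical names, using plain libGDX Rectangles, nothing taken from your existing code):
// Hypothetical per-frame hitboxes: each animation frame carries its own set of
// rectangles defined relative to the sprite's origin; a hit occurs when any of
// them, moved to the sprite's current position, overlaps the target.
class FrameHitboxes {
    Array<Rectangle> boxes = new Array<Rectangle>();  // com.badlogic.gdx.utils.Array

    boolean hits(float spriteX, float spriteY, Rectangle target) {
        for (Rectangle local : boxes) {
            Rectangle world = new Rectangle(spriteX + local.x, spriteY + local.y,
                                            local.width, local.height);
            if (world.overlaps(target)) return true;
        }
        return false;
    }
}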
I've used that exact body editor you referenced, and it can generate polygons and/or circles for you. I also made a loader for the generated JSON with the Jackson library. This may not be the answer for you, since you'd have to bring in Box2D, but here's how I did it anyway.
/**
 * Adds all the fixtures defined in jsonPath with the name 'lookupName', and
 * attaches them to the 'body' with the properties defined in 'fixtureDef'.
 * Then converts to the proper scale with 'width'.
 *
 * @param body the body to attach fixtures to
 * @param fixtureDef the fixture's properties
 * @param jsonPath the path to the collision shapes definition file
 * @param lookupName the name to find in the jsonPath json file
 * @param width the width of the sprite, used to scale fixtures and find origin
 * @param height the height of the sprite, used to find origin
 */
public void addFixtures(Body body, FixtureDef fixtureDef, String jsonPath, String lookupName, float width, float height) {
    JsonNode collisionShapes = null;
    try {
        collisionShapes = json.readTree(Gdx.files.internal(jsonPath).readString());
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }

    for (JsonNode node : collisionShapes.findPath("rigidBodies")) {
        if (node.path("name").asText().equals(lookupName)) {
            Array<PolygonShape> polyShapes = new Array<PolygonShape>();
            Array<CircleShape> circleShapes = new Array<CircleShape>();

            for (JsonNode polygon : node.findPath("polygons")) {
                Array<Vector2> vertices = new Array<Vector2>(Vector2.class);
                for (JsonNode vector : polygon) {
                    vertices.add(new Vector2(
                            (float) vector.path("x").asDouble() * width,
                            (float) vector.path("y").asDouble() * width)
                            .sub(width / 2, height / 2));
                }
                polyShapes.add(new PolygonShape());
                polyShapes.peek().set(vertices.toArray());
            }

            for (final JsonNode circle : node.findPath("circles")) {
                circleShapes.add(new CircleShape());
                circleShapes.peek().setPosition(new Vector2(
                        (float) circle.path("cx").asDouble() * width,
                        (float) circle.path("cy").asDouble() * width)
                        .sub(width / 2, height / 2));
                circleShapes.peek().setRadius((float) circle.path("r").asDouble() * width);
            }

            for (PolygonShape shape : polyShapes) {
                Vector2 vectors[] = new Vector2[shape.getVertexCount()];
                for (int i = 0; i < shape.getVertexCount(); i++) {
                    vectors[i] = new Vector2();
                    shape.getVertex(i, vectors[i]);
                }
                shape.set(vectors);
                fixtureDef.shape = shape;
                body.createFixture(fixtureDef);
            }

            for (CircleShape shape : circleShapes) {
                fixtureDef.shape = shape;
                body.createFixture(fixtureDef);
            }
        }
    }
}
And I would call it like this:
physics.addFixtures(body, fixtureDef, "ship/collision_shapes.json", shipType, width, height);
Then for collision detection:
public ContactListener shipsExplode() {
    ContactListener listener = new ContactListener() {
        @Override
        public void beginContact(Contact contact) {
            Body bodyA = contact.getFixtureA().getBody();
            Body bodyB = contact.getFixtureB().getBody();
            for (Ship ship : ships) {
                if (ship.body == bodyA) {
                    ship.setExplode();
                }
                if (ship.body == bodyB) {
                    ship.setExplode();
                }
            }
        }

        // The remaining ContactListener methods are required but unused here.
        @Override public void endContact(Contact contact) {}
        @Override public void preSolve(Contact contact, Manifold oldManifold) {}
        @Override public void postSolve(Contact contact, ContactImpulse impulse) {}
    };
    return listener;
}
Then you would add the listener to the world:
world.setContactListener(physics.shipsExplode());
My sprites' width and height were small, since you're dealing in meters, not pixels, once you start using Box2D. One sprite's height was 0.8f and width was 1.2f, for example. If you made the sprite's width and height in pixels, the physics engine would hit its built-in speed limits: http://www.iforce2d.net/b2dtut/gotchas
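As an aside, one common convention (my own habit, not something the body editor requires) is to keep a single pixels-per-meter constant and convert only at the rendering boundary:
// Hypothetical unit helper: Box2D works in meters, rendering works in pixels.
public final class Units {
    public static final float PPM = 100f;  // assumed pixels-per-meter ratio

    public static float toMeters(float pixels) { return pixels / PPM; }
    public static float toPixels(float meters) { return meters * PPM; }
}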
Don't know if this still matters to you guys, but I built a small Python script that returns the pixel positions of the points on the edges of the image. There is room to improve the script, but for me, for now, it's OK...
from PIL import Image, ImageFilter

filename = "dship1"
image = Image.open(filename + ".png")
image = image.filter(ImageFilter.FIND_EDGES)
image.save(filename + "_edge.png")

cols = image.width
rows = image.height
points = []

# Walk the edge-filtered image row by row and record every non-transparent pixel.
w = 1
h = 1
i = 0
for pixel in list(image.getdata()):
    if i == cols:        # start of a new row
        w = 1
        i = 0
        h += 1
    if pixel[3] > 0:     # pixel[3] is the alpha channel (image must be RGBA)
        points.append((w, h))
    w += 1
    i += 1

with open(filename + "_points.txt", "w") as nf:
    nf.write(',\n'.join('%s, %s' % x for x in points))
In case of updates you can find them here: export positions
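If it helps, here is one way the exported points file could be read back on the libGDX side (a hypothetical loader for the "x, y" pair format written above; the outline would normally need to be simplified a lot before it is practical as a collision Polygon):
// Hypothetical loader: parses the "<name>_points.txt" output of the script
// above into a com.badlogic.gdx.math.Polygon.
Polygon loadOutline(String path) {
    String raw = Gdx.files.internal(path).readString().trim();
    String[] tokens = raw.split("[,\\s]+");      // values separated by commas/newlines
    float[] vertices = new float[tokens.length];
    for (int i = 0; i < tokens.length; i++) {
        vertices[i] = Float.parseFloat(tokens[i]);
    }
    return new Polygon(vertices);                // expects x1, y1, x2, y2, ...
}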

Zoom toward mouse (eg. Google maps)

I've written a home-brew view_port class for a 2D strategy game. The panning (with arrow keys) and zooming (with the mouse wheel) work fine, but I'd also like the view to home in on wherever the cursor is placed, as in Google Maps or Supreme Commander.
I'll spare you the specifics of how the zoom is implemented and even what language I'm using: this is all irrelevant. What's important is the zoom function, which modifies the rectangle structure (x,y,w,h) that represents the view. So far the code looks like this:
void zoom(float delta, float mouse_x, float mouse_y)
{
    zoom += delta;
    view.w = window.w / zoom;
    view.h = window.h / zoom;
    // view.x = ???
    // view.y = ???
}
Before somebody suggests it, the following will not work:
view.x = mouse_x - view.w/2;
view.y = mouse_y - view.h/2;
This picture illustrates why, as I attempt to zoom towards the smiley face:
As you can see, when the object underneath the mouse is placed in the centre of the screen, it stops being under the mouse, so we stop zooming towards it!
If you've got a head for maths (you'll need one), any help on this would be most appreciated!
I managed to figure out the solution, thanks to a lot of head-scratching and a lot of little pictures. I'll post the algorithm here in case anybody else needs it.
Vect2f mouse_true(mouse_position.x/zoom + view.x, mouse_position.y/zoom + view.y);
Vect2f mouse_relative(window_size.x/mouse_position.x, window_size.y/mouse_position.y);
view.x = mouse_true.x - view.w/mouse_relative.x;
view.y = mouse_true.y - view.h/mouse_relative.y;
This ensures that objects placed under the mouse stay under the mouse. You can check out the code over on github, and I also made a showcase demo for youtube.
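For what it's worth, the invariant behind this, written with the names above (and assuming mouse_true is computed with the old zoom while view.w already reflects the new zoom): the world point under the cursor must not move, so the new view.x must equal mouse_true.x - mouse_position.x / new_zoom. Since view.w = window_size.x / new_zoom and mouse_relative.x = window_size.x / mouse_position.x, the term view.w / mouse_relative.x is exactly mouse_position.x / new_zoom, which is what the code subtracts.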
In my concept there is a camera and a screen.
The camera is the moving part. The screen is the scalable part.
I made an example script including a live demo.
The problem is reduced to only one dimension in order to keep it simple.
https://www.khanacademy.org/cs/cam-positioning/4772921545326592
var a = (mouse.x + camera.x) / zoom;
// now increase the zoom e.g.: like that:
zoom = zoom + 1;
var newPosition = a * zoom - mouse.x;
camera.setX(newPosition);
screen.setWidth(originalWidth * zoom);
For a 2D example you can simply add the same code for the height and y positions.

HTML Canvas clip area - Context restore?

I am trying to set a "dirty zone" on my canvas to prevent repainting of unmoved items (background image, static items, etc.).
That is, only the background painted behind a moving player needs to be redrawn.
EDIT: As suggested, here's the jsfiddle of it
http://jsfiddle.net/7kbzj/3/
The "update" method doesn't work out there, so it's moveSprite() you can get run by clicking the "move sprite" link... Basically, the clipping zone shouldmove by 10px to the right each time you click. Clipping mask stays at initial position, only the re-paint occurs. Weird o_O
So as I init my canvas, once the background is painted, I use the ctx.save() method:
function init() {
    canvas = document.getElementById('kCanvas');
    ctx = canvas.getContext('2d');

    ctx.fillStyle = "rgb(0,128,0)";
    ctx.fillRect(0, 0, 320, 240);
    ctx.save();

    setInterval(function () { update(); }, tpf);
}
In order to see that the clipping works, I draw a different color background (a blue one) in the area that I want clipped... the result is bad: only the first clipped area is painted blue :(
function update() {
    setDirtyArea(x, y, w + 1, h);
    ctx.fillStyle = "rgb(0,0,128)";
    ctx.fillRect(0, 0, 320, 240);
    x++;
    // paint image
    ctx.clearRect(x, y, w, h);
    ctx.drawImage(imageObj, x, y);
}

function setDirtyArea(x, y, w, h) {
    ctx.restore();
    // define new dirty zone
    ctx.beginPath();
    ctx.rect(x, y, w, h);
    ctx.clip();
}
I'd love to see the blue zone propagate itself towards the right of the screen... please help, I don't understand what's wrong!
Thanks,
J.
You need to wrap the actual drawing and clipping of the box with the save and restore methods, and include the closePath method. I have modified your fiddle to work the way I believe you intended.
http://jsfiddle.net/jaredwilli/7kbzj/7/
ctx.save(); // save the context
// define new dirty zone
ctx.beginPath();
ctx.rect(x, y, w, h);
ctx.clip();
ctx.restore(); // restore the context
I have also learned that using save and restore can get really complex, and it becomes confusing to know which context you're in. It came up in a pretty huge canvas app I'm working on, and I found that indenting your canvas code helps immensely, especially the save/restores. I have even decided it should be considered a best practice, so the more people who know about it and do it, the better.
Hope this helps.
