It's hard to explain the problem, so I recorded a video to illustrate the issue. [Video here]
I have images attached to Box2D objects (bodies). When the user drags an actor, the body underneath moves too, so the images follow the physics. As long as the body is not rotated at all, everything works as expected (drag & drop), but once rotation happens the movement goes crazy, producing that unwanted effect of infinite rotation.
Here's my approach:
In the constructor:
for (final Brick b : map.list) {
    stage.addActor(b.img);
    Vector3 v = new Vector3(b.box.getPosition().x, b.box.getPosition().y, 0);
    camera.project(v);
    b.img.setPosition(v.x - b.img.getWidth() * 0.5f, v.y - b.img.getHeight() * 0.5f);
    b.img.setOrigin(b.img.getWidth() * 0.5f, b.img.getHeight() * 0.5f);
    b.img.setRotation((float) Math.toDegrees(b.box.getAngle()));
    b.img.addListener(new DragListener() {
        public void touchDragged(InputEvent event, float x, float y, int pointer) {
            float newPosX = b.img.getX() + x;
            float newPosY = b.img.getY() + y;
            b.img.setPosition(newPosX - b.img.getWidth() * 0.5f, newPosY - b.img.getHeight() * 0.5f);
            b.box.setTransform(newPosX, newPosY, b.box.getAngle());
        }
    });
}
Where map.list is a list containing all bodies that can be dragged.
In the render function:
for (final Brick b : map.list) {
    b.img.setVisible(true);
    b.img.setPosition(b.box.getPosition().x - b.img.getWidth() * 0.5f, b.box.getPosition().y - b.img.getHeight() * 0.5f);
    b.img.setOrigin(b.img.getWidth() * 0.5f, b.img.getHeight() * 0.5f);
    b.img.setRotation((float) Math.toDegrees(b.box.getAngle()));
}
Thanks a lot in advance!
I think your problem is that you set the origin for rotation incorrectly.
b.img.setOrigin(b.img.getWidth()*0.5f, b.img.getHeight()*0.5f);
As long as the bodies aren't rotated at all, everything works fine. Assuming that your body's position is at the center of the body, this should actually be
b.img.setOrigin(v.x, v.y);
Try the Box2DDebugRenderer to quickly check whether the bodies really move that strangely, or whether you are just drawing the images incorrectly.
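A minimal sketch of such a debug overlay, assuming a Box2D World field named world next to the camera from the question (both names are illustrative):

import com.badlogic.gdx.physics.box2d.Box2DDebugRenderer;

// Create once, e.g. in the constructor
Box2DDebugRenderer debugRenderer = new Box2DDebugRenderer();

// In render(), after drawing the stage, overlay the body outlines
debugRenderer.render(world, camera.combined);

If the outlines move sensibly while the images spin, the bodies are fine and only the drawing is off.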
I have a 3D scene with a bunch of CSS objects that I want to rotate so that they are all pointing towards a point in the space.
My CSS objects are simple rectangles that are a lot wider than they are high:
var element = document.createElement('div');
element.innerHTML = "test";
element.style.width = "75px";
element.style.height = "10px";
var object = new THREE.CSS3DObject(element);
object.position.x = x;
object.position.y = y;
object.position.z = z;
By default, the created objects are oriented as if they are "facing" the z-axis. This means that if I use the lookAt() function, the objects will rotate so that the "test" text faces the point.
My problem is that I would rather rotate the object so that the "right edge" of the div points towards the desired point. I've tried fiddling with the up-vector, but I feel like that won't work, because I still want the up-vector to point up. I also tried rotating the object Math.PI/2 along the y-axis first, but lookAt() seems to ignore any previously set rotation.
It seems like I need to redefine the object's local z-axis instead, so that it runs along the global x-axis. That way the object's "looking at" direction would be to the right in the scene, and lookAt() would orient it properly.
Sorry for probably mangling terminology, newbie 3D programmer here.
Object.lookAt( point ) will orient the object so that the object's internal positive z-axis points in the direction of the desired point.
If you want the object's internal positive x-axis to point in the direction of the desired point, you can use this pattern:
object.lookAt( point );
object.rotateY( - Math.PI / 2 );
three.js r.84
I have a face detection app, and I want a character's head to rotate according to the detected face's pose.
I've managed to get the rotation of the detected face in the form of a quaternion, but I'm unsure how I'm supposed to translate the data from the quaternion into 3D points for the reference points of the rigged character, which I believe will decide the rotation.
Let's say I have this character: http://i.imgur.com/3pcRoYx.png
One solution could be to just cut off the head, make it a separate object, and set the rotation of that object according to the quaternion, but I don't want that. I want an intact character.
Is it possible to move the reference points in the head with the data from a quaternion? Or have I misunderstood how rigged characters turn their heads? I haven't animated before.
You can apply rotation to a single bone. Get that bone in your script. Keep a variable in your class to store the last quaternion, and every update compare the newly detected rotation to it and rotate by the difference. I don't have the actual editor here, but try this pseudocode:
using UnityEngine;

public class NeckRotator : MonoBehaviour {
    public GameObject Neck;
    private Quaternion LastFace;

    void Start() {
        LastFace = Neck.transform.rotation;
    }

    void Update() {
        Quaternion DetectedFace = ...; // whatever you do to get this
        // The delta rotation that takes LastFace to DetectedFace
        Quaternion Change = DetectedFace * Quaternion.Inverse(LastFace);
        Neck.transform.rotation = Change * Neck.transform.rotation;
        LastFace = Neck.transform.rotation;
    }
}
I've done something like that before to rotate an NPC's neck to look at a player. It should work for your case as well.
I've got a question about libGDX collision detection. Because it's a rather specific question, I have not found any good solution on the internet yet.
So, I already created "humans" that consist of different body parts, each with rectangle-shaped collision detection.
Now I want to implement weapons and skills, which for example look like this:
Skill example image
Problem
Working with rectangles for collision detection would be really frustrating for players when there are skills like this: they would dodge a skill successfully, but the collision detector would still damage them.
Approach 1:
Before I started working with libGDX, I created an Android game with a custom engine and similar skills. There I solved the problem the following way:
Detect rectangle collision
Calculate overlapping rectangle section
Check every single pixel of the overlapping part of the skill for transparency
If there is any non-transparent pixel found -> Collision
That's a rather heavy approach, but since only overlapping pixels are checked and the rest of the game is really light, it works completely fine.
At the moment my skill images are loaded as "TextureRegion", where it is not possible to access single pixels.
I have found out that libGDX has a Pixmap class, which would allow such pixel checks. The problem is that loading them additionally as Pixmaps would 1. be even heavier and 2. defeat the whole purpose of the Texture system.
An alternative could be to load all skills as Pixmap only. What do you think: Would this be a good way? Is it possible to draw many Pixmaps on the screen without any issues and lag?
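For illustration, a minimal sketch of the per-pixel test described in Approach 1, using libGDX's Pixmap (the file name, skillPixmap and the precomputed overlap rectangle are assumptions, not code from my old engine):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.math.Rectangle;

// Load the skill image once as a Pixmap (in addition to its Texture)
Pixmap skillPixmap = new Pixmap(Gdx.files.internal("skills/axe.png"));

// 'overlap' is the already-computed overlapping rectangle, in the skill
// image's local pixel coordinates
boolean hit = false;
for (int y = (int) overlap.y; y < overlap.y + overlap.height && !hit; y++) {
    for (int x = (int) overlap.x; x < overlap.x + overlap.width && !hit; x++) {
        // getPixel returns RGBA8888, so the lowest byte is the alpha channel
        if ((skillPixmap.getPixel(x, y) & 0xff) != 0) {
            hit = true; // non-transparent pixel inside the overlap -> collision
        }
    }
}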
Approach 2:
Another way would be to create polygons with the shape of the skills and use them for the collision detection.
a)
But how would I define a polygon shape for every single skill (there are over 150 of them)? Well, after browsing a while, I found this useful tool: http://www.aurelienribon.com/blog/projects/physics-body-editor/
It allows you to create polygon shapes by hand and then save them as JSON files, readable by the libGDX application. Now here come the difficulties:
The Physics Body Editor is tied to Box2D (which I am not using). I would either have to add the whole Box2D physics engine (which I do not need at all) just for one tiny collision detection, or I would have to write a custom BodyEditorLoader, which would be a tough, complicated and time-intensive task.
Some images of the same skill sprite differ a lot in shape (like the second skill sprite example). With the BodyEditor tool, I would have to define not only the shape of every single skill, but the shapes of several images (up to 12) of every single skill. That would be extremely time-intensive and a huge mess when implementing these dozens of polygon shapes.
b)
If there is a smooth way to automatically generate polygons out of images, that could be the solution. I could simply connect every sprite section to a generated polygon and check for collisions that way. There are a few problems, though:
Is there any smooth tool which can generate polygon shapes out of an image (and does not need too much time to do so)?
I don't think that a tool like this (if one exists) can work directly with Textures. It would probably need Pixmaps. The Pixmaps would not need to stay loaded after the polygon creation, though. Still an extremely heavy task!
My current thoughts
I'm stuck at this point because there are several possible approaches but all of them have their difficulties. Before I choose one path and continue coding, it would be great if you could leave some of your ideas and knowledge.
There might be helpful classes and code included in libGDX that solve my problems within seconds; as I am really new to libGDX, I just don't know a lot about it yet.
Currently I think I would go with approach 1: work with pixel detection. That way I achieved exact collision detection in my previous Android game.
What do you think?
Greetings
Felix
I, personally, would feel like pixel-to-pixel collision would be overkill on performance and would still provide some instances where I would feel cheated (I got hit by the handle of the axe?).
If it were me, I would add a "hitbox" to each skill. Street Fighter is a popular game which uses this technique (newer versions are in 3D, but hitbox collision is still 2D). Hitboxes can change frame-by-frame along with the animation.
Empty spot here to add example images - Google "Street Fighter hitbox" in the meantime.
For your axe, there could be a defined rectangle hitbox along the edge of one or both ends - or even over the entire metal head of the axe.
This keeps it fairly simple, without having to mess with exact polygons, but also isn't overly performance-heavy like having every single pixel be its own hitbox.
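A minimal sketch of what per-frame hitboxes might look like in libGDX (the rectangle values, currentFrame, spriteX/spriteY and playerBounds are all invented for illustration):

import com.badlogic.gdx.math.Rectangle;

// One set of hand-authored hitboxes per animation frame
Rectangle[][] hitboxesPerFrame = {
    { new Rectangle(40, 10, 24, 12) },                              // frame 0: axe head only
    { new Rectangle(36, 4, 30, 16), new Rectangle(60, 0, 10, 8) }   // frame 1: head + tip
};

// During collision checks, test only the current frame's hitboxes
for (Rectangle hitbox : hitboxesPerFrame[currentFrame]) {
    // Offset from local sprite space into world space before testing
    Rectangle world = new Rectangle(hitbox).setPosition(spriteX + hitbox.x, spriteY + hitbox.y);
    if (world.overlaps(playerBounds)) {
        // hit!
    }
}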
I've used the exact body editor you referenced, and it can generate polygons and/or circles for you. I also made a loader for the generated JSON with the Jackson library. This may not be the answer for you, since you'd have to implement Box2D, but here's how I did it anyway.
/**
 * Adds all the fixtures defined in jsonPath with the name 'lookupName', and
 * attaches them to 'body' with the properties defined in 'fixtureDef'.
 * Then converts to the proper scale with 'width'.
 *
 * @param body the body to attach fixtures to
 * @param fixtureDef the fixture's properties
 * @param jsonPath the path to the collision shapes definition file
 * @param lookupName the name to find in the jsonPath JSON file
 * @param width the width of the sprite, used to scale fixtures and find the origin
 * @param height the height of the sprite, used to find the origin
 */
public void addFixtures(Body body, FixtureDef fixtureDef, String jsonPath, String lookupName, float width, float height) {
    JsonNode collisionShapes = null;
    try {
        collisionShapes = json.readTree(Gdx.files.internal(jsonPath).readString());
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    for (JsonNode node : collisionShapes.findPath("rigidBodies")) {
        if (node.path("name").asText().equals(lookupName)) {
            Array<PolygonShape> polyShapes = new Array<PolygonShape>();
            Array<CircleShape> circleShapes = new Array<CircleShape>();
            for (JsonNode polygon : node.findPath("polygons")) {
                Array<Vector2> vertices = new Array<Vector2>(Vector2.class);
                for (JsonNode vector : polygon) {
                    vertices.add(new Vector2(
                            (float) vector.path("x").asDouble() * width,
                            (float) vector.path("y").asDouble() * width)
                            .sub(width / 2, height / 2));
                }
                polyShapes.add(new PolygonShape());
                polyShapes.peek().set(vertices.toArray());
            }
            for (final JsonNode circle : node.findPath("circles")) {
                circleShapes.add(new CircleShape());
                circleShapes.peek().setPosition(new Vector2(
                        (float) circle.path("cx").asDouble() * width,
                        (float) circle.path("cy").asDouble() * width)
                        .sub(width / 2, height / 2));
                circleShapes.peek().setRadius((float) circle.path("r").asDouble() * width);
            }
            for (PolygonShape shape : polyShapes) {
                Vector2[] vectors = new Vector2[shape.getVertexCount()];
                for (int i = 0; i < shape.getVertexCount(); i++) {
                    vectors[i] = new Vector2();
                    shape.getVertex(i, vectors[i]);
                }
                shape.set(vectors);
                fixtureDef.shape = shape;
                body.createFixture(fixtureDef);
            }
            for (CircleShape shape : circleShapes) {
                fixtureDef.shape = shape;
                body.createFixture(fixtureDef);
            }
        }
    }
}
And I would call it like this:
physics.addFixtures(body, fixtureDef, "ship/collision_shapes.json", shipType, width, height);
Then for collision detection:
public ContactListener shipsExplode() {
    ContactListener listener = new ContactListener() {
        @Override
        public void beginContact(Contact contact) {
            Body bodyA = contact.getFixtureA().getBody();
            Body bodyB = contact.getFixtureB().getBody();
            for (Ship ship : ships) {
                if (ship.body == bodyA) {
                    ship.setExplode();
                }
                if (ship.body == bodyB) {
                    ship.setExplode();
                }
            }
        }

        // ContactListener is an interface, so the remaining callbacks need (empty) implementations
        @Override
        public void endContact(Contact contact) { }

        @Override
        public void preSolve(Contact contact, Manifold oldManifold) { }

        @Override
        public void postSolve(Contact contact, ContactImpulse impulse) { }
    };
    return listener;
}
Then you would add the listener to the world:
world.setContactListener(physics.shipsExplode());
My sprites' width and height were small, since you're dealing in meters, not pixels, once you start using Box2D. One sprite was 0.8f high and 1.2f wide, for example. If you made the sprites' width and height in pixels, the physics engine would hit the speed limits that are built in: http://www.iforce2d.net/b2dtut/gotchas
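A common way to keep bodies in meters while drawing in pixels is a pixels-per-meter constant used when syncing sprites to bodies (a minimal sketch; the value 100 is an arbitrary assumption):

// Pixels-per-meter conversion, so bodies stay small (meters) while sprites stay in pixels
static final float PPM = 100f;

// When syncing a sprite to its body each frame:
sprite.setPosition(
        body.getPosition().x * PPM - sprite.getWidth() / 2,
        body.getPosition().y * PPM - sprite.getHeight() / 2);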
Don't know if this still matters to you guys, but I built a small Python script that returns the pixel positions of the points on the edges of an image. There is room to improve the script, but for me, for now, it's OK...
from PIL import Image, ImageFilter

filename = "dship1"
image = Image.open(filename + ".png").convert("RGBA")  # make sure there is an alpha channel
image = image.filter(ImageFilter.FIND_EDGES)
image.save(filename + "_edge.png")

cols = image.width
points = []

# getdata() yields pixels row by row, so the flat index maps directly to (x, y)
for i, pixel in enumerate(image.getdata()):
    # after FIND_EDGES, a non-zero alpha marks an edge of the opaque region
    if pixel[3] > 0:
        y, x = divmod(i, cols)
        points.append((x, y))

with open(filename + "_points.txt", "w") as nf:
    nf.write(',\n'.join('%s, %s' % p for p in points))
In case of updates you can find them here: export positions
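On the libGDX side, the exported file could be read back in like this (a sketch only; note the script emits points in scan order, not contour order, so they would still need ordering/simplification before serving as polygon vertices):

import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.utils.Array;

// Parse the ",\n"-separated "x, y" pairs written by the script
Array<Vector2> edgePoints = new Array<Vector2>();
for (String line : Gdx.files.internal("dship1_points.txt").readString().split(",\n")) {
    String[] xy = line.split(", ");
    edgePoints.add(new Vector2(Float.parseFloat(xy[0]), Float.parseFloat(xy[1])));
}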
I'm trying to make a time-lapse geographic Twitter visualization inspired by Jer Thorp's "Just Landed". I am using the latest version of Processing.
I'm using an SVG image for my map because I want to be able to zoom into the map at an arbitrary angle to focus on certain localities, then show the Twitter connections on a global scale. I'm running into several problems, the first of which is a flickering of the countries' path boundaries when I rotate my map. Here's a screenshot of my problem:
Here is my code which is causing the problem:
import processing.opengl.*;
import java.awt.event.*;
PShape map;
PShape test1;
PShape test2;
//camera position/movement initialization
PVector position = new PVector(450, 450);
PVector movement = new PVector();
PVector rotation = new PVector();
PVector velocity = new PVector();
float rotationSpeed = 0.035;
float panningsSpeed = 0.035;
float movementSpeed = 0.05;
float scaleSpeed = 0.25;
float fScale = 2;
void setup() {
    map = loadShape("blank_merc.svg"); // swap out for whatever file
    size(900, 900, OPENGL);
    smooth();
    fill(150, 200, 250);
    addMouseWheelListener(new MouseWheelListener() {
        public void mouseWheelMoved(MouseWheelEvent mwe) {
            mouseWheel(mwe.getWheelRotation());
        }
    });
}

void draw() {
    if (mousePressed) {
        if (mouseButton == LEFT) velocity.add((pmouseY - mouseY) * 0.01, (mouseX - pmouseX) * 0.01, 0);
        if (mouseButton == RIGHT) movement.add((mouseX - pmouseX) * movementSpeed, (mouseY - pmouseY) * movementSpeed, 0);
    }
    // TODO: implement reset functionality: DONE
    if (keyPressed) {
        if (key == 'r') {
            position.set(450, 450);
            rotation.sub(rotation.get());
            velocity.sub(velocity.get());
            movement.sub(movement.get());
        }
    }
    velocity.mult(0.95);
    rotation.add(velocity);
    movement.mult(0.95);
    position.add(movement);
    background(255);
    //lights();
    translate(position.x, position.y, position.z);
    rotateX(rotation.x * rotationSpeed);
    rotateY(rotation.y * rotationSpeed);
    scale(fScale);
    shape(map, -250, -250, 1000, 1000);
}

void mouseWheel(int delta) {
    fScale -= delta * scaleSpeed;
    fScale = max(0.5, fScale);
}
I was told it might be z-fighting among the paths, and I think this might be the problem, because the flickering is worse when the map is mid-rotation, especially at angles that are not orthogonal to the viewing plane. I tried to remedy this by "translating" a PShape child of the file a small amount in the Z direction with the test1.translate(0, 0, 0.1); command, but I get an error telling me: illegal argument exception: cannot use translate(x,y,z) on a PMatrix2D.
I've also had trouble testing my code with other SVG map files and generally getting the SVG to look like what I think it should look like. There are a bunch of cities and other weird markers on my SVG map, even when I download the completely "blank" SVG world map mercator projection from Wikimedia Commons. These city marker/region attributes show up in the Processing render but don't show up in the browser view. I'm trying to figure out how to "clean" my SVG file up in Inkscape, but I'm unsure what specifically to look for.
For example, I've run it with this map: http://commons.wikimedia.org/wiki/File:Mercator_Projection.svg
but I have no use for the dots and lines, and I'm having to resort to manually selecting and deleting the paths, which is not a very thorough process.
And when I use this map, which is supposed to be the "blank" version of the above without all the markers, I see not only a bunch of markers (presumably hidden with some style attribute in the SVG XML?) but also weird vertical banding, and my camera controls are super slow. The applet behaves as if the file were way too large, but it's only about 2MB. Here's a screenshot of what this looks like:
I'm really just looking for a way to get a "clean" SVG world map into Processing so I can spin it around and zoom in on it, and if I can get that to work I can start the Arc-Drawing part. I would sincerely appreciate any assistance anyone could give me.
Thanks
If I understand your question correctly, the flickering is only on the edges, presumably where they overlap. That would suggest z-fighting to me. I usually find that a simple test outside your main sketch is best, as a quick way to see what's happening and how you might fix it.
If you make a simple SVG with two overlapping shapes, sharing just one edge, does the same thing happen?
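Something like this minimal sketch could serve as that isolated test (overlap_test.svg is an assumed two-shape SVG you would create yourself):

import processing.opengl.*;

PShape testShape;

void setup() {
    size(400, 400, OPENGL);
    testShape = loadShape("overlap_test.svg"); // two shapes sharing one edge
}

void draw() {
    background(255);
    translate(width / 2, height / 2);
    rotateY(frameCount * 0.01); // slowly rotate to provoke the flicker
    shape(testShape, -100, -100, 200, 200);
}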
If so, I think the easiest solution (though not that easy) would be:
Select all the countries in Illustrator
Use Object > Transform > Scale... and shrink by a tiny amount
Then share your fixed map for everyone else!
I'm new to XNA and would like to develop a lightweight 2D engine on top of it, with the entities organized into a parent-child hierarchy. I'm thinking of matrices when drawing children, because their position, rotation and scale depend on their parent.
If I use SpriteBatch.Begin(), my rectangles are drawn on the screen, but when I change it to:
this.DrawingMatrix = Matrix.Identity;
this.SpriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullClockwise, null, this.DrawingMatrix);
nothing is drawn anymore. I even tried new Matrix() or Matrix.CreateTranslation(0, 0, 0) for DrawingMatrix.
My first question is: why doesn't it work? I'm not working with any camera or viewport.
Secondly, before drawing an entity, I call PreDraw to transform the matrix (I will then reset it to the original state in PostDraw):
protected virtual void PreDraw(Engine pEngine)
{
    pEngine.DrawingMatrix *=
        Matrix.CreateTranslation(this.X, this.Y, 0) *
        Matrix.CreateScale(this.ScaleX, this.ScaleY, 1) *
        Matrix.CreateRotationZ(this.Rotation);
}
Please verify the correctness of the above code. I also need to scale not at the origin, but at ScaleCenterX and ScaleCenterY; how can I achieve this?
ADDED: Here is an example of my engine's draw process:
Call this code:
this.DrawingMatrix = Matrix.CreateTranslation(0, 0, 0);
this.SpriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullClockwise, null, this.DrawingMatrix);
Call PreDraw(), which is:
protected virtual void PreDraw(Engine pEngine)
{
    pEngine.DrawingMatrix *=
        Matrix.CreateTranslation(this.X, this.Y, 0) *
        Matrix.CreateScale(this.ScaleX, this.ScaleY, 1) *
        Matrix.CreateRotationZ(this.Rotation);
}
Call Draw(), for example, in my Rect class:
protected override void Draw(Engine pEngine)
{
pEngine.SpriteBatch.Draw(pEngine.RectangleTexture, new Rectangle(0, 0, (int)this.Width, (int)this.Height), new Rectangle(0, 0, 1, 1), this.Color);
}
If I replace the above Begin code with this.SpriteBatch.Begin(), the rectangle is drawn correctly, so I guess it is because of the matrix.
The first issue is a simple bug: the default for SpriteBatch is CullCounterClockwise, but you have specified CullClockwise, causing all your sprites to get back-face culled. You can pass null if you just want to use the default render states - you don't need to specify them explicitly.
(You would need to change the cull mode if you used a negative scale.)
To answer your second question: you need to translate "back" to place the scaling origin (your ScaleCenterX and ScaleCenterY) at the world origin (0,0). Transformations always happen around (0,0). So the usual order is: translate the sprite origin back to the world origin, scale, rotate, then translate to place the sprite origin at the desired world position. In terms of your PreDraw code, that would be something like Matrix.CreateTranslation(-this.ScaleCenterX, -this.ScaleCenterY, 0) * Matrix.CreateScale(this.ScaleX, this.ScaleY, 1) * Matrix.CreateRotationZ(this.Rotation) * Matrix.CreateTranslation(this.X, this.Y, 0).
Also, I hope that your PostDraw is not applying the reverse transformations (you made it sound like it does). That is very likely to cause precision problems. You should save and restore the matrix instead.