pygame rotation around center point

I've read several other posts about rotating an image around the center point, but I've yet to figure it out. I even copy-pasted one of the solutions posted on another SO question and it didn't work.
This is my code:
def rotate(self, angle):
    self.old_center = self.surface.get_rect().center
    self.surface = pygame.transform.rotate(self.surface, angle)
    self.surface.get_rect(center = self.old_center)
It's inside a class which contains the surface.
When I call this method the image rotates, but it also translates and gets distorted.

You are not assigning the new rect; you should do something like:
self.rect = self.surface.get_rect(center = self.old_center)
And you should always keep the original surface and rotate from the original, so that distortion doesn't accumulate when rotating multiple times.
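For example, a minimal sketch of that idea, assuming the class also keeps the unrotated surface as self.original and its on-screen rect as self.rect (both names are illustrative):
def rotate(self, angle):
    # angle is the total rotation, not an increment
    # rotate a copy of the pristine original so errors don't accumulate
    self.surface = pygame.transform.rotate(self.original, angle)
    # re-center the new, larger rect on the old center
    self.rect = self.surface.get_rect(center=self.rect.center)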
Update
If you don't want to keep track of the rect object, you can recompute it every time you blit, as long as you keep the center coordinates.
Example:
rect = object.surface.get_rect()
rect.center = object.center
display.blit(object.surface, rect.topleft)

Related

Change facing direction of CSS3DObject

I have a 3D scene with a bunch of CSS objects that I want to rotate so that they are all pointing towards a point in space.
My CSS objects are simple rectangles that are a lot wider than they are high:
var element = document.createElement('div');
element.innerHTML = "test";
element.style.width = "75px";
element.style.height = "10px";
var object = new THREE.CSS3DObject(element);
object.position.x = x;
object.position.y = y;
object.position.z = z;
By default, the created objects are defined as if they are "facing" the z-axis. This means that if I use the lookAt() function, the objects will rotate so that the "test" text faces the point.
My problem is that I would rather rotate so that the "right edge" of the div is pointing towards the desired point. I've tried fiddling with the up-vector, but I feel like that won't work because I still want the up-vector to point up. I also tried rotating the object Math.PI/2 along the y-axis first, but lookAt() seems to ignore any previously set rotation.
It seems like I need to redefine the object's local z-vector instead, so that it runs along the global x-vector. That way the object's "looking at" direction would be to the right in the scene, and then lookAt() would orient it properly.
Sorry for probably mangling terminology, newbie 3D programmer here.
Object.lookAt( point ) will orient the object so that the object's internal positive z-axis points in the direction of the desired point.
If you want the object's internal positive x-axis to point in the direction of the desired point, you can use this pattern:
object.lookAt( point );
object.rotateY( - Math.PI / 2 );
three.js r.84

Make a rigged character's head rotate in sync with a quaternion in Unity

I have a face detection app, and I want a character's head to rotate according to the detected face's pose.
I've managed to get the rotation of the detected face in the form of a quaternion, but I'm unsure how I'm supposed to translate the data from the quaternion into 3D points for the reference points of the rigged character, which I believe will decide the rotation.
Let's say I have this character: http://i.imgur.com/3pcRoYx.png
One solution could be to just cut off the head and make it a separate object, and then set the rotation of that object according to the quaternion, but I don't want that; I want an intact character.
Is it possible to move the reference points in the head with the data from a quaternion? Or have I gotten it wrong how rigged characters turn their heads? I haven't animated before.
You can apply rotation to a single bone. Get that bone in your script. Keep a variable in your class to store the last quaternion, and every update compare the new one against it and rotate by the difference. I don't have the actual editor here, but try this pseudocode.
class NeckRotator : MonoBehaviour {
    public GameObject Neck;
    private Quaternion LastFace;

    void Start() {
        LastFace = Neck.transform.rotation;
    }

    void Update() {
        var DetectedFace = ... // Whatever you do to get this
        // Rotation that takes the last applied pose to the detected pose
        var Change = DetectedFace * Quaternion.Inverse(LastFace);
        Neck.transform.rotation = Change * Neck.transform.rotation;
        LastFace = Neck.transform.rotation;
    }
}
I've done something like that before to rotate the neck of an NPC to look at a player. It should work for your case as well.

Snap SVG animating existing matrix

I'm using Snap.svg to create a simple card game. I load cards drawn from a file and move them to a specific location using a matrix translate.
A card's SVG element now looks something like this:
<g id="card11" inkscape:label="#g3908" transform="matrix(1.5621,0,0,1.5621,625.1085,529.3716)" cardposition="4" style="visibility: visible;" class="card inhand hand-4 ofplayer1">...</g>
However, now I'm trying to animate them to a specific position (same for all cards) using this:
function animateTo(object, x, y, scaleX, scaleY, time) {
    var matrix = object.transform().localMatrix;
    var added = new Snap.Matrix();
    added.translate(x, y);
    added.scale(scaleX, scaleY);
    added.add(matrix);
    object.animate({transform: added}, time);
}
or something like this:
function animateTo(object, x, y, scaleX, scaleY, time) {
    object.animate({transform: "t100,100"}, time); // this one I tried, to understand how Snap animations work
}
And here is my problem: when it animates, it always first discards the object's existing transform matrix and starts animating from its original location with a blank matrix (where the object would be without the transform attribute).
For example, when I tried:
var matrix = object.transform().localMatrix;
object.animate({transform: matrix}, time);
I expected it to do nothing, but my object blinks to the top-left corner (blank matrix) and then animates to the position where it should stay.
What am I doing wrong? I need to animate the object from one matrix state to another (ideally the same one for every object). Is that somehow possible? Can I specify the starting transform attribute somehow?
Thanks.
Following Ian's advice, I've used toTransformString:
object.animate({transform: matrix.toTransformString()}, time);
but of course, I had to use it in the previous transformations too, using
object.attr({transform: added.toTransformString()}); // this
// object.transform(added); // instead of this
However, getting the local matrix still works as expected. Animation now works, and I can use matrix.translate() to move the object relatively, or object.animate({transform: "t100,100"}, time).
I can also modify the a, b, c, d, e, f attributes of the matrix directly (or use transform: "T100,100").
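Putting it together, the animateTo from the question would then look roughly like this (a sketch under the same assumption as above, i.e. earlier transforms were applied with toTransformString):
function animateTo(object, x, y, scaleX, scaleY, time) {
    // start from the element's current local matrix
    var matrix = object.transform().localMatrix;
    var added = new Snap.Matrix();
    added.translate(x, y);
    added.scale(scaleX, scaleY);
    added.add(matrix);
    // animate the string form so Snap interpolates from the current transform
    object.animate({transform: added.toTransformString()}, time);
}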
It works!
Thanks!

get a 2D bounding box from a 3D object

I have a scene filled with around a hundred oblong, asteroid-shaped objects. I want to place a text label under each one so that the label is visible from any camera angle. I billboard the text so that it always faces the camera.
At first, everything looks great when placing the text below the 3D object using .translateY. However, as you start moving around the scene, the labels are no longer 'below' the objects, depending on your camera position. This is especially true when you orient the view using TrackballControls.
How can I place the text 'below' the object no matter the orientation? Perhaps if I create a 2D bounding box around each object in relation to the camera on each frame, I could then place the text label right below that 2D box.
I'm also concerned that calculating 2D bounding boxes for a hundred 3D objects every frame could get expensive. Thoughts?
Screenshots: at first, the text labels appear correctly, translated -y below the object; as you rotate the camera, the labels turn sideways; flipping the camera all the way around shows them upside down.
My goal is to have the labels below the objects no matter the camera orientation.
Did you try adding the text labels to the object?
object.add(Label) instead of scene.add(Label)
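For example, a rough sketch of that idea with a sprite-based label (the canvas text setup is just one illustrative way to get text onto a sprite; the label string is made up):
// draw the label text onto a small canvas and use it as a texture
var canvas = document.createElement('canvas');
var context = canvas.getContext('2d');
context.font = '24px sans-serif';
context.fillText('asteroid 42', 10, 30);
var texture = new THREE.Texture(canvas);
texture.needsUpdate = true;
// sprites always face the camera, so the text stays readable
var label = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
label.position.set(0, -2, 0); // a little below the object's origin, in its local space
object.add(label);
Note that parenting only makes the label follow the object; its position still rotates with the parent, while only the sprite's orientation billboards.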
I have a demo site here that might help give you source to look at:
https://dl.dropboxusercontent.com/u/31495717/cubemaker/index.html
This site places textual DOM elements, styled with CSS, at a constant screen-coordinate distance from a 3D object within the render loop, whenever the mouse pointer is moved over the 3D object.
From the source:
var id_label = document.createElement('div');
id_label.id = INTERSECTED.name;
id_label.style.position = 'absolute';
id_label.style.top = '-10000px';
id_label.style.left = '-10000px';
id_label.innerHTML = '<span class="particle_label">' + INTERSECTED.name + '<br><span class="particle_sublabel">' + INTERSECTED.subname + '</span></span>';
container.appendChild(id_label);
var id_label_rect = id_label.getBoundingClientRect();
id_label.style.top = (screen_object_center.y - 0.85 * (id_label_rect.height / 2)) + 'px';
if (mouse.x < 0) {
    id_label.style.left = (screen_object_center.x - horizontal_fudge * (screen_object_edge.x - screen_object_center.x)) + 'px';
} else {
    id_label.style.left = (screen_object_center.x + horizontal_fudge * (screen_object_edge.x - screen_object_center.x) - id_label_rect.width) + 'px';
    id_label.style.textAlign = 'right';
}
The DOM element is drawn offscreen and then repositioned based on attributes of its bounding box and the world coordinates of the 3D element it is associated with. When the mouse pointer is moved outside the 3D element bounds, the text label is removed from the DOM.
Since you are always showing your labels, you might modify this to draw the element once in an initialization step, and simply change the top and left style attributes within the render loop.
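A rough sketch of that per-frame step, assuming a full-window renderer, that label is the DOM element created during initialization, and yOffset is an illustrative pixel offset below the object:
// project the object's world position to normalized device coordinates
var pos = new THREE.Vector3().setFromMatrixPosition(object.matrixWorld);
pos.project(camera);
// convert NDC (-1..1) to pixel coordinates
var halfW = window.innerWidth / 2;
var halfH = window.innerHeight / 2;
var x = pos.x * halfW + halfW;
var y = -pos.y * halfH + halfH;
// place the label centered horizontally, a fixed offset below the projected center
label.style.left = (x - label.offsetWidth / 2) + 'px';
label.style.top = (y + yOffset) + 'px';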

Performance problems with scenekit

I've got a two-dimensional array of values that I want to visualize in 3D, and I'm using SceneKit under OS X for it. I've done it in a clumsy manner by using each column as a point on the X axis, each row as a point on the Z axis, and each value as a normalized point on the Y axis, placing a sphere at the vector defined by each data point. It works, but it doesn't look too good.
I've also done this by building a mesh of lines based on @Matthew's function in Drawing a line between two points using SceneKit (the answer he posted, not the original question). For each point I use his function to draw two lines: one between my current point and the next point to the right, and another between my current point and the next point towards the front (except when there is no additional column/row, of course).
Using the second method, my results look much better; however, the performance is quite hideous! It takes quite a long time to complete the initial rendering, and if I use a trackpad/mouse to rotate or translate the scene, I might as well go get a cup of coffee until my system is usable again (and that is not much hyperbole). Using the sphere method, things render and update very quickly.
Any advice on how to improve the performance when using the lines method? (Note that I am not trying to add both lines and spheres at the same time.) Code-wise, the only difference between the approaches is which of the following methods gets called (and that, for each point, addPixelAt... is called once, but addLineAt... is called twice for most points).
- (SCNNode *)addPixelAtRow:(CGFloat)row Column:(CGFloat)column size:(CGFloat)size color:(NSColor *)color
{
    CGFloat radius = 0.5;
    SCNSphere *ball = [SCNSphere sphereWithRadius:radius*1.5];
    SCNMaterial *material = [SCNMaterial material];
    [[material diffuse] setContents:color];
    [[material specular] setContents:color];
    [ball setMaterials:@[material]];
    SCNNode *ballNode = [SCNNode nodeWithGeometry:ball];
    [ballNode setPosition:SCNVector3Make(column, size, row)];
    [_baseNode addChildNode:ballNode];
    return ballNode;
}
- (SCNNode *)addLineFromRow:(CGFloat)row1 Column:(CGFloat)column1 size:(CGFloat)size1
                     toRow2:(CGFloat)row2 Column2:(CGFloat)column2 size2:(CGFloat)size2 color:(NSColor *)color
{
    SCNVector3 positions[] = {
        SCNVector3Make(column1, size1, row1),
        SCNVector3Make(column2, size2, row2)
    };
    int indices[] = {0, 1};
    SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:positions count:2];
    NSData *indexData = [NSData dataWithBytes:indices length:sizeof(indices)];
    SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:indexData
                                                                primitiveType:SCNGeometryPrimitiveTypeLine
                                                               primitiveCount:1
                                                                bytesPerIndex:sizeof(int)];
    SCNGeometry *line = [SCNGeometry geometryWithSources:@[vertexSource] elements:@[element]];
    SCNMaterial *material = [SCNMaterial material];
    [[material diffuse] setContents:color];
    [[material specular] setContents:color];
    [line setMaterials:@[material]];
    SCNNode *lineNode = [SCNNode nodeWithGeometry:line];
    [_baseNode addChildNode:lineNode];
    return lineNode;
}
From the data that you've shown in your question I would say that your main problem is the number of draw calls. Yours is in the tens of thousands, which is way too many; it should probably be a lot closer to ~100.
The reason why you have so many draw calls is that you have so many distinct objects in your scene (each line). The better (but more advanced) solution would probably be to generate a single geometry element for the entire mesh that consists of all the lines. If you want to achieve the same rendering with that mesh (with a color from cold to warm based on the height), then you could do that in a shader modifier.
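For illustration only, a rough sketch of that single-geometry idea, reusing the same calls as the addLine... method above (allPositions, allIndices, vertexCount and indexCount are assumed to be gathered over the whole grid beforehand):
// allPositions: SCNVector3[vertexCount], one entry per grid point
// allIndices:   int[indexCount], two indices per line segment
SCNGeometrySource *vertexSource = [SCNGeometrySource geometrySourceWithVertices:allPositions count:vertexCount];
NSData *indexData = [NSData dataWithBytes:allIndices length:sizeof(int) * indexCount];
SCNGeometryElement *element = [SCNGeometryElement geometryElementWithData:indexData
                                                            primitiveType:SCNGeometryPrimitiveTypeLine
                                                           primitiveCount:indexCount / 2
                                                            bytesPerIndex:sizeof(int)];
SCNGeometry *allLines = [SCNGeometry geometryWithSources:@[vertexSource] elements:@[element]];
SCNNode *linesNode = [SCNNode nodeWithGeometry:allLines];
[_baseNode addChildNode:linesNode];
// one node, one geometry, one draw call for all the lines in the grid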
However, in your case I would start by flattening all the lines, since that is the smallest code change and should still give a significant performance improvement.
(Optimizing performance is always an iterative process. Once you fix one thing, there will be another thing that is the most expensive operation. Without your code I can only say what would help with the current performance problem.)
Create an empty node (without adding it to your scene) and generate all the lines, adding them to this node. Then create a flattened copy of that node by calling flattenedClone on the node that contains all the lines:
SCNNode *nodeWithAllTheLines = [SCNNode node];
// create all the lines and add them to it...
SCNNode *flattenedNode = [nodeWithAllTheLines flattenedClone];
[_baseNode addChildNode:flattenedNode];
When you do this you should see a significant drop in the number of draw calls (the number after the diamond in the statistics) and hopefully a big increase in performance.
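If the statistics overlay is not already visible, it can be enabled on the SceneKit view (assuming an SCNView reference named scnView):
scnView.showsStatistics = YES;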
