I have an actor that I need to rotate two times, each time around a different origin. But it seems the actor just saves the origin set with setOrigin() and the rotation set with setRotation() and calculates it when drawing. So if I simply set these values twice, the second call just overwrites the first and nothing is accumulated when drawing. Is there any way to chain multiple rotations around different origins?
Yes, you are right: you will not see the result until you draw your actor, because setting the rotation doesn't transform the actor's coordinates or anything else. The rotation is just a plain value that is used only when the actor draws its graphics or when someone queries, for example, the actor's bounding box. So the rotation transformation is recomputed every time someone needs it.
Back to your question: if you want to apply several transformations to your actor, you have to accumulate them somehow and then change the actor's state only once.
As a solution, have a look at the Group#applyTransform() method, which takes a Matrix4 in which you can flexibly combine all your transformations. You will have to place your actor inside a Group object, which is a slight drawback, but in return you get access to matrix transformations that are not available on a plain Actor.
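To make the idea of accumulating transformations concrete, here is a minimal, framework-agnostic sketch in TypeScript (all names are mine, not libGDX's): two rotations around different pivots are composed into one affine matrix, which is exactly the kind of combined transform you would then load into the Matrix4 you pass to applyTransform().

// 3x3 affine matrix, row-major: [a, b, tx, c, d, ty, 0, 0, 1]
type Mat3 = [number, number, number, number, number, number, number, number, number];

// Standard matrix product; with column vectors, multiply(a, b) applies b first, then a.
function multiply(a: Mat3, b: Mat3): Mat3 {
  const r = [0, 0, 0, 0, 0, 0, 0, 0, 0] as Mat3;
  for (let row = 0; row < 3; row++)
    for (let col = 0; col < 3; col++)
      r[row * 3 + col] = a[row * 3] * b[col] + a[row * 3 + 1] * b[3 + col] + a[row * 3 + 2] * b[6 + col];
  return r;
}

// Rotation by deg degrees around the pivot (px, py):
// translate(px, py) * rotate(deg) * translate(-px, -py), pre-multiplied into one matrix.
function rotationAbout(deg: number, px: number, py: number): Mat3 {
  const rad = (deg * Math.PI) / 180;
  const c = Math.cos(rad), s = Math.sin(rad);
  return [c, -s, px - c * px + s * py,
          s,  c, py - s * px - c * py,
          0,  0, 1];
}

// Chain: first rotate 45 degrees around (10, 10), then 30 degrees around (50, 50).
const combined = multiply(rotationAbout(30, 50, 50), rotationAbout(45, 10, 10));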
Hope this helps, good luck.
I have a game that requires the player to roll two dice. As this is a multiplayer game, the way I currently do this is with 6 animations (one for each outcome). When the player clicks a button, it sends a request to my server code. My server code determines each die's outcome and sends the results to the client. The client then plays the corresponding animations.
This works ok, but has some issues. For instance, if the server sends back two of the same values (two 6's, for example) then the animations don't work correctly. As both animations are the same, they overlay each other, and it looks like only one die was rolled.
Is there a better way to do this? Instead of animations, using "real" dice? If that's the case, I always need to be sure to "pre-determine" the outcome of the dice roll, on the server. I also need to make sure the dice don't fall off the table or jostle any of the other player pieces on the board.
Thanks for any ideas.
The server only needs to care about the resulting value, not about running physics calculations.
Set up 12 different rolling animations:
Six for the first die
Six for the second die
Each one should always end with the same modelled face pointing upwards (the starting position isn't relevant, only the ending position). For the steps below you'll probably want to adjust the model's UV coordinates to use a very tall or very wide texture (or just a slice of a square one): not a typical unwrapped-cube arrangement, but all six faces in a line, 1-2-3-4-5-6.
The next step is picking a random animation to play. You've already got code to run a given animation; just set it to pick randomly instead of from the die-roll value sent by the server:
int animNum = Random.Range(0, 6); // Unity's int overload returns 0-5 inclusive
Finally, the fun bit: adjusting the texture so that the desired face shows when the animation is done. I'm going to assume that you arrange your faces along the top edge of your square texture. This is a job for Material.SetTextureOffset():
int showFace = Random.Range(0, 6); // this value should come from the server
die.GetComponent<Renderer>().material.SetTextureOffset("_MainTex", new Vector2(showFace / 6f, 0f));
This sets the texture offset such that the desired face shows on top; you'll even be able to see it changing in the inspector. It works because the UVs are arranged so that each face uses the next chunk of the texture, and because texture coordinates wrap around past the edge (unless the texture is set to Clamp in its import settings, which you don't want here).
Note that accessing .material like this causes a new material instance to be instantiated (which is not great for performance). If you want to avoid that, use a MaterialPropertyBlock instead.
You could simulate the physics on the server, keep track of the positions and orientations of the dice for the duration of the animation, and then send the data over to the client. I understand it's a lot of data for something so simple, but that is one way to get rolls that appear realistic and stay synced between all clients.
If only Unity's physics were deterministic, that would be a whole lot easier.
Hi!
I am working with objects that have huge vertex counts. I am able to show lots of models because I have split them into smaller parts (under 65k vertices each), and I am using three.js cameras. I want to increase performance with a priority queue: while the user is moving the camera, show only the top 10 models, then show the rest once the movement stops. That part is not that hard, but I don't want to submit models for rendering when they are behind another object. Maybe I could send out some rays from the camera's viewpoint (checking for bounding-box hits) and build the priority queue from the resulting hit list.
What do you think?
Also, how can I detect on the fly whether I can load the next model or not?
Option A: Occlusion culling; you will need to find a library for this.
Option B: Use an AABB-plane test between the camera frustum's planes and the object's bounding box. This tells you whether an object is in the camera's field of view (not necessarily whether it is hidden behind another object, as such a test is impossible here; this kind of culling is most likely already done to a degree by WebGL).
Implementation:
Google it; three.js supports this directly (see the sketch below).
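A minimal frustum-visibility check with the three.js API (method names per recent versions; older releases call the first method setFromMatrix()):

import * as THREE from "three";

const frustum = new THREE.Frustum();
const projScreenMatrix = new THREE.Matrix4();

function isInView(camera: THREE.Camera, object: THREE.Object3D): boolean {
  // Combine the camera's projection and inverse world matrices, then extract the six planes.
  projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
  frustum.setFromProjectionMatrix(projScreenMatrix);
  // Test the object's world-space bounding box against the frustum.
  const box = new THREE.Box3().setFromObject(object);
  return frustum.intersectsBox(box);
}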
Option C: Use a maximum object render limit, prioritized by distance from the camera and by object size. E.g. calculate which objects are visible (Option B), then prioritize the closest and biggest ones and disable the rest.
pseudo-code:
if (object is in frustum) {
    var priority = (bounding.max - bounding.min) / distanceToCamera;
}
Make sure your shaders are doing only one pass, as a second pass will roughly double the calculation time (depending on the situation).
Option D: Raycast to the eight corners of the bounding box; if all eight rays fail, don't render the object. This is pretty accurate, but by no means perfect.
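A sketch of Option D in TypeScript with three.js. It assumes occluders contains the candidate object itself plus everything that might block it; for Groups you would compare against the hit object's ancestors rather than with ===:

import * as THREE from "three";

const raycaster = new THREE.Raycaster();

function isLikelyVisible(camera: THREE.Camera, object: THREE.Object3D, occluders: THREE.Object3D[]): boolean {
  const box = new THREE.Box3().setFromObject(object); // world-space AABB
  const origin = camera.getWorldPosition(new THREE.Vector3());
  for (const x of [box.min.x, box.max.x])
    for (const y of [box.min.y, box.max.y])
      for (const z of [box.min.z, box.max.z]) {
        const corner = new THREE.Vector3(x, y, z);
        raycaster.set(origin, corner.sub(origin).normalize());
        const hits = raycaster.intersectObjects(occluders, true);
        // If the first thing this ray hits is our object, that corner is unobstructed.
        if (hits.length > 0 && hits[0].object === object) return true;
      }
  return false; // all eight corner rays were blocked
}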
Option A will be the best for sure. Option C is great if you don't mind that small objects far away don't get rendered. Option D works well with objects that have a lot of verts; you may want to raycast more points of the object depending on the situation. Option B probably won't be useful for your scenario on its own, but it is part of Option C and of other optimization methods. Overall, there has never been an extremely reliable and optimal way to tell whether something is behind something else.
I was looking at http://threejs.org/examples/webgl_nearestneighbour.html and had a few questions come up. I see that they use the kd-tree to store all the particle positions and then have a function to determine the nearest particle and color it. Let's say that you have a canvas with around 100 buffered geometries of around 72,000 vertices per geometry. The only way I know to do this is to get the positions from the buffered geometries and then put them into the kd-tree to determine the nearest vertex and go from there. This sounds very expensive.
What other way is there to return the objects that are near the camera? Something like how THREE.LOD does it? http://threejs.org/examples/#webgl_lod It has the ability to see how far away an object is and render different levels of detail depending on the settings you provide.
Define "expensive". The point of the kdtree is to find nearest neighbour elements quickly, its primary focus is not on saving memory (Although it does everything inplace on a typed array, it's quit cheap in terms of memory already). If you need to save memory you maybe have to find another way.
Still, a typed array of length 21,600,000 is indeed a bit long. I highly doubt you have to have every single vertex in there. Why not use one reference position per geometry part, and, if you need to get the vertices associated with that point, a dictionary? Then you can call myGeometryVertices[geometryReferencePoint].
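A sketch of that idea (the helper and names are illustrative, not from the example code): index each part by its bounding-box center, feed only those centers to the kd-tree, and keep a dictionary from center to the part's full vertex data.

import * as THREE from "three";

function buildIndex(parts: THREE.Mesh[]) {
  const vertexLookup = new Map<string, THREE.BufferAttribute>();
  const referencePoints: number[] = []; // flat x,y,z triples to feed the kd-tree
  for (const part of parts) {
    part.geometry.computeBoundingBox();
    const center = part.geometry.boundingBox!.getCenter(new THREE.Vector3());
    referencePoints.push(center.x, center.y, center.z);
    // Map the reference point back to the part's full position attribute (~72k vertices).
    vertexLookup.set(`${center.x},${center.y},${center.z}`, part.geometry.getAttribute("position") as THREE.BufferAttribute);
  }
  // The kd-tree now only has to index ~100 points instead of ~7.2 million.
  return { referencePoints, vertexLookup };
}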
THREE.LOD switches levels based on distance from the camera. If you have a (few) hundred objects that might work well; if you have hundreds of thousands or even millions of positions you'll run into trouble. Also, raycast-based approaches only work on meshes; you can't raycast e.g. a particle.
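For reference, basic THREE.LOD usage looks like this (the geometry and the 50-unit threshold are made-up values):

import * as THREE from "three";

const scene = new THREE.Scene();
const lod = new THREE.LOD();
// High-detail level, used while the camera is closer than 50 units.
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 64, 64), new THREE.MeshNormalMaterial()), 0);
// Low-detail level, used beyond 50 units.
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 8, 8), new THREE.MeshNormalMaterial()), 50);
scene.add(lod);
// WebGLRenderer calls lod.update(camera) each frame and picks a level by camera distance.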
Really, just build your own logic with them. Neither of the two provides a prebuilt, perfect-for-all-cases solution.
Let's say I want to create a simple physics object with the shape of a matryoshka, or a banal snowman. As I see it, I have two options: 1. create two circle (or maybe custom) bodies and connect them with a weld joint, or 2. create one body with two circle (or maybe custom) shapes in it.
So the question is: which is more expensive for the CPU, bodies connected with joints or complex-shaped bodies? With one such object I may not feel any difference in performance, but what if I have many objects of that type?
I know that joints are expensive, but maybe custom-shaped bodies are even more expensive?
I'm working with Box2dFlash.
Since the question is about CPU use: joints use more CPU than shapes alone with no joint. Circle shapes can be more efficient than polygons in many cases, but not all.
For CPU optimization, use as few bodies and as simple polygons as possible. For every normal defined in a polygon, a calculation may need to be performed if an object overlaps with another; for circles, at most one such calculation is needed.
As an aside, unless you are already experiencing performance problems, you should not worry about whether your shapes give ideal CPU use. Instead, ask whether the simulation they create is the one you want. Box2D contains many, many special-case optimizations to make it run smoothly. You can also decrease its accuracy per tick by lowering the velocity and position iteration counts; this will have a far greater effect on efficiency than geometry, unless your geometry is extremely complex.
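Those iteration counts are the last two arguments of b2World.Step(); a minimal sketch, with the world declared by shape since the exact class comes from whichever Box2D port you use (Box2dFlash has the same signature):

declare const world: { Step(dt: number, velocityIterations: number, positionIterations: number): void };

// 8 velocity / 3 position iterations are the Box2D manual's suggested defaults;
// lowering them (e.g. to 4 and 2) trades solver accuracy for CPU time per tick.
world.Step(1 / 60, 8, 3);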
If you don't want the two sections of the snowman's body to move in relation to each other, then a single body is the way to go. You will ideally be defining the collision shape(s) manually anyway, so there is absolutely no gain to be had from using a weld.
Think of it like this: if you use a weld (or several), the collision shapes will be no less complicated than if you simply approximated the collision geometry for a single body; you would still need either multiple collision shapes or a single complex one regardless of your choice. The weld simply adds an extra step.
If you need the snowman's body to be dynamic in any way (to break apart or flex), then a weld or a regular joint is the way to go.
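For the rigid case, a sketch of the one-body, two-fixtures approach, written in TypeScript against declared stand-ins for the Box2dFlash API (the real names, e.g. CreateFixture2 and SetLocalPosition, may differ slightly in your port):

declare class b2Vec2 { constructor(x: number, y: number); }
declare class b2CircleShape {
  constructor(radius: number);
  SetLocalPosition(p: b2Vec2): void;
}
declare class b2Body { CreateFixture2(shape: b2CircleShape, density: number): void; }
declare function createDynamicBody(): b2Body; // stands in for world.CreateBody(bodyDef)

const body = createDynamicBody();

const bottom = new b2CircleShape(1.0); // big bottom ball, at the body origin
body.CreateFixture2(bottom, 1.0);

const top = new b2CircleShape(0.6); // smaller head, offset upwards
top.SetLocalPosition(new b2Vec2(0, 1.4));
body.CreateFixture2(top, 1.0);
// Both circles now move rigidly with the single body: no joint, no extra solver work.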
I'm adding an OpenGL renderer to my 2D game engine, and I want to know whether there is a way to apply an MVP matrix to only part of the vertices in a single draw call?
I'm planning to group draw calls by texture, so I'll pass a buffer with many vertices and texcoords, and I want to apply different rotation angles to different quads. Is there a way to accomplish this in the shader, or should I give up on the MVP matrix in the shader and do the same thing on the CPU?
EDIT: What about adding three float attributes (rotation and rot_center.xy) per vertex?
Which gives better performance:
(1) doing the rotation on the CPU,
(2) providing three more floats per vertex, or
(3) separating the draw calls?
Is there any other option?
Here is a possibility:
Do the rotation in the vertex shader. Pass the information needed to build the rotation matrix (the angle?) in as a vertex attribute.
Pass in a vertex attribute (a ubyte) that is effectively a per-vertex boolean flag; the rotation from #1 is executed only if the bool is set.
Not sure whether the above will work for you from a performance/storage perspective; see the sketch below.
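Here is what that could look like as a GLSL vertex shader, carried as a TypeScript string the way a WebGL/GLES-style engine would compile it. The attribute names are illustrative; note that an angle of 0 already yields the identity rotation, so the separate boolean flag can be dropped if you prefer (matching the three-float layout from the question's EDIT):

const vertexShaderSource = `
attribute vec2 a_position; // quad vertex position
attribute vec2 a_texcoord;
attribute float a_angle;   // per-vertex rotation angle, radians
attribute vec2 a_center;   // per-vertex rotation center

uniform mat4 u_mvp;
varying vec2 v_texcoord;

void main() {
  float c = cos(a_angle);
  float s = sin(a_angle);
  vec2 p = a_position - a_center;                    // move pivot to origin
  vec2 rotated = a_center + vec2(c * p.x - s * p.y,  // rotate...
                                 s * p.x + c * p.y); // ...and move back
  gl_Position = u_mvp * vec4(rotated, 0.0, 1.0);
  v_texcoord = a_texcoord;
}`;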
I think that, while grouping draw calls is a good thing for many different performance reasons, changing your code to satisfy a requirement as basic as rotation is not a good idea.
Draw-call batching is a good thing, but if you are forced to keep an additional attribute (you cannot do it with uniforms, because a uniform carries no information about the individual entity), it is not worth it.
An additional attribute means much more memory bandwidth usage, which is usually the main performance killer on today's systems.
Draw-call batching, on the other hand, is important but not always critical; it depends on many factors, such as:
the GPU driver's OpenGL optimizations
the GPU's tile configuration
the number of shapes/draw calls in question (if you have 20 quads on the screen, why bother with batching? :) )
In other words, it is often much more convenient to drop extreme batching in favor of simplicity and maintainability, and to avoid fancy solutions for requirements as simple as rotation.
I hope this helps in some way.
Use two different objects, that is all!
There is no other workaround for rotating only part of an object.
Example:
Take a game with a tank, where you want to rotate the turret and the remaining body separately: as in your case here, these two are treated as separate objects.