I'm making a 3D monster maker. I recently added a feature to flip parts along the X and Y axes, and this works perfectly fine on its own. However, I also have a feature that allows users to combine parts (it sets flags, it doesn't combine the meshes), which means that simply flipping the individual objects won't flip the "shape" of the combined object. I've had two ideas for how to do this, neither of which worked, and I'll list them below. I have access to the origin of each object and to the centre of mass of all the combined instances - the (0, 0, 0) point on a theoretical number plane.
In these examples we're flipping across the Y axis; the axes are X = width, Y = height, Z = depth.
Attempt #1 - Simply flipping the individual object's X scale, getting the X distance from the centre of mass, and subtracting that from the centre of mass for the position. This works when the object's direction is (0, 0, 1) and its right is (1, 0, 0) or (-1, 0, 0); in any other direction, X isn't the exact "left/right" of the object. Here's a video to clarify: https://youtu.be/QXdEF4ScP10
code:
modelInstance[i].scale.x *= -1;
modelInstance[i].basePosition.set(centre.x - modelInstance[i].distFromCentre.x, modelInstance[i].basePosition.y, modelInstance[i].basePosition.z);
modelInstance[i].transform.set(modelInstance[i].basePosition, modelInstance[i].baseRotation, modelInstance[i].scale);
Attempt #2 - Rotate the objects 180° around the Y axis about the centre of mass, and then flip their Z scale. As far as I understand this is a solution, but I don't think I can do it. The way to rotate an object around a point, AFAIK, involves translating the matrix to the point, rotating it, and then translating it back, which I can't use. Because of the ability to rotate, join, flip, and scale objects, I keep the rotation, position, and scale completely separate, since issues occur when scaling/rotating and moving together: I have a Vector3 for the position, a matrix for the rotation, and a Vector3 for the scale, and whenever I change any of these I call object.transform.set(position, matrix.getRotation(), scale);. So when I attempt this method (translating the rotation matrix to the point, etc.) the objects individually flip but remain in the same place, and translating the object's transform matrix has weird results and doesn't work. Video of both variations: https://youtu.be/5xzTAHA1vCU
code:
modelInstance[i].scale.z *= -1;
modelInstance[i].baseRotationMatrix.translate(modelInstance[i].distFromCentre).rotate(Vector3.Y, 180).translate( modelInstance[i].distFromCentre.scl(-1));
modelInstance[i].transform.set(modelInstance[i].basePosition, modelInstance[i].baseRotation, modelInstance[i].scale);
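(For illustration: a minimal libGDX-style sketch of what Attempt #2 is aiming for, but applied to the separate position vector instead of to the rotation matrix's translation - rotate each part's offset from the centre, rotate its orientation in place, then flip Z. Field names follow the code above; this is an untested sketch of the idea, not a confirmed fix.)
// Rotate the part's position 180° about the Y axis through the centre of mass.
Vector3 offset = new Vector3(modelInstance[i].basePosition).sub(centre);
offset.rotate(Vector3.Y, 180f);
modelInstance[i].basePosition.set(centre).add(offset);
// Apply the same 180° turn to the orientation in world space (pre-multiply),
// without touching any translation.
modelInstance[i].baseRotationMatrix.mulLeft(new Matrix4().rotate(Vector3.Y, 180f));
modelInstance[i].scale.z *= -1;
modelInstance[i].transform.set(modelInstance[i].basePosition,
        modelInstance[i].baseRotationMatrix.getRotation(new Quaternion()),
        modelInstance[i].scale);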
OK, since no one else has helped, I'll give you some code that you can either use directly or use as a guide to rework your code along similar lines.
First of all, I tend to just deal with matrices and pass them to shaders as projection matrices; i.e. I don't really know what modelInstance[i] is - is it an actor (I never use them), or some other libGDX class? Whatever it is, if you do use this code to generate your matrices, you should be able to overwrite your modelInstance[i] matrix at the end of it. If not, maybe it'll give you pointers on how to alter your code.
First, rotate or flip your object without any translation. Don't translate or scale first, because when you rotate you'll also rotate the translation you've performed. I use the function below to generate a rotation matrix; it rotates around the Y axis first, which I think is a much better rotation order than the others. Alternatively, you could create an identity matrix and use the libGDX rotation functions on it to build a similar matrix.
public static void setYxzRotationMatrix(double xRotation, double yRotation, double zRotation, Matrix4 matrix)
{
    // yxz - y rotation performed first
    float c1 = (float)Math.cos(yRotation);
    float c2 = (float)Math.cos(xRotation);
    float c3 = (float)Math.cos(zRotation);
    float s1 = (float)Math.sin(yRotation);
    float s2 = (float)Math.sin(xRotation);
    float s3 = (float)Math.sin(zRotation);

    matrix.val[0]  = -c1*c3 - s1*s2*s3;  matrix.val[1]  = c2*s3;  matrix.val[2]  = c1*s2*s3 - c3*s1;  matrix.val[3]  = 0;
    matrix.val[4]  = -c3*s1*s2 + c1*s3;  matrix.val[5]  = c2*c3;  matrix.val[6]  = c1*c3*s2 + s1*s3;  matrix.val[7]  = 0;
    matrix.val[8]  = -c2*s1;             matrix.val[9]  = -s2;    matrix.val[10] = c1*c2;             matrix.val[11] = 0;
    matrix.val[12] = 0;                  matrix.val[13] = 0;      matrix.val[14] = 0;                 matrix.val[15] = 1.0f;
}
I use the above function to rotate my object to the correct orientation; I then translate it to the correct location, then multiply by the camera's matrix, with scaling as the final operation. This will definitely work if you can do it that way, but I just pass my final matrix to the shader - I'm not sure how you use your matrices. If you want to flip the model using the scale, try it immediately after the rotation matrix has been created. I'd recommend getting it working without the scale flip first, so you can then test both matrix.scl() and matrix.scale() as the final step; offhand, I'm not sure which scale function you'll need.
Matrix4 matrix1 = new Matrix4();
setYxzRotationMatrix(xRotationInRadians, yRotationInRadians, zRotationInRadians, matrix1);
// matrix1 will rotate your model to the correct orientation, around the origin.
// Here is where you may wish to use matrix1.scl(-1,1,1) or matrix1.scale(-1,1,1).
// Get the anchor position here if required - see notes later.
// Now translate to the correct location. I alter the matrix directly so I know exactly
// what is going on; I think matrix1.trn(x, y, z) would do the same.
matrix1.val[12] = x;
matrix1.val[13] = y;
matrix1.val[14] = z;
// Combine with your camera. This may be part of your stage or scene, but I don't use
// these, so can't help there.
Matrix4 matrix2 = new Matrix4();
// Set matrix2 to an identity matrix, multiply it by the camera's projection matrix, then
// finally by the rotation/flip/translation matrix1 you've created.
matrix2.idt().mul(yourCamera.combined).mul(matrix1);
matrix2.scale(-1,1,1); // Flipping like this will work, but may screw up any anchor
                       // position you calculated earlier.
// matrix2 is the final projection matrix for your model, i.e. you just pass that matrix
// to a shader and it is used to multiply with each vertex position vector to create
// the fragment positions.
Hopefully you'll be able to adapt the above to your needs. I suggest trying one operation at a time and making sure your next operation doesn't screw up what you've already done.
The above code assumes you know where you want to translate the model to, i.e. you know where the centre is going to be. If you have an anchor point, let's say -3 units in the X direction, you need to find out where that anchor point has moved to after the rotation (and maybe the flip). You can do that by multiplying a vector by matrix1 - I'd suggest doing it before any translation to the correct location.
Vector3 anchor = new Vector3(-3, 0, 0);
anchor.mul(matrix1); // After this operation, anchor is set to the correct location
                     // for the new rotation and flipping of the model. This offset
                     // should be applied to your translation if your anchor point is
                     // not at (0, 0, 0) of the model.
This can all be a bit of a pain, particularly if you don't like matrices. It doesn't help that everything here is done differently from what you've tried so far, but this is the method I use to display all the 3D models in my game, and it will work if you can adapt it to your code. Hopefully it'll help someone anyway.
I'm fairly new to shader development and am currently working on an SCNProgram to replace the rendering of a plane geometry.
Within the program's vertex shader I'd like to access the position (basically the anchor position) of the node/mesh as a clip-space coordinate. Is there an easy way to accomplish that, maybe through the supplied node buffer?
I got kinda close with:
xCoordinate = scn_node.modelViewProjectionTransform[3].x / povZPosition
yCoordinate = scn_node.modelViewProjectionTransform[3].y / povZPosition
The POV z-position is injected from outside through a custom buffer.
This breaks, though, when the POV is facing the scene at an angle.
I figured that I could probably just calculate the node position myself via:
renderer.projectPoint(markerNode.presentation.worldPosition)
and then passing that to my shader via program.handleBinding(ofBufferNamed:…) on every frame. I hope there is a better way, though.
From digging through Google, the Unity equivalent would probably be: https://docs.unity3d.com/Packages/com.unity.shadergraph#6.9/manual/Screen-Position-Node.html
I would be really thankful for any hints. Attached is a little visualization.
If I'm reading you correctly, it sounds like you actually want the NDC position of the center of the node. This differs subtly from the clip-space position, but both are computable in the vertex shader as:
float4 clipSpaceNodeCenter = scn_node.modelViewProjectionTransform[3];
float2 ndcNodeCenter = clipSpaceNodeCenter.xy / clipSpaceNodeCenter.w;
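In context, a minimal SCNProgram vertex function using this might look like the sketch below (the struct and function names are illustrative; the per-node buffer declares only the field it needs, which SceneKit fills in):
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct MyNodeBuffer {
    float4x4 modelViewProjectionTransform;
};

struct VertexIn {
    float3 position [[attribute(SCNVertexSemanticPosition)]];
};

struct VertexOut {
    float4 position [[position]];
    float2 ndcNodeCenter;
};

vertex VertexOut planeVertex(VertexIn in [[stage_in]],
                             constant MyNodeBuffer& scn_node [[buffer(1)]])
{
    VertexOut out;
    out.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    // Column 3 of the MVP matrix is the transformed model-space origin, i.e.
    // the node's anchor point in clip space; divide by w to get NDC.
    float4 clipSpaceNodeCenter = scn_node.modelViewProjectionTransform[3];
    out.ndcNodeCenter = clipSpaceNodeCenter.xy / clipSpaceNodeCenter.w;
    return out;
}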
I am using Three.js version 65.
I am displaying a set of points at time t=0 in 3D space using ParticleSystem, and I also have the next set of points at time t=1. Now I want to animate between them, as in the JSONLoader morphTarget animation. Could anybody suggest the best way to achieve this?
(or)
Should I use WebGL shader programming for this? Please suggest.
Thanks in advance.
Yes, you can do that with shaders. You'd create a custom shader for your particle system with the attributes vec3 position and vec3 nextPosition, plus a uniform float scale which goes from 0 to 1.
Then you add some logic to the shader where you calculate the new position, like vec3 pos = position * (1.0 - scale) + nextPosition * scale (along with the usual billboard / GL_Point code, of course); note the weights, so that scale = 0 gives the current position and scale = 1 gives the next one. When scale reaches 1, you swap position with nextPosition and fill nextPosition with the point that follows.
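For example, a minimal vertex-shader sketch of that idea (in Three.js r65 you'd supply nextPosition and scale through a ShaderMaterial; position, modelViewMatrix, and projectionMatrix are injected automatically):
// Linear interpolation between two particle states.
attribute vec3 nextPosition; // the particle's position at the next state
uniform float scale;         // animation progress, 0.0 -> 1.0

void main() {
    // mix(a, b, t) == a * (1.0 - t) + b * t
    vec3 pos = mix(position, nextPosition, scale);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
    gl_PointSize = 2.0; // the usual GL_Point / billboard sizing goes here
}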
Good luck have fun :)
PS: The code mentioned is just linear interpolation. In your case you might consider other interpolations - maybe even add another two attribute vectors holding the preceding and the following point, so you can calculate the new position with a Bézier curve.
Lastly you'll have to give a thought to performance sooner or later. If you have 10k particles and 1k "states" you might run into performance issues.
I have a very subtle problem with XNA, specifically the SpriteBatch.
In my game I have a Camera class. It can translate the view (obviously) and also zoom in and out.
I apply the camera to the scene when I call the Begin function of my SpriteBatch instance (the last parameter).
The problem: when the camera's zoom factor is bigger than 1.0f, the SpriteBatch stops drawing.
I tried to debug my scene but I couldn't find the point where it goes wrong.
I tried to just render with Matrix.CreateScale(2.0f) as the last parameter for Begin.
All the other parameters were null, and the first was SpriteSortMode.Immediate, so no custom shader or anything.
But SpriteBatch still didn't want to draw.
Then I tried to only call "DrawString" and DrawString worked flawlessly with the provided scale (2.0f).
However, through a lot of trial and error, I found that additionally multiplying the scale matrix by Matrix.CreateTranslation(0, 0, -1) somehow changed the "safe" value to 1.1f.
So all scale values up to 1.1f worked; for everything above that, SpriteBatch does not render a single pixel in normal Draw calls (DrawString is still unaffected and working).
Why is this happening?
I did not set up any viewport or other matrices.
It appears to me that this could be some kind of strange near/far clipping, but I usually only know those parameters from 3D work.
If anything is unclear please ask!
It is near/far clipping.
Everything you draw is transformed into, and then rasterised in, projection space. That space runs from (-1, -1) at the bottom left of the screen to (1, 1) at the top right - but that's just the (X, Y) coordinates; in Z it goes from 0 to 1 (front to back). Anything outside this volume is clipped.
When you're working in 3D, the projection matrix you use will compress the Z coordinates down so that the near plane lands at 0 in projection space, and the far plane lands at 1.
When working in 2D you'd normally use Matrix.CreateOrthographic, which has near and far plane parameters that do exactly the same thing. It's just that SpriteBatch specifies its own matrix and leaves the near and far planes at 0 and 1.
The vertices of sprites in a SpriteBatch do, in fact, have a Z-coordinate, even though it's not normally used. It is specified by the layerDepth parameter. So if you set a layer depth greater than 0.5, and then scale up by 2, the Z-coordinate will be outside the valid range of 0 to 1 and won't get rendered.
(The documentation says that 0 to 1 is the valid range, but does not specify what happens when you apply a transformation matrix.)
The solution is pretty simple: Don't scale your Z-coordinate. Use a scaling matrix like:
Matrix.CreateScale(2f, 2f, 1f)
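For example, a sketch of a camera matrix that zooms and translates but leaves Z alone (cameraPosition and zoom stand in for whatever your Camera class stores):
// Zoom only in X and Y; Z stays at 1 so sprite layer depths remain inside [0, 1].
Matrix view = Matrix.CreateTranslation(-cameraPosition.X, -cameraPosition.Y, 0f)
            * Matrix.CreateScale(zoom, zoom, 1f);

// XNA 4.0 overload - the transform matrix is the last parameter.
spriteBatch.Begin(SpriteSortMode.Deferred, null, null, null, null, null, view);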
I'm a little bit lost, and this is somewhat related to another question I've asked about fragment shaders, but goes beyond it.
I have an orthographic scene (although that may not be relevant), with the scene drawn here as black, and one billboarded sprite that I draw using a shader, shown in red. I have a point that I know and define myself, A, represented by the blue dot, at some (x, y) coordinate in the 2D coordinate space (the lower left of the screen is the origin). I need to mask the red billboard programmatically, where I specify 0% to 100%: 0% is fully intact and 100% is fully masked. I can either pass 0-100% (0.0 to 1.0) into the shader, or I could precompute an angle; either solution would be fine.
(Here you can see the scene drawn with 0% masking.)
So when I set "15%" I want the following to show up:
(Here you can see the scene drawn with 15% masking.)
And when I set "45%" I want the following to show up:
(Here you can see the scene drawn with 45% masking.)
And here's an example of "80%":
The general idea, I think, is to pass in a uniform vec2 A, and within the fragment shader determine whether the fragment is within the area swept clockwise from the line "A to the bottom of the screen" to a second line at the correct angle offset from there. If it is within that area, discard the fragment. (Discarding makes more sense than setting alpha to 0.0, or to 1.0 when keeping, right?)
But how can I actually achieve this? I don't understand how to implement that algorithm in terms of a shader. (I'm using OpenGL ES 2.0.)
One solution would be to calculate the difference between gl_FragCoord (which does exist under ES 2.0) and the point (making sure the point is in screen coordinates), and use the two-parameter atan function, which gives you an angle. If the angle is not in the range you want (greater than a minimum and less than a maximum), kill the fragment.
Of course, killing fragments is not precisely the most performant thing to do. A (somewhat more complicated) triangle solution may still be faster.
EDIT:
To better explain "not precisely the most performant thing", consider that killing fragments still causes the fragment shader to run (it only discards the result afterwards) and interferes with early depth/stencil fragment rejection.
Constructing a triangle fan like whoplisp suggested is more work, but will not process any fragments that are not visible, will not interfere with depth/stencil rejection, and may look better in some situations, too (MSAA for example).
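To make that concrete, here's a minimal ES 2.0 fragment-shader sketch of the atan approach (the uniform names and the "clockwise from straight down" convention are assumptions, not from the question):
precision mediump float;

uniform sampler2D u_texture;   // the billboard's texture (name assumed)
uniform vec2  u_pointA;        // point A, in window (pixel) coordinates
uniform float u_maskAmount;    // 0.0 = fully intact, 1.0 = fully masked

varying vec2 v_texCoord;

const float TWO_PI = 6.28318530718;

void main() {
    vec2 d = gl_FragCoord.xy - u_pointA;
    // Angle of this fragment around A, measured clockwise from the ray that
    // points straight down, wrapped into [0, TWO_PI).
    float angle = mod(atan(-d.x, -d.y), TWO_PI);
    if (angle < u_maskAmount * TWO_PI) {
        discard; // the fragment lies inside the swept wedge
    }
    gl_FragColor = texture2D(u_texture, v_texCoord);
}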
Why don't you just draw some black triangles on top of the red rectangle?