Monogame - Rotate Sprite around centre of screen and itself - rotation

I have a problem, and although I searched everywhere I couldn't find a solution.
I have a stacked sprite and I'm rotating it around the center of the screen. I iterate over a list of sprites (the stack) and increase the y-coordinate by 2 on every iteration (rotation is increased step by step by 0.01f outside of the loop):
foreach (var s in stacked)
{
    Vector2 origin = new Vector2(Basic.width / 2, Basic.height / 2);
    Rectangle newPosition = new Rectangle(position.X, position.Y - y, position.Width, position.Height);
    float angle = 0f;
    Matrix transform = Matrix.CreateTranslation(-origin.X, -origin.Y, 0f) *
                       Matrix.CreateRotationZ(rotation) *
                       Matrix.CreateTranslation(origin.X, origin.Y, 0f);
    Vector2 pos = new Vector2(newPosition.X, newPosition.Y);
    pos = Vector2.Transform(pos, transform);
    newPosition.X = (int)pos.X;
    newPosition.Y = (int)pos.Y;
    angle += rotation;
    s.Draw(newPosition, origin, angle, Color.White);
    y += 2;
}
This works fine. But now my problem: I want to rotate the sprite not only around the center of the screen but also around itself. How can I achieve this? I can only set one origin and one rotation per Draw call. I would like to rotate the sprite around the origin (Basic.width / 2, Basic.height / 2) and, while it rotates, also around (position.Width / 2, position.Height / 2), with a different rotation speed for each. How is this possible?
Thank you in advance!

Just to be clear:
When using SpriteBatch.Draw() with an origin and an angle, there is only one rotation: the final angle of the sprite.
The other rotations contribute only positional offsets.
The origin in the Draw() call amounts to a translate, rotate, translate-back sequence. Your transform matrix shows this quite well:
Matrix transform = Matrix.CreateTranslation(-origin.X, -origin.Y, 0f) *
                   Matrix.CreateRotationZ(rotation) *
                   Matrix.CreateTranslation(origin.X, origin.Y, 0f);
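Written out with XNA's row-vector convention (transforms compose left to right), this matrix maps a point p to

p' = (p - origin) * R(rotation) + origin

i.e. move the rotation origin to (0,0), rotate, then move back.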
// Class level variables:
float ScreenRotation, ScreenRotationSpeed;
float ObjectRotation, ObjectRotationSpeed;
Vector2 ScreenOrigin, SpriteOrigin;
// ...
// In constructor and resize events:
ScreenOrigin = new Vector2(Basic.width >> 1, Basic.height >> 1);
// shifts are faster for `int` types. If "Basic.width" is `float`:
//ScreenOrigin = new Vector2(Basic.width, Basic.height) * 0.5f;
// In Update():
ScreenRotation += ScreenRotationSpeed; // * (float)gameTime.ElapsedGameTime.TotalSeconds; // for FPS-invariant speed, where speed = 60 * the single-frame speed
ObjectRotation += ObjectRotationSpeed;

// Calculate the screen center rotation once per step:
Matrix baseTransform = Matrix.CreateTranslation(-ScreenOrigin.X, -ScreenOrigin.Y, 0f) *
                       Matrix.CreateRotationZ(ScreenRotation) *
                       Matrix.CreateTranslation(ScreenOrigin.X, ScreenOrigin.Y, 0f);
// In Draw(), at the start of the code snippet you posted:
// (moved outside of the loop for a translationally invariant vertical y interpretation;
// or move it inside the loop and apply -y to position.Y for an elliptical effect)
Vector2 ObjectOrigin = new Vector2(position.X, position.Y);
Matrix transform = baseTransform *
                   Matrix.CreateTranslation(-ObjectOrigin.X, -ObjectOrigin.Y, 0f) *
                   Matrix.CreateRotationZ(ObjectRotation) *
                   Matrix.CreateTranslation(ObjectOrigin.X, ObjectOrigin.Y, 0f);
foreach (var s in stacked)
{
    Vector2 pos = new Vector2(ObjectOrigin.X, ObjectOrigin.Y - y);
    pos = Vector2.Transform(pos, transform);
    float DrawAngle = ObjectRotation;
    // or float DrawAngle = ScreenRotation;
    // or float DrawAngle = ScreenRotation + ObjectRotation;
    // or float DrawAngle = 0;
    s.Draw(pos, SpriteOrigin, DrawAngle, Color.White);
    y += 2; // keep the per-sprite stack offset from your original loop
}
I suggest moving away from the destinationRectangle parameter of Draw() and using the Vector2 position directly with a scale factor. Rotations drawn through destination rectangles can stretch/squash the sprite by up to SQRT(2) in aspect ratio. The cost of using Vector2 is somewhat higher collision complexity.
I am sorry for all the "or" options, but without complete knowledge of the problem... YMMV.
In my 2D projects, I use the vector form of polar coordinates.
The Matrix class requires more calculations than the polar equivalents in 2D: Matrix operates in 3D, wasting cycles on the Z components. I work with normalized direction vectors (cos t, sin t) and a radius (the vector length), and in many cases I use Vector2.LengthSquared() to avoid the square root when possible.
The only times I have used matrices in 2D are the display projection matrix (for the entire SpriteBatch) and mouse/touch-screen input deprojection (multiplying by the inverse of the projection matrix).
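As a minimal sketch of that polar form applied to this question's orbit-plus-spin setup (OrbitSpeed, SpinSpeed, and orbitRadius are illustrative names, not from the code above):

// Orbit a sprite around the screen center using the vector form of polar
// coordinates; no Matrix needed. OrbitSpeed/SpinSpeed are radians per update.
float orbitAngle, spinAngle;

// In Update():
orbitAngle += OrbitSpeed; // revolution around the screen center
spinAngle  += SpinSpeed;  // sprite's own rotation, at an independent speed

// In Draw():
Vector2 dir = new Vector2((float)Math.Cos(orbitAngle), (float)Math.Sin(orbitAngle));
Vector2 pos = ScreenOrigin + dir * orbitRadius; // point on the circle
s.Draw(pos, SpriteOrigin, spinAngle, Color.White);

This gives exactly the two independent rotations asked for: orbitAngle moves the sprite around the screen center while spinAngle turns it around its own origin.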

Related

Creating gyroid pattern in 2D image algorithm

I'm trying to fill an image with gyroid lines of a certain thickness at a certain spacing, but math is not my area. I was able to create a sine wave and shift it a bit in the X direction to make it look like a gyroid, but it's not the same.
The idea is to stack some images with the same resolution and replicate the gyroid as 2D images, so we still have XYZ, where Z can be 0.01mm to 0.1mm per layer.
What I've tried:
int sineHeight = 100;
int sineWidth = 100;
int spacing = 100;
int radius = 10;

for (int y1 = 0; y1 < mat.Height; y1 += sineHeight + spacing)
    for (int x = 0; x < mat.Width; x++)
    {
        // Simulating first image
        int y2 = (int)(Math.Sin((double)x / sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
        Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.WhiteColor, -1, LineType.AntiAlias);
        // Simulating second image, shifted in x to look a bit more like a gyroid
        y2 = (int)(Math.Sin((double)x / sineWidth + sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
        Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.GreyColor, -1, LineType.AntiAlias);
    }
Resulting in (white represents layer 1, grey layer 2):
Still, this looks nothing like a real gyroid. How can I adapt the formula to work in this space?
You have just a single (and not very useful) slice because I do not see any z in your code (it is correct that the surface has horizontal and vertical sine waves like this every 0.5*pi in z).
To see the 3D surface you have to raycast z ...
I would expect some conditional testing of the actually iterated x, y, z result of the gyroid equation against some small non-zero number, like if (result <= 1e-6), and drawing only then, or computing a color from the result instead. This is ideal to do in GLSL.
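For reference, the gyroid used below is the zero isosurface of

sin(x)*cos(y) + sin(y)*cos(z) + sin(z)*cos(x) = 0

and the shaders simply evaluate the left-hand side, treating values at or below a small threshold as "on the surface".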
In case you are not familiar with GLSL and shaders: the fragment shader is executed for each pixel (called a fragment) of the rendered quad, so you just put its body inside your nested x,y for loops and use your x,y instead of pos (you can ignore the vertex shader; it is not important here).
You have 2 basic options to render this:
Blending the raycasted surface pixels together, creating an X-ray-like image. This can be combined with SSS techniques to get the impression of glass or a semi-transparent material. Here is a simple GLSL example of the blending:
Vertex:
#version 400 core
in vec2 position;
out vec2 pos;
void main(void)
{
    pos = position;
    gl_Position = vec4(position.xy, 0.0, 1.0);
}
Fragment:
#version 400 core
in vec2 pos;
out vec3 out_col;
void main(void)
{
    float n, x, y, z, dz, d, i, di;
    const float scale = 2.0 * 3.1415926535897932384626433832795;
    n = 100.0;            // layers
    x = pos.x * scale;    // x position of pixel
    y = pos.y * scale;    // y position of pixel
    dz = 2.0 * scale / n; // z step
    di = 1.0 / n;         // color increment
    i = 0.0;              // color intensity
    for (z = -scale; z <= scale; z += dz) // do all layers
    {
        d  = sin(x) * cos(y);   // evaluate the gyroid equation
        d += sin(y) * cos(z);
        d += sin(z) * cos(x);
        if (d <= 1e-6) i += di; // if near the surface, add to color
    }
    out_col = vec3(1.0, 1.0, 1.0) * i;
}
Usage is simple: just render a 2D quad covering the screen, without any matrices, with corner pos points in the range <-1,+1>. Here is the result:
Another technique is to render the first hit on the surface, creating a mesh-like image. In order to see the details we need to add basic (double-sided) directional lighting, for which a surface normal is needed. The normal can be computed by simply partially differentiating the equation by x, y, z. As the surface is now opaque, we can stop at the first hit and also raycast just a single period in z, since anything behind it is hidden anyway. Here is a simple example:
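Explicitly, with d(x,y,z) = sin(x)*cos(y) + sin(y)*cos(z) + sin(z)*cos(x), the gradient used as the normal is

n.x = cos(x)*cos(y) - sin(z)*sin(x)
n.y = -sin(x)*sin(y) + cos(y)*cos(z)
n.z = -sin(y)*sin(z) + cos(z)*cos(x)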
Fragment:
#version 400 core
in vec2 pos;  // input fragment (pixel) position <-1,+1>
out vec3 col; // output fragment (pixel) RGB color <0,1>
void main(void)
{
    bool _discard = true;
    float N, x, y, z, dz, d, i;
    vec3 n, l;
    const float pi = 3.1415926535897932384626433832795;
    const float scale  = 3.0 * pi; // 3.0 periods in x,y
    const float scalez = 2.0 * pi; // 1.0 period in z
    N = 200.0;               // layers per z (quality)
    x = pos.x * scale;       // <-1,+1> -> [rad]
    y = pos.y * scale;       // <-1,+1> -> [rad]
    dz = 2.0 * scalez / N;   // z step
    l = vec3(0.0, 0.0, 1.0); // light unit direction
    i = 0.0;                 // starting color intensity
    n = vec3(0.0, 0.0, 1.0); // starting normal, only to get rid of a warning
    for (z = 0.0; z >= -scalez; z -= dz) // raycast z through all layers in view direction
    {
        // gyroid equation
        d  = sin(x) * cos(y);
        d += sin(y) * cos(z);
        d += sin(z) * cos(x);
        // surface hit test
        if (d > 1e-6) continue; // skip if too far from surface
        _discard = false;       // remember that surface was hit
        // compute normal as the gradient of the gyroid equation
        n.x = +cos(x) * cos(y) - sin(z) * sin(x); // partial derivative by x
        n.y = -sin(x) * sin(y) + cos(y) * cos(z); // partial derivative by y
        n.z = -sin(y) * sin(z) + cos(z) * cos(x); // partial derivative by z
        break;                  // stop raycasting
    }
    // skip rendering if no hit with surface (hole)
    if (_discard) discard;
    // directional lighting
    n = normalize(n);
    i = abs(dot(l, n));
    // ambient + directional lighting
    i = 0.3 + (0.7 * i);
    // output fragment (render pixel)
    gl_FragDepth = z;              // depth (optional)
    col = vec3(1.0, 1.0, 1.0) * i; // color
}
I hope I did not make an error in the partial derivatives. Here is the result:
[Edit1]
Based on your code, I see it like this (X-ray-like blending):
var mat = EmguExtensions.InitMat(new System.Drawing.Size(2000, 1080));
double zz, dz, d, i, di = 0;
double scalex = 2.0 * Math.PI / mat.Width;  // cannot be const: mat.Width is a runtime value
double scaley = 2.0 * Math.PI / mat.Height;
const double scalez = 2.0 * Math.PI;
uint layerCount = 100; // layers
for (int y = 0; y < mat.Height; y++)
{
    double yy = y * scaley; // y position of pixel
    for (int x = 0; x < mat.Width; x++)
    {
        double xx = x * scalex; // x position of pixel
        dz = 2.0 * scalez / layerCount; // z step
        di = 1.0 / layerCount;          // color increment
        i = 0.0;                        // color intensity
        for (zz = -scalez; zz <= scalez; zz += dz) // do all layers
        {
            d  = Math.Sin(xx) * Math.Cos(yy); // evaluate the gyroid equation
            d += Math.Sin(yy) * Math.Cos(zz);
            d += Math.Sin(zz) * Math.Cos(xx);
            if (d > 1e-6) continue;
            i += di; // if near the surface, add to color
        }
        i *= 255.0;
        mat.SetByte(x, y, (byte)i);
    }
}
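The first-hit (mesh-like) variant ports the same way. Here is a sketch under the same assumptions (the same mat and SetByte helper as above; untested, so treat it as a starting point):

const double pi = Math.PI;
double scale  = 3.0 * pi;        // 3 periods across x,y (matches the GLSL)
double zscale = 2.0 * pi;        // 1 period in z
int N = 200;                     // layers per z (quality)
double zstep = 2.0 * zscale / N; // z step
for (int y = 0; y < mat.Height; y++)
{
    double yy = (2.0 * y / mat.Height - 1.0) * scale; // pixel -> [rad]
    for (int x = 0; x < mat.Width; x++)
    {
        double xx = (2.0 * x / mat.Width - 1.0) * scale;
        bool hit = false;
        double nx = 0, ny = 0, nz = 1; // surface normal
        for (double z = 0.0; z >= -zscale; z -= zstep) // raycast in view direction
        {
            double d = Math.Sin(xx) * Math.Cos(yy)
                     + Math.Sin(yy) * Math.Cos(z)
                     + Math.Sin(z) * Math.Cos(xx);
            if (d > 1e-6) continue; // not at the surface yet
            hit = true;             // first hit: compute normal (gradient)
            nx = +Math.Cos(xx) * Math.Cos(yy) - Math.Sin(z) * Math.Sin(xx);
            ny = -Math.Sin(xx) * Math.Sin(yy) + Math.Cos(yy) * Math.Cos(z);
            nz = -Math.Sin(yy) * Math.Sin(z) + Math.Cos(z) * Math.Cos(xx);
            break;
        }
        double c = 0.0; // black where the ray misses (the GLSL "discard")
        if (hit)
        {
            double len = Math.Sqrt(nx * nx + ny * ny + nz * nz);
            c = 0.3 + 0.7 * Math.Abs(nz / len); // ambient + light along z
        }
        mat.SetByte(x, y, (byte)(c * 255.0));
    }
}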

Generating a pixel-based spiral gradient

I have a program that creates pixel-based gradients (meaning it calculates the step in the gradient for each pixel, then calculates the colour at that step, then gives the pixel that colour).
I'd like to implement spiral gradients (such as below).
My program can create conic gradients (as below), where each pixel is assigned a step in the gradient according to the angle between it and the midpoint (effectively mapping the midpoint-pixel angle [0...2PI] to [0...1]).
It would seem to me that a spiral gradient is a conic gradient with some additional function applied to it, where the gradient step for a given pixel depends not only on the angle, but also on some non-linear function of the Euclidean distance between the midpoint and the pixel.
I envisage that a solution would take the original (x, y) pixel coordinate and displace it by some amounts along the x and y axes, resulting in a new coordinate (x2, y2). Then, for each pixel, I'd simply calculate the angle between the midPoint and its new displaced coordinate (x2, y2) and use this angle as the gradient step for that pixel. But it's this displacement function that I need help with... of course, there may be other, better ways.
Below is a simple white-to-black conic gradient. I show how I imagine the displacement would work, but it's the specifics of this function (the non-linearity) that I'm unable to implement.
My code for conic gradient:
public void conicGradient(Gradient gradient, PVector midPoint, float angle) {
    float rise, run;
    double t = 0;
    double step; // declared here so the snippet compiles
    for (int y = 0, x; y < imageHeight; ++y) {
        rise = midPoint.y - y;
        run = midPoint.x;
        for (x = 0; x < imageWidth; ++x) {
            t = Functions.fastAtan2(rise, run) + Math.PI - angle;
            // Ensure a positive value if angle is negative.
            t = Functions.floorMod(t, PConstants.TWO_PI);
            // Divide by TWO_PI (multiply by its inverse) to get a value in the range 0...1
            step = t *= INV_TWO_PI;
            pixels[imageWidth * y + x] = gradient.ColorAt(step); // pixels is a 1D pixel array
            run -= 1;
        }
    }
}
By eye, it looks like after the t = ... fastAtan2 ... line you just need:
t += PConstants.TWO_PI * Math.sqrt( (rise*rise + run*run) / (imageWidth * imageWidth + imageHeight * imageHeight) );
This just adds the distance from the center to the angle, with appropriate scaling.
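As a minimal sketch of the same idea (C# here for consistency with the rest of this page; the original is Processing/Java, but the math is identical, and the turns parameter is an illustrative addition):

// Gradient step (0...1) for a spiral: the conic angle plus a scaled distance term.
// maxDistSq plays the role of imageWidth^2 + imageHeight^2 above.
static double SpiralStep(double dx, double dy, double maxDistSq, double turns)
{
    double t = Math.Atan2(dy, dx) + Math.PI; // angle in 0...2*PI
    t += 2.0 * Math.PI * turns * Math.Sqrt((dx * dx + dy * dy) / maxDistSq);
    t %= 2.0 * Math.PI;                      // wrap back into 0...2*PI
    return t / (2.0 * Math.PI);              // map to 0...1
}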

Different Processing rendering between native and online sketch

I get different results when running this sample with Processing directly and with Processing.js in a browser. Why?
I was happy with my result and wanted to share it on OpenProcessing, but the rendering was totally different and I don't see why. Below is a minimal working example.
/* Program that rotates a triangle and draws an ellipse when the third vertex is at the top of the screen */
float y = 3 * height / 2;
float x = 3 * width / 2;
float previous_1 = 0.0;
float previous_2 = 0.0;
float current;
float angle = 0.0;

void setup() {
  size(1100, 500);
}

void draw() {
  fill(0, 30);
  // rotate triangle
  angle = angle - 0.02;
  translate(x, y);
  rotate(angle);
  // display triangle
  triangle(-50, -50, -30, 30, -90, -60);
  // detect whether the third vertex is at the top by comparing its 3 successive positions
  current = screenY(-90, -60); // current position of the third vertex
  if (previous_1 < previous_2 && previous_1 < current) {
    // draw ellipse at the extremum position
    fill(128, 9, 9);
    ellipse(-90, -60, 7, 10);
  }
  // update the 2 previous positions of the third vertex
  previous_2 = previous_1;
  previous_1 = current;
}
Running Processing locally, the ellipse is drawn when the triangle's vertex is at the top, which is my goal.
In the online sketch, the ellipse is drawn the whole time :/
In order to get the same results online as you get by running Processing locally, you will need to specify the rendering mode as P3D when calling size.
For example:
void setup() {
  size(1100, 500, P3D);
}
You will also need to specify the z coordinate in the call to screenY():
current = screenY(-90, -60, 0);
With these two changes you should get the same results online as you get running locally.
Online:
Triangle Ellipse Example
Local:
The problem lies in the screenY() function. Print out the current variable in your Processing sketch locally and online: on OpenProcessing, current quickly grows beyond several thousand, while it stays between 0 and ~260 locally.
It seems like OpenProcessing has a bug inside this function.
To work around it, I would recommend registering differently when you draw a triangle at the top of the circle, for example by using your angle variable:
// Calculate the angle and wrap it modulo 2 * PI
angle = (angle - 0.02) % (2 * PI);
// If the sketch has made a full revolution
if (previous_1 < previous_2 && previous_1 < angle) {
  // draw ellipse at the extremum position
  fill(128, 9, 9);
  ellipse(-90, -60, 7, 10);
}
// update the 2 previous angles of the third vertex
previous_2 = previous_1;
previous_1 = angle;
However, because of how you draw the triangles, the ellipse sits at an angle of about PI / 3. To fix this, one option is to rotate the screen by angle + PI / 3, like so:
rotate(angle + PI / 3);
You might have to experiment with the angle offset a bit more to draw the ellipse perfectly at the top of the circle.

Having a point from 3 static cameras' perspectives, how to restore its position in 3D space?

We have the same rectangle position relative to 3 statically installed web cameras of the same type that are not on the same line, say on a flat basketball field. Thus we have them all inside one 3D space, with (x, y, z) positions and (ax, ay, az) orientations set for all of them.
We know the ball color and we found its position on all 3 images im1, im2, im3. Now, having its position on the 2D frames (p1x, p1y), (p2x, p2y), (p3x, p3y), plus the cameras' positions and orientations, how do we get the ball's position in 3D space?
You need to unproject the 2D screen coordinates into rays in 3D space.
Then you need to solve a system of equations to find the real point in 3D from the 3 rays you got in the first step.
You can find the source code for gluUnProject here. I also provide my own code for it:
public Vector4 Unproject(float x, float y, Matrix4 View)
{
    var ndcX = x / Viewport.Width * 2 - 1.0f;
    var ndcY = y / Viewport.Height * 2 - 1.0f;
    var invVP = Matrix4.Invert(View * ProjectionMatrix);
    // We don't know the z-coordinate of the point, so we choose 0.0f for it.
    // We are going to find it out later.
    var screenPos = new Vector4(ndcX, -ndcY, 0.0f, 1.0f);
    var res = Vector4.Transform(screenPos, invVP);
    return res / res.W;
}
Ray ComputeRay(Camera camera, Vector2 p) // returns a Ray, to match the body below
{
    var worldPos = Unproject(p.X, p.Y, camera.View);
    var dir = new Vector3(worldPos) - camera.Eye;
    return new Ray(camera.Eye, Vector3.Normalize(dir));
}
Now you need to find the intersection of three such rays. Theoretically, two rays would be enough; it depends on the positions of your cameras.
If we had infinite-precision floating-point arithmetic and noise-free input, that would be trivial. In reality you will need a simple numerical scheme to find the point with appropriate precision.
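One such scheme (a sketch, not part of the answer above) is the linear least-squares point minimizing the summed squared distance to all rays: for a ray with origin o and unit direction d, the perpendicular projector is (I - d*d^T), and summing over rays gives a 3x3 system A*p = b with A = sum(I - d*d^T) and b = sum((I - d*d^T)*o). The Ray3 type here is illustrative:

// Least-squares "intersection" of rays; directions are assumed unit length.
struct Ray3 { public double Ox, Oy, Oz, Dx, Dy, Dz; }

static (double X, double Y, double Z) ClosestPointToRays(Ray3[] rays)
{
    // Accumulate the symmetric 3x3 matrix A and the vector b.
    double a11 = 0, a12 = 0, a13 = 0, a22 = 0, a23 = 0, a33 = 0;
    double b1 = 0, b2 = 0, b3 = 0;
    foreach (var r in rays)
    {
        double m11 = 1 - r.Dx * r.Dx, m12 = -r.Dx * r.Dy, m13 = -r.Dx * r.Dz;
        double m22 = 1 - r.Dy * r.Dy, m23 = -r.Dy * r.Dz;
        double m33 = 1 - r.Dz * r.Dz;
        a11 += m11; a12 += m12; a13 += m13; a22 += m22; a23 += m23; a33 += m33;
        b1 += m11 * r.Ox + m12 * r.Oy + m13 * r.Oz;
        b2 += m12 * r.Ox + m22 * r.Oy + m23 * r.Oz;
        b3 += m13 * r.Ox + m23 * r.Oy + m33 * r.Oz;
    }
    // Solve the symmetric 3x3 system A p = b by Cramer's rule.
    double det = a11 * (a22 * a33 - a23 * a23)
               - a12 * (a12 * a33 - a23 * a13)
               + a13 * (a12 * a23 - a22 * a13);
    double x = (b1 * (a22 * a33 - a23 * a23)
              - a12 * (b2 * a33 - a23 * b3)
              + a13 * (b2 * a23 - a22 * b3)) / det;
    double y = (a11 * (b2 * a33 - a23 * b3)
              - b1 * (a12 * a33 - a23 * a13)
              + a13 * (a12 * b3 - b2 * a13)) / det;
    double z = (a11 * (a22 * b3 - b2 * a23)
              - a12 * (a12 * b3 - b2 * a13)
              + b1 * (a12 * a23 - a22 * a13)) / det;
    return (x, y, z);
}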

Rotate a Sprite around another Sprite -libGDX-

video game link
I'm trying to make a game (see the link above), and I need the stick to rotate around itself so that it keeps facing the center of the circle.
This is how I declare the Sprite and how I move it around the circle:
declaration:
line = new Sprite(new Texture(Gdx.files.internal("drawable/blockLine.png")));
line.setSize(140, 20);
lineX = Gdx.graphics.getWidth()/2 - line.getWidth()/2;
lineY = (Gdx.graphics.getHeight()/2 - line.getHeight()/2) + circle.getHeight()/2;
movement:
Point point = rotatePoint(new Point(lineX, lineY),
        new Point(Gdx.graphics.getWidth()/2, Gdx.graphics.getHeight()/2),
        angle += Gdx.graphics.getDeltaTime() * lineSpeed);
line.setPosition(point.x, point.y);
rotatePoint function:
Point rotatePoint(Point point, Point center, double angle) {
    angle = angle * (Math.PI / 180); // convert to radians
    float rotatedX = (int) (Math.cos(angle) * (point.x - center.x) - Math.sin(angle) * (point.y - center.y) + center.x);
    float rotatedY = (int) (Math.sin(angle) * (point.x - center.x) + Math.cos(angle) * (point.y - center.y) + center.y);
    return new Point(rotatedX, rotatedY);
}
Any suggestions?
I can't test right now, but I think the rotation of the line should simply be:
Math.atan2(rotatedPoint.getOriginX() - middlePoint.getOriginX(), rotatedPoint.getOriginY() - middlePoint.getOriginY())
(note the argument order: atan2(dx, dy) measures the angle from the y-axis). Then you'll have to convert from radians to degrees or whatever you use. Tell me if it doesn't work!
I would take a different approach. I created a method that places n buttons around a click on the screen, using something that looks like this:
float rotation;            // in degrees
float distance;            // distance from origin (radius of the circle)
Vector2 originOfRotation;  // center of the circle
Vector2 originOfSprite;    // origin of the sprite we are calculating
Vector2 direction = new Vector2(0, 1); // pointing up

// rotate the direction
direction.rotate(rotation);
// add distance based on the direction. Warning: originOfRotation will change because of method chaining.
// Use originOfRotation.cpy() if you do not want to re-initialize it each frame.
originOfSprite = originOfRotation.add(direction.scl(distance));
Now you have the position of your sprite. You need to increment rotation by some amount each frame to make it rotate. If you want the orientation of the sprite itself to change, you can use the direction vector again, probably rotated by 180 degrees. Efficiency-wise, I'm not sure what the difference between the two approaches would be.
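For the original question (keeping the stick facing the center), the orbit position and the stick's own rotation can be driven by the same angle. A minimal sketch of that relationship (C# syntax for consistency with the rest of this page; in libGDX the results would feed line.setPosition(...) and line.setRotation(...), and the 90-degree offset is an assumption that depends on which way the texture points by default):

// Place the sprite on the circle and rotate it to face the center.
double rad = angleDeg * Math.PI / 180.0;           // orbit angle in radians
double spriteX = centerX + Math.Cos(rad) * radius; // point on the circle
double spriteY = centerY + Math.Sin(rad) * radius;
// The radial direction is (cos, sin), so facing the center just means
// adding a constant offset to the same angle:
double spriteRotationDeg = angleDeg + 90.0;        // tune the offset for your texture

In libGDX you would also call line.setOriginCenter() first, so the sprite spins around its middle rather than its corner.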
