I have a beer bottle positioned at the top of a glass, at 90°. I want to rotate it around its pivot, which is at the top. To do so, I'm trying to find the angle between the mouse position (mp) and the bottle, and rotate the bottle by it.
The center of rotation is the current position of the GameObject, since the pivot of the sprite is at the top. I tried to find two vectors, one being the vector from mp to the center of rotation and the other being the position of the bottle. Then I used: gameObject.transform.Rotate(Vector3.forward, Vector3.Angle(v2, v1)).
The result is not what I expected, of course. I'm new to this game math; I'd appreciate an explanation.
(It's an Android game, and I intend to drag the bottle up and down from 90 to 180 degrees.)
I hope I understand your question correctly.
If you want to find the angle between the mouse position and the bottle, you can use the two lines you have drawn in the picture and simply calculate the angle between them.
Check out this answer:
Calculating the angle between two lines without having to calculate the slope? (Java)
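One common way to do that without computing slopes is atan2. Here is a minimal sketch (plain Go rather than Unity C#; the interpretation of a and b as "bottle direction" and "pivot-to-mouse direction" is my assumption) of getting a signed angle between two 2D directions:

package main

import (
	"fmt"
	"math"
)

// signedAngle returns the angle, in degrees, that rotates direction a onto
// direction b; positive means counter-clockwise.
func signedAngle(ax, ay, bx, by float64) float64 {
	return (math.Atan2(by, bx) - math.Atan2(ay, ax)) * 180 / math.Pi
}

func main() {
	// e.g. a = bottle hanging straight down, b = pivot-to-mouse direction
	fmt.Println(signedAngle(0, -1, 1, -1)) // 45: rotate 45° counter-clockwise
}

Note that Vector3.Angle in Unity returns an unsigned angle, so it cannot by itself tell you which way to rotate; a signed angle like the one above can.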
I have tried and failed to find AppleScript code that returns a smart object's current rotation angle in Photoshop. Anyone have an idea of where that property is listed? I'm beginning to think this feature isn't currently supported by AppleScript.
In Photoshop, objects like a selection have no angle value, because it would mean nothing: if your selection is made of multiple segments forming a complex shape, there is no mathematical way to define an angle for that shape!
However, you can work with the boundary rectangle (which encloses that shape). You can rotate this complete boundary (i.e. the selection), and you will then get a new boundary (the new rectangle in which the rotated shape fits).
A boundary rectangle is a list of four values:
top left corner horizontal position (X1)
top left corner vertical position (Y1)
bottom right corner horizontal position (X2)
bottom right corner vertical position (Y2)
Positions are real numbers, measured from the border of the canvas (not the border of the layer, so you may have negative values). The units depend on the document's unit of measure.
Once that's clear (I hope!), a little math on the initial boundary and the new boundary gives you the rotation angle:
(Pythagorean triangle)
If you assume that the initial rectangle's borders were vertical and horizontal:
cos(θ) = (X2 - X1) / (X'2 - X'1)
θ = the angle you are looking for
X1, X2 are the positions of the boundary corners before rotation, and X'1, X'2 are the positions of the same corners after rotation.
Please note that this method works for selections (any shape) or layers.
It should also work for the full canvas, but I have never tested it on a canvas.
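To illustrate the arithmetic (a sketch in Go rather than AppleScript, with a hypothetical helper name, purely to show the relation described above):

package main

import (
	"fmt"
	"math"
)

// rotationAngle applies the relation above: cos(θ) = (X2-X1) / (X'2-X'1).
// x1, x2 are the horizontal corner positions of the boundary before rotation,
// x1p, x2p are the positions of the same corners afterwards.
func rotationAngle(x1, x2, x1p, x2p float64) float64 {
	cosTheta := (x2 - x1) / (x2p - x1p)
	return math.Acos(cosTheta) * 180 / math.Pi // result in degrees
}

func main() {
	// hypothetical numbers: a boundary 100 px wide grows to about 141 px wide
	fmt.Printf("%.1f\n", rotationAngle(0, 100, -20.7, 120.7)) // ≈ 45.0
}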
There is a problem:
There is a rectangle whose lower-left corner is at (0, 0) and whose upper-right corner is at (width, height). Inside this rectangle is a point with certain coordinates. Around this point I need to draw a segment of a circle with a certain (constant) arc length, and calculate the radius and the angle of that circle segment.
Illustration:
Like this. As can be seen from the figure, if the segment has to be placed near a side of the screen (e.g. the bottom left), it is necessary to find a radius such that the arc of the required length still fits.
Any hints on which direction to look?
Thank you in advance.
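For reference, the radius and the angle of the segment are tied together by the basic arc-length relation (angle in radians):
s = r · θ, i.e. θ = s / r
where s is the constant arc length, r the radius and θ the angle of the segment, so choosing one of the two determines the other; the rectangle border then only limits which radii still fit around the given point.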
I have a set of rectangular tiles, each of them with different shapes on them. One tile could, for example, contain the texture of a circle, another a rectangle, or maybe even a polygon.
These shapes do not fill up the whole tile; instead, they sit somewhere on the texture. One tile could, for example, just contain a small rectangle in the top-right corner. The other parts of the tile are empty or transparent, i.e. those other pixels have an alpha value of 0.
Now I need to calculate the "center of gravity" (CoG) within each tile. I know that may not be the best term to describe it, but I don't know a better one. By CoG in this context I mean the spot on the tile that is the center point of the shapes, i.e. of those parts of the tile that are not transparent.
For example, if the tile has one small rectangle in the top-right corner, then the CoG as I mean it would be in the center of that rectangle. In this case the CoG would thus not be in the center of the tile, but rather somewhere in the top-right corner.
Important is the fact that the color of the shapes does not count. I am solely interested in the transparent vs. non-transparent pixels/areas of the tile.
Is there any "best practice" to calculate what I am looking for?
I'm trying to write a simple voxel raycaster as a learning exercise. This is purely CPU-based until I figure out how things work exactly -- for now, OpenGL is just (ab)used to blit the generated bitmap to the screen as often as possible.
Now I have gotten to the point where a perspective-projection camera can move through the world and I can render (mostly, minus some artifacts that need investigation) perspective-correct 3-dimensional views of the "world", which is basically empty but contains a voxel cube of the Stanford Bunny.
So I have a camera that I can move up and down, strafe left and right and "walk forward/backward" -- all axis-aligned so far, no camera rotations. Herein lies my problem.
Screenshots (1)-(3): raycasting voxels while the camera remains strictly axis-aligned.
Now I have for a few days been trying to get rotation to work. The basic logic and theory behind matrices and 3D rotations is, in theory, very clear to me. Yet I have only ever achieved a "2.5D rendering" when the camera rotates... fish-eyey, a bit like in Google Street View: even though I have a volumetric world representation, it seems -- no matter what I try -- as if I first created a rendering from the "front view" and then rotated that flat rendering according to the camera rotation. Needless to say, I'm by now aware that rotating rays is not particularly necessary, and error-prone.
Still, in my most recent setup, with the most simplified raycast ray-position-and-direction algorithm possible, my rotation still produces the same fish-eyey flat-render-rotated style looks:
camera "rotated to the right by 39 degrees" -- note how the blue-shaded left-hand side of the cube from screen #2 is not visible in this rotation, yet by now "it really should"!
Now of course I'm aware of this: in a simple axis-aligned, no-rotation setup like I had in the beginning, the ray simply traverses the positive z-direction in small steps, diverging to the left or right and top or bottom only depending on pixel position and projection matrix. As I "rotate the camera to the right or left" -- i.e. I rotate it around the Y-axis -- those very steps should simply be transformed by the proper rotation matrix, right? So for forward traversal the Z-step gets a bit smaller the more the camera rotates, offset by an "increase" in the X-step. Yet for the pixel-position-based horizontal+vertical divergence, increasing fractions of the x-step need to be "added" to the z-step. Somehow, none of the many matrices I experimented with, nor my experiments with matrix-less, hard-coded verbose sin/cos calculations, really get this part right.
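For reference, rotating a pure forward step (0, 0, s) by an angle θ around the Y axis gives:
(s · sin θ, 0, s · cos θ)
so the Z component indeed shrinks by a factor of cos θ while an X component of s · sin θ appears -- exactly the behaviour described above. The per-pixel horizontal/vertical offsets have to go through the same single rotation before being combined into the ray direction.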
Here's my basic per-ray pre-traversal algorithm -- syntax in Go, but take it as pseudocode:
fx and fy: pixel positions x and y
rayPos: vec3 for the ray starting position in world-space (calculated as below)
rayDir: vec3 for the xyz-steps to be added to rayPos in each step during ray traversal
rayStep: a temporary vec3
camPos: vec3 for the camera position in world space
camRad: vec3 for camera rotation in radians
pmat: typical perspective projection matrix
The algorithm / pseudocode:
// 1: rayPos is for now "this pixel, as a vector on the view plane in 3d, at The Origin"
rayPos.X, rayPos.Y, rayPos.Z = ((fx / width) - 0.5), ((fy / height) - 0.5), 0
// 2: rotate around Y axis depending on cam rotation. No prob since view plane still at Origin 0,0,0
rayPos.MultMat(num.NewDmat4RotationY(camRad.Y))
// 3: a temp vec3. planeDist is -0.15 or some such -- fov-based dist of view plane from eye and also the non-normalized, "in axis-aligned world" traversal step size "forward into the screen"
rayStep.X, rayStep.Y, rayStep.Z = 0, 0, planeDist
// 4: rotate this too -- 0,zstep should become some meaningful xzstep,xzstep
rayStep.MultMat(num.NewDmat4RotationY(camRad.Y))
// set up direction vector from still-origin-based-ray-position-off-rotated-view-plane plus rotated-zstep-vector
rayDir.X, rayDir.Y, rayDir.Z = -rayPos.X - rayStep.X, -rayPos.Y, rayPos.Z + rayStep.Z
// perspective projection
rayDir.Normalize()
rayDir.MultMat(pmat)
// before traversal, the ray starting position has to be transformed from origin-relative to campos-relative
rayPos.Add(camPos)
I'm skipping the traversal and sampling parts -- as per screens #1 through #3, those are "basically mostly correct" (though not pretty) -- when axis-aligned / unrotated.
It's a lot easier if you picture the system as a pinhole camera rather than anything else. Instead of shooting rays from the surface of a rectangle representing your image, shoot the rays from a point, through the rectangle that will be your image plane, into the scene. All the primary rays should have the same point of origin, only with slightly different directions. The directions are determined, using basic trig, by which pixel in the image plane you want them to go through.
To make the simplest example, let's imagine your point is at the camera, and your image plane is one unit along the z axis, and two units tall and wide. That way, the pixel at the upper-left corner wants to go from (0,0,0) through (-1, -1, 1). Normalize (-1, -1, 1) to get the direction. (You don't actually need to normalize the direction just to do ray intersection, but if you decide not to, remember that your directions are non-normalized before you try to compute the distance the ray has travelled or anything like that.) For every other pixel, compute the point on the plane it wants to go through the way you've already been doing, by dividing the size of the plane by the number of pixels, in each direction.
Then, and this is the most important thing, don't try to do a perspective projection. That's necessary for scan-conversion techniques, to map every vertex to a point on the screen, but in ray-tracing, your rays accomplish that just by spreading out from one point into space. The direction from your start point (camera position, the origin in this example), through your image plane, is exactly the direction you need to trace with. If you were to want an orthographic projection instead (and you almost never want this), you'd accomplish this by having the direction be the same for all the rays, and the starting positions vary across the image plane.
If you do that, you'll have a good starting point. Then you can try again to add camera rotation, either by rotating the image plane about the origin before you iterate over it to compute ray directions, or by rotating the ray directions directly. There's nothing wrong with rotating directions directly! When you bear in mind that a direction is just the position your ray goes through if it starts from the origin, it's easy to see that rotating the direction, and rotating the point it goes through, do exactly the same thing.
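A minimal sketch of that setup in Go (names such as fov and yaw are my assumptions, and the rotation is only around the Y axis, matching the question):

package main

import (
	"fmt"
	"math"
)

type vec3 struct{ X, Y, Z float64 }

func (v vec3) normalize() vec3 {
	l := math.Sqrt(v.X*v.X + v.Y*v.Y + v.Z*v.Z)
	return vec3{v.X / l, v.Y / l, v.Z / l}
}

// rotateY rotates a vector by angle a (radians) around the Y axis.
func rotateY(v vec3, a float64) vec3 {
	s, c := math.Sin(a), math.Cos(a)
	return vec3{c*v.X + s*v.Z, v.Y, -s*v.X + c*v.Z}
}

// primaryRay builds the primary ray through pixel (px, py) for a pinhole
// camera at camPos that is yawed by yaw radians around the Y axis. The image
// plane sits one unit in front of the camera and its half-height is
// tan(fov/2); no perspective projection matrix is involved.
func primaryRay(px, py, width, height int, fov, yaw float64, camPos vec3) (origin, dir vec3) {
	aspect := float64(width) / float64(height)
	halfH := math.Tan(fov / 2)
	halfW := halfH * aspect

	// the point on the (unrotated) image plane this pixel looks through
	x := ((float64(px)+0.5)/float64(width)*2 - 1) * halfW
	y := (1 - (float64(py)+0.5)/float64(height)*2) * halfH

	// every ray starts at the camera; only its direction gets rotated
	return camPos, rotateY(vec3{x, y, 1}.normalize(), yaw)
}

func main() {
	origin, dir := primaryRay(0, 0, 640, 480, math.Pi/3, math.Pi/4, vec3{0, 0, -5})
	fmt.Println(origin, dir) // 60° field of view, camera yawed 45° to the right
}

Traversal then starts at origin and steps along dir, exactly as in the axis-aligned case.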