libgdx: Rotate a texture when drawing it with SpriteBatch

I'm trying to rotate textures when I draw them. I figured it would make more sense to do this than to rotate the images 90 degrees in paint.net and save them in different files. I looked through the API documentation for the SpriteBatch drawing arguments, but I just don't understand them. There are a bunch of arguments such as srcX, srcY, originX and so on. I would also like to know how to do the same for texture regions. Here's a link to the API documentation page: http://libgdx.badlogicgames.com/nightlies/docs/api/com/badlogic/gdx/graphics/g2d/SpriteBatch.html
Thank you!

These are again from the documentation, but copied here for ease of use and so I can explain a little better.
x - the x-coordinate in screen space
y - the y-coordinate in screen space
These two values represent the location to draw your texture at in screen space (game space). Pretty self-explanatory.
originX - the x-coordinate of the scaling and rotation origin relative to the screen space coordinates
originY - the y-coordinate of the scaling and rotation origin relative to the screen space coordinates
These two values represent the location that rotations (and scaling) happen around, with respect to the screen-space position. So for instance, if you give the value 0, 0 here, the rotation and scaling will happen around one of the corners of your texture (the bottom left, I believe), whereas if you give the center (width/2, height/2), the rotation and scaling will happen around the center of your texture (this is probably what you want for any "normal" rotations).
width - the width in pixels
height - the height in pixels
The dimensions for drawing your texture on screen.
scaleX - the scale of the rectangle around originX/originY in x
scaleY - the scale of the rectangle around originX/originY in y
These values represent the scale of your rectangle: values between 0 and 1 will shrink it, and values greater than 1 will expand it. Note that scaling happens around the origin you gave earlier, which means that if that origin is not the center, the scaled image may appear displaced rather than simply resized in place.
rotation - the angle of counter clockwise rotation of the rectangle around originX/originY
The angle to rotate the image by, in degrees. Again, this rotation is around the origin given earlier, so it may not appear "correct" if the origin is not the center of the image.
srcX - the x-coordinate in texel space
srcY - the y-coordinate in texel space
These two values are the starting location, in pixels, of the region of the image file (.png, .jpg, whatever) that you wish to use. Basically, the start of the region you want within the image.
srcWidth - the source width in texels
srcHeight - the source height in texels
Similarly, these two values are the width and height of the region of the image file you are using, in pixels.
flipX - whether to flip the sprite horizontally
flipY - whether to flip the sprite vertically
Finally, these two booleans are used to flip the image either horizontally or vertically.
Now you may notice that the similar method for drawing TextureRegions has no srcX, srcY, srcWidth, or srcHeight. This is because those are the values you give to a texture region when you create it from a texture.
Essentially what that means is that the command
//with TextureRegions (batch is a SpriteBatch instance)
batch.draw(textureRegion, x, y, originX, originY, width, height, scaleX, scaleY, rotation);
is equivalent to
//with Textures from TextureRegions
batch.draw(textureRegion.getTexture(), x, y, originX, originY, width, height, scaleX, scaleY, rotation, textureRegion.getRegionX(), textureRegion.getRegionY(), textureRegion.getRegionWidth(), textureRegion.getRegionHeight(), false, false);
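For a concrete call, here is a minimal sketch (batch, texture, x and y are hypothetical names) that draws a 64x64 texture rotated 45 degrees counter-clockwise around its own center, using the Texture overload described above:
//minimal sketch: rotate a 64x64 texture 45 degrees around its center
batch.draw(texture,
        x, y,             //position in screen space
        32, 32,           //originX/originY: the center (width/2, height/2), so the image spins in place
        64, 64,           //width/height to draw on screen
        1f, 1f,           //scaleX/scaleY: no scaling
        45f,              //rotation in degrees, counter-clockwise
        0, 0,             //srcX/srcY: start at the corner of the image file
        64, 64,           //srcWidth/srcHeight: use the whole 64x64 image
        false, false);    //flipX/flipY: no flipping
As with any SpriteBatch call, this goes between batch.begin() and batch.end().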

Related

Monogame - sprite not rotating around origin point

I'm trying to rotate a sprite around its center with the following code:
Vector2 origin = new Vector2(position.Width / 2, position.Height / 2);
s.Draw(position, origin, angle, Color.White);
spriteBatch.Draw(texture, position, sourceRectangle, color, rotation, origin, SpriteEffects.None, 0);
Note: Since I'm drawing from a sprite sheet, the source rectangle is being calculated.
The original size of my sprite is 15x32. If I use this size, the rotation looks nearly correct, but it's still a little bit shifted.
However, when I resize the width and height to 75x128, the sprite is completely displaced.
Is there a way to always place the sprite correct, when resizing it? And why is the sprite even displaced when drawing it in the original size?
By the way, the green box is the origin point with the size of the sprite.
Thank you very much!
OK, I figured it out myself!
When creating the origin point, I used the new width and height (75, 128), but one has to use the width and height of the original sprite, in this case (15, 32).
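In other words, something like this (a minimal sketch; sourceRectangle, position, texture, rotation and spriteBatch are the names from the question):
// origin is measured in source (texture) space, so use the original 15x32 sprite size:
Vector2 origin = new Vector2(sourceRectangle.Width / 2f, sourceRectangle.Height / 2f);
// the destination rectangle (here 75x128) can then be resized freely without displacing the rotation:
spriteBatch.Draw(texture, position, sourceRectangle, Color.White, rotation, origin, SpriteEffects.None, 0f);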

Webots camera default parameters like pixel size and focus

I am using two cameras, without lenses or any other special settings, in Webots to measure the position of an object. To do the localization, I need to know the focal length, which is the distance from the camera center to the center of the image plane, namely f. I see the focus parameter in the camera node, but when I leave it NULL (the default), the image still renders normally, so I take it this parameter is unrelated to f. In addition, I need to know the width and height of a pixel in the image, namely dx and dy respectively, but I have no idea how to get this information.
This is the calibration model I used (figure omitted), where c denotes the camera frame and w the world frame. I need to calculate (x_w, y_w, z_w) from (u, v). For an ideal camera, gamma is 0 and (u_0, v_0) is just half the resolution, so my problem comes down to f_x and f_y.
The first important thing to know is that in Webots pixels are square, so dx and dy are equal.
Then, in the Camera node, you will find a 'fieldOfView' field which gives you the horizontal field of view; using the resolution of the camera, you can then compute the vertical field of view too:
verticalFieldOfView = 2 * atan(tan(fieldOfView * 0.5) / (resolutionX / resolutionY))
Finally, you can also get the near projection plane from the 'near' field of the Camera node.
Note also that Webots cameras are regular OpenGL cameras; you can therefore find more information about the OpenGL projection matrix here, for example: http://www.songho.ca/opengl/gl_projectionmatrix.html
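Putting this together, a minimal sketch of recovering f_x and f_y in a Python controller (the device name 'camera' is an assumption, and the pinhole relation f_x = (resolutionX / 2) / tan(fieldOfView / 2) is the standard camera model rather than something Webots exposes directly):
from math import atan, tan
from controller import Robot

robot = Robot()
camera = robot.getDevice('camera')  # device name is an assumption
camera.enable(int(robot.getBasicTimeStep()))

w = camera.getWidth()
h = camera.getHeight()
h_fov = camera.getFov()  # horizontal field of view, in radians
v_fov = 2 * atan(tan(h_fov * 0.5) / (w / h))  # vertical field of view, from the formula above

fx = (w * 0.5) / tan(h_fov * 0.5)  # focal length expressed in pixels along x
fy = (h * 0.5) / tan(v_fov * 0.5)  # works out equal to fx, since Webots pixels are square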

Three.js Image Pixel coordinate to World Coordinate Mapping

I'm creating a 3D object in Three.js with 6 faces. Each face has a mesh which uses a THREE.PlaneGeometry (width and height are both 256). On each mesh I'm using a 256 by 256 JPEG picture as the texture. I'm trying to find the world coordinate of a pixel coordinate (for example, 200, 250) on the Object3D's PlaneGeometry, corresponding to where that picture was used as the texture.
Object hierarchy:
Object3D --> face (Object3D, 6 faces in total) --> each face has a mesh (PlaneGeometry) and uses a JPEG file as its texture.
Picture1 pixel coordinate --> used to create the texture for Plane1 --> world coordinate corresponding to that pixel coordinate.
Can someone please help me?
Additional information:
Thanks for the answer. I'm trying to compare 2 results.
Method 1: One yaw/pitch is obtained by clicking on a specific point in the 3D object (e.g., the center of a particular car headlight on the front face) with the mouse and getting the point of intersection with the front face using raycasting.
Method 2: The other yaw/pitch is obtained by taking the pixel coordinate of the same point (the center of that headlight) and calculating the world-space coordinate for that pixel. Please note that the pixel coordinate is taken from the JPEG file that was used as the texture for the PlaneGeometry of the mesh (which is a child of the front face).
Do you think the above comparison approach is supposed to produce the same results, assuming all other parameters are identical between the 2 approaches?
Well, assuming your planes are PlaneGeometry(1, 1), then the local X/Y coordinates for a given pixel are pixelX / 256 and pixelY / 256, and Z is 0.5.
So something like:
const localPoint = new THREE.Vector3(px / 256, py / 256, 0.5);
const worldPoint = thePlaneObject.localToWorld(localPoint); // transforms the vector into world space in place
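One caveat, flagged here as an assumption about the setup: PlaneGeometry is centered on its mesh's origin, and image pixel coordinates start at the top-left corner, so you may need to recenter and flip the Y axis. A sketch along those lines (thePlaneObject and the 256x256 image size come from the question; placing the plane surface at z = 0 is an assumption):
// pixel (px, py) in a 256x256 image, image origin at the top-left corner
const u = px / 256;                                         // 0..1 across the image
const v = py / 256;                                         // 0..1 down the image
const localPoint = new THREE.Vector3(u - 0.5, 0.5 - v, 0);  // recenter and flip Y for a centered PlaneGeometry(1, 1)
thePlaneObject.updateMatrixWorld();                         // make sure the world matrix is up to date
const worldPoint = thePlaneObject.localToWorld(localPoint); // localToWorld transforms the vector in place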

Update plane texture offset from movement on a sphere

I'm working on a driving simulation in Three.js using height map data from the planet Venus.
GitHub repo here: https://github.com/hypothete/venus-walk
Here's how the simulation works so far:
In a hidden scene, a camera called the globeCamera moves at a fixed height over a sphere textured with the Venus height map. You can see this happening in the lower left viewport in my picture. The globeCamera renders its view to a WebGLRenderTarget to be used as a local height map. The result is in the second viewport in the middle left.
In the visible scene, a plane mesh called the terrainMesh has its vertices displaced up and down in correspondence with the values from the local height map. This gives the illusion that a vehicle placed in the center of the plane is moving across a surface when actually we're just updating the plane's vertices from the movement of the globeCamera.
Since I know the rotation of the globeCamera, I can pass that value to my fragment shader to rotate the terrainMesh's rock texture with the height map.
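For reference, the UV rotation described above might look like this in the fragment shader (a sketch only: rockTexture, globeRotation and vUv are hypothetical names, and rotating around the center of UV space is an assumption):
uniform sampler2D rockTexture;  // the rock texture
uniform float globeRotation;    // rotation taken from the globeCamera, in radians
varying vec2 vUv;

void main() {
    vec2 p = vUv - 0.5;         // rotate around the center of the texture
    float c = cos(globeRotation);
    float s = sin(globeRotation);
    vec2 rotated = vec2(c * p.x - s * p.y, s * p.x + c * p.y) + 0.5;
    gl_FragColor = texture2D(rockTexture, rotated);
}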
How can I offset the rock texture's position so that texture units translate with the terrain as well? I've tried tracking the globeCamera's offset as a 2D vector and adding that to the rotated UV in the fragment shader, but my results were inconsistent. Thanks for your help.

Pixels in Direct2D

The dark gray lines are supposed to be black and 1 pixel wide:
pRT->DrawLine(Point2F(100, 120), Point2F(300, 120), blackbrush, 1);
The light gray lines are supposed to be black and 0.5 pixel wide:
pRT->DrawLine(Point2F(120, 130), Point2F(280, 130), blackbrush, 0.5);
Instead, they are both 2 pixels wide. If I ask for 2 pixels wide, the line is black, but naturally 2 pixels wide.
The render target has the same size as the client area of the window. I would like pixel accuracy like in GDI, one coordinate = one pixel and pure colors...
Thanks.
Direct2D is rendering correctly. When you give it a pixel coordinate such as (100, 120), that refers to the top-left corner of the pixel element that spans from pixel coordinates (100, 120) to (101, 121) (top/left are inclusive, right/bottom are exclusive). Since it's a straight horizontal line, you are effectively getting a filled rectangle from (99.5, 119.5) - (300.5, 120.5). Because the edges of this rectangle spill into adjacent pixels, you get "2 pixel width" lines at the "wrong" brightness. You must think in terms of pixel coordinates (points with no area) and pixel elements (physical points on the screen, each with an area of 1x1).
If you want to draw a straight line that covers the pixels (100, 120) to (300, 120), you should either use SemMike's suggestion of aliased rendering (which is great for straight lines!), or you can use half-pixel offsets (because strokeWidth = 1; for other stroke widths, adjust by strokeWidth/2). Drawing from (100.5, 120.5) - (299.5, 120.5) with a stroke width of 1.0 will get you what you're looking for. The stroke extends around the pixel coordinates you specify, so you will get the "filled rectangle" over the pixel elements (100, 120) - (300, 121). And again, that's an exclusive range, so y=121 isn't actually filled, and neither is x=300.
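For example, a sketch of that half-pixel offset (pRT and blackbrush are the objects from the question; the 0.5 offsets assume strokeWidth = 1):
// Crisp 1-pixel black line over the pixel elements (100, 120) through (299, 120):
// offset both endpoints by strokeWidth / 2 so the stroke lands exactly on the pixel row.
pRT->DrawLine(Point2F(100.5f, 120.5f), Point2F(299.5f, 120.5f), blackbrush, 1.0f);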
If you're wondering why this doesn't happen with something like GDI, it's because it doesn't do antialiased rendering, so everything always snaps to pixel elements. If you're wondering why this doesn't happen with WPF while using Shapes, it's because it uses layout rounding (UseLayoutRounding) and pixel snapping. Direct2D does not provide those services because it's a relatively low-level API.
You can play around with pRenderTarget->DrawLine(Point2F(100 - 0.5, 120 - 0.5), Point2F(300 - 0.5, 120 - 0.5), blackbrush, 1), but it rapidly becomes tricky. The simplest approach is:
pRenderTarget->SetAntialiasMode(D2D1_ANTIALIAS_MODE_ALIASED);
Hope it helps somebody...
