Can you please explain the difference between a texture and a sprite? When we zoom in on a sprite, it appears blurry because it's basically an image. Is it the same for a texture?
I read this comment about an image online:
The background layers are textures and not sprites.
Can someone explain?
Sprites and textures are both images.
A sprite is an image that can be used as a 2D object: it has coordinates (x, y), and you can move, destroy, or create it during the game.
A texture is also an image, but one that is used to change the appearance of an object. E.g. you can set a texture on the faces of a cube, on a layer (like the background), or even on a sprite. But since textures are not objects themselves, you can't move them during the game.
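To make the distinction concrete, here is a minimal THREE.js-style sketch; the existing scene and the image file name are assumptions for illustration:

const texture = new THREE.TextureLoader().load('bird.png'); // a texture: just image data

// Used by a sprite: an object with its own position that you can move every frame.
const sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
sprite.position.set(0, 1, 0);
scene.add(sprite);

// The same texture changing the appearance of a cube's faces:
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ map: texture })
);
scene.add(cube);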
A sprite is an image that moves relative to static images (for example, the background). Sprites are usually planes (rectangles) with a texture on them. In 3D graphics, sprites are used for tricks such as billboards and impostors. In 2D games, sprites are used for moving objects and also for backgrounds.
A texture is a raster image that is projected onto a polygonal object. Textures are worth using whenever modelling a detail with polygons would be too expensive (for example, bullet holes).
Related
I have been working on programming a game where everything is rendered in 3D, but the bullets are 2D sprites. This poses a problem: I have to rotate the bullet sprite by rotating the material, which turns every bullet possessing that material rather than the individual sprite I want to turn. It is also kind of inefficient to create a new sprite clone for every bullet. Is there a better way to do this? Thanks in advance.
Rotate the sprite itself instead of the texture.
Edit:
As the OP mentioned, the SpriteMaterial controls the sprite's rotation.y, so setting it manually does nothing.
So instead of using the Sprite type, you could use a regular PlaneGeometry mesh with a MeshBasicMaterial or similar, and update the matrices yourself to both keep the sprite facing the camera and rotate it toward its trajectory, as sketched below.
Then at least you can share the material among all instances.
Then the performance bottleneck becomes the number of draw calls (one per sprite).
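A hedged sketch of that setup, assuming a bulletTexture has already been loaded; the geometry and material are created once and shared by every bullet:

const bulletMaterial = new THREE.MeshBasicMaterial({ map: bulletTexture, transparent: true });
const bulletGeometry = new THREE.PlaneGeometry(1, 1);

function makeBullet() {
  return new THREE.Mesh(bulletGeometry, bulletMaterial); // both shared across bullets
}

// Per bullet, per frame: face the camera, then roll around the view axis.
function updateBullet(bullet, camera, trajectoryAngle) {
  bullet.quaternion.copy(camera.quaternion); // billboard: copy the camera's orientation
  bullet.rotateZ(trajectoryAngle);           // rotate toward the bullet's trajectory
}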
You can improve on that by using a single BufferGeometry and computing the 4 screen-space vertices for each sprite, each frame. This moves the bottleneck away from draw calls; instead you are limited by the speed at which you can transform vertices in JavaScript, which is slow but not the end of the world. This is also how many THREE.js particle systems are implemented.
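A sketch of that batching approach under stated assumptions: a recent THREE version (setAttribute), a sharedMaterial already created, and a centers array of Vector3 sprite positions.

const N = 1000; // number of sprites in the batch
const positions = new Float32Array(N * 4 * 3); // 4 verts per sprite, xyz each
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

const index = [];
for (let i = 0; i < N; i++) {
  const o = i * 4;
  index.push(o, o + 1, o + 2, o, o + 2, o + 3); // two triangles per sprite
}
geometry.setIndex(index);

const mesh = new THREE.Mesh(geometry, sharedMaterial);
mesh.frustumCulled = false; // bounds change every frame, so skip culling
scene.add(mesh);

const right = new THREE.Vector3();
const up = new THREE.Vector3();

function updateSprites(camera, centers, size) {
  camera.updateMatrixWorld();
  // The camera's world-space right/up axes are the first two columns of its world matrix.
  right.setFromMatrixColumn(camera.matrixWorld, 0).multiplyScalar(size / 2);
  up.setFromMatrixColumn(camera.matrixWorld, 1).multiplyScalar(size / 2);
  for (let i = 0; i < N; i++) {
    const c = centers[i];
    // Four corners of the billboard: c - right - up, c + right - up, etc.
    setVert(i * 4 + 0, c.x - right.x - up.x, c.y - right.y - up.y, c.z - right.z - up.z);
    setVert(i * 4 + 1, c.x + right.x - up.x, c.y + right.y - up.y, c.z + right.z - up.z);
    setVert(i * 4 + 2, c.x + right.x + up.x, c.y + right.y + up.y, c.z + right.z + up.z);
    setVert(i * 4 + 3, c.x - right.x + up.x, c.y - right.y + up.y, c.z - right.z + up.z);
  }
  geometry.attributes.position.needsUpdate = true; // re-upload positions this frame
}

function setVert(v, x, y, z) {
  positions[v * 3] = x;
  positions[v * 3 + 1] = y;
  positions[v * 3 + 2] = z;
}

(A per-corner UV attribute, set once, would be added the same way as the position attribute.)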
The next step beyond that is to use a custom vertex shader to do the heavy vertex computation. You still update the BufferGeometry each frame, but instead of transforming vertices you just write the sprite's position into each of its 4 vertices, and let the vertex shader figure out which of the 4 vertices it is transforming (possibly based on the UV coordinate, or on a value stored in one of the vertex color channels, .r for instance) and which sprite to render from your sprite atlas (a single texture/canvas with all your sprites laid out in a grid), encoded in the .g of the vertex color.
The next step beyond that is to not update the BufferGeometry every frame, but to store both the position and the velocity of each sprite in the vertex data, and only pass a time offset uniform into the vertex shader; the vertex shader can then handle integrating the sprite position over a longer time period. This only works for sprites with deterministic behavior, or behavior that can be derived from a texture data source such as a noise or warping texture: things like smoke, explosions, etc.
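A hedged GLSL sketch of both steps for a THREE.ShaderMaterial; the attribute names, the fixed half-size of 0.5, the uniform names, and spriteTexture/clock are illustrative assumptions, and the geometry is assumed to carry matching velocity and corner attributes:

const vertexShader = `
  uniform float uTime;      // time offset, updated once per frame
  attribute vec3 velocity;  // per-vertex copy of the sprite's velocity
  attribute vec2 corner;    // (-1,-1), (1,-1), (1,1) or (-1,1): which quad corner this is
  varying vec2 vUv;
  void main() {
    vec3 center = position + velocity * uTime;     // integrate motion in the shader
    vec4 viewPos = modelViewMatrix * vec4(center, 1.0);
    viewPos.xy += corner * 0.5;                    // expand the quad in view space
    vUv = corner * 0.5 + 0.5;                      // an atlas-cell lookup would go here
    gl_Position = projectionMatrix * viewPos;
  }
`;

const fragmentShader = `
  uniform sampler2D uMap;
  varying vec2 vUv;
  void main() { gl_FragColor = texture2D(uMap, vUv); }
`;

const material = new THREE.ShaderMaterial({
  uniforms: { uTime: { value: 0 }, uMap: { value: spriteTexture } },
  vertexShader,
  fragmentShader,
  transparent: true,
});

// In the render loop:
material.uniforms.uTime.value = clock.getElapsedTime();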
You can extend these techniques to draw gigantic scrolling tilemaps. I've used them to make multi-layer scrolling/zoomable hexmaps that were 2048 hexes square (a pretty huge map, roughly 4M triangles), with multiple layers of sprites on top of that, at 60 Hz.
Here is the original Stemkoski particle system for reference:
http://stemkoski.github.io/Three.js/Particle-Engine.html
and:
https://stemkoski.github.io/Three.js/ParticleSystem-Dynamic.html
Hello, I am new to ThreeJS and texture mapping.
Let's say I have a 3D plane of size 1000x1000x1. When I apply a texture to it, it will be repeated or scaled so that it at least fills the whole plane.
What I am trying to achieve is to change the scaling of the picture on the plane at runtime. I want the image to get smaller and stop filling the whole plane.
I know there is a way to map each face to a part of a picture, but is it also possible to map it to negative coordinates in the picture, so that it will be transparent?
My question is:
I UV-mapped a model in Blender and imported it, with the UV coords, into my ThreeJS code. Now I need to scale the texture down, as described above. Do I have to remap the UV coords, or do I have to manipulate the image and add a transparent edge?
Further, will I be able to move the image on the plane in the same way?
I already achieved this kind of usage in Java 3D by manipulating BufferedImages and drawing them onto transparent ones. I am not sure that will be possible in JavaScript, so I want to know whether it can be done with texture mapping.
Thank you for your time and your suggestions!
This can be done by mapping the 3D plane to a canvas on which the image is drawn (fabric.js can be used for the canvas drawing). In short, set the canvas as the texture of the 3D model:
yourModel.material.map = new THREE.CanvasTexture(document.getElementById("yourCanvas"));
Hope it helps :)
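For reference, a slightly fuller hedged sketch of the same idea; the canvas id, yourModel, and spriteImage are assumptions. The key detail is flagging the texture for re-upload whenever the canvas changes:

const canvas = document.getElementById('yourCanvas');
const ctx = canvas.getContext('2d');
const canvasTexture = new THREE.CanvasTexture(canvas);
yourModel.material.map = canvasTexture;
yourModel.material.transparent = true; // so cleared canvas areas render as see-through

function redraw(x, y, w, h) {
  ctx.clearRect(0, 0, canvas.width, canvas.height); // cleared pixels stay transparent
  ctx.drawImage(spriteImage, x, y, w, h);           // move/scale the image on the canvas
  canvasTexture.needsUpdate = true;                 // re-upload to the GPU on next render
}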
Yes. In THREE there are some controls on the texture object: texture.repeat and texture.offset. They are both Vector2()s.
To repeat the texture twice you can do texture.repeat.set(2,2);
Now if you just want to scale but NOT repeat, there is also the "wrapping mode" for the texture.
The wrap modes are texture.wrapS (U axis) and texture.wrapT (V axis), and they can be set like this:
texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
This will make the edge pixels of the texture extend off to infinity when sampling, so you can position a single small texture anywhere on the surface of your UV-mapped object.
https://threejs.org/docs/#api/textures/Texture
Between those options (plus texture.rotation) you can position/repeat a texture pretty flexibly.
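For example, a hedged sketch that shrinks an already-loaded texture to a quarter of the surface and centers it, without tiling:

texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
texture.repeat.set(4, 4);        // repeat > 1 makes the image itself appear smaller
texture.offset.set(-1.5, -1.5);  // -1.5 centers the quarter-size image on the surface
texture.rotation = Math.PI / 4;  // optional, rotates around texture.center
texture.needsUpdate = true;      // needed because the wrap mode changed

Note that with ClampToEdgeWrapping the border pixels smear across the rest of the surface, so give the image a one-pixel transparent border if you want everything outside it to be see-through.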
If you need something even more complex, like warping the texture or changing its colors, you may want to change the UVs in your modeller, or draw your texture image into a canvas, modify the canvas, and use the canvas as your texture image, as described in ArUn's answer. Then you can modify it at runtime as well.
I'm using sprites for an animated menu in my game.
I tried two methods:
Image Renderer: replacing the image per frame with the sprite slice in the Animation window
Sprite Renderer: the same method
I'm playing the sprite animation with no loop, then rotating the transform on the z-axis.
The problem is that with the Image, the Screen Space Overlay works well, but rotating the transform makes the sprite look glitchy and rough. With the Sprite Renderer, however, the Screen Space must be set to Camera, and the sprites get placed between other assets in the world.
Example: http://postimg.org/image/436q9jvax/
Is there a way to either fix the roughness of the rotation using the Image, or force the Camera Screen Space on top? My only concern with the second option is responsiveness across multiple devices.
The easiest fix was to apply sorting layers to the canvas with the sprite renderers on it, to keep it on top.
I did, however, incorporate #beuzel's idea about separate cameras in the end, and opted for 2D sprites with physics instead of a 3D-rendered animation on canvas.
http://postimg.org/image/6qmtiirb9/
Thanks for making the good sample. A fix for the menu intersecting the world is to use a separate camera for the GUI layer. The rough animation might be a pixel-perfect setting in the sprite rendering (just guessing).
I don't have enough reputation points to write this as a comment.
If I render a big 1024x1024 texture where most of the texture is transparent (only about 40% of it has data), is that slower than rendering a texture with a smaller transparent area?
I ask because, when rendering an animation, it is easier to set the pivot of the sprite in the image itself; then, when rendering, I only need to draw each sprite at the center of my object's position.
Trimming the transparent area is more performant, because your image will be smaller. But I doubt it will make a noticeable difference. So the way you are doing it right now is good; that's how I do it.
I am trying to make a 3D car race on iPhone using OpenGL ES 1.x.
I do not know how to draw the background sky in my scene. I tried using planes for the background, but where should I place such a plane? If I place it outside the whole track, the frustum is not big enough to show it in the scene.
Any suggestions will be of great help.
You can make a small sky sphere or box, as suggested by Davido and turbovonce's link, which is centered on the viewer and fits into the frustum. You draw this first, without writing into the depth buffer. Then you draw the other stuff; since the skybox has not written to the depth buffer, it simply gets overwritten, except in the parts where no scene objects are rendered, which are exactly the parts of the image where the sky should be visible.
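A hedged sketch of that draw order in WebGL-style JavaScript; the two draw functions are placeholders, and in OpenGL ES 1.x the equivalent calls are glDepthMask(GL_FALSE) and glDepthMask(GL_TRUE):

function renderFrame(gl) {
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  gl.depthMask(false);        // sky first, writing color but no depth
  drawSkyCenteredOnViewer();  // small box/sphere that always fits inside the frustum
  gl.depthMask(true);

  drawTrackAndCars();         // the scene overwrites the sky wherever it covers it
}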
You want a sky dome. Take a look at this website; it contains tons of references that should help you.
http://www.vterrain.org/Atmosphere/
Create a sphere in a 3D modeling app such as Maya or Blender and map a sky texture to it. Export the model, then load the model and its texture into your app and place it in the scene. You should now have a background sky rendering in your game.