getImageData() to particle texture in three.js

I'm trying to do this: call getImageData() on different positions of the same canvas element and make each of the image-data chunks the texture of an individual particle in a particle system. I don't want the whole system to share one texture; rather, each particle's texture has to correspond to a chunk of the image on the canvas. Once I have my imgData[i] array filled with the information, how can I assign each of its elements to the texture value of each particle? (Remember, I want each particle to have a different texture that corresponds to an element in the imgData[i] array.)

ParticleSystem only supports a single texture, which every particle in the system shares.
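If each particle really needs its own chunk of the canvas, one workaround is to drop ParticleSystem and use one THREE.Sprite per chunk, each with its own CanvasTexture. A sketch, where sourceCanvas, chunkSize, and scene are placeholders:
// Assumes sourceCanvas already contains the drawn image.
const chunkSize = 64;
const ctx = sourceCanvas.getContext("2d");
for (let y = 0; y < sourceCanvas.height; y += chunkSize) {
  for (let x = 0; x < sourceCanvas.width; x += chunkSize) {
    // Copy one chunk of the source canvas into its own small canvas.
    const chunkCanvas = document.createElement("canvas");
    chunkCanvas.width = chunkCanvas.height = chunkSize;
    chunkCanvas.getContext("2d")
      .putImageData(ctx.getImageData(x, y, chunkSize, chunkSize), 0, 0);
    // Every sprite gets its own texture, so each "particle" can differ.
    const texture = new THREE.CanvasTexture(chunkCanvas);
    const sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
    sprite.position.set(x, -y, 0); // placement is up to you
    scene.add(sprite);
  }
}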

Related

Where to store cached data for access within vertex shader?

Is there a reasonable place to store computed data for a vertex shader that is computed once, and then used many times (i.e. for each vertex)?
I'm writing a shader that follows a Catmull-Rom curve, and I need to pre-compute (just once!) a series of evenly spaced positions along the curve so that I can plot text glyphs correctly. Once computed, I intend to use the evenly spaced array of positions as a fast lookup.
It's possible there could be hundreds of vec3 or vec4 points in this cache, depending on how finely sliced into arclengths the spline is.
Would such data best be placed in a uniform? A texture? Something else?
This question is pretty broad, but if you're thinking of performing calculations on the GPU, then you're looking for a THREE.WebGLRenderTarget. Instead of rendering a shader to the <canvas>, you can render it to a RenderTarget, which stores the result in a texture that you can attach to other materials later.
Take a look at this example:
They perform position calculations in a fragment shader
These positions get stored in a RenderTarget's texture
The texture is then passed to a plane to displace the vertex.y positions.
Here's some pseudocode on how it could be achieved:
// Create renderTarget
const renderTarget = new THREE.WebGLRenderTarget(width, height);
// Perform GPU calculations, store result in renderTarget.texture
renderer.setRenderTarget(renderTarget);
renderer.render(calculationScene, calculationCamera);
// Resulting texture can now be assigned to materials
object.material.map = renderTarget.texture;
// Now we render to canvas as usual
renderer.setRenderTarget(null);
renderer.render(scene, camera);
This texture data could be used in lieu of a vec3 or vec4 if you use RGB or RGBA channels respectively.
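To read the cache back in the vertex shader, sample the render target's texture with texture2D. A sketch, where curveLookup and curveU are illustrative names and a FloatType render target is assumed so positions aren't clamped to 0..1:
// Hypothetical material that reads precomputed curve positions
// from the render target's texture inside the vertex shader.
const material = new THREE.ShaderMaterial({
  uniforms: {
    curveLookup: { value: renderTarget.texture }
  },
  vertexShader: `
    uniform sampler2D curveLookup;
    attribute float curveU; // arclength parameter in 0..1, set per vertex
    void main() {
      // Each texel's RGB channels hold one precomputed vec3 position.
      vec3 curvePos = texture2D(curveLookup, vec2(curveU, 0.5)).rgb;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(curvePos, 1.0);
    }
  `,
  fragmentShader: `
    void main() { gl_FragColor = vec4(1.0); }
  `
});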

ThreeJS - Scale texture's size down (no repeat - using UV-Coords)

Hello, I am new to ThreeJS and texture mapping.
Let's say I have a 3D plane with a size of (1000x1000x1). When I apply a texture to it, it will be repeated, or it will be scaled so that it at least fills the full plane.
What I am trying to achieve is to change the scaling of the picture on the plane at runtime. I want the image to get smaller and stop filling the full plane.
I know there is a way to map each face to a part of a picture, but is it also possible to map it to coordinates outside the picture (negative numbers), so that the rest is transparent?
My question is:
I UV-mapped a model in Blender and imported it with the UV coords into my ThreeJS code. Now I need to scale the texture down, as described before. Do I have to remap the UV coords, or do I have to manipulate the image and add a transparent edge?
Furthermore, will I be able to move the image on the plane in the same way?
I have already achieved this kind of usage in Java 3D by manipulating BufferedImages and drawing them onto transparent ones. I am not sure this will be possible using JavaScript, so I want to know whether it can be done with texture mapping.
Thank you for your time and your suggestions!
This can be done by mapping the 3D plane to a canvas where the image is drawn (fabric.js can be used for the canvas drawing). In short, set the canvas as the texture of the 3D model:
yourmodel.material.map = new THREE.CanvasTexture(document.getElementById("yourCanvas"));
Note that material.map expects a texture rather than a raw canvas element, and the texture's needsUpdate flag must be set to true after each redraw of the canvas. Hope it helps :)
Yes. In THREE there are some controls on the texture object:
texture.repeat and texture.offset; they are both Vector2()s.
To repeat the texture twice you can do texture.repeat.set(2,2);
Now if you just want to scale but NOT repeat, there is also the "wrapping mode" for the texture.
texture.wrapS (U axis) and texture.wrapT (V axis) can be set to:
texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
This will make the edge pixels of the texture extend off to infinity when sampling, so you can position a single small texture, anywhere on the surface of your uv mapped object.
https://threejs.org/docs/#api/textures/Texture
Between those two options (including texture.rotation) you can position/repeat a texture pretty flexibly.
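For example, to show the image at half size, centered on the plane, without tiling (a sketch; the UV transform is uv * repeat + offset):
const texture = new THREE.TextureLoader().load("image.png");
texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
// repeat > 1 makes the image appear smaller; the offset re-centers it.
texture.repeat.set(2, 2);
texture.offset.set(-0.5, -0.5);
material.map = texture;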
If you need something even more complex, like warping the texture or changing its colors, you may want to change the UVs in your modeller, or draw your texture image into a canvas, modify the canvas, and use the canvas as your texture image, as described in ArUn's answer. Then you can modify it at runtime as well.

Difference between sprite and texture?

Can you please explain the difference between a texture and a sprite? When we zoom in on a sprite, it appears blurry because it's basically an image. Is it the same for a texture?
I read this comment on the image below online:
The background layers are textures and not sprites.
Can someone explain?
Sprites and textures are both images.
A sprite is an image that can be used as a 2D object: it has coordinates (x, y), and you can move, destroy, or create it during the game.
A texture is also an image, but one used to change the appearance of an object. E.g. you can set a texture for the faces of a cube, for a layer (like the background), or even for a sprite. But since textures are not objects, you can't move them during the game.
A sprite is an image that moves relative to static images (for example, the background). Sprites are usually planes (rectangles) with a texture on them. In 3D graphics, sprites are used for tricks such as billboards and impostors. In 2D games, sprites stand in for moving objects and are also used as backgrounds.
A texture is a raster image that is projected onto a polygonal object. Textures are worth using whenever modeling a detail with polygons would be too expensive for the object (for example, bullet holes).
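In three.js terms, for instance, a sprite is just a small textured plane that always faces the camera (illustrative):
// A sprite: a textured rectangle that always faces the camera.
const map = new THREE.TextureLoader().load("smoke.png");
const sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: map }));
sprite.position.set(0, 1, 0); // moved around like any other object
scene.add(sprite);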

Three.js Merge objects and textures

My question is related to this article:
http://blog.wolfire.com/2009/06/how-to-project-decals/
If my understanding is correct, a mesh made from the intersection of the original mesh and a cube is added to the scene to make a decal appear.
I need to save the final texture. So I was wondering if there is a way to 'merge' the texture of the original mesh and the added decal mesh?
You'd need to do some tricky stuff to convert from the model's geometry space into UV coordinate space so you could draw the new pixels into the texture map. If you want to be able to use more than one material that way, you'd also probably need to implement some kind of "material map", similar to how some deferred rendering systems work. Otherwise you're limited to at most one material per face, which wouldn't work for detailed decals with alpha.
I guess you could copy the UV coordinates from the original mesh into the decal mesh, and then use that information to reproject the decal texture into the original texture.
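A sketch of that reprojection idea (the decalUv attribute, texture size, and bake scene are assumptions): render the decal mesh into a render target the size of the original texture map, with a vertex shader that positions each vertex at its original-mesh UV coordinate:
// Bake the decal into texture space: the render target then lines up
// texel-for-texel with the original texture map.
const bakeTarget = new THREE.WebGLRenderTarget(2048, 2048);
decalMesh.material = new THREE.ShaderMaterial({
  uniforms: { decalMap: { value: decalTexture } },
  vertexShader: `
    attribute vec2 decalUv; // hypothetical: UVs projected from the decal
    varying vec2 vDecalUv;
    void main() {
      vDecalUv = decalUv;
      // Use the ORIGINAL mesh's uv as the output position in clip space.
      gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D decalMap;
    varying vec2 vDecalUv;
    void main() { gl_FragColor = texture2D(decalMap, vDecalUv); }
  `
});
renderer.setRenderTarget(bakeTarget);
renderer.render(bakeScene, bakeCamera); // a scene containing only decalMesh
renderer.setRenderTarget(null);
// bakeTarget.texture now holds the decal in the original UV layout;
// composite it over the original texture to get the merged result.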

How to load textures to different faces of a cube in GLGE? (or at least WebGL)

I have 6 textures I would like to load onto 6 different faces of a cube. I'm trying to make a new texture by using GLGE.TextureCube(), and then I load all six images onto the faces they supposedly should be on, like so:
mapTex = new GLGE.TextureCube();
mapTex.setSrcNegX("models/map/negx.jpg"); // they are all 1024x1024
mapTex.setSrcNegY("models/map/negy.jpg");
mapTex.setSrcNegZ("models/map/negz.jpg");
mapTex.setSrcPosX("models/map/posx.jpg");
mapTex.setSrcPosY("models/map/posy.jpg");
mapTex.setSrcPosZ("models/map/posz.jpg");
And then I add the texture to the Wavefront object. However, it seems only one of the 6 texture images is getting mapped, and it's mapped incorrectly.
My guess is that when it creates the new texture map out of the 6 images, it tiles them side by side, so the new texture map's coordinates no longer correspond to those in my OBJ file.
How can I properly combine 6 textures into one map to be used with GLGE? Or is there a way to manually load a texture onto a face of a mesh?
Cube maps are somewhat special, as the usual UV (ST) texture coordinates don't work for them. A cube map, as the name suggests, consists of 6 square textures arranged as the faces of a cube. The texture coordinates are not absolute positions on the cube's faces but directions pointing away from the center of the cube; the point where a ray from the center, in the given direction, hits the cube determines the position sampled on that particular face.
If you apply texture coordinates whose third coordinate is zero, like those in a Wavefront file, you will address only a slice of the cube's faces, namely the part that intersects the XY plane. If you want to see a working cube map in action, use the object's smooth normals as texture coordinates.
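For comparison, in three.js the same six images would be loaded with a CubeTextureLoader and likewise sampled by direction rather than by UVs:
// Order is +x, -x, +y, -y, +z, -z.
const cubeMap = new THREE.CubeTextureLoader()
  .setPath("models/map/")
  .load(["posx.jpg", "negx.jpg", "posy.jpg", "negy.jpg", "posz.jpg", "negz.jpg"]);
// As an environment map, the lookup direction comes from the surface normal.
material.envMap = cubeMap;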
You'll need to use a different texture-coordinate input, e.g.:
materialLayer.setMapinput(GLGE.MAP_OBJ)
Depending on what you want, try GLGE.MAP_OBJ, GLGE.MAP_NORM, or GLGE.MAP_ENV.
