I have a six-vertex rectangle, 100x100 in size, which I have covered with a 100x100 background image. Now I would like to render two "sub-textures" on top of it.
Let's say I have two sub-textures of size 20x20; I'd like to position one of them at x:10, y:10 and the other at x:50, y:50.
(These are actually going to be used as masks on the background image.)
How should I go about this? My first thought was to send a uniform vec2 with the position of the two sub-textures into the fragment shader, but I can't really figure out how to convert those into texture2D(subtexture, coordinate), because texture2D takes 0-1 values. I can't really wrap my head around this, and I hope to get some pointers on what direction I should go.
(This is to be used on OpenGL ES 2.0.)
Then I think what you're really looking for is how to update a texture. In your case, the simplest method may be to use glTexSubImage2D. The following will do what you ask in your original post:
glTexSubImage2D( GL_TEXTURE_2D, 0, 10, 10, 20, 20, <format>, <type>, <subimage1> );
glTexSubImage2D( GL_TEXTURE_2D, 0, 50, 50, 20, 20, <format>, <type>, <subimage2> );
where <format> and <type> describe the pixels in the sub-texture stored at <subimage*>. There are quite a lot of additional answers in other questions; just search for glTexSubImage2D.
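If it helps to see those calls in context, here is a minimal sketch written against WebGL 1, whose API mirrors OpenGL ES 2.0 nearly call-for-call; backgroundPixels, subimage1 and subimage2 are hypothetical Uint8Array pixel buffers:
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

// Allocate the full 100x100 background once.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 100, 100, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, backgroundPixels);

// Overwrite two 20x20 regions at (10,10) and (50,50) with the sub-textures.
gl.texSubImage2D(gl.TEXTURE_2D, 0, 10, 10, 20, 20,
                 gl.RGBA, gl.UNSIGNED_BYTE, subimage1);
gl.texSubImage2D(gl.TEXTURE_2D, 0, 50, 50, 20, 20,
                 gl.RGBA, gl.UNSIGNED_BYTE, subimage2);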
A more complex method (for when things aren't as simple as your problem) is to use framebuffer objects to render to textures (have a search for that one too, if you need).
I have made an image below to indicate my problem. I render my scene to an offscreen framebuffer with a texture the size of the screen, and then render that texture to a screen-filling quad. This produces case 1 in the image. I then run the exact same program, but with a texture, say, 1.5 times larger (enough to contain the entire smiley), and afterwards render it once more to the screen-filling quad. I then get result 3, but I expected to get result 2.
I remember to change the viewport according to the new texture size before rendering to the texture, and to reset the viewport before drawing the quad. I do NOT understand what I am doing wrong.
[Image: the three cases described above]
To summarize, this is the general flow (too much code to post it all):
Create MSAAframebuffer and ResolveFramebuffer (Resolve contains the texture).
Set glViewport(0, 0, Width*1.5, Height*1.5)
Bind MSAAframebuffer and render my scene (the smiley).
Blit the MSAAframebuffer into the ResolveFramebuffer
Set glViewport(0, 0, Width, Height), bind the texture and render my quad.
Note that all the MSAA is working perfectly fine. Also both buffers have the same dimensions, so when I blit it is simply: glBlitFramebuffer(0, 0, Width*1.5, Height*1.5, 0, 0, Width*1.5, Height*1.5, ClearBufferMask.ColorBufferBit, BlitFramebufferFilter.Nearest)
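For reference, the same flow expressed as a stripped-down WebGL 2-style sketch (the project above appears to use OpenTK, so drawScene, drawFullScreenQuad, width and height here are placeholders):
const w = Math.floor(width * 1.5), h = Math.floor(height * 1.5);

// MSAA framebuffer: a multisampled colour renderbuffer.
const msaaFbo = gl.createFramebuffer();
const colorRb = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, colorRb);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, 4, gl.RGBA8, w, h);
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFbo);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, colorRb);

// Resolve framebuffer: a plain texture of the same size.
const resolveFbo = gl.createFramebuffer();
const resolveTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, resolveTex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA8, w, h, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.bindFramebuffer(gl.FRAMEBUFFER, resolveFbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, resolveTex, 0);

// 1. Render the scene into the MSAA buffer with the enlarged viewport.
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFbo);
gl.viewport(0, 0, w, h);
drawScene();                       // placeholder for the actual scene rendering

// 2. Blit MSAA -> resolve (both are w x h).
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFbo);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFbo);
gl.blitFramebuffer(0, 0, w, h, 0, 0, w, h, gl.COLOR_BUFFER_BIT, gl.NEAREST);

// 3. Back to the default framebuffer and the window-sized viewport.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, width, height);
gl.bindTexture(gl.TEXTURE_2D, resolveTex);
drawFullScreenQuad();              // placeholder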
Hope someone has a good idea. I might get fired if not :D
I found that I actually used an AABB somewhere else in the code to determine what to render; and this AABB was computed from the "small viewport's size". Stupid mistake.
I'm trying to develop a configurator for cups, which should be displayed in 3D, with a design that the user uploads. It works when uploading a texture like this.
Otherwise the design will not fit. Is there a way to load a full-size rectangular image as a texture? The texture may be stretched if necessary. The user should not have to make the texture square themselves; ideally that would happen automatically in the background. I hope you understand me.
This is the OBJ file:
Your UV mapping looks difficult to apply a texture to, especially because it has so much empty space and is skewed in an arc, so you would need to warp all your textures for them to fit nicely.
You should make the UV mapping work for you. Why don't you use the built-in CylinderBufferGeometry class to apply a texture on top of your cup geometry? You could use its constructor parameters to match the shape of your cup's side:
CylinderBufferGeometry(
radiusTop,
radiusBottom,
height,
radialSegments,
heightSegments,
openEnded,
thetaStart,
thetaLength
);
With this approach, you could leave your cup geometry untouched, then apply a "sticker" texture on top of it. It could wrap all the way around the cup if you wanted, or it could be constrained to only the front. You could scale it up, rotate it around, and it would be independent of a baked-in UV mapping done in Blender. Another benefit is that this approach occupies the entire [0, 1] UV range, so you could simply use square textures, and you wouldn't be wasting data with empty space.
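For instance, a hedged sketch of such a sticker overlay (the numeric values, cupMesh and sticker.png are placeholder assumptions; tune them to your cup):
// A slightly larger, open-ended cylinder covering only the front third of the cup.
const stickerGeometry = new THREE.CylinderBufferGeometry(
  3.05,            // radiusTop: a hair larger than the cup to avoid z-fighting
  2.55,            // radiusBottom
  7,               // height
  32,              // radialSegments
  1,               // heightSegments
  true,            // openEnded: no caps
  0,               // thetaStart
  Math.PI * 2 / 3  // thetaLength: only wrap a third of the way around
);

const stickerMaterial = new THREE.MeshBasicMaterial({
  map: new THREE.TextureLoader().load("sticker.png"), // any square texture
  transparent: true,
  side: THREE.DoubleSide
});

const sticker = new THREE.Mesh(stickerGeometry, stickerMaterial);
cupMesh.add(sticker); // cupMesh is your existing cup; position/rotate the sticker as needed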
Look at this demo to see how you can play with the geometry's configuration.
I am trying to understand how the "size" attribute in THREE.PointCloudMaterial translates to the size of its points on the screen.
With an orthographic camera set at (-1, 1, 1, -1) and size = 1, the points do not fill half the screen, so apparently this parameter does not refer to camera space. Nor does it refer to pixels; at size = 1, the points are much larger than 1 pixel.
Furthermore, if I resize the browser window, changing its height, the points scale in size, while if I resize the window's width, the points do not scale in size (!?!).
Any clarification on how "size" gets translated to screen or camera space would be greatly appreciated.
In case it is of interest why I need to know this: I am trying to overlay a PointCloud with a THREE.PointCloudMaterial (with which I can use a texture map) over a second PointCloud that uses a ShaderMaterial (where I can send the size parameter straight to gl_PointSize and know exactly how big each point will be). I am having trouble matching up the point sizes in the two clouds.
Thanks!
-mike
Here, at line 368, the relevant code starts.
It uses gl_PointSize to rasterize a vertex, and two options are present: one with attenuation, the other without. Without attenuation, the point is rasterized at a fixed size in pixels. With attenuation, the size is divided by depth, which creates a perspective effect. This happens in the vertex shader.
Looking at the code, it seems that the size is expressed in world units in the case of attenuation, and in pixels if not.
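To make that concrete, here is a rough ShaderMaterial sketch of the same idea, using current three.js conventions; the exact scale factor three.js feeds its points shader has varied between versions (it is derived from the canvas height, which is presumably why resizing the window's height, but not its width, changes the point size), so treat the numbers as assumptions:
const material = new THREE.ShaderMaterial({
  uniforms: {
    size:  { value: 4.0 },                      // "world-ish" size when attenuating
    scale: { value: window.innerHeight / 2.0 }  // roughly what three.js uses; tied to canvas height
  },
  vertexShader: `
    uniform float size;
    uniform float scale;
    void main() {
      vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
      // With attenuation: divide by depth, like the built-in material.
      gl_PointSize = size * (scale / -mvPosition.z);
      // Without attenuation it would simply be: gl_PointSize = size;
      gl_Position = projectionMatrix * mvPosition;
    }
  `,
  fragmentShader: `
    void main() {
      gl_FragColor = vec4(1.0);
    }
  `
});
Used with THREE.Points (PointCloud in older releases), matching the sizes of the two clouds is then a matter of feeding both the same size and scale values.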
I'm drawing a fairly simple 2D scene containing only rectangles. I have one FloatBuffer into which I put X, Y, Z, R, G, B, A, U, and V data for each vertex.
I draw using glDrawArrays and GL_TRIANGLE_STRIP, keeping the rectangles separate with degenerate vertices.
To facilitate the use of multiple textures, I keep separate float arrays for each texture's draw calls. The texture is bound, the float array is put into the FloatBuffer, and I draw.
Then the next texture is bound, and this continues until I have drawn all of my textures for this render.
I use an Orthographic projection so I can use the Z coordinates and GL_DEPTH_TEST for setting depth independently of the draw order.
To use alpha blending, every piece of advice on the internet seems to say:
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
This works fine within each texture's "draw", because I have the draw calls sorted in the buffer from back to front before drawing. But I have no way to correctly draw texture2 under a partially transparent texture1 when texture1 is drawn before texture2: texture1 chops off the overlapping part of texture2, because the depth test says that texture1 is in front of texture2.
The only ways I see around this are
1) using only one texture in the whole program, or 2) not using transparent textures. Neither of these is an acceptable option.
Basically, I need a way to have alpha blending without needing to sort back-to-front. Is this possible?
It sounds like you might need to do Depth Peeling. Here's a PDF that shows how to do it.
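Very roughly, depth peeling means rendering the scene several times, each pass "peeling off" the nearest remaining transparent layer and then compositing the layers. A hedged outline in WebGL-style calls (the same calls exist on GLES20 for Android); drawScene, drawFullScreenQuad, the layerFbo/layerColorTex/layerDepthTex arrays and the uniform locations are assumed to be set up elsewhere, so this is only the shape of the loop:
const NUM_LAYERS = 4;

for (let i = 0; i < NUM_LAYERS; i++) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, layerFbo[i]);   // colour + depth attachment for layer i
  gl.clearColor(0, 0, 0, 0);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.enable(gl.DEPTH_TEST);
  gl.disable(gl.BLEND);                              // no blending while peeling

  // The peel shader rejects anything at or in front of the previous layer, e.g.:
  //   float prev = texture2D(uPrevDepth, gl_FragCoord.xy / uViewport).r;
  //   if (!uFirstPass && gl_FragCoord.z <= prev + epsilon) discard;
  gl.uniform1i(uFirstPassLocation, i === 0 ? 1 : 0);
  if (i > 0) {
    gl.activeTexture(gl.TEXTURE1);
    gl.bindTexture(gl.TEXTURE_2D, layerDepthTex[i - 1]);
  }
  drawScene();                                       // unsorted draw calls are fine here
}

// Composite the peeled layers back to front with ordinary alpha blending.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.disable(gl.DEPTH_TEST);
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
for (let i = NUM_LAYERS - 1; i >= 0; i--) {
  drawFullScreenQuad(layerColorTex[i]);              // placeholder helper
}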
Using the WebGL API, how can I get a value from the depth buffer, or in any other way determine 3D coordinates from screen coordinates (i.e. to find a location clicked on), other than by performing my own raycasting?
Several years have passed; these days the WEBGL_depth_texture extension is widely available... unless you need to support IE.
General usage:
Preparation:
Query the extension (required)
Allocate a separate color and depth texture (gl.DEPTH_COMPONENT)
Combine both textures into a single framebuffer (gl.COLOR_ATTACHMENT0, gl.DEPTH_ATTACHMENT)
Rendering:
Bind the framebuffer, render your scene (usually a simplified version)
Unbind the framebuffer, pass the depth texture to your shaders and read it like any other texture:
texPos.xyz = (gl_Position.xyz / gl_Position.w) * 0.5 + 0.5;
float depthFromZBuffer = texture2D(uTexDepthBuffer, texPos.xy).x;
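A minimal sketch of the preparation step (width, height and the variable names are placeholders):
const ext = gl.getExtension("WEBGL_depth_texture");
if (!ext) throw new Error("WEBGL_depth_texture not supported");

const colorTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, colorTex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);

const depthTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, depthTex);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT, width, height, 0,
              gl.DEPTH_COMPONENT, gl.UNSIGNED_SHORT, null);

const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTex, 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTex, 0);

// Render the scene into fbo, then unbind it and sample depthTex in a later pass.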
I don't know if it's possible to directly access the depth buffer, but if you want depth information in a texture, you'll have to create an RGBA texture, attach it as a colour attachment to a framebuffer object, and render depth information into the texture using a fragment shader that writes the depth value into gl_FragColor.
For more information, see the answers to one of my older questions: WebGL - render depth to fbo texture does not work
If you google for opengl es and shadow mapping or depth, you'll find more explanations and example source code.
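As a minimal illustration of that approach (no packing, so you only get 8 bits of precision), a hypothetical fragment shader for the depth pass could simply write the depth into the colour output; the framebuffer setup with an RGBA colour attachment is assumed to exist already:
const depthFragmentShader = `
  precision highp float;
  void main() {
    // Write the fragment depth into the colour channels. For real use you would
    // pack the value across all four channels to get more than 8 bits of precision.
    gl_FragColor = vec4(vec3(gl_FragCoord.z), 1.0);
  }
`;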
From section 5.13.12 of the WebGL specification it seems you cannot directly read the depth buffer, so maybe Markus' suggestion is the best way to do it, although you might not necessarily need an FBO for this.
But if you want to do something like picking, there are other methods for it. Just browse SO, as it has been asked very often.
Not really a duplicate but see also: How to get object in WebGL 3d space from a mouse click coordinate
Aside from unprojecting and casting a ray (and then performing intersection tests against it as needed), your best bet is to look at 'picking'. This won't give exact 3D coordinates, but it is a useful substitute for unprojection when you only care about which object was clicked on and don't really need per-pixel precision.
Picking in WebGL means to render the entire scene (or at least, the objects you care about) using a specific shader. The shader renders each object with a different unique ID, which is encoded in the red and green channels, using the blue channel as a key (non-blue means no object of interest). The scene is rendered into an offscreen framebuffer so that it's not visible to the end user. Then you read back, using gl.readPixels(), the pixel or pixels of interest and see which object ID was encoded at the given position.
If it helps, see my own implementation of WebGL picking. This implementation picks a rectangular region of pixels; passing in a 1x1 region results in picking at a single pixel. See also the functions at lines 146, 162, and 175.
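The read-back itself is only a few lines; a hedged sketch, assuming the ID scene has already been rendered into a hypothetical pickingFbo, that y is measured from the top of the canvas, and that the ID encoding matches the red/green/blue scheme described above:
gl.bindFramebuffer(gl.FRAMEBUFFER, pickingFbo);
const pixel = new Uint8Array(4);
gl.readPixels(x, canvasHeight - y - 1, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);

// Blue channel acts as the "this is a pickable object" key,
// red and green encode a 16-bit object ID.
const pickedId = (pixel[2] === 255) ? (pixel[0] << 8) | pixel[1] : null;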
As of January 23, 2012, there is a draft WebGL extension to enable depth buffer reading, WEBGL_depth_texture. I have no information about its availability in implementations, but I do not expect it at this early date.