I want to blend two rects, but draw only the blended area (the area where the rects intersect). How can I do this?
If you don't want to compute the intersection, you can probably use the stencil buffer to achieve that. Read about it here:
http://bluevoid.com/opengl/sig00/advanced00/notes/node118.html
You can draw the two rects with the stencil operation set to increment, and then mask in only the pixels that have a stencil value >= 2, i.e. the pixels where both rects were drawn.
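For instance, here is a minimal sketch of that approach in WebGL/TypeScript (the calls map one-to-one onto OpenGL ES 2.0; drawRect is a hypothetical stand-in for your own draw code):

function drawIntersection(gl: WebGLRenderingContext, drawRect: (i: number) => void): void {
  gl.enable(gl.STENCIL_TEST);
  gl.clear(gl.STENCIL_BUFFER_BIT);

  // Pass 1: write only to the stencil buffer, incrementing it
  // wherever either rect covers a pixel.
  gl.colorMask(false, false, false, false);
  gl.stencilFunc(gl.ALWAYS, 0, 0xff);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.INCR);
  drawRect(0);
  drawRect(1);

  // Pass 2: color writes back on, but the stencil test only passes
  // where the stencil value is >= 2, i.e. where both rects overlapped.
  gl.colorMask(true, true, true, true);
  gl.stencilFunc(gl.LEQUAL, 2, 0xff); // passes where 2 <= stencil value
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.KEEP);
  drawRect(0);
  drawRect(1);

  gl.disable(gl.STENCIL_TEST);
}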
The intersection of two axis-aligned rects is always a rect, so why not just compute the intersection and draw only that?
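A minimal TypeScript sketch of that computation, assuming axis-aligned rects stored as x/y/width/height (the Rect shape here is an illustration, not an existing API):

interface Rect { x: number; y: number; w: number; h: number; }

function intersect(a: Rect, b: Rect): Rect | null {
  const x1 = Math.max(a.x, b.x);
  const y1 = Math.max(a.y, b.y);
  const x2 = Math.min(a.x + a.w, b.x + b.w);
  const y2 = Math.min(a.y + a.h, b.y + b.h);
  if (x2 <= x1 || y2 <= y1) return null; // no overlap
  return { x: x1, y: y1, w: x2 - x1, h: y2 - y1 };
}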
GLES20.glEnable( GLES20.GL_BLEND );
GLES20.glBlendFunc( GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA );
But you should choose the blend function for the behavior you want yourself.
In the fragment shader I set the alpha channel; a sketch of such a shader is below. You can see the result in the blending post, along with the source of the Android project.
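A hypothetical fragment shader along those lines, carried as a string the way WebGL/GLES2 host code usually does (the uniform and varying names are assumptions, and the 0.5 alpha is just an example):

const blendFragmentShader = `
  precision mediump float;
  uniform sampler2D u_texture; // assumed texture uniform
  varying vec2 v_texCoord;     // assumed varying from the vertex shader
  void main() {
    vec4 color = texture2D(u_texture, v_texCoord);
    gl_FragColor = vec4(color.rgb, 0.5); // force a half-transparent alpha
  }
`;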
First, let me explain the image above. It is marked with 1, 2 and 3.
1 - This is the rectangle shape.
2 - This is the rectangle shape.
3 - This is the circle shape (drawn with the destination-in global composite operation).
Every shape is drawn using the HTML5 canvas.
Now I want to draw the same thing using three.js with WebGLRenderer. Is that possible? If yes, then how?
The 3rd shape can be anything (for example a circle, rectangle, or polygon).
Any suggestion?
You can erase an area in three.js by setting a material's blending property. Different types of blending are available; for example THREE.SubtractiveBlending, which subtracts the drawn area (see the sketch after the links below).
For details -
1) http://threejs.org/docs/#Reference/Constants/Materials
2) http://threejs.org/examples/#webgl_materials_blending
3) http://threejs.org/examples/#webgl_materials_blending_custom
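As a rough sketch (not from the original post), a mesh whose material uses THREE.SubtractiveBlending might look like this, assuming an existing scene:

import * as THREE from 'three';
declare const scene: THREE.Scene; // assumed to exist already

const material = new THREE.MeshBasicMaterial({
  color: 0xffffff,
  transparent: true,
  blending: THREE.SubtractiveBlending, // subtract this shape from what's already drawn
});
// The third shape can be any geometry, e.g. a circle:
const eraser = new THREE.Mesh(new THREE.CircleGeometry(1, 64), material);
scene.add(eraser);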
Drawing with WebGLRenderer basically means changing the renderer behind the canvas:
renderer = new THREE.WebGLRenderer();
Just beware of the Canvas methods that do not exist in WebGLRenderer.
If you show part of the code, it would be possible to give a more precise answer. For anything else, just comment here!
I'm drawing a fairly simple 2D scene containing only rectangles. I have one FloatBuffer into which I put X, Y, Z, R, G, B, A, U, and V data for each vertex.
I draw using glDrawArrays and GL_TRIANGLE_STRIP, keeping the rectangles separate with degenerate vertices.
To facilitate the use of multiple textures, I keep a separate float array for each texture's draw calls. A texture is bound, its float array is put into the FloatBuffer, and I draw.
Then the next texture is bound, and this continues until I have drawn all of my textures for this render.
I use an Orthographic projection so I can use the Z coordinates and GL_DEPTH_TEST for setting depth independently of the draw order.
To use alpha blending, every piece of advice on the internet seems to say:
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
This works fine within each texture's draw, because I sort the draw calls in the buffer from back to front before drawing. But I have no way to correctly draw texture2 under a partially transparent texture1: because texture1 is drawn first, the depth test decides texture1 is in front and chops off the overlapping part of texture2.
The only ways I see around this are:
1) using only one texture in the whole program, or 2) not using transparent textures. Neither of these is an acceptable option.
Basically, I need a way to have alpha blending without needing to sort back-to-front. Is this possible?
It sounds like you might need to do depth peeling: render the scene in multiple passes, peeling off the nearest remaining layer each pass, then blend the layers together in order, which removes the need to sort geometry. Here's a PDF that shows how to do it.
I'm successfully drawing the convex polys which make up the following white concave shape.
The orange color is my attempt to add a uniform outline around the white shape. As you can see it's not so uniform. On some edges the orange doesn't show at all.
Evidently using...
glScalef(1.1, 1.1, 0.0);
... to draw a slightly larger orange shape before I drew the white shape wasn't the way to go.
I just have a nagging feeling I'm missing a more simple way to do this.
Note that the white part is going to be mapped with a texture which has areas of transparency, so the orange part needs to be behind the white shapes too, not just surrounding them.
Also, I'm using a parallel projection matrix; that's why glScalef's z is set to 0.0, a reminder that there is no perspective scaling.
Any ideas? Thanks!
Nope, you won't get anywhere with glScale in this case. Possible options are:
a) construct an extruded polygon from the original one (possibly rounding sharp corners)
b) draw the polygon with GL_LINES and set glLineWidth to your desired outline width; in fact you might want to draw the outline at 2x width first (a sketch follows below)
The first approach will generate CPU load, the second one might slow down rendering significantly AFAIK.
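For option b), a sketch in WebGL/TypeScript terms; it assumes the outline vertices are already in a bound buffer with the attribute set up, and uses LINE_LOOP so the outline closes around the polygon. Note that many GL implementations clamp line widths (often to 1), so test on your target hardware:

function drawOutline(gl: WebGLRenderingContext, vertexCount: number, width: number): void {
  gl.lineWidth(width * 2.0);                   // outline at 2x width first, as above
  gl.drawArrays(gl.LINE_LOOP, 0, vertexCount); // close the loop around the polygon
}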
You can displace your polygon in the 8 directions of the compass.
You can have a look at this link: http://simonschreibt.de/gat/cell-shading/
It's a nice trick, and might do the job
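A sketch of that trick, where drawShape is a hypothetical stand-in for your own draw call and w is the outline width:

const compassOffsets: Array<[number, number]> = [
  [-1, -1], [0, -1], [1, -1],
  [-1,  0],          [1,  0],
  [-1,  1], [0,  1], [1,  1],
];

function drawWithOutline(drawShape: (dx: number, dy: number, orange: boolean) => void, w: number): void {
  // Draw the orange silhouette 8 times, displaced in each compass direction...
  for (const [dx, dy] of compassOffsets) drawShape(dx * w, dy * w, true);
  // ...then draw the white shape on top, undisplaced.
  drawShape(0, 0, false);
}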
Unfortunately there is no simple way to get an outline of consistent width; you just have to do the maths (a sketch follows these steps):
For each edge: calculate the normal, scale it to the desired width, and add it to the edge vertices to get a line segment on the new, expanded edge.
Calculate the intersection of the lines through two adjacent segments to find the expanded vertex positions.
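A TypeScript sketch of those two steps for a convex, counter-clockwise polygon (the Vec2 type and the winding assumption are mine; parallel adjacent edges are not handled):

interface Vec2 { x: number; y: number; }

const add = (a: Vec2, b: Vec2): Vec2 => ({ x: a.x + b.x, y: a.y + b.y });
const scale = (v: Vec2, s: number): Vec2 => ({ x: v.x * s, y: v.y * s });

// Unit normal of edge a->b, pointing outward for counter-clockwise winding.
function normal(a: Vec2, b: Vec2): Vec2 {
  const dx = b.x - a.x, dy = b.y - a.y, len = Math.hypot(dx, dy);
  return { x: dy / len, y: -dx / len };
}

// Intersection of the infinite lines through (a1,a2) and (b1,b2).
function lineIntersect(a1: Vec2, a2: Vec2, b1: Vec2, b2: Vec2): Vec2 {
  const d1 = { x: a2.x - a1.x, y: a2.y - a1.y };
  const d2 = { x: b2.x - b1.x, y: b2.y - b1.y };
  const t = ((b1.x - a1.x) * d2.y - (b1.y - a1.y) * d2.x)
          / (d1.x * d2.y - d1.y * d2.x);
  return { x: a1.x + t * d1.x, y: a1.y + t * d1.y };
}

function expand(poly: Vec2[], width: number): Vec2[] {
  const n = poly.length;
  const out: Vec2[] = [];
  for (let i = 0; i < n; i++) {
    const p0 = poly[(i + n - 1) % n], p1 = poly[i], p2 = poly[(i + 1) % n];
    // Offset the two edges meeting at p1 along their outward normals...
    const o0 = scale(normal(p0, p1), width);
    const o1 = scale(normal(p1, p2), width);
    // ...and intersect the offset edge lines to get the expanded vertex.
    out.push(lineIntersect(add(p0, o0), add(p1, o0), add(p1, o1), add(p2, o1)));
  }
  return out;
}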
A distinct answer from those offered to date, posted just for interest: if you're on OpenGL ES 2.0 and have access to shaders, you could render the source polygon to a framebuffer with a texture bound as the colour attachment, then do a second pass that writes to the screen (using the image of the white polygon as the input texture and running a post-processing pixel shader over every pixel on the screen) with a shader that obeys the following logic for an outline of thickness q:
if the input is white then output a white pixel
if the input pixel is black then sample every pixel within a radius of q from the current pixel; if any one of them is white then output an orange pixel, otherwise output a black pixel
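A hypothetical fragment shader following that logic (the uniform names are mine, and the 8-sample ring is a cheap approximation of sampling every pixel within radius q):

const outlineFragmentShader = `
  precision mediump float;
  uniform sampler2D u_scene;  // the white-polygon render target
  uniform vec2 u_texelSize;   // 1.0 / framebuffer resolution
  uniform float u_q;          // outline thickness in texels
  varying vec2 v_uv;
  void main() {
    if (texture2D(u_scene, v_uv).r > 0.5) {
      gl_FragColor = vec4(1.0);               // inside: keep it white
      return;
    }
    for (int i = 0; i < 8; i++) {             // probe 8 directions at radius q
      float a = 6.2831853 * float(i) / 8.0;
      vec2 offs = vec2(cos(a), sin(a)) * u_q * u_texelSize;
      if (texture2D(u_scene, v_uv + offs).r > 0.5) {
        gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); // near the shape: orange
        return;
      }
    }
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);  // background: black
  }
`;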
In practice you'd spend an awful lot of time on texture sampling and probably turn that into the bottleneck. And they'd be mostly dependent reads, which are bad for the pipeline on lots of GPUs, including the PowerVR SGX that powers the overwhelming majority of OpenGL ES 2.0 devices.
EDIT: actually, you could speed this up substantially. If your radius is q, have the hardware generate mipmaps for your framebuffer object and take the first level whose pixels cover at least q by q pixels of the source image. You've then essentially got a set of bins that will be pure black if no part of the polygon fell in that region and pure white if the area was entirely internal to the polygon. For each output fragment that might be on the border, you can quite possibly jump straight to a conclusion of definitely inside, or definitely outside and beyond the border, based on four samples of the mipmap.
Currently, I have blending and depth testing turned on for a 2D game. When I draw my textures, the "upper" texture removes portions of the lower textures where they intersect. Evidently the transparent pixels of the textures are taken into account by the depth test, clearing out the colors of the already-drawn lower textures wherever they intersect. Moreover, alpha blending is incorrectly rendered. Is there any function that can tell OpenGL not to include transparent pixels in depth testing?
glEnable( GL_ALPHA_TEST );
glAlphaFunc( GL_EQUAL, 1.0f );
This will discard all pixels with an alpha of anything other than fully opaque. These pixels will then not be written to the Z-buffer. This does, however, affect various Z-buffer pipeline optimisations, so it may cause serious slowdowns. Only use it if you really have to.
No, it's not possible. This is true of all hardware depth testing.
GL (full or ES, and D3D too) all have the same model: they paint in the order you specify polygons. If you draw polygon A before polygon B, and polygon A is logically in front of polygon B, polygon B won't be painted where they overlap (courtesy of the depth test).
The solution is to draw your polygons in order from farthest to nearest the current view origin. Happily, in a 2D game this should just be a simple sort (one you probably won't even need to do very often).
In 3D games BSPs are the basic solution to this issue.
If you're using shaders, you can try disabling blending and discarding the pixels with alpha 0 (the texture and varying names below are illustrative):
vec4 texColor = texture2D(u_texture, v_texCoord); // sample the current fragment's texel
if (texColor.w == 0.0)
    discard; // skip the fragment entirely, so it writes no depth
What type of blending are you using?
glEnable(GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
That should prevent any fragments with an alpha of 0 from affecting the color buffer (though note that blending alone does not stop depth writes).
My goal is to draw a semi-transparent curve. User moves cursor and I draw the curve under the cursor.
I've tried using antialiased points to draw the line, but I don't know how to make them transparent.
I can't use lines to draw the curve, because I can't set both antialiasing and line width.
Should I use a triangle strip to draw the curve?
Yeah, if you want to do a nice job with this, you could tessellate your wide curve into a triangle strip. There are many papers written about stroke tessellation.
You can then texture your triangle strip with a square alpha texture that has a nice solid, anti-aliased circle in it -- this causes the wide line to appear anti-aliased! Check it out:
http://homepage.mac.com/arekkusu/bugs/invariance/TexAA.html
Very cool stuff.
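For example, a minimal TypeScript sketch of the tessellation step (per-point normals from neighbouring points only; no miter or cap handling, which the stroke-tessellation papers cover):

interface Pt { x: number; y: number; }

// Turn a polyline into interleaved x,y pairs for gl.TRIANGLE_STRIP.
function strokeToStrip(points: Pt[], width: number): Float32Array {
  const half = width / 2;
  const strip: number[] = [];
  for (let i = 0; i < points.length; i++) {
    // Approximate the tangent at point i from its neighbours.
    const a = points[Math.max(i - 1, 0)];
    const b = points[Math.min(i + 1, points.length - 1)];
    const dx = b.x - a.x, dy = b.y - a.y;
    const len = Math.hypot(dx, dy) || 1.0;
    const nx = -dy / len, ny = dx / len;           // unit normal to the tangent
    const p = points[i];
    strip.push(p.x + nx * half, p.y + ny * half);  // left edge vertex
    strip.push(p.x - nx * half, p.y - ny * half);  // right edge vertex
  }
  return new Float32Array(strip);
}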