I know it's possible to repeat an entire texture by setting the wrap mode to GL_REPEAT, but is it somehow possible to repeat only a subregion of the texture? For example, when the texture is part of an atlas.
I'm targeting OpenGL ES 1.x, so shaders are out.
Unfortunately, it is not possible. The only thing you can do is repeat the edge pixels (if the image sits at the edge of the texture atlas).
If you need tiling, probably the only solution here is to generate it with geometry, as in the sketch below. Otherwise, just go with a separate texture.
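For example, to tile an atlas sub-rectangle N times horizontally, you can emit one quad per repetition, with every quad's texture coordinates pointing at the same sub-region of the atlas. A minimal sketch with ES 1.x client-side arrays (u0/u1/v0/v1, tileW/tileH and the tile count are illustrative):
#include <string.h>  /* memcpy */
/* tile the atlas sub-rectangle (u0..u1, v0..v1) N times along x,
 * one quad (4 vertices) per repetition */
#define N 4
GLfloat verts[N * 4 * 2];
GLfloat uvs[N * 4 * 2];
for (int i = 0; i < N; ++i) {
    GLfloat x0 = i * tileW, x1 = x0 + tileW;
    GLfloat q[8] = { x0, 0, x1, 0, x1, tileH, x0, tileH };
    GLfloat t[8] = { u0, v0, u1, v0, u1, v1, u0, v1 };
    memcpy(&verts[i * 8], q, sizeof q);
    memcpy(&uvs[i * 8], t, sizeof t);
}
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, uvs);
for (int i = 0; i < N; ++i)          /* one fan per quad */
    glDrawArrays(GL_TRIANGLE_FAN, i * 4, 4);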
I load an image (biological image scans) and want to a) display it and b) draw markers on it. How would I program the shaders? I guess the vertex shaders are simple enough, since it is a 2D image. One idea I had was to overwrite the image data in the buffer, setting the pixels where the markers go to specific values. My markers are boxes (so lines); is this the right way to go? I read that there are different primitives, lines too, so is there a way to draw my lines on my image without manipulating the data in the buffer, simply as an overlay, so to speak? My framework is vispy, but pseudocode would also help.
Draw a rectangle/square with your image as a texture on it. Then draw the markers (probably as single-color quads/rectangles).
If you want the lines to be over the image but under the markers, simply put their rendering code in between.
No shaders are required if an older OpenGL version is suitable for you (since OpenGL 3.3, most of the old functionality was moved to the compatibility profile, while modern features live in the core profile; the latter requires self-written shaders, but they would be pretty simple for your case).
To sum up, the things you need an understanding of are primitives (lines, triangles) and basic texturing; a minimal sketch follows.
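A fixed-function sketch (legacy desktop GL; the texture id and coordinates are illustrative):
// draw the image as a textured quad
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, imageTexture);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();
// draw a box marker after (i.e. on top of) the image;
// the image data itself is never touched
glDisable(GL_TEXTURE_2D);
glColor3f(1.0f, 0.0f, 0.0f);
glBegin(GL_LINE_LOOP);
glVertex2f(-0.2f, -0.2f);
glVertex2f( 0.2f, -0.2f);
glVertex2f( 0.2f,  0.2f);
glVertex2f(-0.2f,  0.2f);
glEnd();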
I'm currently working on a GPGPU project that uses OpenGL ES 2.0. I have a rendering pipeline that uses framebuffer objects (FBOs) as targets, i.e. the result of each rendering pass is saved in a texture which is attached to an FBO. So far, this works when using fragment shaders. For example, I have the following rendering pipeline:
Preprocessing (downscaling, grayscale conversion)
-> Adaptive Thresholding Pass 1 -> Adapt. Thresh. Pass 2
-> Copy back to CPU
However, I wanted to extend this pipeline by adding a grayscale histogram calculation after the preprocessing step. With OpenGL ES 2.0 this only works with texture reads in the vertex shader, as far as I know [1]. I can confirm that my shaders work in a different program where the input is a "real" image, not a rendered texture that is attached to an FBO. Hence I suspect it is not possible to read texture data in a vertex shader if it comes from an FBO. Can anyone confirm this assumption, or am I missing something? I'm using a Nexus 10 for my experiments.
[1]: It basically works by reading each pixel value from the texture in the vertex shader, then computing the histogram bin from it, and "adding" to that bin in the fragment shader by using alpha blending.
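A rough sketch of what those shaders might look like (GLSL ES 1.00 as C string literals; texture2DLod is the sampler function usable in vertex shaders; all names are illustrative). One point is drawn per input pixel, and a_texCoord addresses that pixel:
const char *histVS =
    "attribute vec2 a_texCoord;\n"
    "uniform sampler2D u_image;\n"
    "void main() {\n"
    "    float lum = texture2DLod(u_image, a_texCoord, 0.0).r;\n"
    "    // place the point in the bin for this luminance\n"
    "    gl_Position = vec4(lum * 2.0 - 1.0, 0.0, 0.0, 1.0);\n"
    "    gl_PointSize = 1.0;\n"
    "}\n";
const char *histFS =
    "precision mediump float;\n"
    "void main() {\n"
    "    // each point adds one count via additive blending\n"
    "    gl_FragColor = vec4(1.0 / 255.0);\n"
    "}\n";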
Texture reads within a vertex shader are not a required element in OpenGL ES 2.0 (the spec allows GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS to be 0), so you'll find some manufacturers supporting them and some not. In fact, there was a weird situation where iOS supported them on some devices for one version of iOS, but not the next (they're now officially supported in iOS 7). That might be the source of the inconsistency you see here.
To work around this, I implemented a histogram calculation by instead feeding the colors from the FBO (or its attached texture) in as vertices and using a scattering operation similar to what you describe. This doesn't require a texture read of any kind in the vertex shader, but it does involve a round-trip from and to the GPU and potentially a lot of vertices. It works on all OpenGL ES 2.0 hardware, but it can be costly.
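A sketch of that scatter setup, assuming a trivial vertex shader that takes a single normalized a_lum attribute in place of the texture read and maps it to a clip-space x position as above (lumAttrib, w and h are illustrative):
// read the intermediate result back from the FBO
GLubyte *pixels = (GLubyte *)malloc(w * h * 4);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// scatter one point per pixel into a 256x1 histogram target;
// GL_ONE/GL_ONE blending accumulates the counts (note that an
// 8-bit render target saturates at 255 per bin)
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glVertexAttribPointer(lumAttrib, 1, GL_UNSIGNED_BYTE, GL_TRUE,
                      4 /* stride: step over RGBA */, pixels);
glEnableVertexAttribArray(lumAttrib);
glDrawArrays(GL_POINTS, 0, w * h);
free(pixels);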
I am trying to render 2D (flat) sprites in a 3D environment using OpenGL ES 2. The way I create each sprite is pretty standard: I create a quad consisting of two triangles, and I map the texture onto that. Everything works fine, except I noticed something strange: when depth testing is turned on (which it should be in 3D mode), the corners of my sprites are painted using the background color.
The easiest way to show this is by illustration:
[illustration: the sprites rendered with depth testing off on the left and on on the right]
When I turn off depth testing (on the left) it looks fine, but when I turn it on (on the right) you can see the green sprite's rectangle overlapping on top of the yellow sprite. They both use the same code, the same PNG file, the same shader. Everything is the same except depth testing.
I'm hoping someone might know a way to work around this.
What you can do is alpha testing. Basically, your texture has to have an alpha value of 0 where it should be transparent (which it may already have). Then you configure the alpha test, e.g.:
glAlphaFunc(GL_GREATER, 0.5f);
glEnable(GL_ALPHA_TEST);
This way, every pixel (or better, fragment) with an alpha value <= 0.5 will not be written into the framebuffer (and therefore not into the depth buffer). You can also do the alpha test yourself in the fragment shader by just discarding the fragment:
...
if(color.a < 0.5)
discard;
...
Then you don't need the fixed-function alpha test (I think that is why it is deprecated in modern desktop GL; I don't know about ES).
EDIT: After looking into the ES 2.0 spec, it seems there is no fixed-function alpha test anymore, so you will have to do it in the fragment shader as written above. This way you can also make it dependent on a specific color or any other computable property instead of the alpha channel.
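For the sprite case above, a complete ES 2.0 fragment shader might look like this (as a C string literal; u_texture and v_texCoord are illustrative names):
const char *spriteFS =
    "precision mediump float;\n"
    "uniform sampler2D u_texture;\n"
    "varying vec2 v_texCoord;\n"
    "void main() {\n"
    "    vec4 color = texture2D(u_texture, v_texCoord);\n"
    "    // discard transparent fragments so they never reach the depth buffer\n"
    "    if (color.a < 0.5)\n"
    "        discard;\n"
    "    gl_FragColor = color;\n"
    "}\n";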
Is this easy to do? I don't want to use texture images. I want to create a rectangle, probably out of two polygons, and then set a color on it. A friend who claims to know OpenGL a little bit said that I must always use triangles for everything, and that I must use textures whenever I want something colored. I can't imagine that is true.
You can set per-vertex colors (which can all be the same) and draw quads. The tricky part about OpenGL ES is that it doesn't support immediate mode, so you have a much steeper initial learning curve compared to OpenGL.
This question covers the differences between OpenGL and ES:
OpenGL vs OpenGL ES 2.0 - Can an OpenGL Application Be Easily Ported?
With OpenGL ES 2.0, you do have to use a shader, which (among other things) normally sets the color. As long as you want one solid color for the whole thing, you can do it in the vertex shader.
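A minimal sketch for ES 2.0 (one simple variant passes the color as a uniform instead of per-vertex colors; all names are illustrative):
const char *vsSource =
    "attribute vec4 a_position;\n"
    "void main() { gl_Position = a_position; }\n";
const char *fsSource =
    "precision mediump float;\n"
    "uniform vec4 u_color;\n"
    "void main() { gl_FragColor = u_color; }\n";
// draw the rectangle as a two-triangle strip (ES has no GL_QUADS)
const GLfloat quad[] = { -0.5f, -0.5f,  0.5f, -0.5f,
                         -0.5f,  0.5f,  0.5f,  0.5f };
glUseProgram(program);  // vsSource/fsSource compiled and linked into 'program'
glUniform4f(glGetUniformLocation(program, "u_color"), 1.0f, 0.0f, 0.0f, 1.0f);
GLint pos = glGetAttribLocation(program, "a_position");
glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, quad);
glEnableVertexAttribArray(pos);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);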
I'm writing a drawing application, and the drawing canvas is an OpenGL texture. When you draw onto the canvas, it determines which region of the canvas texture has been changed, and copies that pixel data out (using glReadPixels) before applying the changes you made.
To undo, I want to simply revert to the previous texture state using that pixel data that was copied out. However, OpenGL ES doesn't provide a glDrawPixels command. What's the best way to do it?
I've considered two options, but I'm not sure either is that great:
Create a temporary texture using the pixels I copied out and draw that in. (However, the copied region is not a power of two!)
Unbind the large canvas texture completely, manually alter the bytes of the texture, and then put it back into OpenGL. I'm not using any sort of compression, so this might not be that bad. But it seems like a hack?
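Option 2 maps fairly directly onto glTexSubImage2D, which replaces a sub-rectangle of an existing texture with new pixel data and has no power-of-two restriction on the region. A minimal sketch (assuming RGBA8888 data; names are illustrative):
// restore a region previously saved with glReadPixels
glBindTexture(GL_TEXTURE_2D, canvasTexture);
glTexSubImage2D(GL_TEXTURE_2D, 0,      // target, mip level
                xoff, yoff, w, h,      // changed region inside the texture
                GL_RGBA, GL_UNSIGNED_BYTE, savedPixels);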
Anybody have any ideas? I'd really appreciate it!
In case anyone stumbles across this while trying to do something similar, I've come up with a solution that seems to work well.
Grab an image of the current texture by binding it to the framebuffer and then writing the framebuffer to a CGImageRef.
Create a new CGContext and draw the existing texture's CGImageRef into it. Then draw the old texture data into the portion the user changed, effectively "undoing" that change to the image.
Destroy the old OpenGL texture and create a new texture from the CGContext.
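A rough sketch of those steps (iOS / Core Graphics; error handling and the vertical flip between glReadPixels and CG coordinates omitted; all names are illustrative):
// 1. read the current texture contents back via the FBO it is attached to
GLubyte *current = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, current);
// 2. composite the saved "old" region over it with Core Graphics
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(current, width, height, 8,
                                         width * 4, rgb,
                                         kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, changedRect, savedRegionImage); // a CGImageRef
// 3. re-upload the composited pixels as a new texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, CGBitmapContextGetData(ctx));
CGContextRelease(ctx);
CGColorSpaceRelease(rgb);
free(current);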
I think this is a pretty slow way of going about things, but I don't need huge performance - my real concern was limiting the amount of data being kept to represent the "old" texture.
If you need help with this (there's quite a bit of code) feel free to email me.