How to draw on an image in OpenGL?

I load an image (biological image scans) and want to a) display it and b) draw markers on it. How would I program the shaders? I guess the vertex shaders are simple enough, since it is a 2D image. One idea I had was to overwrite the image data in the buffer, setting the pixels covered by the markers to specific values. My markers are boxes (i.e. lines), so is this the right way to go? I read that there are different primitives, lines among them, so is there a way to draw my lines on top of my image without manipulating the data in the buffer, as a simple overlay, so to speak? My framework is vispy, but pseudocode would also help.

Draw a rectangle/square with your image as a texture on it. Then, draw the markers (probably as monotone quads/rectangles).
Whatever you draw later ends up on top, so if you want the lines to be over the image but under the markers, simply put their rendering code in between.
No shaders are required if legacy OpenGL suits you (since OpenGL 3.2 the old fixed-function functionality lives in the compatibility profile, while the modern core profile requires self-written shaders; they would be pretty simple in your case).
To sum up, the things you need to understand are primitives (lines, triangles) and basic texturing.
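
Since the question mentions vispy, here is a minimal sketch of that approach using vispy's scene API: the image becomes a textured quad and each marker is a separate Line visual drawn on top, so the image data itself is never touched. The array contents and marker coordinates are made-up placeholder values.

    import numpy as np
    from vispy import app, scene

    # Stand-in for the real scan; load your image into a numpy array instead.
    img_data = np.random.rand(512, 512).astype(np.float32)

    canvas = scene.SceneCanvas(keys='interactive', show=True)
    view = canvas.central_widget.add_view()
    view.camera = scene.PanZoomCamera(aspect=1)

    # a) the image, which vispy draws as a textured quad
    image = scene.visuals.Image(img_data, cmap='grays', parent=view.scene)
    view.camera.set_range()

    # b) a box marker as a closed line strip drawn on top; a separate
    #    visual, so the image buffer is never modified
    x0, y0, x1, y1 = 100, 100, 200, 180  # made-up marker corners
    box = np.array([[x0, y0], [x1, y0], [x1, y1], [x0, y1], [x0, y0]])
    marker = scene.visuals.Line(box, color='red', width=2, parent=view.scene)

    if __name__ == '__main__':
        app.run()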

Related

Unity, fresnel shader on raw image

Hello, I'm trying to achieve the effect in the image below (something like a shining light, but only on top of the raw image).
Unfortunately I cannot figure out how to do it; I tried some shaders and assets from the asset store, but so far none of them has worked. I also don't know much about shaders.
The raw image is a UI element, and it displays a render texture that is being captured by a camera.
I'm totally lost here; any kind of help would be appreciated. How can I make that effect?
Fresnel shaders use the difference between the surface normal and the view vector to detect which pixels are facing the viewer and which aren't. A UI plane will always face the user, so no luck there.
Solving this with shaders can be done in a couple of ways: either you bake a normal map of the imagined "curvature" of the outer edge (example), or you create a signed distance field (example), or use some similar method which maps the distance to the edge. A normal map would probably allow for the most complex effects, and I am sure that some Fresnel shaders could work with it too. It does, however, require you to make a model of the shape and bake the normals from that.
A signed distance field, on the other hand, can be generated with a script from an image, so if you have a lot of images it might be the fastest approach. Computing the edge distance in real time inside the shader would not really work, since you'd have to sample a very large number of neighboring pixels, which might make the shader 10-20 times slower depending on how thick you need the edge to be.
If you don't need the image to be that dynamic, then maybe just creating an inner-glow black/white texture in Photoshop and overlaying it with an additive shader would work better for you. If you don't know how to write shaders, the two approaches above are a bit of a tall order.
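
As a concrete illustration of the "generate a signed distance field with a script" idea, here is a hedged Python sketch using scipy's Euclidean distance transform; the file name shape.png, the 0.5 threshold, and the spread value are all illustrative assumptions, not part of the original answer.

    import numpy as np
    from PIL import Image
    from scipy.ndimage import distance_transform_edt

    def signed_distance_field(alpha, spread=16.0):
        """Signed distance to the shape edge, remapped to [0, 1].
        alpha: 2D array in [0, 1]; values > 0.5 count as inside the shape.
        spread: number of pixels mapped onto the full output range."""
        inside = alpha > 0.5
        dist_out = distance_transform_edt(~inside)  # outside pixels: distance to shape
        dist_in = distance_transform_edt(inside)    # inside pixels: distance to edge
        sdf = dist_in - dist_out
        return np.clip(sdf / (2.0 * spread) + 0.5, 0.0, 1.0)

    img = np.asarray(Image.open('shape.png').convert('L')) / 255.0
    sdf = signed_distance_field(img)
    Image.fromarray((sdf * 255).astype(np.uint8)).save('shape_sdf.png')

The resulting grayscale texture can then be sampled in a simple shader: thresholding near 0.5 finds the edge, and offsetting the threshold grows or shrinks the glow band.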

Draw bitmap using Metal

The question is quite simple. On Windows you have BitBlt() to draw to the screen. On OS X you could normally use OpenGL, but how do I draw a bitmap to the screen using Apple's new Metal framework? I can't find anything useful in Apple's Metal references.
Right now I'm using Core Graphics for the drawing, but since my bitmap updates all the time, I feel like I should move to Metal to reduce the overhead.
The very short answer is this: first, you need to set up a standard rendering pipeline using Metal, which is a bit of work if you don't know anything about the rendering pipeline (note: this can be simplified by avoiding the 3D stuff and rendering a quad made of two triangles, giving just xy coordinates for the vertices). There is a lot of sample code from Apple that shows how to set up a standard rendering pipeline; the simplest is maybe MetalImageProcessing, so you could strip that down (even though it uses a compute shader, which is an overcomplicated way to draw; you'd want to substitute standard vertex and fragment shaders).
Second, you need to learn how vertex and fragment shaders work and how to draw things with them; see shadertoy for this.
Just noticed the user's last comment: if you have an array of bytes that represents an image, then you can just make an MTLTexture out of it and render that to the screen; see Apple's example above, but change the compute shader to standard vertex and fragment shaders for better performance.
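
The "quad made of two triangles" mentioned above is API-agnostic; below is a sketch of the vertex data in Python/numpy, purely for illustration. In Metal, these floats would be copied into an MTLBuffer, a pass-through vertex shader would forward position and uv, and the fragment shader would sample the bitmap texture.

    import numpy as np

    # A full-screen quad built from two triangles; each row is one vertex
    # laid out as x, y, u, v (positions in normalized device coordinates,
    # v flipped so the image is not upside down).
    quad = np.array([
        # triangle 1
        [-1.0, -1.0, 0.0, 1.0],
        [ 1.0, -1.0, 1.0, 1.0],
        [ 1.0,  1.0, 1.0, 0.0],
        # triangle 2
        [-1.0, -1.0, 0.0, 1.0],
        [ 1.0,  1.0, 1.0, 0.0],
        [-1.0,  1.0, 0.0, 0.0],
    ], dtype=np.float32)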

Using multiple primitives in WebGL at the same time

I am trying to develop a web application that visualizes a finite element mesh. To do so, I am using WebGL. Right now I have a page with all the code necessary to draw the mesh in the viewport using triangles as primitives (each quad element of the mesh was split into two triangles to draw it). The problem is that, when using triangles, the whole piece looks "continuous" and you can't see the separation between triangles. What I would like to achieve is to add lines between the nodes so that these lines are drawn in black around each quad element (formed by two triangles) and the mesh can actually be seen.
I was able to define the lines in my page, but since one draw call can use only one type of primitive, if I add the code for the line buffers and bind them, it shows just the lines, not the elements (as they were the last buffers bound).
The closest solution I have found is using multiple shaders, managed with multiple programs, but this would only let me either plot the geometry with triangles or draw just the lines, depending on which program is currently selected.
Could any of you help me with how to approach this issue? I have seen a Windows application that shows FE meshes using OpenGL, and it is able to mix the triangles with points and lines, apart from using different layers, illumination, etc. So I am aware that this may be complicated, but I assume that if it is somehow possible with OpenGL, it should be with WebGL as well.
If you provide a solution, I would really appreciate some example code, for instance drawing a single triangle but including three black lines at its borders and maybe three points at the vertices.
setup()
{
    // <your current code here>
    // Additional step: unbind the previous textures, then upload and bind a
    // single 1x1 black pixel as a texture. Let this texture object be borderID.
}

drawLoop()
{
    // Pass 1: unbind the previous textures, bind your normal textures, and
    // draw the mesh as in your current setup. This fills the whole area with
    // the element colours, without borders (the current case).
    // Pass 2: bind the borderID texture and draw the same vertices again,
    // except this time with context.LINE_LOOP instead of context.TRIANGLES.
    // The lines pick up the black texture and appear as borders on top of the
    // previously drawn colours of each triangle. You can have something like:
    if (currDrawMode == 0)
        context3dStore.bindTexture(context3dStore.TEXTURE_2D, meshTextureObj[bindId]);
    else
        context3dStore.bindTexture(context3dStore.TEXTURE_2D, borderTexture1pixObj[bindId]);
    context3dStore.drawElements((currDrawMode == 0) ? context3dStore.TRIANGLES : context3dStore.LINE_LOOP,
                                indicesCount[bindId], context3dStore.UNSIGNED_SHORT, 0);
    // where currDrawMode toggles between drawing the mesh fill (0) and the
    // border (1). Since the line texture appears as a border over the flat
    // colours drawn earlier, this should solve your need.
}
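
Since the asker requested a concrete example: the two-pass idea is API-agnostic, so here is a minimal sketch in Python with PyOpenGL/GLUT (legacy fixed-function GL for brevity; in WebGL the same extra draw calls over the same vertices would be issued from JavaScript with a trivial constant-color shader). It draws one filled triangle, then reuses the identical vertices for a black LINE_LOOP border and for POINTS at the corners.

    import sys
    from OpenGL.GL import *
    from OpenGL.GLUT import *

    tri = [(-0.5, -0.5), (0.5, -0.5), (0.0, 0.5)]

    def display():
        glClear(GL_COLOR_BUFFER_BIT)
        glColor3f(0.8, 0.3, 0.3)          # pass 1: filled triangle
        glBegin(GL_TRIANGLES)
        for x, y in tri:
            glVertex2f(x, y)
        glEnd()
        glColor3f(0.0, 0.0, 0.0)          # pass 2: same vertices, black border
        glBegin(GL_LINE_LOOP)
        for x, y in tri:
            glVertex2f(x, y)
        glEnd()
        glPointSize(6.0)                  # pass 3: same vertices, corner points
        glBegin(GL_POINTS)
        for x, y in tri:
            glVertex2f(x, y)
        glEnd()
        glutSwapBuffers()

    glutInit(sys.argv)
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
    glutCreateWindow(b'triangle with border')
    glClearColor(1.0, 1.0, 1.0, 1.0)
    glutDisplayFunc(display)
    glutMainLoop()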

Repeating only a portion of a texture in OpenGL ES?

I know it's possible to repeat an entire texture by setting the wrap mode to GL_REPEAT, but is it somehow possible to repeat only a subregion of the texture? For example, when the texture is part of an atlas.
I'm targeting OpenGL ES 1.x, so shaders are out.
Unfortunately, it is not possible. The only thing you can do is repeat the edge pixels (if the image sits at the border of the texture atlas).
If you need tiling, probably the only solution here is to generate it with geometry. Otherwise, just go with a separate texture.
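
A hedged sketch of the "generate it with geometry" route in Python: emit one quad per repeat, with every quad's UVs spanning the same sub-rectangle of the atlas. The function name and the interleaved x, y, u, v layout are illustrative assumptions, not part of the original answer.

    def tiled_quads(x, y, tile_w, tile_h, nx, ny, u0, v0, u1, v1):
        """Interleaved (x, y, u, v) floats for nx * ny tiles, two triangles
        each, ready for a GL_TRIANGLES draw call; every tile's UVs span the
        same (u0, v0)-(u1, v1) sub-rectangle of the atlas."""
        verts = []
        for j in range(ny):
            for i in range(nx):
                x0, y0 = x + i * tile_w, y + j * tile_h
                x1, y1 = x0 + tile_w, y0 + tile_h
                for vx, vy, vu, vv in [(x0, y0, u0, v0), (x1, y0, u1, v0),
                                       (x1, y1, u1, v1), (x0, y0, u0, v0),
                                       (x1, y1, u1, v1), (x0, y1, u0, v1)]:
                    verts.extend((vx, vy, vu, vv))
        return verts

    # e.g. a 4 x 3 grid of 32-pixel tiles repeating atlas rect (0.25, 0.0)-(0.5, 0.25)
    data = tiled_quads(0, 0, 32, 32, 4, 3, 0.25, 0.0, 0.5, 0.25)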

Copy arbitrarily sized block of pixels into OpenGL ES texture... somehow?

I'm writing a drawing application, and the drawing canvas is an OpenGL texture. When you draw onto the canvas, it determines which region of the canvas texture has been changed, and copies that pixel data out (using glReadPixels) before applying the changes you made.
To undo, I want to simply revert to the previous texture state using that pixel data that was copied out. However, OpenGL ES doesn't provide a glDrawPixels command. What's the best way to do it?
I've considered two options, but I'm not sure either is that great:
Create a temporary texture using the pixels I copied out and draw that in. (However, the copied region is not a power of two!)
Unbind the large canvas texture completely, manually alter the bytes of the texture, and then put it back into OpenGL. I'm not using any sort of compression, so this might not be that bad. But it seems like a hack?
Anybody have any ideas? I'd really appreciate it!
In case anyone stumbles across this while trying to do something similar, I've come up with a solution that seems to work well.
Grab an image of the current texture by binding it to the framebuffer and then writing the framebuffer to a CGImageRef.
Create a new CGContext and draw the existing texture CGImageRef into it. Then draw the old texture data into the portion that the user changed, effectively "undoing" that change to the image.
Destroy the old OpenGL texture and create a new texture from the CGContext.
I think this is a pretty slow way of going about things, but I don't need huge performance - my real concern was limiting the amount of data being kept to represent the "old" texture.
If you need help with this (there's quite a bit of code) feel free to email me.
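
A note for later readers: OpenGL ES (including 1.x) also lets you overwrite just a sub-rectangle of an existing texture with glTexSubImage2D, and unlike a new texture the sub-rectangle does not need power-of-two dimensions, so the block saved with glReadPixels can be pushed straight back in. A minimal illustrative sketch in Python with PyOpenGL (the original solution is iOS/CoreGraphics code; this only sketches the alternative):

    from OpenGL.GL import (glBindTexture, glTexSubImage2D,
                           GL_TEXTURE_2D, GL_RGBA, GL_UNSIGNED_BYTE)

    def restore_region(texture_id, x, y, width, height, saved_pixels):
        """Write a pixel block previously saved with glReadPixels back
        into the canvas texture; only that sub-rectangle is touched."""
        glBindTexture(GL_TEXTURE_2D, texture_id)
        glTexSubImage2D(GL_TEXTURE_2D, 0,     # target, mip level
                        x, y, width, height,  # sub-rectangle to overwrite
                        GL_RGBA, GL_UNSIGNED_BYTE, saved_pixels)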
