I am using Imagemagick to create an offline texture atlas, using composite commands to add all the tiles to the atlas.
I want to avoid tiles bleeding into each other when sampling with GL_LINEAR, so I want to add a border around each tile. As the tiles are not necessarily tileable, I'd prefer to replicate each tile's edge pixels to create the border.
I am fine with applying this conversion to the individual images before compositing them. The most important requirement is that it can run from a Makefile.
I know I can do this with composite and suitable cropping of the borders, but that would take a lot of commands to write. Does ImageMagick have such a tool built in?
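For reference, ImageMagick can replicate edge pixels in a single command by combining `-virtual-pixel Edge` with a no-op `-distort SRT 0` and an enlarged distort viewport. A sketch, assuming 64x64 tiles and a 4-pixel border (the file names and sizes are placeholders; adjust the viewport numbers for your tile size):

```shell
# Demo input: a 64x64 tile (substitute your real tile here).
convert -size 64x64 gradient:red-blue tile.png

# Pad with a 4px border that replicates the tile's edge pixels.
# The viewport is the tile size plus the border on each side (64+8 = 72),
# offset by -4,-4 so the original pixels keep their position.  The Edge
# virtual-pixel setting fills the border with replicated edge values, and
# -distort SRT 0 is a no-op distort that just renders the larger viewport.
convert tile.png \
        -set option:distort:viewport 72x72-4-4 \
        -virtual-pixel Edge -distort SRT 0 \
        +repage tile_padded.png
```

Since it is one convert invocation per tile, it drops straight into a Makefile rule.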
I am using the Project Tango C API. I subscribed to the color image and depth callbacks (TangoService_connectOnFrameAvailable and TangoService_connectOnPointCloudAvailable), so I have a TangoPointCloud and a matching TangoImageBuffer. I have rendered them separately and they are valid. So I understand that I can now loop through each 3D point in the TangoPointCloud and do whatever I want with them. The trouble is that I also want the corresponding color for each 3D point.
The standard Tango examples include plenty of samples, such as drawing depth and color separately or rendering the depth image as an OpenGL texture, but they don't have a simple sample that maps a 3D point to its color.
I tried TangoSupport_projectCameraPointToDistortedPixel, but it gives weird results. I also tried the TangoXYZij approach, but it is obsolete.
If anyone has achieved this, please help; I have wasted two days going back and forth with this.
I load an image (biological image scans) and want to (a) display it and (b) draw markers on it. How would I program the shaders? I guess the vertex shaders are simple enough, since it is a 2D image. One idea I had was to overwrite the image data in the buffer, setting the marker pixels to specific values. My markers are boxes (so, lines); is this the right way to go? I read that there are different primitives, lines among them, so is there a way to draw my lines over the image without manipulating the data in the buffer, simply as an overlay, so to speak? My framework is vispy, but pseudocode would also help.
Draw a rectangle/square with your image as a texture on it. Then draw the markers on top (probably as solid-colored quads/rectangles).
If you want the lines to be over the image but under the markers, simply put their rendering code in between.
No shaders are required if older OpenGL suits you (since OpenGL 3.3, most of the old functionality has been moved to the compatibility profile, while modern features are in the core profile; the latter requires self-written shaders, but they should be pretty simple for your case).
To sum up, the things you need to understand are primitives (lines, triangles) and basic texturing.
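The geometry side of this can be sketched without any framework: the image goes on a textured quad, and each marker box becomes four GL_LINES segments drawn over it, leaving the image data untouched. The GL draw calls themselves need a live context, so they appear only as comments here; the helper name is mine, not from vispy or OpenGL.

```c
/* The image itself is a quad with texture coordinates, drawn first:
 *   positions (x,y): (-1,-1) (1,-1) (1,1) (-1,1)
 *   texcoords (s,t): ( 0, 0) (1, 0) (1,1) ( 0,1)
 *
 * Each marker box is then an overlay of 4 line segments, e.g.
 *   glDrawArrays(GL_LINES, 0, 8);
 */

/* Fill `out` (16 floats) with the 4 segments (8 vertices, 2 coords each)
 * outlining the axis-aligned box (x0,y0)-(x1,y1).  Returns the vertex
 * count. */
static int box_outline(float x0, float y0, float x1, float y1, float *out) {
    const float v[16] = {
        x0, y0, x1, y0,   /* bottom edge */
        x1, y0, x1, y1,   /* right edge  */
        x1, y1, x0, y1,   /* top edge    */
        x0, y1, x0, y0,   /* left edge   */
    };
    for (int i = 0; i < 16; ++i) out[i] = v[i];
    return 8;
}
```

Because the marker vertices live in their own buffer, moving or hiding a marker never requires re-uploading the image texture.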
I'm developing a little video game with an infinite background image that scrolls horizontally. The image obviously is not infinite; it just ends the same way it starts, so if I concatenate the image with itself, it looks infinite.
The problem I'm having is that a vertical black line appears where the two images join. It looks like they are not joined at exactly the right position, and I can see the black background through the gap.
I thought it was because the image widths were not integers, but even if I superimpose one image over the other, the black vertical line still appears.
Any tips please?
What you are trying to do is called tiling, and the image should be inherently tileable. To check this, put two copies of the image side by side, edges flush with each other, and see whether the result is seamless.
To make this work in OpenGL, the simplest way might be to make the quad (i.e., the mesh) holding your background pretty large and map the texture to only a small part of it (so the image itself doesn't look stretched). Use the GL_REPEAT wrap mode when texture mapping so the image is tiled across the entire large quad.
I am making an app in Unity, but when I add graphics, they are distorted and out of proportion. I am able to use them, but they don't look good. How do I fix this?
When imported, images default to the Texture format, which resizes them to a power of two (for use as textures in 3D space). If you meant to use them as 2D images, you will have to change the type in the texture import settings panel to GUI. You can also change or even disable compression if higher quality is needed.
I am working on a simple painting app using LibGDX, and I am having trouble getting it to "paint" properly with the setup I am using. The way I am trying to do this is to draw with sprites and, when appropriate, merge those individual sprites into a background texture using LibGDX's FBO commands.
The problem I am having relates to blending: when the sprites are added to the texture I am building, any transparent pixels of a sprite that land on previously drawn pixels are brightened substantially, which obviously doesn't look good. The following shows the result, using a circle with a green-to-red gradient as the "brush". The top row is now part of the background texture, while the bottom one is still in its purely sprite-drawn form.
http://i238.photobucket.com/albums/ff307/Muriako/hmm.png
Basically, the transparent areas of each sprite brighten anything below them, and I need them to be completely transparent. I have tried many different blend-mode combinations and couldn't find one that was any better. GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, for example, did not have this problem, but instead the transparent pixels of each sprite seem to be lowered in alpha and even pick up some of the color from the layer below, which seemed even more annoying.
I will be happy to post code snippets on request, but my code has become a bit of a mess since I started trying to fix these problems, so I would rather put up only the necessary bits as needed.
What order are you drawing the sprites in? Alpha blending only works with respect to pixels already in the target, so you have to draw all alpha-containing things (and everything "behind" them) in Z order to get the right result. I'm using glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);.
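One hedged note on the FBO brightening specifically: blending with (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) applies the source alpha a second time when the FBO texture is itself drawn with the same function, which is a classic cause of artifacts like this. The usual fix is premultiplied alpha: multiply colors by their alpha up front and blend with (GL_ONE, GL_ONE_MINUS_SRC_ALPHA), because the premultiplied "over" operator is associative, so an intermediate FBO pass does not change the result. The arithmetic, in plain C (one color channel for brevity):

```c
typedef struct { double r, a; } Px;   /* one channel + alpha, premultiplied */

/* Premultiplied-alpha "over", matching
 * glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA). */
static Px over(Px src, Px dst) {
    Px out;
    out.r = src.r + dst.r * (1.0 - src.a);
    out.a = src.a + dst.a * (1.0 - src.a);
    return out;
}
```

Because `over` is associative, (brush over FBO) over background equals brush over (FBO over background), so compositing sprites through the intermediate texture no longer shifts the colors; straight-alpha blending lacks this property.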