I need to add black bars above and below the image for full-screen rendering when wide resolutions are not supported.
Example: for a 1280x1024 screen I need to render at 1280x720 and add black bars to fill the remaining screen space up to 1280x1024.
I believe what you need can be achieved with a viewport change; check out http://msdn.microsoft.com/en-us/library/windows/desktop/bb206341(v=vs.85).aspx
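For the viewport route, a rough Direct3D 9 sketch of the letterbox (assuming the question's 1280x1024 backbuffer and 1280x720 content; 'device' stands in for a valid IDirect3DDevice9*):

// Clear while the viewport still covers the whole backbuffer,
// so the bars come out black (Clear only clears the current viewport).
device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);

D3DVIEWPORT9 vp;
vp.X      = 0;
vp.Y      = (1024 - 720) / 2;   // 152 pixels of black above and below
vp.Width  = 1280;
vp.Height = 720;
vp.MinZ   = 0.0f;
vp.MaxZ   = 1.0f;
device->SetViewport(&vp);
// ... render the scene; drawing is now confined to the centered 1280x720 strip ...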
Alternatively, make your backbuffer the size of the screen resolution (e.g. 1280x1024), then render everything to a texture (of 1280x720) and render that texture to the center of the screen, after all post-processing. This is probably not the most efficient solution, but it's a starting point.
A better solution is actually making your system work at 4:3 resolutions; users would probably enjoy that more than a letterboxed view, and it shouldn't be too difficult.
I want to draw a series of textures into a Metal view in order to present a complete image. On a regular screen, the images are presented at exactly 1:1 scale (meaning a 100x100-pixel texture will be presented as a 100x100-pixel square).
Drawing on a Retina display will actually give me a 200x200 square.
Now, there are two different approaches:
1) Generate the entire image into a 100x100 square and let the Metal view upscale it to a 200x200 square - this works.
2) Upscale each texture and generate the image directly into a 200x200 square. Why take this approach? Because some of the textures (like text) are generated dynamically and can be generated at a higher resolution - something impossible with the first approach.
Unfortunately, with this approach, an ugly square is visible around each texture.
I tried playing with sizes, clamp options, etc., but could not find a solution.
Any help would be highly appreciated!
Image from regular screen
Image from retina screen
Found a solution. In the fragment shader, the texture sampler was defined as:
constexpr sampler s = sampler(coord::normalized, address::repeat, filter::nearest);
instead of:
constexpr sampler s = sampler(coord::normalized, address::clamp_to_edge, filter::nearest);
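For context, a minimal sketch of where that sampler sits in a Metal fragment function (the struct, function name, and binding index here are illustrative, not from the original code):

#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    float2 texCoord;
};

fragment float4 texturedFragment(VertexOut in [[stage_in]],
                                 texture2d<float> tex [[texture(0)]]) {
    // clamp_to_edge keeps border coordinates from wrapping around and
    // sampling texels from the opposite edge, which is what drew the
    // square outline around each texture after upscaling.
    constexpr sampler s = sampler(coord::normalized, address::clamp_to_edge, filter::nearest);
    return tex.sample(s, in.texCoord);
}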
I want to draw an exactly 1-pixel-thick line without distortion (meaning it doesn't show up as a 2-pixel or 1.5-pixel line, etc.), because it seems like the Canvas just can't maintain pixel perfection at times.
It also depends on the CanvasScaler settings; make sure the screen/canvas output is at exactly 1x scale.
Confirm that the canvas's final scale is 1x on all axes.
Also confirm that the Game window displaying your output is at 1x scale, so that each pixel shows up cleanly!
(View the full unscaled image in another window if the sprite in the image above appears jagged.)
As for the CanvasScaler settings: if you use another mode such as "Scale With Screen Size" and its reference resolution does not match the current game window, the result will be non-1x scaling.
If the scale is not a uniform 1x, jagged or blurry lines will start to appear on the canvas.
Notice the middle sword sprite.
The Canvas's Pixel Perfect tick box has not helped at all so far.
Actually, sorry - canvases do try to respect screen pixels when scaling if PixelPerfect is set to true.
The solution was pretty easy: just setting PixelPerfect to false. I got so used to setting it to true (because of the UI style I was going for before) that I didn't even consider turning it off. I guess that's mainly due to its name - Pixel Perfect.
xD
I've done a few little games here and there, and the same for GUI programs, but I can't seem to make things resizable. Most GUIs handle resizing fairly well; how is this achieved?
Let's say we have a 600x800 window with a 100x100 box in the very center, and another identical box 200px above it. If I stretched the window to 1280x720, what algorithm would ensure nothing slips out of place while moving and resizing the buttons for the new resolution?
I'd like this to apply to "pixel-by-pixel" displays, where drawing a centered quad means specifying which pixel each vertex lands on, not something scalable such as OpenGL's default matrix.
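One common technique (not the only one; all names below are made up for illustration) is to store each widget as an anchor point given as a fraction of the window, plus a fixed pixel offset and size, then recompute pixel positions on every resize:

#include <cstdio>

struct Widget {
    float anchorX, anchorY;  // 0..1 fraction of the window (0.5, 0.5 = center)
    float offsetX, offsetY;  // fixed pixel offset from the anchor (y points down)
    float width, height;     // fixed pixel size
};

// Top-left pixel position of a widget centered on its anchor point.
void place(const Widget& w, int winW, int winH, float& x, float& y) {
    x = w.anchorX * winW + w.offsetX - w.width * 0.5f;
    y = w.anchorY * winH + w.offsetY - w.height * 0.5f;
}

int main() {
    Widget center{0.5f, 0.5f, 0.0f,    0.0f, 100.0f, 100.0f};  // the centered box
    Widget above {0.5f, 0.5f, 0.0f, -200.0f, 100.0f, 100.0f};  // the box 200px higher
    float x, y;
    place(center, 1280, 720, x, y);
    std::printf("center box: %.0f, %.0f\n", x, y);  // 590, 310
    place(above, 1280, 720, x, y);
    std::printf("upper box:  %.0f, %.0f\n", x, y);  // 590, 110
    return 0;
}

Both boxes stay glued to the window center at any resolution because only the fractional anchor scales with the window; the pixel offset and size never do.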
I am using LibGDX for a small app project, and I need to somehow take a series of sprites and place them (or their pixels rather) into a Pixmap. The basic idea is to take random sprites that are generated through various means while the app is running, and, only at specific times, merge some of them onto a single background sprite.
I believe that most of this can be done easily, but the step of getting the sprite images into the Pixmap isn't quite so obvious to me. The sprites also have various transparent and semi-transparent pixels, so simply grabbing the color at each pixel while it is all on the same screen isn't really applicable either, as it obviously shouldn't take the background colors with it.
If there is a suitable alternative to this that would accomplish what I am looking for I would also love to hear it. Any help is highly appreciated.
I think you want to render your sprites to an off-screen buffer (called an "FBO", or FrameBuffer in libgdx), blending them as they're added, and then render that off-screen buffer to the screen as a single draw call. If so, this question should help: libgdx SpriteBatch render to texture
This requires OpenGL ES 2.0, which will eliminate support for some older devices.
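libgdx's FrameBuffer is a thin wrapper over an OpenGL ES 2.0 framebuffer object; stripped of the Java API, the render-to-texture idea looks roughly like this (an illustrative sketch, error checking omitted):

GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

// Draw the background sprite first, then each generated sprite with
// blending on, so transparent texels leave the background intact.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// ... issue the sprite draw calls here ...

glBindFramebuffer(GL_FRAMEBUFFER, 0);
// 'tex' now holds the merged image and can be drawn like any other texture.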
I am doing my iPhone graphics using OpenGL. In one of my projects, I need to use a .png image as a texture in OpenGL. The image is 512x512, its background is transparent, and it has a thick blue line in its center.
When I apply the image to a polygon in OpenGL, the transparent part of the texture appears black while the thick blue line is drawn as expected. To remove the black part, I used blending:
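// GL_ONE / GL_ONE_MINUS_SRC_ALPHA is premultiplied-alpha blending;
// iOS image loaders typically premultiply PNG alpha, so this is the matching pair.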
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
With that, the black part of the texture on the polygon is removed and only the blue band is seen, so that problem is solved.
But I want to add many such images and build many objects in OpenGL. I am able to do that, but the frame rate gets very low as I add more and more images to objects. When I comment out the blending, the frame rate is normal, but the images are not visible.
Since I do not have a good FPS, the graphics are a bit slow and I get a shaky effect.
So:
1) Is there any other method than blending to solve my problem?
2) How can I improve the frame rate of my OpenGL app? What steps need to be taken to implement my graphics properly?
If you want parts of an object to be transparent, the only way is to blend the pixel data for the triangle with what is currently in the buffer (what you are currently doing). Normally, with solid textures, the new pixel data for a triangle just overwrites whatever was in the buffer (as long as it is closer, i.e. it passes the z-buffer test). But with transparency, the GPU has to look at the transparency of that part of the texture, then at what is behind it, all the way back to something solid, and combine all of those overlapping layers of transparency to get the final image.
If all you want transparency for is something like a simple tree sprite, cutting away the 'stuff' around the sides of the trunk and so on, then you may be better off providing more complex geometry that actually defines the shape of the trunk, and thus not need to bother with transparency at all.
Sadly, I don't think there is much you can do to speed up your FPS other than cutting down the amount of transparency you are calculating. You could even add an optimization that checks each image to see whether alpha blending can be turned off for it or not. Depending on how much you are trying to push through, that may save time in the long run.
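A minimal sketch of that per-image toggle (assuming each texture records at load time whether it actually contains non-opaque pixels; the names here are illustrative):

struct Sprite {
    GLuint texture;
    bool hasAlpha;  // set at load time by scanning the image for alpha < 255
};

void drawSprite(const Sprite& s) {
    if (s.hasAlpha) {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    } else {
        glDisable(GL_BLEND);  // fully opaque images skip the costly blend stage
    }
    glBindTexture(GL_TEXTURE_2D, s.texture);
    // ... submit the sprite's vertices as usual ...
}

Drawing the opaque sprites first and the blended ones after also lets the z-buffer reject hidden fragments before they ever reach the blend stage.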