Corrupted resized textures with Metal on Retina screens - macOS

I want to draw a series of textures into a Metal view in order to present a complete image. On a regular screen, the images are presented at exactly 1:1 scale (meaning a 100x100 pixel texture is presented in a 100x100 pixel square).
Drawing it on a Retina display will actually give me a 200x200 square.
Now, there are two possible approaches:
1) Generate the entire image into a 100x100 square and let the Metal view upscale it to a 200x200 square. This works.
2) Upscale each texture and generate the image directly into a 200x200 square. Why take this approach? Because some of the textures (like text) are generated dynamically and can be rendered at a higher resolution, which is impossible with the first approach.
Unfortunately, with this approach an ugly square border is visible around each texture.
I tried playing with sizes, clamp options, etc., yet I could not find a solution.
Any help would be highly appreciated!
[Image from regular screen]
[Image from Retina screen]

Found a solution. In the fragment shader, the texture sampler was defined as:
constexpr sampler s = sampler(coord::normalized, address::repeat, filter::nearest);
instead of:
constexpr sampler s = sampler(coord::normalized, address::clamp_to_edge, filter::nearest);
(With address::repeat, samples that land even slightly outside the 0-1 coordinate range wrap around to the opposite edge of the texture, which shows up as a border once the texture is scaled; clamp_to_edge avoids that.)

Related

Use GIMP to resize image in one layer only

This is my setup. I have two layers with transparency (I don't know if transparency matters here). The layers are the same size, 5x7 inches. Each layer has its own image (say I draw a square on one and a circle on the other).
I want to resize ONLY the square.
The problem is that when I scale the square, I end up either scaling both the circle AND the square equally while they retain their layer size, or BOTH layers are resized and are no longer 5x7 inches. I've tried 'Tools > Transform > Scale' and 'Image > Resize canvas or image', but I can't find the tool to resize just ONE of the images.
Any ideas what I'm doing wrong?
Thanks
What you want is the Scale tool, and it will resize only the active layer if it is in Scale: layer mode (you seem to have it in Scale: image mode)(*).
Otherwise, to clear things up:
Image > Canvas Size changes the size of the canvas, but nothing is stretched or compressed; the layers retain their size or are extended with transparency or white.
Image > Scale Image scales everything in the image (layers, channels, paths...).
(*) Also, if you apply a transform such as Scale to an item that has the chain link enabled, the same transform will be applied to all other chain-linked items (other layers, but also paths).

Image pixelation library, non-square "pixel" shape

I've seen a few libraries that pixelate images, some of them even feature non-square shapes such as The Pixelator's circle and diamond shapes.
However, I'm looking to make a particular shape: I want a "pixel" that is 19x27 px. Essentially, the image would still look pixelated, but it would use tallish rectangular shapes as the pixel base.
Are there any libraries out there that do this? If not, what alterations to existing algorithms/functions would I need to make to accomplish this?
Unless I am not understanding your question, the algorithm you need is quite simple!
Just break your image up into a grid of rectangles the size you want (in this case 19x27). Loop over each section of the grid and take the average color of the pixels inside (you can simply take the average of each channel in RGB independently). Then set all of the pixels contained inside to the average color.
This would give you an image that is the same size as your input. You could of course resize your image first to a more appropriate output size.
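For illustration, here is a minimal sketch of that block-averaging idea in Python with Pillow and NumPy (the 19x27 block size and the file names are just example values):

import numpy as np
from PIL import Image

def pixelate(in_path, out_path, block_w=19, block_h=27):
    # Load the image as an RGB array of floats.
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.float64)
    height, width, _ = img.shape
    out = img.copy()
    # Walk the grid of block_w x block_h rectangles.
    for y in range(0, height, block_h):
        for x in range(0, width, block_w):
            block = img[y:y + block_h, x:x + block_w]
            # Average each RGB channel independently, then fill the block with it.
            out[y:y + block_h, x:x + block_w] = block.mean(axis=(0, 1))
    Image.fromarray(out.astype(np.uint8)).save(out_path)

pixelate("input.png", "pixelated.png")

Blocks at the right and bottom edges may be smaller than 19x27; the slicing above simply averages whatever pixels they contain.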
You might want to look up convolution matrices.
In a shader, you would use your current pixel location to grab a set of nearby pixels from the original image to render to a pixel in a new buffer image.
It is actually just a slight variation of the box blur image-processing algorithm, except that instead of grabbing nearby pixels you would grab the pixels from the division of the original image that corresponds to the 19x27 division of the resulting image you are filling in.

image smoothing in opengl?

Does OpenGL provide any facilities to help with image smoothing?
My project converts scientific data to textures, each of which is a single line of colored pixels which is then mapped onto the appropriate area of the image. Lines are mapped next to each other.
I'd like to do simple image smoothing of this, but am wondering if OpenGL can do any of it for me.
By smoothing, I mean applying a two-dimensional averaging filter to the image - effectively increasing the number of pixels but filling them with averages of nearby actual colors - basically normal image smoothing.
You can do it with a custom shader if you want. Essentially you just bind your input texture, draw it as a fullscreen quad, and in the shader take multiple samples around each fragment, average them together, and write the result out to a new texture. The new texture can be an arbitrarily higher resolution than the input texture if you want that as well.
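If it helps to see the averaging spelled out, here is a CPU-side sketch in Python/NumPy of the same box-average idea the shader would perform per fragment (the 3x3 kernel and the file names are arbitrary example choices, and this version writes an image of the same size rather than a higher-resolution one):

import numpy as np
from PIL import Image

def box_smooth(in_path, out_path, radius=1):
    # radius=1 averages over a 3x3 neighbourhood around each pixel.
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.float64)
    # Repeat the edge pixels so border samples stay in range
    # (the clamp-to-edge behaviour a texture sampler would give you).
    padded = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    size = 2 * radius + 1
    # Sum the shifted copies of the image, then divide by the sample count.
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    out /= size * size
    Image.fromarray(out.astype(np.uint8)).save(out_path)

box_smooth("data_texture.png", "smoothed.png")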

Cropping an image with a focus area (face) using ImageMagick

I'm struggling to find the right approach to resize and crop an image with a focus area. In my case the focus area is a face detected in the image, and I need to make sure that this area is visible in the cropped version.
I have a focus area given by e.g. face_height, face_width, face_center_x and face_center_y. These values are percentages of the dimensions of the original image.
What I want to do is get e.g. a 60x60 thumbnail. The normal approach would be to resize so that either the height or the width of the image equals 60px and then crop a 60x60 square from the center, like this:
mogrify -resize 60x -gravity 'Center' -crop 60x60 image.jpg
What approach can be taken to focus my crop around a given area instead?
I'm thinking of a solution that includes several paths:
If the face area is bigger than the wanted thumbnail, resize the image just enough to make the whole face visible in 60x60 pixels, then crop.
If the face area is smaller than the wanted thumbnail, "expand" my face area until the wanted thumb fits inside it, then crop. I guess I need to make sure that this doesn't exceed the bounds of the original image.
Is there a smarter approach? Can you try to make some example code?
Thanks!
I'd first do the arithmetic in a script or program, then feed exact coordinates to ImageMagick.
The arithmetic steps:
It'll be easier to operate with exact pixel values than percentages, so convert face_height, face_width, face_center_x and face_center_y to pixel values.
You'll want a square thumbnail, so pick the longer side of the face area and work with that:
longest_side = max(face_height, face_width)
Now you can calculate top left point for your crop:
crop_x = face_center_x - longest_side / 2
crop_y = face_center_y - longest_side / 2
If any of the four crop corners fall outside your picture dimensions, adjust for that:
crop_x and crop_y should both be >= 0
crop_x + longest_side should be less than image width
crop_y + longest_side should be less than image height
Having calculated these, the ImageMagick call is quite straightforward:
mogrify -crop {longest_side}x{longest_side}+{crop_x}+{crop_y} -resize 60x60 image.jpg
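As a rough illustration, the same arithmetic can be scripted in Python before calling ImageMagick (the function, its parameters and the 60x60 target are hypothetical examples; the percentages are assumed to be fractions between 0 and 1):

import subprocess

def face_crop(path, img_w, img_h,
              face_center_x, face_center_y, face_w, face_h, thumb=60):
    # Convert the percentage-based face box to pixel values.
    fw, fh = face_w * img_w, face_h * img_h
    cx, cy = face_center_x * img_w, face_center_y * img_h

    # Square crop: use the longer side of the face box, never smaller than
    # the thumbnail itself (the "expand" case), and never larger than the image allows.
    side = min(max(fw, fh, thumb), img_w, img_h)

    # Top-left corner of the crop, clamped inside the image bounds.
    crop_x = min(max(cx - side / 2, 0), img_w - side)
    crop_y = min(max(cy - side / 2, 0), img_h - side)

    # Crop the square around the face, then resize it down to the thumbnail.
    subprocess.run([
        "mogrify",
        "-crop", f"{int(side)}x{int(side)}+{int(crop_x)}+{int(crop_y)}",
        "-resize", f"{thumb}x{thumb}",
        path,
    ], check=True)

face_crop("image.jpg", 1200, 800, 0.45, 0.30, 0.10, 0.15)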

What are the pros and cons of a sprite sheet compared to an image sequence?

I come from a 2D animation background, so whenever I use an animated sequence I prefer to use a sequence of images. To me this makes a lot of sense, because you can easily export the image sequence from your compositing/editing software and easily define the aspect.
I am new to game development and am curious about the use of a sprite sheet. What are the advantages and disadvantages? Is file size an issue? To me it would seem that a bunch of small images would add up to about the same as one massive one. Also, defining each individual sprite's area seems cumbersome.
Basically, I don't get why you would use a sprite sheet - please enlighten me.
Thanks
Performance is better for sprite sheets because you have all your data contained in a single texture. Let's say you have 1000 sprites playing the same animation from a sprite sheet. The process for drawing would go something like this:
Set the sprite sheet texture.
Adjust the UVs to show a single frame of the animation.
Draw sprite 0
Adjust UVs
Draw sprite 1
.
.
.
Adjust UVs
Draw sprite 998
Adjust UVs
Draw sprite 999
Using a texture sequence could result in a worst case of:
Set the animation texture.
Draw sprite 0
Set the new animation texture.
Draw sprite 1
.
.
.
Set the new animation texture.
Draw sprite 998
Set the new animation texture.
Draw sprite 999
Gah! Before drawing every sprite you would have to set the render state to use a different texture, and this is much slower than adjusting a couple of UVs.
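To make the "adjust UVs" step concrete, here is a small Python sketch of the arithmetic involved (the sheet layout, frame size and function name are hypothetical; a real renderer would feed these values into its vertex or constant data):

def frame_uv_rect(frame_index, frame_w, frame_h, sheet_w, sheet_h, columns):
    # Locate the frame on the sheet's implicit grid.
    col = frame_index % columns
    row = frame_index // columns
    # Convert the pixel rectangle to normalized [0, 1] texture coordinates.
    u0 = (col * frame_w) / sheet_w
    v0 = (row * frame_h) / sheet_h
    return (u0, v0, u0 + frame_w / sheet_w, v0 + frame_h / sheet_h)

# Example: frame 5 of 32x48 frames laid out 8 per row on a 256x256 sheet.
print(frame_uv_rect(5, 32, 48, 256, 256, columns=8))

Drawing each of the 1000 sprites then only changes these four numbers, rather than rebinding a different texture per draw.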
Many (most?) graphics cards require power-of-two texture dimensions, and some older ones prefer square textures as well - for example 128x128, 512x512, etc. Many/most sprites, however, are not such dimensions. You then have two options:
Round the sprite image up to the nearest power-of-two square. A 16x32 sprite becomes twice as large with transparent pixel padding to 32x32. (this is very wasteful)
Pack multiple sprites into one image. Rather than padding with transparency, why not pad with other images? Pack in those images as efficiently as possible! Then just render segments of the image, which is totally valid.
Obviously the second choice is much better, with less wasted space. So if you must pack several sprites into one image, why not pack them all in the form of a sprite sheet?
So to summarize: image files, when loaded into the graphics card, generally need power-of-two (and sometimes square) dimensions. However, the program can choose to render an arbitrary rectangle of that texture to the screen; it doesn't have to be power-of-two or square. So, pack the texture with multiple images to make the most efficient use of texture space.
Sprite sheets tend to be smaller files (since there's only one header for the whole lot).
Sprite sheets load quicker, as there's just one disk access rather than several.
You can easily view or adjust multiple frames at once.
Less wasted video memory when you load the whole lot into one surface (as Ricket has said).
Individual sprites can be delineated by offsets (eg. on an implicit grid - no need to explicitly mark or note each sprite's position).
There isn't one massive benefit to using sprite sheets, but there are several small ones. The practice also dates back to a time before most people were using proper 2D graphics software to make game graphics, so artist workflow wasn't necessarily the most important consideration back then.
