Pixel Size Different Than Grid Cell Size - Photoshop Pixel Art

I am trying to learn pixel art in Photoshop by following various tutorials available on the internet.
For pixel art, the most important thing is to set up the grid so that individual pixels are easy to draw.
If you look at my attached image, you will notice that the pixel size does not match the grid cell size, even though I have set the pencil size to 1 px.
One pixel should be exactly the size of a single grid cell.
As far as I can tell, there is some mistake in my gridline settings.
Could someone suggest how to correct this?
I want the grid lines to match the size of one pixel exactly.

How can I track/color separate shapes generated using Perlin noise?

So I have created a 2D animation from 3D Perlin noise, where the X and Y axes are the pixel positions on the matrix/screen and the Z axis just counts up over time. I then apply a threshold so it only shows solid shapes, unlike the cloud-type pattern of the normal noise. In effect it creates a perpetually moving fluid animation, like so: https://i.imgur.com/J9AqY5s.gifv
I have been trying to think of a way I can track, and maybe index, the different shapes so I can have them all be different colours. I tried looping over the image and flood-filling each shape, but this only works for one frame, as it doesn't track which shape is which or how they grow and shrink.
I think there must be a way to do something like this, because if I had a colouring pencil and each frame on a piece of paper, I would be able to colour and track each blob, and combine colours when two blobs join. I just can't figure out how to do this programmatically. Because of the way Perlin noise works, and since the shapes aren't defined objects, I find it difficult to wrap my head around how I would index them.
Hopefully my explanation has been sufficient; any suggestions or help would be greatly appreciated.
Your current algorithm effectively marks every pixel in a frame as part of a blob or part of the background. Let's say you have a second frame buffer that can hold a color for every pixel location. As you noted, you can use flood fill on the blob/background buffer to find all the pixels that belong to a blob.
For the first frame, assign colors to each blob you find and save them in the color buffer.
For the second (and each subsequent) frame, you can again use flood fill on the blob/background buffer to determine all the pixels that belong to a discrete blob. Look up the color corresponding to each of those pixels in the color buffer (which represents the colors from the last frame) and build a histogram of all the colors you find.
The histogram will likely contain some pixels of the background color (because the blob may have moved or grown into an area that was background).
But since the blobs move smoothly, many of the pixels that are part of a given blob this frame will have been part of the same blob on the last frame. So if your histogram has just one non-background color, that's the color you would use.
If the histogram contains only the background color, this is a new blob and you can assign it a new color.
If the histogram contains two (or more) blob colors, then two (or more) blobs have merged, and you can blend their colors (perhaps in proportion to their histogram counts with correspond to their areas).
The trick will be to do all this efficiently. The algorithm I've outlined here gives the idea; an actual implementation may not literally build histograms, and might not recalculate every pixel's color from scratch on every frame.
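Here is a minimal sketch of the idea in Python, assuming each frame arrives as a 2D boolean mask and using scipy.ndimage.label for the flood-fill/labelling step. The function name track_colors is mine, and on a merge it simply keeps the dominant color; blending the merged colors in proportion to the histogram counts, as described above, is the smoother option.

```python
import numpy as np
from scipy import ndimage

def track_colors(mask, prev_colors, n_used):
    """Assign a color index to every blob pixel in this frame.

    mask        : 2D bool array, True on blob pixels for this frame.
    prev_colors : 2D int array from the last frame (0 = background).
    n_used      : highest color index handed out so far.
    Returns (colors, n_used) for this frame.
    """
    labels, n_blobs = ndimage.label(mask)      # flood-fill labelling
    colors = np.zeros_like(prev_colors)
    for blob in range(1, n_blobs + 1):
        pixels = labels == blob
        # Histogram of the previous frame's colors under this blob.
        hist = np.bincount(prev_colors[pixels])
        hist[0] = 0                            # ignore the background bin
        if hist.sum() == 0:                    # no overlap: a brand-new blob
            n_used += 1
            colors[pixels] = n_used
        else:
            # One non-zero bin: same blob as last frame, keep its color.
            # Several bins: blobs have merged; keep the dominant color
            # (or blend the palette entries by the histogram weights).
            colors[pixels] = int(hist.argmax())
    return colors, n_used
```

Per frame you would call colors, n_used = track_colors(frame_mask, colors, n_used) and then map the integer color indices to actual RGB values however you like.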

Image pixelation library, non-square "pixel" shape

I've seen a few libraries that pixelate images, some of them even feature non-square shapes such as The Pixelator's circle and diamond shapes.
However, I'm looking to make a particular shape: I want a "pixel" that is 19x27 px. Essentially, the image would still look pixelated, but it would use tallish rectangular shapes as the pixel base.
Are there any libraries out there that do this? If not, what alterations to existing algorithms/functions would I need to make to accomplish this?
Unless I'm misunderstanding your question, the algorithm you need is quite simple!
Just break your image up into a grid of rectangles of the size you want (in this case 19x27). Loop over each section of the grid and take the average color of the pixels inside (you can simply average each RGB channel independently). Then set all of the pixels in that section to the average color.
This would give you an image that is the same size as your input. You could of course resize your image first to a more appropriate output size.
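For what it's worth, here is a minimal sketch of that averaging loop in Python with NumPy and Pillow (the function name pixelate and the file handling are just for illustration):

```python
import numpy as np
from PIL import Image

def pixelate(path, cell_w=19, cell_h=27):
    """Pixelate an image using cell_w x cell_h rectangular 'pixels'."""
    img = np.asarray(Image.open(path).convert("RGB")).astype(np.float64)
    h, w, _ = img.shape
    for y in range(0, h, cell_h):
        for x in range(0, w, cell_w):
            cell = img[y:y + cell_h, x:x + cell_w]
            # Average each RGB channel over the cell, then flood the
            # whole cell with that average color.
            cell[...] = cell.mean(axis=(0, 1))
    return Image.fromarray(img.astype(np.uint8))
```

Cells at the right and bottom edges come out smaller than 19x27, which the slicing handles naturally.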
You might want to look up convolution matrices.
In a shader, you would use your current pixel location to grab a set of nearby pixels from the original image to render to a pixel in a new buffer image.
It is actually just a slight variation of the box blur image-processing algorithm, except that instead of sampling the pixels near the output pixel, you sample from whichever 19x27 division of the original image the output pixel falls in.
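If you just want the effect without writing a shader, Pillow (9.1 or later for the Resampling names) can approximate this grab-by-division sampling with two resize calls: BOX resampling averages each division, and NEAREST blows the result back up.

```python
from PIL import Image

img = Image.open("input.png").convert("RGB")
w, h = img.size

# Shrink so each 19x27 division collapses to one averaged pixel...
small = img.resize((max(1, w // 19), max(1, h // 27)), Image.Resampling.BOX)
# ...then scale back up without interpolation to get blocky rectangles.
pixelated = small.resize((w, h), Image.Resampling.NEAREST)
pixelated.save("output.png")
```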

UI text display in World Space

I have added a simple text element to my scene, on a Canvas that is set to World Space so I can see it in VR. But no matter how much I change the size, or how near or far I move it relative to the camera, it still shows very blurry text (which I use to display the time).
The situation can be seen in the attached image, along with my Canvas settings in the other image. Could someone please help me understand which setting(s) I should adjust to get this sorted?
It looks to me like the Canvas Scaler is the source of the problem here. For a Canvas set to World Space, the Canvas Scaler controls the pixel density of the UI elements in the Canvas.
To increase the pixel density (which should make rendered text sharper), you can raise the value of Dynamic Pixels Per Unit, which should increase the number of pixels used per unit to render your Text. I don't have an exact value for you, as this may vary based on circumstance; you'll just have to experiment to see what value works best for you.
An alternative workaround is to scale the Text way, way down, but increase its Font Size property proportionally.
Hope this helps! Let me know if you have any questions.

Any ideas on how to remove small isolated pixel groups in a PNG using OpenCV or another algorithm?

I have a PNG image like this:
The blue color represents transparency, and each circle is a group of pixels. I would like to find the biggest group and remove all the small pixel groups that are not connected to it. In this example, the biggest one is the red circle, so I will keep it. But the green and yellow ones are too small, so I will remove them. After that, I will have something like this:
Any ideas? Thanks.
If you consider only the size of the objects, use the following algorithm: label the connected components of the object mask image (all object pixels white, transparent ones black). Then compute the areas of the connected components and filter on them. At this step, you have a label map and a list of allowed labels; you can then scan the label map and rewrite the mask image, setting each pixel to white only if it carries an allowed label.
OpenCV's older API does not seem to have a labelling function, but cvFloodFill can do the same thing over several calls: for each unlabelled white pixel, call FloodFill with that pixel as the seed, then record each newly filled pixel's label in an array the size of the image. Repeat as long as you have unlabelled pixels.
Alternatively, you can implement the connected-components function for binary images yourself; the algorithm is well known and easy to implement (maybe start from MATLAB's bwlabel).
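For what it's worth, OpenCV gained a built-in labelling function in version 3.0, cv2.connectedComponentsWithStats, so the label-and-filter-by-area approach above now fits in a few lines of Python. This sketch assumes the transparency really is stored in an alpha channel:

```python
import cv2
import numpy as np

# Load with the alpha channel; the object mask is "not transparent".
img = cv2.imread("input.png", cv2.IMREAD_UNCHANGED)
mask = (img[:, :, 3] > 0).astype(np.uint8)

# Label connected components and collect per-component statistics.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)

# Row 0 of stats is the background; keep only the largest foreground blob.
areas = stats[1:, cv2.CC_STAT_AREA]
biggest = 1 + int(np.argmax(areas))
img[labels != biggest] = 0        # zero out (make transparent) everything else

cv2.imwrite("output.png", img)
```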
The handiest way to filter objects when you have a priori knowledge of their size is to use morphological operators. In your case, with OpenCV, once you've loaded your image (OpenCV supports PNG), you have to do an "opening", that is, an erosion followed by a dilation.
The small objects (smaller than the size of the structuring element you chose) will disappear with erosion, while the bigger will remain and be restored with the dilation.
(For reference, see cv::morphologyEx.)
The shape of the big object might be altered. If you're only doing detection, this is harmless, but if you want your object to remain unaltered you'll need to apply a "top hat" transform.
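A minimal sketch of the opening approach in Python follows; the 15x15 elliptical structuring element is an assumption, and should be sized slightly larger than the blobs you want to remove:

```python
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_UNCHANGED)
mask = (img[:, :, 3] > 0).astype(np.uint8) * 255

# Structuring element slightly larger than the blobs to discard.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))

# Opening = erosion (removes small objects) then dilation (restores the rest).
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

img[opened == 0] = 0              # drop everything the opening removed
cv2.imwrite("output.png", img)
```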

Cocoa Resolution Independent Button Graphic

I'm trying to create a graphic in Sketch (a vector-based graphic design application). I export to PDF and this is what my original graphic looks like:
But when I set it as the image of an NSButton, it gets drawn like this:
Why does this occur? The right and bottom edges in particular are altered a lot. I'm not sure if this is a Cocoa drawing issue or an issue with my original graphic.
The problem is with (mis)alignment with the pixel grid and anti-aliasing. It looks like you've scaled the image so that the borders on the left, right, and bottom are roughly one pixel in thickness. However, the right and bottom borders are straddling the boundary between pixels. The result is that they contribute half their "darkness" to the pixel on one side of the boundary and the other half to the pixel on the other side of the boundary.
You should tweak either the proportions of the image or the size at which you're drawing it to avoid that particular alignment. It looks as though it's being rendered as roughly 10.5 pixels wide. You want it to be either 10 pixels or 11 pixels wide, so the right edge corresponds more closely to a pixel column.
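To make the "half their darkness" point concrete, here is a tiny illustrative calculation (not Cocoa code) of how a one-pixel-wide edge's ink is split between two pixel columns depending on where it lands:

```python
import math

def column_coverage(left_edge_x):
    """Fraction of a 1-px-wide vertical line's ink that lands in the
    pixel column containing left_edge_x vs. the column to its right."""
    frac = left_edge_x - math.floor(left_edge_x)
    return 1.0 - frac, frac

print(column_coverage(10.0))  # (1.0, 0.0): one crisp, fully dark column
print(column_coverage(10.5))  # (0.5, 0.5): two half-dark columns, i.e. a blurry edge
```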
