Polybar Math: Calculating offset-x to center bar automatically - bash

I am trying to calculate the offset-x value with some simple math. I want offset-x to be computed by taking the width value, stripping off the % (the main issue), subtracting it from 100, and dividing by 2. This will make sure the bar is always centered no matter what width is set to.
This is what I have so far:
[bar/top]
; Dimension defined as pixel value (e.g. 35) or percentage (e.g. 50%),
; the percentage can optionally be extended with a pixel offset like so:
; 50%:-10, this will result in a width or height of 50% minus 10 pixels
width = 70%
height = 28pt
; divide the negative space (30%) evenly and set offset-x to that value
; 70% width = 30% negative space / 2 = 15%
; test variable calculation:
barXoffset="$(echo "(100-70)/2" | bc)"
; Offset defined as pixel value (e.g. 35) or percentage (e.g. 50%)
; the percentage can optionally be extended with a pixel offset like so:
; 50%:-10, this will result in an offset in the x or y direction
; of 50% minus 10 pixels
; offset-x = 15%
offset-x = ${barXoffset}%
offset-y = 2
How would you try to solve this?
Currently trying to pipe into bc, but I still need to strip off the % at the end.
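Just to illustrate the arithmetic (a minimal sketch in Python; the "70%" value and the variable names are only placeholders mirroring the config above):

width = "70%"                        # the width value from the bar config
width_pct = int(width.rstrip("%"))   # strip the trailing % -> 70
offset_x = (100 - width_pct) // 2    # split the remaining 30% evenly -> 15
print(f"offset-x = {offset_x}%")     # prints: offset-x = 15%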

Related

Get X/Y position of pixel in PNG file

After stripping off the header bytes and decompressing the pixel values, a PNG file leaves us with a set of rows (each a horizontal strip of the image, one pixel high).
Each row starts with a single byte specifying the filter used, followed by RGB values:
+-----+-----+-----+-----+-----+-----+-----+
| 0:F | 1:R | 2:G | 3:B | 4:R | 5:G | 6:B | // end of first row in image
+-----+-----+-----+-----+-----+-----+-----+
| 7:F | 8:R | 9:G |10:B |11:R |12:G |13:B | // end of second row
+-----+-----+-----+-----+-----+-----+-----+
In an image without the filter byte, I could just divide the index by 3 (since there are three values per RGB pixel), then use these formulas to get the x/y position of that pixel:
x = index % width
y = index / width
But the filter byte is throwing me off! How do I get the x/y position of a pixel, given a red pixel's byte index? (Say at byte 4 or at byte 11, as shown above.)
I've tried all kinds of permutations but I think there must be an elegant solution!
Based on comments from @usr2564301, I think this works correctly:
y = ((index-1) / 3) / width
x = ((index-y) / 3) % width
Where width is the width of the image in pixels (not the width of a row in bytes), and all of the divisions are integer divisions.
We subtract y from the index because each completed row above contributes one filter byte, and those have to be removed to get the x position; the current row's own filter byte is absorbed by the integer division by 3.
Alternatively, y can be calculated using:
y = index / row_width
Where row_width is the number of bytes per row: three bytes per RGB pixel times the width of the image, plus one filter byte.
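As a quick sanity check, here is a small sketch (Python; the function name is mine) that applies the formulas above to the byte layout from the question:

def pixel_position(index, width):
    # width is the image width in pixels; each row is 1 filter byte
    # followed by 3 * width colour bytes; all divisions are integer divisions
    row_width = 3 * width + 1          # bytes per row, filter byte included
    y = index // row_width             # equivalently: ((index - 1) // 3) // width
    x = ((index - y) // 3) % width
    return x, y

# the two-pixel-wide rows drawn in the question:
print(pixel_position(4, 2))    # -> (1, 0): second pixel, first row
print(pixel_position(11, 2))   # -> (1, 1): second pixel, second row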

Best fitting rectangle with a variable number of small rectangles keeping aspect ratio

I need to find the optimal placement of a given number N of child rectangles while keeping the aspect ratio of the father rectangle.
Use case is the following:
- the father rectangle is a big picture, let's say 4000x3000 pixels (this one can be rescaled).
- child rectangles are 296x128 pixels (e-ink displays of users)
The objective is to show the big picture across the current number of displays (this number can change from 1 to 100).
This is an example:
It can happen that the number of small rectangles does not fit the big rectangle's aspect ratio, for example when the number of small rectangles is odd; in that case I can add a small number (max 5) of spare rectangles to complete the big rectangle.
This seems to be a valid approach (Python + OpenCV):
import cv2
import imutils

def split_image(image, boards_no=25, boards_shape=(128, 296), additional=5):
    # find image aspect ratio
    aspect_ratio = image.shape[1] / image.shape[0]
    print("\nIMAGE INFO:", image.shape, aspect_ratio)
    # find all valid combinations of (a, b) that use the available boards
    valid_props = [(a, b)
                   for a in range(boards_no + additional + 1)
                   for b in range(boards_no + additional + 1)
                   if a * b in [q for q in range(boards_no, boards_no + additional)]]
    print("\nVALID COMBINATIONS", valid_props)
    # find the aspect ratio of every combination (horizontal and vertical orientation)
    aspect_ratio_all = [
        {
            'board_x': a,
            'board_y': b,
            'aspect_ratio': (a * boards_shape[1]) / (b * boards_shape[0]),
            'shape': (b * boards_shape[0], a * boards_shape[1]),
            'type': 'h'
        } for (a, b) in valid_props]
    aspect_ratio_all += [
        {
            'board_x': a,
            'board_y': b,
            'aspect_ratio': (a * boards_shape[0]) / (b * boards_shape[1]),
            'shape': (b * boards_shape[1], a * boards_shape[0]),
            'type': 'v'
        } for (a, b) in valid_props]
    min_ratio_diff = min([abs(aspect_ratio - x['aspect_ratio']) for x in aspect_ratio_all])
    best_ratio = [x for x in aspect_ratio_all if abs(aspect_ratio - x['aspect_ratio']) == min_ratio_diff][0]
    print("\nMOST SIMILAR ASPECT RATIO:", best_ratio)
    # resize the image, maximizing height or width
    resized_img = imutils.resize(image, height=best_ratio['shape'][0])
    border_width = int((best_ratio['shape'][1] - resized_img.shape[1]) / 2)
    border_height = 0
    if resized_img.shape[1] > best_ratio['shape'][1]:
        resized_img = imutils.resize(image, width=best_ratio['shape'][1])
        border_height = int((best_ratio['shape'][0] - resized_img.shape[0]) / 2)
        border_width = 0
    print("RESIZED SHAPE:", resized_img.shape, "BORDERS (H, W):", (border_height, border_width))
    # fill the border with black
    resized_img = cv2.copyMakeBorder(
        resized_img,
        top=border_height,
        bottom=border_height,
        left=border_width,
        right=border_width,
        borderType=cv2.BORDER_CONSTANT,
        value=[0, 0, 0]
    )
    # split into tiles
    M = resized_img.shape[0] // best_ratio['board_y']
    N = resized_img.shape[1] // best_ratio['board_x']
    return [resized_img[x:x + M, y:y + N]
            for x in range(0, resized_img.shape[0], M)
            for y in range(0, resized_img.shape[1], N)]

image = cv2.imread('image.jpeg')
tiles = split_image(image)
Our solutions will always be rectangles into which we fit the biggest picture that we can while keeping the aspect ratio correct. The question is how we grow them.
In your example a single display is 296 x 128 (which I assume is width and height). Our image scaled to 1 display is 170.6 x 128. (You can drop the fractional pixels in your scaling.)
The rule is that at each step, whichever direction the image currently fills gets more displays so that the picture can expand. In the single-display solution we therefore go from a 1x1 arrangement to a 1x2 one, and we now have 296 x 256. Our scaled image is now 296 x 222.
Our next solution will be a 2x2 arrangement. This gives us 592 x 256 and our scaled image is 341.3 x 256.
Next we get a 2x3 arrangement. This gives us 592 x 384 and our scaled image is now 512 x 384.
Since we are still maxing out the second dimension, we next go to 2x4. This gives us 592 x 512 and our scaled image is 592 x 444. And so on.
For your problem it will not take long to run through all of the sizes up to however many displays you have, and you just take the biggest rectangle that you can make from the list.
Important special case: if the display rectangle and the image have the same aspect ratio, you have to add to both dimensions. In the case where the image and the displays share an aspect ratio, that gives you 1x1, 2x2, 3x3 and so on through the squares.
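A rough sketch of that growth rule (Python; the function and variable names are mine, using the 4000x3000 image and 296x128 displays from the question):

def grow_grids(img_w, img_h, disp_w=296, disp_h=128, max_displays=100):
    # enumerate display grids: whichever direction the scaled image currently
    # fills gets one more display (both directions when the ratios match)
    cols, rows = 1, 1
    results = []
    while cols * rows <= max_displays:
        grid_w, grid_h = cols * disp_w, rows * disp_h
        scale = min(grid_w / img_w, grid_h / img_h)   # fit the image, keep aspect ratio
        results.append((cols, rows, round(img_w * scale, 1), round(img_h * scale, 1)))
        if grid_w / img_w <= grid_h / img_h:
            cols += 1          # width is the filled (binding) direction
        if grid_h / img_h <= grid_w / img_w:
            rows += 1          # height is filled; both grow on an exact tie
    return results

for cols, rows, w, h in grow_grids(4000, 3000)[:5]:
    print(f"{cols} x {rows} displays -> scaled image {w} x {h}")

The fifth entry reproduces the 2x4 step (592 x 444) above; picking the entry with the largest display count that does not exceed the available displays gives the final layout.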

Simplifying an image in matlab

I have a picture of a handwritten letter (say the letter "y"). Keeping only the first of the three color values (since it is a grayscale image), I get a 111x81 matrix which I call aLetter. I can see this image (please ignore the title) using:
colormap gray; image(aLetter,'CDataMapping','scaled')
What I want is to remove the white space around this letter and somehow average the remaining pixels so that I have an 8x8 matrix (let's call it simpleALetter). Now if I use:
colormap gray; image(simpleALetter,'CDataMapping','scaled')
I should see a pixellated version of the letter:
Any advice on how to do this would be greatly appreciated!
You need several steps to achieve what you want (updated in the light of @rwong's observation that I had white and black flipped…):
- Find the approximate 'bounding box' of the letter:
  - make sure that "text" is the highest value in the image
  - set things that are "not text" (anything below a threshold) to zero
  - sum along rows and columns, and find the non-zero pixels
- Upsample the image in the bounding box to a multiple of 8
- Downsample to 8x8
Here is how you might do that in your situation:
aLetter = max(aLetter(:)) - aLetter; % invert image: now white = close to zero
aLetter = aLetter - min(aLetter(:)); % make the smallest value zero
maxA = max(aLetter(:));
aLetter(aLetter < 0.1 * maxA) = 0; % thresholding; play with this to set "white" to zero
% find the bounding box:
rowsum = sum(aLetter, 1);
colsum = sum(aLetter, 2);
nonzeroH = find(rowsum);
nonzeroV = find(colsum);
smallerLetter = aLetter(nonzeroV(1):nonzeroV(end), nonzeroH(1):nonzeroH(end));
% now we have the box, but it's not 8x8 yet. Resampling:
sz = size(smallerLetter);
% first upsample in both X and Y by a factor 8:
bigLetter = repmat(reshape(smallerLetter, [1 sz(1) 1 sz(2)]), [8 1 8 1]);
% then reshape and sum so you end up with 8x8 in the final matrix:
letter8 = squeeze(sum(sum(reshape(bigLetter, [sz(1) 8 sz(2) 8]), 3), 1));
% finally, flip it back "the right way" black is black and white is white:
letter8 = 255 - (letter8 * 255 / max(letter8(:)));
You can do this with explicit for loops but it would be much slower.
You can also use some of the blockproc functions in Matlab but I am using Freemat tonight and it doesn't have those… Neither does it have any image processing toolbox functions, so this is "hard core".
As for picking a good threshold: if you know that more than 90% of your image is "white", you could determine the threshold dynamically by sorting the pixels and reading it off at that fraction - or, as the comment in the code says, "play with it" until you find something that works in your situation.
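A sketch of that dynamic-threshold idea, in Python with NumPy rather than MATLAB (the function name and the 90% figure are just the assumption from the paragraph above):

import numpy as np

def dynamic_threshold(letter, background_fraction=0.9):
    # if ~90% of the (inverted) image is background near zero, a threshold can
    # be read off at that fraction of the sorted pixel intensities
    flat = np.sort(letter.ravel())
    return flat[int(background_fraction * (flat.size - 1))]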

Matlab: Barcode scanner

I'm trying to make a barcode scanner in MATLAB. In a barcode every white bar is 1 and every black bar is 0. I'm trying to extract these bars. But this is the problem:
As you can see, the bars are not all the same width: one is 3 pixels, the next 2 pixels, and so on. To make it even worse, the widths differ between images too. So my question is: how can I get the values of these bars without knowing the width of one bar, or how do I give them all the same width? (Two of the same bars can be next to each other.) It's not possible to detect the transition between bars, because a transition can only happen after a certain number of pixels, and then there can be another bar or the same bar again; since that number of pixels is unknown, the transition can't be detected. It's also not possible to work with some kind of window, because the bars have no standard width. So how can I normalize this?
A barcode:
Thanks in advance!
Let's assume that the bars are strictly vertical (as in your example). Here is a possible workflow:
%# read the file
filename = 'CW4li.jpg';
x = imread(filename);
%# convert to grayscale
x = rgb2gray(x);
%# get only the bars area
xend = find(diff(sum(x,2)),1);
x(xend:end,:) = [];
%# sum intensities along the bars
xsum = sum(x);
%# threshold the image by half of all pixels intensities
th = ( max(xsum)-min(xsum) ) / 2;
xth = xsum > th;
%# find widths
xstart = find(diff(xth)>0);
xstop = find(diff(xth)<0);
if xstart(1) > xstop(1)
    xstart = [1 xstart];
end
if xstart(end) > xstop(end)
    xstop = [xstop numel(xth)];
end
xwidth = xstop-xstart;
%# look at the histogram
hist(xwidth,1:12)
%# it's clear that single bar has 2 pixels (can be automated), so
barwidth = xwidth / 2;
UPDATE
To get the relative bar width we can divide the width in pixels by the minimum bar width:
barwidth = xwidth ./ min(xwidth);
I believe it's a good assumption that there will always be a bar of width 1.
If you don't get integer values (due to noise, for example), try rounding the numbers to the closest integer and keeping the residuals. You can sum those residuals to get a quality assessment of the recognition.
Some clustering algorithm (like k-means clustering) might also work well.
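For the rounding-and-residuals step, a small sketch (Python rather than MATLAB; the helper name and sample widths are mine):

import numpy as np

def normalize_widths(xwidth):
    # widths in units of the narrowest bar, rounded to integers, plus a total
    # residual that can serve as a recognition-quality score
    relative = np.asarray(xwidth, dtype=float) / min(xwidth)
    rounded = np.round(relative).astype(int)
    residual = float(np.abs(relative - rounded).sum())
    return rounded, residual

widths, quality = normalize_widths([2, 2, 4, 6, 2, 5])
print(widths)    # [1 1 2 3 1 2]  (relative bar widths)
print(quality)   # 0.5            (larger means a noisier read)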

When zooming an image how can I determine a proper w/h so that I won't get decimals

Let's say I have a non-square image.
If I increment the width and recalculate the height according to the incremented width (keeping the ratio), I sometimes get xxx.5 (decimals) for the height.
ex.: width = 4, height = 2
If I augment the width by a factor of 1.25, I'll get: width = 5
Next, the height will be: height = 2.5
How can I determine the nearest image format that would have integers on both sides? (bigger if possible)
Thanks
Let g be the greatest common divisor (http://en.wikipedia.org/wiki/Greatest_common_divisor) of w and h. The next biggest image has width w + w/g and height h + h/g. You can compute g with the Euclidean algorithm (http://en.wikipedia.org/wiki/Euclidean_algorithm).
Reduce the fraction to lowest terms and then multiply by integers. You reduce a/b to lowest terms by dividing each by their gcd. If d = gcd(a, b), then (a/d) / (b/d) is in lowest terms. Now, if you want the next largest integer pair with the same ratio, multiply the numerator and denominator by d+1. Thus,
(d+1) * (a/d) is the numerator and (d+1) * (b/d) is the denominator.
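Both answers describe the same computation; a minimal sketch (Python):

from math import gcd

def next_integer_size(width, height):
    # next larger size with exactly the same aspect ratio and integer sides
    g = gcd(width, height)
    return width + width // g, height + height // g

print(next_integer_size(4, 2))   # -> (6, 3), the example from the question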
