Find equivalent X/Y coordinates such that rotated text is printed at the same position using both ^FO as well as ^FT command using ZPL 2

I am trying to convert the X/Y coordinates from the ^FO command to the equivalent ^FT coordinates in such a way that the position of rotated text on the label does not change.
The solution I am looking for was partially addressed in a question I previously asked [link added here] (Is it possible to find equivalent X/Y coordinates in order to print some text at the same position using both ^FO as well as ^FT command using ZPL 2).
I have found equations for the different rotations (listed below for font 0), but I am unsure of the exact formula for the Y coordinate at 270° rotation and, equivalently, the X coordinate at 180° rotation:
For 0° rotation:
FOx = FTx and
FOy = FTy - (0.75 * height)
For 90° rotation:
FOx = FTx - (0.25 * height) and
FOy = FTy
For 180° rotation:
FOx = not found yet and
FOy = FTy - (0.25 * height)
For 270° rotation:
FOx = FTx - (0.75 * height) and
FOy = not found yet
I suspect the missing equations depend on the number of characters in the text as well as the width of the text, but I have not been able to find the exact formula.
Any thoughts or suggestions on finding it would be greatly appreciated.

The calculation for FOx (180° rotation) would be:
FOx = FTx - text length
and for FOy (270° rotation) would be:
FOy = FTy - text length
The text length for a proportional font (font 0) is difficult to calculate, since the width varies from character to character.
For monospaced fonts, the length of the text in dots can be calculated from the character widths listed in Table 26 (page 1413) of the ZPL Programming Guide.
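As a rough illustration, here is a small MATLAB sketch that applies the conversions above for all four rotations; the character width used below is a made-up placeholder, so substitute the dots-per-character value from Table 26 for the font and magnification you actually print with:
charWidth = 10;                          % hypothetical dots per character (placeholder)
height    = 30;                          % font height in dots
textLen   = numel('HELLO') * charWidth;  % rendered text width for a monospaced font
FTx = 100; FTy = 200;                    % ^FT coordinates to convert
fo0   = [FTx,                 FTy - 0.75 * height];  % 0 degree rotation
fo90  = [FTx - 0.25 * height, FTy];                  % 90 degree rotation
fo180 = [FTx - textLen,       FTy - 0.25 * height];  % 180 degree rotation
fo270 = [FTx - 0.75 * height, FTy - textLen];        % 270 degree rotation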
Another useful answer on how to calculate the text length is linked below:
How to Calculate Zebra Font 0 text width?

Related

How to calculate the area at half length of a cone shaped spray image (2D) using MATLAB Image Processing?

I have an image of a fuel spray and I want to find the angle of the spray. One of the research papers I was reading says that I can find the angle using the area at half the length of the spray, and I have been trying to find that area for a couple of weeks now.
The code below shows what I tried. I have also tried other methods, such as trimming out all the non-zero elements and just calculating the angle from the end of the spray. Since that gives me an inaccurate answer, I'm here looking for help.
img_subt_binary= imbinarize(img_subt);
BW2= BiggestImageOnly(img_subt_binary);% Clear out all white areas that have less than 175 pixels.
% figure(2),imshow(BW2),
% title('Filtered Binary Image')
% [pixelCount, grayLevels] = imhist(BW2);
% figure(3)
% bar(grayLevels, pixelCount);
[the_length,the_width]=size(BW2)
%% Spray Angle
half_length=the_length/2;
for j = 1:half_length
    j = j + 1;
    [LL(j), WW(j)] = size(BW2);
    final_width = max(WW);
end
angle= atan(final_width/half_length)
I'm expecting the spray angle to be around 20 degrees.
To get a better estimate of the change in width (and hence the spray angle), you might want to fit a line across the entire image:
[h w] = size(BW2);
margin = ceil(h/10); % ignore top/bottom parts
row_width = sum(BW2(margin:end-margin,:), 2); % number of white pixel in each row
x = 1:numel(row_width);
pp = polyfit(x, row_width.', 1); % fit a line
% see the line
figure;
plot(row_width);
hold all;
plot(x, x*pp(1) + pp(2));
% get the angle (in degrees)
angle = atan(pp(1)) * 180 / pi
The estimated angle is
7.1081
The plot:

Algorithm for distributing/aligning multi selected shapes vertically or horizontally at equal distance

I need to write logic to distribute or align multiple selected shapes horizontally or vertically with equal spacing between the selected shapes/objects.
In PowerPoint 2010 there are the options "Distribute Horizontally" and "Distribute Vertically"; please refer to this link for clarification. I have to implement similar functionality in my application.
Is there an algorithm already available that meets my requirement?
Note: here I only convert @SaiBot's comment into steps.
Calculate the minimum bounding rectangle for each shape (this depends on how you implement your shapes). You can get help with this step by posting another question tagged with your programming language.
Total Shapes Width = the sum of all shape widths.
Remaining White Width = the width of your page - Total Shapes Width.
Space (the gap between each pair of shapes) = Remaining White Width / (n - 1), where n is the number of shapes.
The first shape's position is zero (i.e. at the left-most point).
Each subsequent shape's position equals the sum of the widths of all shapes positioned before it + Space * the number of those shapes.
If the shape indexes i run from 0 to n - 1, the width of shape i is Wi, its start position is Pi, and Space is the calculated white space between each pair of shapes, then (a small sketch follows below):
Space = (Page Width - Sum[i = 0 to n-1](Wi)) / (n - 1)
Pi = Sum[j = 0 to i-1](Wj) + i * Space
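Here is the minimal MATLAB sketch of these steps for horizontal distribution mentioned above; the shape widths and page width are made-up example values:
widths    = [120 80 200 60];                      % widths of the selected shapes (example values)
pageWidth = 800;                                  % available page width (example value)
n         = numel(widths);
space     = (pageWidth - sum(widths)) / (n - 1);  % equal gap between neighbouring shapes
positions = zeros(1, n);                          % left edge of each shape; the first stays at zero
for i = 2:n
    positions(i) = sum(widths(1:i-1)) + (i - 1) * space;
end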

Calculate the angles of a pixel to a camera plane in a depth-image

I have a z-image (depth image) from a ToF camera (Kinect V2). I do not have the pixel size, but I know that the depth image has a resolution of 512x424. I also know that I have a FOV of 70.6x60 degrees.
I previously asked how to get the pixel size here. In MATLAB the code looks like the following.
The brighter the pixel, the closer the object.
close all
clear all
%Load image
depth = imread('depth_0_30_0_0.5.png');
frame_width = 512;
frame_height = 424;
horizontal_scaling = tan((70.6 / 2) * (pi/180));
vertical_scaling = tan((60 / 2) * (pi/180));
%pixel size
with_size = horizontal_scaling * 2 .* (double(depth)/frame_width);
height_size = vertical_scaling * 2 .* (double(depth)/frame_height);
The image itself is a cube rotated by 30 degrees.
What I want to do now is calculate the horizontal and vertical angle of each pixel to the camera plane.
I tried to do this with triangulation: I calculate the z-distance from one pixel to the next, first in the horizontal direction and then in the vertical direction, using a convolution:
%get the horizontal errors
dx = abs(conv2(depth,[1 -1],'same'));
%get the vertical errors
dy = abs(conv2(depth,[1 -1]','same'));
After this I calculate it via the atan, like this:
horizontal_angle = rad2deg(atan(with_size ./ dx));
vertical_angle = rad2deg(atan(height_size ./ dy));
horizontal_angle(isnan(horizontal_angle)) = 0;
vertical_angle(isnan(vertical_angle)) = 0;
Which gives back promising results, like these:
However, using a slightly more complex image like this one, which is rotated by 60° and 30°,
gives back the same angle images for the horizontal and vertical angles, which look like this:
After subtracting the two images from each other, I get the following image, which shows that there is a difference between them.
So I have the following questions: How can I prove this concept? Is the math correct and the test case just poorly chosen? Is the angle difference between the horizontal and vertical angles in the two images too small? Are there any errors in the calculation?
While my previous code may look good, it had a flaw. I tested it with smaller images (5x5, 3x3 and so on) and saw that there is an offset created by the difference picture (dx, dy) produced by the convolution. It is simply not possible to map the difference picture (which holds the difference between two pixels) onto the pixels themselves, since the difference picture is smaller than the original.
As a quick fix, I downsample. I changed the filter mask to:
%get the horizontal differences
dx = abs(conv2(depth,[1 0 -1],'valid'));
%get the vertical differences
dy = abs(conv2(depth,[1 0 -1]','valid'));
And changed the angle function to:
%get the angles by the tangent
horizontal_angle = rad2deg(atan(with_size(2:end-1,2:end-1)...
./ dx(2:end-1,:)))
vertical_angle = rad2deg(atan(height_size(2:end-1,2:end-1)...
./ dy(:,2:end-1)))
I also used a padding function to bring the angle maps back to the same size as the original images.
horizontal_angle = padarray(horizontal_angle,[1 1],0);
vertical_angle = padarray(vertical_angle,[1 1],0);

Simplifying an image in matlab

I have a picture of a handwritten letter (say the letter, "y"). Keeping only the first of the three color values (since it is a grayscale image), I get a 111x81 matrix which I call aLetter. I can see this image (please ignore the title) using:
colormap gray; image(aLetter,'CDataMapping','scaled')
What I want is to remove the white space around this letter and somehow average the remaining pixels so that I have an 8x8 matrix (let's call it simpleALetter). Now if I use:
colormap gray; image(simpleALetter,'CDataMapping','scaled')
I should see a pixellated version of the letter:
Any advice on how to do this would be greatly appreciated!
You need several steps to achieve what you want (updated in light of @rwong's observation that I had white and black flipped…):
Find the approximate 'bounding box' of the letter:
make sure that "text" is the highest value in the image
set things that are "not text" to zero - anything below a threshold
sum along row and column, find non-zero pixels
upsample the image in the bounding box to a multiple of 8
downsample to 8x8
Here is how you might do that in your situation:
aLetter = max(aLetter(:)) - aLetter; % invert image: now white = close to zero
aLetter = aLetter - min(aLetter(:)); % make the smallest value zero
maxA = max(aLetter(:));
aLetter(aLetter < 0.1 * maxA) = 0; % thresholding; play with this to set "white" to zero
% find the bounding box:
rowsum = sum(aLetter, 1);
colsum = sum(aLetter, 2);
nonzeroH = find(rowsum);
nonzeroV = find(colsum);
smallerLetter = aLetter(nonzeroV(1):nonzeroV(end), nonzeroH(1):nonzeroH(end));
% now we have the box, but it's not 8x8 yet. Resampling:
sz = size(smallerLetter);
% first upsample in both X and Y by a factor 8:
bigLetter = repmat(reshape(smallerLetter, [1 sz(1) 1 sz(2)]), [8 1 8 1]);
% then reshape and sum so you end up with 8x8 in the final matrix:
letter8 = squeeze(sum(sum(reshape(bigLetter, [sz(1) 8 sz(2) 8]), 3), 1));
% finally, flip it back "the right way" black is black and white is white:
letter8 = 255 - (letter8 * 255 / max(letter8(:)));
You can do this with explicit for loops, but it would be much slower.
You could also use some of the blockproc functions in MATLAB, but I am using FreeMat tonight and it doesn't have those… Neither does it have any Image Processing Toolbox functions, so this is "hard core".
As for picking a good threshold: if you know that more than 90% of your image is "white", you could determine the threshold dynamically by sorting the pixels and picking a suitable percentile; as the comment in the code says, "play with it" until you find something that works for your situation.
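As a rough sketch of that dynamic-threshold idea (assuming roughly 90% of the pixels are background, i.e. close to zero after the inversion at the top of the code):
sortedVals = sort(aLetter(:));                            % pixel values in ascending order
th         = sortedVals(round(0.9 * numel(sortedVals)));  % roughly the 90th percentile
aLetter(aLetter < th) = 0;                                % everything below it becomes "white" (zero)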

Matlab: Barcode scanner

I'm trying to make a barcode scanner in MATLAB. In a barcode, every white bar is a 1 and every black bar is a 0. I'm trying to extract these bars, but this is the problem:
As you can see, the bars do not all have the same width: sometimes they are 3 pixels wide, sometimes 2 pixels, and so on. To make it even worse, the widths differ between images too. So my question is: how can I get the values of these bars without knowing the width of a single bar, or how do I give them all the same width? (Two of the same bars can be next to each other.) It is not possible to detect the transition between bars, because a transition can only happen after a certain number of pixels, and after that there may be another bar or the same bar again; since that number of pixels is unknown, transitions cannot be detected. It is also not possible to work with some kind of window, because the bars have no standard width. So how can I normalize this?
A barcode:
Thanks in advance!
Let's assume that the bars are strictly vertical (as in your example). Here is a possible workflow:
%# read the file
filename = 'CW4li.jpg';
x = imread(filename);
%# convert to grayscale
x = rgb2gray(x);
%# get only the bars area
xend = find(diff(sum(x,2)),1);
x(xend:end,:) = [];
%# sum intensities along the bars
xsum = sum(x);
%# threshold the summed-intensity profile at half of its range
th = ( max(xsum)-min(xsum) ) / 2;
xth = xsum > th;
%# find widths
xstart = find(diff(xth)>0);
xstop = find(diff(xth)<0);
if xstart(1) > xstop(1)
    xstart = [1 xstart];
end
if xstart(end) > xstop(end)
    xstop = [xstop numel(xth)];
end
xwidth = xstop-xstart;
%# look at the histogram
hist(xwidth,1:12)
%# it's clear that a single bar is 2 pixels wide (this could be automated), so
barwidth = xwidth / 2;
UPDATE
To get the relative bar width we can divide the width in pixels by the minimum bar width:
barwidth = xwidth ./ min(xwidth);
I believe it is a good assumption that there will always be a bar of width 1.
If you do not get integer values (due to noise, for example), try rounding the numbers to the closest integer and keeping the residuals. You can sum those residuals to get a quality assessment of the recognition.
Some clustering algorithm (like k-means clustering) might also work well.
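A small sketch of the rounding/residual idea above, continuing from the relative barwidth computed in the update:
units     = round(barwidth);        % nearest whole number of bar units
residuals = abs(barwidth - units);  % how far each bar is from an integer width
quality   = sum(residuals);         % lower total residual = cleaner recognition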
