The base of my exploration into iText 7 for Java was the c03e02_premierleague jumpstart tutorial. The following code is probably where some call needs to be added to give the lines a sharper edge, but although I have looked through the list of available methods, I could not make out which one would achieve this.
canvas.moveTo(llx, lly)
.lineTo(urx, lly)
.lineTo(urx, ury - r)
.curveTo(urx, ury - r * b, urx - r * b, ury, urx - r, ury)
.lineTo(llx + r, ury)
.curveTo(llx + r * b, ury, llx, ury - r * b, llx, ury - r)
.lineTo(llx, lly).stroke();
After a few problems, for example the stroke not closing with closePathStroke(), I saw that the canvas stroke-lines method used for the header cells is different from the SolidBorder used for the other cells.
The borders created using a PdfCanvas fade to gray toward the edges of the line, whereas the standard borders go directly from black to white. Clear, sharp edges are what I want on all of those lines.
Here is a screenshot zoomed into the top-right corner of my slightly modified version of this example (purposely left in the improperly rendered state to clarify my point).
I need to draw a circle with a specific range. In my case I need to start from the top and then go down to the bottom of the circle, going over the left outer border of the circle. I need to know the length of each line and its X and Y coordinates. I basically see a circle as a bunch of horizontal lines stacked on top of each other: as you go further down, the line length grows and grows until it reaches the middle, then it shrinks and shrinks all the way back. Then you have a circle. I need to iterate over each of those lines, knowing their X and Y coordinates from the left side, so that I can then call line_to_the_left(x, y, length) to draw the circle.
What would an algorithm that takes such a range look like? I know that one thing I would need is Pi.
You can loop over integer Y values. For center coordinates cx, cy and radius R:
for y = -R ... R:
    hw = (int) sqrt(R^2 - y^2)              // half-width of the chord at this row
    line(cx - hw, cy + y, cx + hw, cy + y)  // left and right ends of the horizontal line
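A minimal runnable Python version of this loop (the function name circle_scanlines and the text-mode rendering are just illustrations, not part of the original answer):

import math

def circle_scanlines(cx, cy, radius):
    # Yield (x_left, y, length) for each horizontal chord of the circle,
    # scanning from the top row down to the bottom row.
    for y in range(-radius, radius + 1):
        hw = int(math.sqrt(radius * radius - y * y))  # half-width of this chord
        yield cx - hw, cy + y, 2 * hw

# Example: render a radius-8 circle as text, one chord per row.
for x_left, y, length in circle_scanlines(cx=10, cy=8, radius=8):
    print(' ' * x_left + '*' * max(length, 1))

Note that Pi is not actually needed: the half-width of each chord follows directly from the circle equation x^2 + y^2 = R^2.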
I am struggling to plot a round image on the surface of a dome in MATLAB. Here is what I have. This is the PNG image:
And this is the dome:
Now, I need to project this image onto the dome. I've written code to place the image on the surface:
r = 10;
r2 = 9;
cdata = imread('circle_image.png');
props.EdgeColor = 'none';
figure();
n = 50;
[X,Y,Z] = sphere(n) ;
X1 = X * r;
Y1 = Y * r;
Z1 = Z * r;
for i = 1:n+1
    for j = 1:n+1
        if Z1(i,j) < r2
            X1(i,j) = NaN;
            Y1(i,j) = NaN;
            Z1(i,j) = NaN;
        end
    end
end
my_dome = surf(X1,Y1,Z1,props) ;
alpha = 1;
set(my_dome, 'FaceColor', 'texturemap', 'CData', cdata, 'FaceAlpha', alpha, 'EdgeColor', 'none');
axis equal
What I am getting looks like this:
It seems like the image is centered in the wrong place, or even along the wrong axis. How can I fix that?
Yes, the image is centered in the wrong place.
The texture mapping is applied to the whole surface, not just the "active" points (the ones you didn't set to NaN). Basically, what you are getting is the image being spread over the full sphere; when you crop the top of the sphere to get your dome, the image is cropped as well:
What you need to do is actually remove all those points you converted to NaN, so they are not part of the surface at all and the texture mapping is applied only to the top dome surface.
So replace your nested for loop with the following code:
idx_Rows2crop = Z1(:,1) < r2 ;   % rows entirely below the dome base
X1(idx_Rows2crop,:) = [] ;
Y1(idx_Rows2crop,:) = [] ;
Z1(idx_Rows2crop,:) = [] ;
Then continue with your code and you'll get:
Actually I also added the instruction:
cdata = flipud(cdata) ;
in order to have the "THE CIRCLE" writing in the proper orientation (otherwise it appears upside down).
Edit:
To render the picture on the dome in the way you'd like according to your comment, I can see 2 options:
Option 1: Build upon what we have already
This will consist of:
Extend the [X,Y] domain to a square grid (so the picture is not distorted when texture-mapped onto the surface).
Extend the picture itself (add some transparent margin), so the margin will cover the domain extension we introduced and the actual visible part of the picture will be nicely centered on the dome.
With the same X1, Y1 and Z1 as we obtained with the code above:
% Create a square grid large enough to cover the dome and a bit more:
[X2,Y2] = meshgrid(linspace(-5,5,100),linspace(-5,5,100)) ;
% reinterpolate Z1 over this new grid
Z2 = griddata(X1,Y1,Z1,X2,Y2) ;
Now your surface looks like this:
As I warned you, the image is now properly applied, but the edge of the dome looks rather ugly. To ease that, you can replace the NaNs with the base value of your dome; this will transition a lot better between the dome and the flat domain:
Z2(isnan(Z2)) = 9 ;
This will now yield:
The next problem, as you can see, is that (as in your first question) the image is stretched over the whole surface, so part of the writing is now on the flat part of the surface. To alleviate that, you can simply modify the picture (add a bit of transparent margin on every side, as sketched below) until the visible part of the image matches the dome size.
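Any image editor can add that margin; for illustration, here is a small Python/Pillow sketch (the pad size is an assumption to tune by eye, not a computed value):

from PIL import Image

img = Image.open('circle_image.png').convert('RGBA')
pad = img.width // 4  # margin width: tune until the visible part matches the dome
padded = Image.new('RGBA', (img.width + 2 * pad, img.height + 2 * pad), (0, 0, 0, 0))
padded.paste(img, (pad, pad))
padded.save('circle_image_padded.png')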
Option 2: Build your surface differently
This is actually easier and more straightforward. We will build a dome out of a square surface in the first place (no trimming and no NaNs).
This code does not require any part of your previous code; it is self-contained:
r = 10 ;
% Build a square grid
[X2,Y2] = meshgrid(linspace(-5,5,100),linspace(-5,5,100)) ;
% build Z2 according to the sphere equation.
Z2 = sqrt( r^2 - X2.^2 - Y2.^2) ;
figure();
my_dome = surf(X2,Y2,Z2,'EdgeColor', 'none', 'FaceColor', 'texturemap', 'CData', cdata, 'FaceAlpha', 1) ;
axis equal
Which yields a nicely centered texture map:
Note: The texture mapping works so well because your actual surface is still a square, only bent a bit to conform to a sphere. The texture mapping does not display the part of the surface where the picture is transparent, so it is invisible, but if you look at your surface without the texture mapping you'll understand what went on in the background:
hs=surf(X2,Y2,Z2) ; shading interp ; axis equal
I'm making an image using Python, but the Lambertian shading does not work.
At first, the image saved like this.
But when I reversed the normal vector of the sphere, the image saved like this.
This is my shading code.
v = -m * ray                                    # the hit point is view.viewPoint - v
if s == 'Sphere':
    n = view.viewPoint - list[idx].c - v        # hit point minus sphere center
    n = -n / np.sqrt(np.sum(n * n))             # normalize (note the sign flip here)
    for i in light:
        l_i = v + i.position - view.viewPoint   # vector from hit point to the light
        l_i = l_i / np.sqrt(np.sum(l_i * l_i))  # normalize the light direction
        x = list[idx].s.d[0] * i.intensity[0] * max(np.dot(l_i, n), 0)
        y = list[idx].s.d[1] * i.intensity[1] * max(np.dot(l_i, n), 0)
        z = list[idx].s.d[2] * i.intensity[2] * max(np.dot(l_i, n), 0)
list is the list of spheres, and idx is the index of the closest sphere.
I'd be grateful if anyone could help me. I have been working on this for a week.
You have not stated what you think is wrong.
Where is the light in relation to the spheres in the first image? Is it above and slightly behind them? If so - the image looks correct.
Assuming the statements above are correct, the second image also looks correct. The reason the light is on the bottom of the spheres is that the normal is now pointing "in", so the sign of the dot product will be opposite to that in the first image.
Note that in your example code, it doesn't look like you have any shadow-ray treatment. In other words, all objects will be lit as if all other objects were transparent: no object will cast a shadow onto another. This also explains why you can see the bottom of the spheres when the light is coming from the top. If you had proper shadow rays, it wouldn't actually matter which way the normal is pointing (I would remove the max() functions at that point).
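For illustration, here is a minimal Python sketch of a Lambertian term with a basic shadow-ray test; the names here (lambertian, the (center, radius) sphere tuples) are hypothetical, not the poster's actual scene structure:

import numpy as np

def lambertian(hit_point, normal, light_pos, light_intensity, diffuse, spheres):
    # Diffuse (Lambertian) contribution of one light at hit_point.
    # spheres is a list of (center, radius) pairs, used only for the shadow test.
    n = normal / np.linalg.norm(normal)
    to_light = light_pos - hit_point
    dist_light = np.linalg.norm(to_light)
    l = to_light / dist_light

    # Shadow ray: start slightly off the surface to avoid self-intersection,
    # then check whether any sphere blocks the path to the light.
    origin = hit_point + 1e-4 * n
    for center, radius in spheres:
        oc = origin - center
        b = np.dot(oc, l)
        disc = b * b - (np.dot(oc, oc) - radius * radius)
        if disc > 0:
            t = -b - np.sqrt(disc)  # nearest hit along the shadow ray
            if 1e-4 < t < dist_light:
                return np.zeros(3)  # the light is blocked: no contribution

    return diffuse * light_intensity * max(np.dot(l, n), 0.0)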
Hi, sorry for the confusing title.
I'm trying to make a race track using points. I want to draw 3 rectangles which form my roads. However, I don't want these rectangles to overlap: I want to leave an empty space between them to place my corners (triangles), meaning they only intersect at a single point. Since the roads have a common width, I know the width of the rectangles.
I know the coordinates of the points A, B and C, and therefore the lengths of the segments between them and the angles between them. From this I think I can say that the angles of the yellow triangle are the same as those of the outer triangle, and from there I can work out the lengths of the sides of the blue triangles. However, I don't know how to find the coordinates of the points of the blue triangles, or the lengths of the sides of the yellow triangle and therefore of the rectangles.
This is an X-Y problem (asking us how to accomplish X because you think it would help you solve a problem Y better solved another way), but luckily you gave us Y so I can just answer that.
What you should do is find the lines that are the edges of the roads, figure out where they intersect, and proceed to calculate everything else from that.
First, given 2 points P and Q, we can write down the line between them in parameterized form as f(t) = P + t(Q - P). Note that Q - P = v is the vector representing the direction of the line.
Second, given a vector v = (x_v, y_v) the vector (y_v, -x_v) is at right angles to it. Divide by its length sqrt(x_v**2 + y_v**2) and you have a unit vector at right angles to the first. Project P and Q a distance d along this vector, and you've got 2 points on a parallel line at distance d from your original line.
There are two such parallel lines. Given a point on the line and a point off of the line, the sign of the dot product of your normal vector with the vector between those two points tells you whether you've found the parallel line on the same side as the other point, or on the opposite side.
Then you just need to figure out where the offset lines intersect. Figuring out where lines P1 + t*v1 and P2 + s*v2 intersect can be done by setting up 2 equations in 2 variables and solving them, a calculation you can carry out.
And now you have sufficient information to calculate the edges of the roads, which edges are inside, and every intersection in your diagram. Which lets you figure out anything else that you need.
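A short Python sketch of both steps (the helper names offset_line and intersect are mine, and which side the offset lands on depends on the sign of d):

import numpy as np

def offset_line(P, Q, d):
    # Two points on the line parallel to PQ, offset by distance d
    # along one unit normal of the direction Q - P.
    v = Q - P
    normal = np.array([v[1], -v[0]]) / np.hypot(v[0], v[1])
    return P + d * normal, Q + d * normal

def intersect(P1, v1, P2, v2):
    # Intersection of the lines P1 + t*v1 and P2 + s*v2:
    # solve the 2x2 linear system t*v1 - s*v2 = P2 - P1.
    A = np.column_stack((v1, -v2))
    t, s = np.linalg.solve(A, P2 - P1)  # raises LinAlgError if the lines are parallel
    return P1 + t * v1

# Example: offset road edges AB and BC by half the road width,
# then intersect the two offset lines to get one corner point.
A, B, C = np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([10.0, 8.0])
P, Q = offset_line(A, B, 1.0)
R, S = offset_line(B, C, 1.0)
print(intersect(P, Q - P, R, S - R))  # -> [11. -1.]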
Slightly different approach with a bit of trigonometry:
Define vectors
b = B - A
c = C - A
uB = Normalized(b)
uC = Normalized(c)
angle:
Alpha = atan2(CrossProduct(b, c), DotProduct(b,c))
HalfA = Alpha / 2
HalfW = Width / 2
uB_Perp = (-uB.Y, uB.X) // unit vector, perpendicular to b
// now calculate the points:
P1 = A + HalfW * (uB * ctg(HalfA) + uB_Perp) // outer blue triangle vertex
P2 = A + HalfW * (uB * ctg(HalfA) - uB_Perp) // inner blue triangle vertex, lies on the bisector
(I did not consider the extra case of a too-large width.)
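A runnable Python translation of this pseudocode (the function name corner_points is mine; A, B and C are 2-tuples):

import math

def corner_points(A, B, C, width):
    # P1 and P2 near vertex A, each at half the road width from the
    # road edges along AB and AC (direct translation of the pseudocode above).
    bx, by = B[0] - A[0], B[1] - A[1]
    cx, cy = C[0] - A[0], C[1] - A[1]
    alpha = math.atan2(bx * cy - by * cx, bx * cx + by * cy)  # signed angle at A
    half_a, half_w = alpha / 2, width / 2
    lb = math.hypot(bx, by)
    ubx, uby = bx / lb, by / lb                # unit vector along AB
    px, py = -uby, ubx                         # unit perpendicular to AB
    cot = math.cos(half_a) / math.sin(half_a)  # ctg(HalfA)
    p1 = (A[0] + half_w * (ubx * cot + px), A[1] + half_w * (uby * cot + py))
    p2 = (A[0] + half_w * (ubx * cot - px), A[1] + half_w * (uby * cot - py))
    return p1, p2

# With a right angle at A, one of the two points lands on the bisector y = x,
# at distance half_w from both road edges; which one depends on orientation.
print(corner_points((0, 0), (10, 0), (0, 10), width=2.0))  # ((1.0, 1.0), (1.0, -1.0))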
How would you check if an RGB or hex value is within a specific range of colors? Preferably with Ruby.
I'm using Ruby and RMagick to extract colors (quantize and color_histogram) from images and then store those colors in the database. If someone searches for a color (hex or RGB) that is similar, I want to be able to return it.
e.g. If someone searched for #f4f4f4 I'd like to return #f5f5f5, #f3f3f3, and all the other close hex values.
If you treat RGB as a three-dimensional space with R, G and B being the axes, you can define "close colors" as a cube or a sphere around a color and return all the colors inside it (or check, for a given color, whether it's close enough). The formulas for that are quite simple:
Original color R, G, B
Cube with side length L around it:
All colors between (R - L/2, G - L/2, B - L/2) and (R + L/2, G + L/2, B + L/2)
Sphere with radius D around it (calling the radius D to avoid a clash with the red channel R):
A new color (R_new, G_new, B_new) is inside if
delta_r * delta_r + delta_g * delta_g + delta_b * delta_b < D * D
where
delta_r = R - R_new
delta_g = G - G_new
delta_b = B - B_new
Using a sphere instead of a cube is the "correct" way, but it won't make much of a difference for small ranges, and the colors inside the cube are a bit easier to calculate.
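For illustration, here is the sphere test in Python (the question asks for Ruby, but the translation is mechanical; the function names are mine):

def hex_to_rgb(hex_str):
    # '#f4f4f4' -> (244, 244, 244)
    h = hex_str.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def close_colors(color, candidates, radius):
    # Return the candidates whose RGB distance to `color` is below `radius`.
    r, g, b = hex_to_rgb(color)
    close = []
    for cand in candidates:
        rn, gn, bn = hex_to_rgb(cand)
        if (r - rn) ** 2 + (g - gn) ** 2 + (b - bn) ** 2 < radius ** 2:
            close.append(cand)
    return close

print(close_colors('#f4f4f4', ['#f5f5f5', '#f3f3f3', '#000000'], radius=10))
# -> ['#f5f5f5', '#f3f3f3']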