Draw a hexagon tessellation animation in Python - algorithm

I have a function Hexagon(x, y, n) that draws a hexagon centered at (x, y) with side length n in a Python window.
My goal is to draw a tessellation animation that draws one hexagon after another, starting from the center of the screen and spreading outward one by one (as in the picture attached here: http://s7.postimage.org/lu6qqq2a3/Tes.jpg).
I am looking for an algorithm to solve this problem. I am new to programming and am finding it hard to do.
Thanks!

For a ring of hexagons one can define a function like this:
import math

def HexagonRing(x, y, n, r):
    dc = n*math.sqrt(3)                 # distance between two neighbouring hexagon centers
    xc, yc = x, y - r*dc                # hexagon center one before the first hexagon (= the last hexagon)
    dx, dy = -dc*math.sqrt(3)/2, dc/2   # direction vector to the next hexagon center
    for i in range(6):
        # draw r hexagons in a line
        for j in range(r):
            xc, yc = xc + dx, yc + dy
            Hexagon(xc, yc, n)
        # rotate the direction vector by 60°
        dx, dy = (math.cos(math.pi/3)*dx + math.sin(math.pi/3)*dy,
                  -math.sin(math.pi/3)*dx + math.cos(math.pi/3)*dy)
Then one can draw one ring after the other:
Hexagon(0,0,10)
HexagonRing(0,0,10,1)
HexagonRing(0,0,10,2)
HexagonRing(0,0,10,3)
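For the animation itself, draw the center hexagon first and then the rings in order of increasing radius, pausing between steps so the pattern visibly spreads from the center. A minimal sketch, assuming your Hexagon function draws to the window immediately (time.sleep and the delay value are placeholder pacing; a GUI toolkit would use its own timer/update mechanism instead):

import time

def TessellationAnimation(x, y, n, rings, delay=0.2):
    # center hexagon first, then one ring after the other
    Hexagon(x, y, n)
    time.sleep(delay)
    for r in range(1, rings + 1):
        HexagonRing(x, y, n, r)
        time.sleep(delay)

TessellationAnimation(0, 0, 10, 3)

To animate hexagon by hexagon rather than ring by ring, move the pause next to the Hexagon call inside HexagonRing.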

Related

Problem while rotating a 3D object with a rotating parent to face a given direction in Three.js

I am trying to plot a scene in which an Earth rotates independently of the camera. On this planet I plot random Bezier curves, just like in this example: https://pubnub.github.io/webgl-visualization/
I add my Bezier line as:
var origin = latLonToVector3(lat_source, lon_source, earth_radius);
var destination = latLonToVector3(lat_destination, lon_destination, earth_radius);
var bezierline = bezierCurveBetween(origin, destination);
earth.add(bezierline);
so that the plotted line rotates along with the Earth. Then I managed to load a 3D model of a plane and make it follow the Bezier curve as it is being drawn. So far so good. Finally, I would like to rotate the plane so that its belly always follows a line tangent to the Bezier curve. To that end, I computed the tangent vectors for every pair of consecutive points of the line:
var tangent_vectors = [];
for (var i = 0; i < pnts.length - 1; i++) {
    var aux = new THREE.Vector3();
    aux.subVectors(pnts[i+1], pnts[i]);
    tangent_vectors[i] = aux.normalize();
}
return tangent_vectors;
Just to check that these vectors are OK, I used a THREE.ArrowHelper to verify that they are tangent to every segment of the curve, and indeed they are. Since I add them to the scene with earth.add(arrowHelper);, they also rotate with the planet and stay consistent. I repeat this process over and over as the planet rotates, plotting and erasing the same Bezier curve (same origin and destination).
However, the 3D model behaves fine only for the first Bezier curve. As the planet rotates and a new Bezier curve is plotted in the same place (with the same origin and destination coordinates), the plane still follows (plane.lookAt(tangent_vectors[point_index]);) the tangent lines of the original Bezier curve, even though I am recomputing the tangent lines.
I think the problem is that the latitudes and longitudes (lat_source, lon_source, etc.) are fixed in the real-world reference frame. As a result, the origin and destination variables always hold the same values even though the planet is rotating. The new Bezier curves therefore have essentially the same points, but since I add them with earth.add(bezier_line);, Three.js internally rotates them into the planet's current orientation, and this is not done for my tangent vectors.
I think this is the problem, but I do not know how to solve it. I guess I need to also rotate the tangent vectors of the curve according to the new rotation, but I cannot find out how to do it.
Thanks for your help

Convert Cubemap coordinates to equivalents in Equirectangular

I have a set of coordinates of a 6-image Cubemap (Front, Back, Left, Right, Top, Bottom) as follows:
[ [160, 314], Front; [253, 231], Front; [345, 273], Left; [347, 92], Bottom; ... ]
Each image is 500x500 px, with [0, 0] being the top-left corner.
I want to convert these coordinates to their equivalents in equirectangular projection, for a 2500x1250 px image.
I don't need to convert the whole image, just the set of coordinates. Is there any straightforward conversion for a specific pixel?
1. Convert your image + 2D coordinates to a 3D normalized vector.
The point (0,0,0) must be the center of your cube map for this to work as intended. Basically, you need to add the U,V direction vectors, scaled by your coordinates, to the 3D position of the texture point (0,0). The direction vectors are just unit vectors where each axis is one of {-1, 0, +1} and only one axis coordinate is non-zero for each vector. Each side of the cube map has one such combination... Which one depends on your conventions, which we do not know, as you did not share any specifics.
2. Use a Cartesian-to-spherical coordinate system transformation.
You do not need the radius, just the two angles...
3. Convert the spherical angles to your 2D texture coordinates.
This step depends on your 2D texture geometry. The simplest is a rectangular texture (I think that is what you mean by equirectangular), but there are other mappings out there with specific features, and each requires a different conversion. Here are a few examples:
Bump-map a sphere with a texture map
How to do a shader to convert to azimuthal_equidistant
For the rectangular texture you just scale the spherical angles to the texture resolution:
U = lon * Usize/(2*Pi)
V = (lat+(Pi/2)) * Vsize/Pi
plus/minus some orientation signs to match your coordinate systems.
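Putting the three steps together for a single pixel, here is a minimal Python sketch. The face orientation convention in face_to_vector is an assumption (the question does not specify one), so the axis choices and signs must be adapted to the actual cubemap layout:

from math import atan2, asin, sqrt, pi

FACE_SIZE = 500          # cube face resolution, from the question
EQ_W, EQ_H = 2500, 1250  # equirectangular resolution, from the question

def face_to_vector(face, px, py):
    # step 1: pixel -> 3D vector; (0,0,0) is the cube center.
    # The axis choices and signs below are one assumed convention --
    # adjust them to match your cubemap layout.
    s = 2.0*px/FACE_SIZE - 1.0   # [-1, +1] across the face
    t = 2.0*py/FACE_SIZE - 1.0   # [-1, +1] down the face
    return {'Front':  ( 1.0,    s,   -t),
            'Back':   (-1.0,   -s,   -t),
            'Right':  (  -s,  1.0,   -t),
            'Left':   (   s, -1.0,   -t),
            'Top':    (   t,    s,  1.0),
            'Bottom': (  -t,    s, -1.0)}[face]

def cube_to_equirect(face, px, py):
    x, y, z = face_to_vector(face, px, py)
    # step 2: Cartesian -> spherical angles (the radius is not needed)
    lon = atan2(y, x) % (2*pi)                # [0, 2*pi)
    lat = asin(z / sqrt(x*x + y*y + z*z))     # [-pi/2, +pi/2]
    # step 3: scale the angles into the texture resolution
    u = lon * EQ_W / (2*pi)
    v = (lat + pi/2) * EQ_H / pi
    return u, v

print(cube_to_equirect('Front', 160, 314))    # first point from the question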
By the way, I just found this (possibly a duplicate Q&A):
GLSL Shader to convert six textures to Equirectangular projection

Invariant scale geometry

I am writing a mesh editor in which I use manipulators to change the vertices of the mesh. The task is to render the manipulators with constant dimensions that do not change with the camera and viewport parameters. The projection matrix is perspective. I would be grateful for ideas on how to implement invariant-scale geometry.
If I understand correctly, you want to render some markers (for example, vertex drag-editing areas) with the same visual size at whatever depth they are rendered.
There are 2 approaches for this:
1. Scale with depth
Compute the perpendicular distance to the camera view plane (a simple dot product) and scale the marker size so that it has the same visual size regardless of depth.
So if P0 is your camera position and Z is your camera view direction unit vector (usually the Z axis), then for any position P compute the depth like this:
depth = dot(P-P0,Z)
Now the scale depends on the wanted visual size size0 at some specified depth depth0. Using triangle similarity we want:
size/depth = size0/depth0
size = size0*depth/depth0
So render your marker with size size, or scale it by depth/depth0. If you use scaling, you need to scale around your target position P, otherwise your marker would shift to the side (so translate, scale, translate back).
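As a minimal Python sketch of this first approach (the names follow the formulas above; a plain dot product, no library assumed):

def marker_size(P, P0, Z, size0, depth0):
    # perpendicular distance of marker position P to the camera plane:
    # depth = dot(P - P0, Z), with Z a unit view-direction vector
    depth = sum((p - p0)*z for p, p0, z in zip(P, P0, Z))
    # triangle similarity: size/depth = size0/depth0
    return size0 * depth / depth0

# example: camera at the origin looking down +Z, marker 5 units ahead;
# a marker of size 0.1 at depth 1 needs size 0.5 at depth 5
print(marker_size((0, 0, 5), (0, 0, 0), (0, 0, 1), 0.1, 1.0))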
2. Compute the screen position and use non-perspective rendering
Transform the target coordinates the same way the graphics pipeline does until you get the screen x,y position. Remember it, and in the pass that renders your markers, use it instead of the real position. For this rendering pass, either use some constant depth (distance from camera) or a non-perspective projection matrix.
For more info see Understanding 4x4 homogenous transform matrices
[Edit1] pixel size
You need the projection angles FOVx, FOVy and the view/screen resolution (xs, ys) for that. If the depth is znear and the coordinate is at half the angle, then the projected coordinate lands at the edge of the screen:
tan(FOVx/2) = (xs/2)*pixelx/znear
tan(FOVy/2) = (ys/2)*pixely/znear
---------------------------------
pixelx = 2*znear*tan(FOVx/2)/xs
pixely = 2*znear*tan(FOVy/2)/ys
where pixelx, pixely are the per-axis sizes that represent a single pixel visually at depth znear. If both sizes are the same (the pixel is square), you have all you need. If they are not equal (the pixel is not square), you need to render the markers in screen-axis-aligned coordinates, so approach #2 is more suitable for that case.
So if you choose depth0 = znear, you can set size0 to n*pixelx and/or n*pixely to get a visual size of n pixels. Or use any depth0 and rewrite the computation as:
pixelx = 2*depth0*tan(FOVx/2)/xs
pixely = 2*depth0*tan(FOVy/2)/ys
Just to be complete:
size0x = size_in_pixels*(2*depth0*tan(FOVx/2)/xs)
size0y = size_in_pixels*(2*depth0*tan(FOVy/2)/ys)
-------------------------------------------------
sizex = size_in_pixels*(2*depth0*tan(FOVx/2)/xs)*(depth/depth0)
sizey = size_in_pixels*(2*depth0*tan(FOVy/2)/ys)*(depth/depth0)
---------------------------------------------------------------
sizex = size_in_pixels*(2*tan(FOVx/2)/xs)*(depth)
sizey = size_in_pixels*(2*tan(FOVy/2)/ys)*(depth)
---------------------------------------------------------------
sizex = size_in_pixels*2*depth*tan(FOVx/2)/xs
sizey = size_in_pixels*2*depth*tan(FOVy/2)/ys
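Collecting the final formulas into a small helper, a minimal sketch (angles in radians; the names follow the text above):

from math import tan, radians

def size_for_pixels(size_in_pixels, depth, FOVx, FOVy, xs, ys):
    # world-space size at the given depth that projects to size_in_pixels
    # pixels, separately per axis (the two differ when pixels are not square)
    sizex = size_in_pixels * 2*depth*tan(FOVx/2) / xs
    sizey = size_in_pixels * 2*depth*tan(FOVy/2) / ys
    return sizex, sizey

# example: a 10-pixel marker at depth 5 on a 1280x720 view,
# 90 deg horizontal FOV and the roughly matching vertical FOV
print(size_for_pixels(10, 5.0, radians(90), radians(59), 1280, 720))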

3D Ellipsoid out of discrete units

I'm trying to draw an ellipsoid in 3D space out of individual blocks.
I have no problem with 2D ellipses, but I'm having some trouble as far as 3D goes. I'm using Bresenham's circle algorithm to draw 2D ellipses. What I'm trying to do is draw 2D ellipses in layers with increasing X and Y radii (starting from the bottom and going up, using symmetry for the other half).
It all sounds like it would work, but when I go to implement it, I can't figure out how to vary the X radius and Y radius to produce the curve of the ellipsoid.
Your 2D slices should all have the same orientation and aspect ratio.
If your ellipsoid is axis-aligned, they should also have the same center.
Your slices should scale proportionally to:
scale = sqrt(1 - ((center-z)/half_vsize)^2)
where:
z = height of the current slice
center = height of the largest slice
half_vsize = half the vertical size of the ellipsoid
If (x0, y0) are the x- and y-widths of the largest slice, then (x, y) = (scale*x0, scale*y0) are the x- and y-widths of the slice at height z.
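A minimal Python sketch of the slice loop (the names follow the formula above; the caller would round each slice's widths to whole blocks and feed them to the existing 2D ellipse routine):

from math import sqrt

def ellipsoid_slices(x0, y0, center, half_vsize):
    # yield (z, x_width, y_width) for every horizontal slice of the ellipsoid
    for z in range(center - half_vsize, center + half_vsize + 1):
        t = (center - z) / half_vsize
        scale = sqrt(max(0.0, 1.0 - t*t))   # guard against rounding below 0
        yield z, scale*x0, scale*y0

# example: an ellipsoid 21 blocks tall whose largest slice is 16 x 10 blocks
for z, xw, yw in ellipsoid_slices(16, 10, 10, 10):
    print(z, round(xw), round(yw))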

Hole in a Polygon

I need to create a 3D model of a cube with a circular hole punched at the center of one face passing completely through the cube to the opposite side. I am able to generate the vertices for the faces and for the holes.
Four of the faces (untouched by the hole) can be modeled as a single triangle strip. The inside of the hole can also be modeled as a single triangle strip.
How do I generate the index buffer for the faces with the holes? Is there a standard algorithm(s) to do this?
I am using Direct3D but ideas from elsewhere are welcome.
To generate the index buffer you want, you could do it like this. Think in 2D, with the face in question as a square with vertices (±1, ±1) and the hole as a circle in the middle.
1. Walk along the edge of the circle, dividing it into some number of segments.
2. Project each vertex onto the surrounding square as (x/M, y/M), where M = max(abs(x), abs(y)). M is the absolute value of the biggest coordinate, so this scales (x, y) so that its biggest coordinate is ±1.
3. Divide the line from each circle vertex to its projection into some number of segments as well.
4. Join the segments of two successive lines pairwise into faces.
This is an example, subdividing the circle into 64 segments, and each ray into 8 segments. You can choose the numbers to match your requirements.
(Example image: http://pici.se/pictures/AVhcssRRz.gif)
Here is some Python code that demonstrates this:
from math import sin, cos, pi

def pairs(iterable):
    """Yields the previous and the current item on each iteration."""
    last = None
    for item in iterable:
        if last is not None:
            yield last, item
        last = item

def circle(radius, subdiv):
    """Yields coordinates of a circle."""
    for angle in range(subdiv + 1):
        x = radius * cos(angle * 2 * pi / subdiv)
        y = radius * sin(angle * 2 * pi / subdiv)
        yield x, y

def line(x0, y0, x1, y1, subdiv):
    """Yields coordinates of a line."""
    for t in range(subdiv + 1):
        x = (subdiv - t)*x0 + t*x1
        y = (subdiv - t)*y0 + t*y1
        yield x/subdiv, y/subdiv

def tesselate_square_with_hole(pos, size, radius=0.5, subdiv_circle=64, subdiv_ray=8):
    """Yields quads of a tessellated square with a circular hole."""
    x, y = pos
    w, h = size
    for (x0, y0), (x1, y1) in pairs(circle(radius, subdiv_circle)):
        # project both circle vertices onto the surrounding square
        M0 = max(abs(x0), abs(y0))
        xM0, yM0 = x0/M0, y0/M0
        M1 = max(abs(x1), abs(y1))
        xM1, yM1 = x1/M1, y1/M1
        # subdivide the two rays from the circle to the square
        L1 = line(x0, y0, xM0, yM0, subdiv_ray)
        L2 = line(x1, y1, xM1, yM1, subdiv_ray)
        # join the segments of two successive rays pairwise into quads
        for ((xa, ya), (xb, yb)), ((xc, yc), (xd, yd)) in pairs(zip(L1, L2)):
            yield ((x + xa*w/2, y + ya*h/2),
                   (x + xb*w/2, y + yb*h/2),
                   (x + xc*w/2, y + yc*h/2),
                   (x + xd*w/2, y + yd*h/2))

import pygame

def main():
    """Simple pygame program that displays the tessellated figure."""
    print('Calculating faces...')
    faces = list(tesselate_square_with_hole((150, 150), (200, 200), 0.5, 64, 8))
    print('done')
    pygame.init()
    pygame.display.set_mode((300, 300))
    surf = pygame.display.get_surface()
    running = True
    while running:
        need_repaint = False
        for event in pygame.event.get() or [pygame.event.wait()]:
            if event.type == pygame.QUIT:
                running = False
            elif event.type in (pygame.VIDEOEXPOSE, pygame.VIDEORESIZE):
                need_repaint = True
        if need_repaint:
            print('Repaint')
            surf.fill((255, 255, 255))
            for pa, pb, pc, pd in faces:
                # draw a single quad with corners (pa, pb, pd, pc)
                pygame.draw.aalines(surf, (0, 0, 0), True, (pa, pb, pd, pc))
            pygame.display.flip()

try:
    main()
finally:
    pygame.quit()
You want to look up tessellation, which is the area of math that deals with what MizardX is showing. Folks in 3D graphics have to deal with this all the time, and there are a variety of tessellation algorithms to take a face with a hole and calculate the triangles needed to render it.
Modern hardware usually can't render concave polygons correctly.
Specifically, there usually isn't even a way to define a polygon with a hole.
You'll need to find a triangulation of the plane around the hole somehow. The best way is probably to create triangles from a vertex of the hole to the nearest vertices of the rectangular face. This will probably create some very thin triangles. If that's not a problem, then you're done. If it is, you'll need some mesh fairing/optimization algorithm to create nice-looking triangles.
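One simple way to build such an index buffer is to pair each hole vertex with its radial projection onto the square's edge and split each resulting quad into two triangles (essentially MizardX's projection above with a single segment per ray). A minimal sketch; the names are illustrative, and the winding order may need flipping for your culling setup:

from math import sin, cos, pi

def hole_face_vertices(n, radius=0.5):
    # vertices 0..n-1: the hole ring; vertices n..2n-1: the same points
    # projected radially onto the surrounding square with corners (+-1, +-1)
    ring = [(radius*cos(2*pi*i/n), radius*sin(2*pi*i/n)) for i in range(n)]
    edge = [(x/max(abs(x), abs(y)), y/max(abs(x), abs(y))) for x, y in ring]
    return ring + edge

def hole_face_indices(n):
    # two triangles per circle segment
    tris = []
    for i in range(n):
        j = (i + 1) % n
        tris.append((i, j, n + i))       # triangle touching the hole
        tris.append((j, n + j, n + i))   # triangle touching the square edge
    return tris

verts = hole_face_vertices(64)
tris = hole_face_indices(64)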
Is alpha blending out of the question? If not, just texture the sides with holes using a texture that has transparency in the middle. You have to do more rendering of polygons, since you can't take advantage of drawing front-to-back and ignoring covered faces, but it might be faster than having a lot of tiny triangles.
I'm imagining 4 triangle fans coming from the 4 corners of the square. Just a thought.
If you're into cheating (as is done many times in games), you can always construct a regular cube but use a texture with a hole (alpha = 0) for the two faces you want. You can then either clip it in the shader or blend it (in which case you need to render with Z-sorting).
You get the inside of the hole by constructing an inner cylinder facing inwards, with no caps.
Of course, this will only work if the geometry is not important to you.
