How to calculate airfoil thickness from coordinates with Python

So far I have an airfoil file, S823, which contains the following coordinates.
1.000000 0.000000
0.996260 0.000570
0.985390 0.002590
0.968200 0.006440
0.945590 0.012020
0.918230 0.018900
0.886490 0.026690
0.850640 0.035270
0.811260 0.044730
0.769210 0.054910
0.725320 0.065490
0.680480 0.075890
0.635300 0.085260
0.589910 0.092770
0.544140 0.097990
0.497850 0.100890
0.451090 0.101740
0.404250 0.100920
0.357860 0.098640
0.312450 0.095050
0.268520 0.090260
0.226560 0.084410
0.187020 0.077610
0.150340 0.070010
0.116930 0.061730
0.087140 0.052940
0.061300 0.043770
0.039710 0.034410
0.022590 0.025040
0.010170 0.015870
0.002610 0.007130
0.000000 0.000000
0.000020 -0.000570
0.003550 -0.007470
0.013320 -0.014500
0.028210 -0.021620
0.047970 -0.028480
0.072370 -0.034920
0.101170 -0.040770
0.134060 -0.045950
0.170710 -0.050360
0.210750 -0.053940
0.253750 -0.056640
0.299260 -0.058430
0.346770 -0.059270
0.395780 -0.059140
0.445720 -0.058010
0.496040 -0.055800
0.546170 -0.052370
0.595730 -0.047450
0.644760 -0.040840
0.693550 -0.033050
0.741760 -0.025080
0.788480 -0.017710
0.832770 -0.011380
0.873640 -0.006380
0.910130 -0.002830
0.941340 -0.000660
0.966490 0.000320
0.984930 0.000470
0.996200 0.000190
1.000000 0.000000
The coordinates run from the trailing edge to the leading edge, then back to the trailing edge again.
Because the x-coordinates of the points above the x-axis do not match those of the points below it (for example, there is no x = 0.002610 on the way back from the leading edge to the trailing edge), I cannot simply subtract the coordinates to get the maximum thickness.
Is there a way to smooth the airfoil coordinates and get the maximum thickness?

Yes, you normally want to use something like cubic B-splines to smooth the curves. Note that a B-spline won't normally pass precisely through each point you specify. Along the top and bottom surfaces, the curvature is typically gentle enough that this is little problem.
Near the leading edge, you frequently need to fill in some points for things to work well. Unfortunately, in most cases (probably including this one, with only 62 points) it's hard to be at all sure where those filled-in points should go.
When I've had to do this, I've used XFOIL to simulate the airfoil and played with the leading edge until the drag bucket / L/D / Cp curves matched (at least pretty closely) those given for the design. That's not always easy to do, though.
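For example, a minimal sketch with scipy (the filename s823.dat and the grid size are my assumptions; note that CubicSpline interpolates exactly, so if you want true smoothing, swap in scipy.interpolate.splrep with a smoothing factor s > 0):

import numpy as np
from scipy.interpolate import CubicSpline

# Load the coordinate file (two whitespace-separated columns, TE -> LE -> TE).
xy = np.loadtxt("s823.dat")
x, y = xy[:, 0], xy[:, 1]

# Split at the leading edge (minimum x) and orient both surfaces LE -> TE,
# so that x is strictly increasing as the spline constructor requires.
le = np.argmin(x)
upper = CubicSpline(x[le::-1], y[le::-1])
lower = CubicSpline(x[le:], y[le:])

# Evaluate both surfaces on a common grid and take the difference.
xs = np.linspace(0.0, 1.0, 2001)
thickness = upper(xs) - lower(xs)
i = np.argmax(thickness)
print(f"max thickness {thickness[i]:.4f} at x/c = {xs[i]:.4f}")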

Related

Three.js matrix precision for real worlds

I'm experiencing some issues when working with real-world coordinates.
The center of my camera is 280000, 45787254 (for example).
The extent of my world is about 500 x 500 (not too big).
I'm using data based in metric units (meters).
I have created a tile-map structure built from simple planes.
I see little gaps between the plane borders, even though the planes are built to be contiguous (that is, the xmin of each plane equals the xmax of the previous one).
In the past I had issues related to raycasting.
Matrix projection with such big units has low precision.
Changing the near value to a number greater than 10 can fix it. However, using that value means bad visualization (you can't place the camera very near the scene, or it disappears).
I talked with the developer of Potree, and he told me he had to move the lidar worlds to 0,0 to get them to work properly.
So... is the final solution to work in worlds centered at 0,0?
Or is there some trick we can do in the matrix calculations?
I'd like to hear from the three.js developers.
Floating-point math is best at ranges close to zero; you just end up compounding errors as you move far away. You can always do as much math as possible near the origin and then translate the result to wherever you need it; that will help with some of it, but if you can, work in local coordinates.
Potree probably gets odd ripple-looking aliasing effects when too far from the origin, no?
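The loss of precision is easy to see numerically. A quick check in Python (numpy here; float32 because that is the typical GPU working precision):

import numpy as np

# The gap between adjacent representable float32 values grows with magnitude:
print(np.spacing(np.float32(1.0)))       # ~1.2e-07 near the origin
print(np.spacing(np.float32(280000.0)))  # 0.03125 at the camera position above
# Near x = 280000 nothing finer than ~3 cm is representable in float32,
# which is exactly the scale of the seams between "contiguous" tiles.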

Invoice / OCR: Detect two important points in invoice image

I am currently working on OCR software and my idea is to use templates to try to recognize data inside invoices.
However, scanned invoices can have several 'flaws':
Not all invoices based on a single template are correctly aligned under the scanner.
People can write on invoices.
etc.
Example invoice: you will have to google one; sadly I cannot add a more concrete version, as client data is obviously confidential.
I find my data in the invoices based on the x-values of the text.
However I need to know the scale of the invoice and the offset from left/right, before I can do any real calculations with all data that I have retrieved.
What have I tried so far?
1) Making the image monochrome and using the left and right bounds of the first appearance of a black pixel. This fails because people can write on invoices.
2) Dividing the invoice into vertical sections and using the sections with the highest number of black pixels. This fails because the distribution is not always uniform among similar templates.
I could really use your help on (1) how to identify important points in invoices and (2) on what I should focus as the important points.
I hope the question is clear enough as it is quite hard to explain.
Detecting rotation
I would suggest you start by detecting straight lines.
Look (perhaps randomly) for small areas with high contrast, i.e. mostly white but with a fair number of very black pixels as well. Then try to fit a line to these black pixels, e.g. using the least-squares method. Drop the outliers, and fit another line to the remaining points. Iterate as required. Evaluate how good the fit is, i.e. how many of the pixels in the observed area are really close to the line, and how far that line extends beyond the observed area. Do this for a number of regions, and you should get a weighted list of lines.
For each line, you can compute the direction of the line itself and the direction orthogonal to it. One of these numbers can be chosen from the interval [0°, 90°); the other will be 90° plus that value, so storing one is enough. Take all these directions and find the single angle that best matches all of them. You can do that using a sliding window of e.g. 5°: slide across that (cyclic) interval and find the position where the maximal number of lines fall within the window, then compute the average or median of the angles within that window. All of this can be done taking the weights of the lines into account.
Once you have found the direction of lines, you can rotate your image so that the lines are perfectly aligned to the coordinate axes.
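A compact sketch of that sliding-window consensus, assuming the line-fitting stage has produced arrays of angles (folded into [0°, 90°)) and weights (all names here are mine):

import numpy as np

def dominant_angle(angles, weights, window=5.0):
    # Find the angle (mod 90 degrees) best supported by the weighted lines.
    angles = np.asarray(angles, dtype=float) % 90.0
    weights = np.asarray(weights, dtype=float)
    best_weight, best_angle = -1.0, 0.0
    for a in angles:  # try each observed angle as the window centre
        # Signed cyclic deviation from a, mapped into [-45, 45).
        d = (angles - a + 45.0) % 90.0 - 45.0
        inside = np.abs(d) <= window / 2.0
        w = weights[inside].sum()
        if w > best_weight:
            # Weighted mean of the deviations, wrapped back around a.
            best_weight = w
            best_angle = (a + np.average(d[inside], weights=weights[inside])) % 90.0
    return best_angle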
Detecting translation
Assuming the image wasn't scaled at any point, you can then try to use an FFT-based correlation to match the image to the template. Convert both images to gray, and pad them with zeros until the originals take up at most half the edge length of the padded image, which preferably should be a power of two. FFT both images, multiply one by the complex conjugate of the other element-wise, and inverse-FFT the product. The resulting image encodes how well the two images agree for each possible shift relative to one another. Simply find the maximum, and you know how to make them match.
Added text will cause no problems at all. This method will work best for large areas, like the company logo and gray background boxes. Thin lines will provide a poorer match, so in those cases you might have to blur the picture before doing the correlation, to broaden the features. You don't have to use the blurred image for further processing; once you know the offset you can return to the rotated but unblurred version.
Now you know both rotation and translation, and assumed no scaling or shearing, so you know exactly which portion of the template corresponds to which portion of the scan. Proceed.
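A sketch of the correlation step, assuming scan and template are 2-D grayscale numpy arrays (the names are mine):

import numpy as np

def best_shift(scan, template):
    # Return the (dy, dx) shift that best aligns the template to the scan.
    # Pad to a power-of-two size at least twice the largest dimension, so the
    # circular correlation cannot wrap meaningful content around the edges.
    h = 1 << int(np.ceil(np.log2(2 * max(scan.shape[0], template.shape[0]))))
    w = 1 << int(np.ceil(np.log2(2 * max(scan.shape[1], template.shape[1]))))
    f_scan = np.fft.fft2(scan, s=(h, w))      # fft2 zero-pads to (h, w)
    f_tmpl = np.fft.fft2(template, s=(h, w))
    # Multiplying by the conjugate turns the product into a cross-correlation.
    corr = np.fft.ifft2(f_scan * np.conj(f_tmpl)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the midpoint correspond to negative (wrapped) shifts.
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx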
If rotation is already solved, I'd just sum all pixel values horizontally and vertically, collapsing the image into a single horizontal and a single vertical "line". This should produce clear spikes where the form has horizontal and vertical lines.
P.S. I generated a corresponding horizontal image with Gimp's scaling capabilities, attached below (it's a bit hard to see because it's only one pixel high, and it may get scaled down because it's > 700 px wide; the url is http://i.stack.imgur.com/Zy8zO.png).
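The summing itself is a couple of lines of numpy (a sketch; the function and argument names are mine):

import numpy as np

def rule_profiles(binary):
    # binary: the deskewed scan as a 2-D array with ink = 1, paper = 0.
    return binary.sum(axis=1), binary.sum(axis=0)  # (row, column) profiles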

Union of "adjacent polylines that contain bezier curves"

Concrete example: take a map of European Countries, and a list of pointers to "the Paths that represent countries in the European Union", and output a single "Path representing the European Union".
e.g. if I have three input paths: red, green, and blue.
Red is made of straight line segments only
Green is made of line segments and beziers
Blue is made of beziers only
...then I need to create an output polyline-with-beziers that is the union of the three objects.
ADDITIONALLY, I need to cope with some error margin in the input data - cf. the image below: there are some very small "gaps" between the input shapes. In the image, the bottom figure (red) is the desired output.
This could easily go horribly wrong and take weeks of me failing to make it work. I'm trying to find a relatively simple approach that might be "good enough", but I'm currently stuck on two questions:
How do you even begin to union Beziers?
What's a smart way of dealing with the "gaps" / error margin - I'm sure there's something cunning to do with simply rounding my float co-ordinates - but I can't see it :(
Finally ... the target platform is iPhone - so I have access to all of Apple's Quartz / QuartzCore / CoreAnimation / etc. That provides some utility methods - but note: even Apple's official implementations of basics such as "does Path A intersect Path B?" are quite badly broken / incorrect in a lot of cases - so they're not very reliable :(.
IDEA of how to achieve this (maybe) - but I don't know how to go about this either:
Perhaps ... instead calculate "the internal lines", and remove them, leaving me with something that's almost correct as "the path describing the union".
It could go quite badly wrong with my example Blue object; the point of intersection could give a badly-wrong curve, but it might be good enough.
To do this, I was thinking, maybe:
Take the convex hulls of each of the shapes
any line-segments in the hulls that overlap other hulls ... are "internal"
... reading-back to the points in the original shape that created each hull-line-segment (OR were invalidated by that segment) ... those points are "internal to the union"
?
First, you need to know how to do a union of polygonal shapes. I assume you know that; if not, you have to learn it first.
Now you can tessellate your curves, find the polygonal union, and fit pieces of the original curves back into the union. You will have to adjust the intersection points slightly, from straight-line intersections to curve intersections, but the adjustments will be small, and you can find them with a simple iterative approximation algorithm.
To cope with errors, offset your polygons by a positive amount before the union, and offset the result by a negative amount before fitting the curve pieces back.
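In Python terms (numpy and shapely as stand-ins for whatever geometry library you end up with on iOS), the tessellate / grow-union-shrink part looks roughly like this sketch:

import numpy as np
from shapely.geometry import Polygon
from shapely.ops import unary_union

def cubic_points(p0, p1, p2, p3, n=16):
    # Tessellate one cubic bezier into n straight segments.
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    t = np.linspace(0.0, 1.0, n + 1)[:, None]
    return (1 - t)**3 * p0 + 3 * (1 - t)**2 * t * p1 \
         + 3 * (1 - t) * t**2 * p2 + t**3 * p3

def union_outline(rings, eps=0.5):
    # rings: closed point lists (any beziers already tessellated as above).
    # Grow each polygon by eps to swallow the small gaps, union, shrink back.
    grown = [Polygon(r).buffer(eps) for r in rings]
    return unary_union(grown).buffer(-eps)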
Sorry, can't type much more on this phone :-(

Find tunnel 'center line'?

I have some map files consisting of 'polylines' (each line is just a list of vertices) representing tunnels, and I want to try and find the tunnel 'center line' (shown, roughly, in red below).
I've had some success in the past using Delaunay triangulation but I'd like to avoid that method as it does not (in general) allow for easy/frequent modification of my map data.
Any ideas on how I might be able to do this?
An "algorithm" that works well with localized data changes.
The critic's view
The Good
The nice part is that it uses a mixture of image processing and graph operations available in most libraries, may be parallelized easily, is reasonably fast, may be tuned to use a relatively small memory footprint, and doesn't have to be recalculated outside the modified area if you store the intermediate results.
The Bad
I wrote "algorithm", in quotes, just because I developed it, and it surely is not robust enough to cope with pathological cases. If your graph has a lot of cycles you may end up with some phantom lines. More on this, with examples, later.
And The Ugly
The ugly part is that you need to be able to flood-fill the map, which is not always possible. I posted a comment a few days ago asking whether your graphs can be flood-filled, but didn't receive an answer. So I decided to post this anyway.
The Sketch
The idea is:
Use image processing to get a fine line of pixels representing the center path
Partition the image into chunks commensurate with the tunnel's thinnest passages
In each partition, place a point at the "center of mass" of the contained pixels
Use those points as the vertices of a graph
Add edges to the graph based on a "near neighbour" policy
Remove spurious small cycles in the induced graph
End - the remaining edges represent your desired path
The parallelization opportunity arises because the partitions may be computed in standalone processes, and the resulting graph may be partitioned to find the small cycles that need to be removed. These factors also make it possible to reduce the memory needed, by serializing the work instead of doing the calculations in parallel, but I didn't go through this.
The Plot
I'll not provide pseudocode, as the difficult part is precisely the bit not covered by your libraries. Instead of pseudocode I'll post the images resulting from the successive steps.
I wrote the program in Mathematica, and I can post it if it is of some service to you.
A- Start with a nice flood filled tunnel image
B- Apply a Distance Transformation
The distance transformation replaces the value of each pixel in the image by its distance to the nearest background pixel.
You can see that our desired path is the ridge of local maxima within the tunnel.
C- Convolve the image with an appropriate kernel
The selected kernel is a Laplacian-of-Gaussian kernel of pixel radius 2. It has the magic property of enhancing the gray level edges, as you can see below.
D- Cutoff gray levels and Binarize the image
To get a nice view of the center line!
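For reference, steps B through D come down to a few library calls in Python with scipy (my library choice here; the cutoff factor is an arbitrary value you would tune):

from scipy import ndimage as ndi

def centerline_mask(filled, cutoff=0.3):
    # filled: boolean array, True inside the flood-filled tunnel.
    dist = ndi.distance_transform_edt(filled)    # step B: distance to wall
    log = ndi.gaussian_laplace(dist, sigma=2.0)  # step C: LoG, ~2 px radius
    # Step D: the ridge of `dist` drives the LoG strongly negative; cut there.
    return log < cutoff * log.min()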
Comment
Perhaps that is enough for you, as you may know how to transform a thin line into an approximate sequence of piecewise segments. As that is not the case for me, I continued down this path to get the desired segments.
E- Image Partition
Here is where some advantages of the algorithm show up: you may start using parallel processing, or decide to process each segment one at a time. You may also compare the resulting segments with those of the previous run and reuse the previous results.
F- Center of Mass detection
All the white points in each sub-image are replaced by a single point at the center of mass:
X_CM = (Σ_{i ∈ Points} X_i) / NumPoints
Y_CM = (Σ_{i ∈ Points} Y_i) / NumPoints
The white pixels are difficult to see at this scale, but there they are.
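In code, step F is just a mean over the nonzero pixel coordinates of each sub-image (a sketch; `tile` is my name for one boolean sub-image):

import numpy as np

def center_of_mass(tile):
    # tile: boolean sub-image, True where the thin center line passes.
    ys, xs = np.nonzero(tile)
    if xs.size == 0:
        return None            # empty partition: contributes no vertex
    return xs.mean(), ys.mean()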
G- Graph setup from Vertices
Form a graph using the selected points as vertices. Still no edges.
H- Select Candidate Edges
Using the Euclidean distance between points, select candidate edges. A cutoff is used to select an appropriate set of edges; here we use 1.5 times the sub-image size.
As you can see, the resulting graph has a few small cycles, which we are going to remove in the next step.
I- Remove Small Cycles
Using a cycle-detection routine, we remove the small cycles up to a certain length. The cutoff length depends on a few parameters, and you should determine it empirically for your family of graphs.
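Steps G through I, sketched with networkx (my library choice; `points` is the list of center-of-mass coordinates and `tile_size` the sub-image edge length):

import networkx as nx
import numpy as np

def centerline_graph(points, tile_size, max_cycle_len=4):
    pts = np.asarray(points, dtype=float)
    G = nx.Graph()
    G.add_nodes_from(range(len(pts)))
    for i in range(len(pts)):                 # candidate edges by distance
        for j in range(i + 1, len(pts)):
            if np.hypot(*(pts[i] - pts[j])) < 1.5 * tile_size:
                G.add_edge(i, j)
    # Remove each small cycle by dropping its longest edge.
    for cycle in nx.cycle_basis(G):
        if len(cycle) <= max_cycle_len:
            edges = [e for e in zip(cycle, cycle[1:] + cycle[:1])
                     if G.has_edge(*e)]
            if edges:
                G.remove_edge(*max(
                    edges, key=lambda e: np.hypot(*(pts[e[0]] - pts[e[1]]))))
    return G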
J- That's it!
You can see that the resulting center line is shifted a little bit upwards. The reason is that I'm superimposing images of different types in Mathematica ... and I gave up trying to convince the program to do what I want :)
A Few Shots
As I did the testing, I collected a few images. They are probably the most un-tunnelish things in the world, but my Tunnels-101 went astray.
Anyway, here they are. Remember that I have a displacement of a few pixels upwards ...
HTH !
Update
Just in case you have access to Mathematica 8 (I got it today): there is a new function, Thinning. Just look:
This is a pretty classic skeletonization problem; there are lots of algorithms available. Some algorithms work in principle on outline contours, but since almost everyone uses them on images, I'm not sure how available such things will be. Anyway, if you can just plot and fill the sewer outlines and then use a skeletonization algorithm, you could get something close to the midline (within pixel resolution).
Then you could walk along those lines and do a binary search with circles until you hit at least two separate line segments (three if you're at a branch point). The midpoint of the two spots you first hit, or the center of a circle touching the three points you first hit, is a good estimate of the center.
Well, in Python, using the package skimage, it is an easy task, as follows.
import pylab as pl
from skimage import morphology as mp

# Load the tunnel image and invert it so the tunnel interior is foreground.
tun = 1 - pl.imread('tunnel.png')[..., 0]
skl = mp.medial_axis(tun > 0.5)  # the skeleton is the center line

pl.subplot(121)
pl.imshow(tun, cmap=pl.cm.gray)
pl.subplot(122)
pl.imshow(skl, cmap=pl.cm.gray)
pl.show()

Cunning ways to draw a starfield

I'm working on a game, and I've come up with a rather interesting problem: clever ways to draw starfields.
It's a 2D game, so the action can scroll in the X and Y directions. In addition, we can adjust the scale to show more or less of the play area. I'd also like the starfield to have fake parallax to give an impression of depth.
Right now I'm doing this in the traditional way: I have a big array of stars, each tagged with a 'depth' factor. To draw, I translate each star according to the camera position multiplied by its 'depth', so some stars move a lot and some move a little. This all works fine, but since I have a finite number of stars in my array, I run into issues when the camera moves too far or we zoom out too much. It can all be made to work, but it involves lots of code and special cases.
This offends my sense of elegance. There has got to be a better way of achieving this.
I've considered procedurally generating my stars, which would allow me to have an unlimited number: e.g. by using a fixed seed and a PRNG to determine the coordinates. I would divide the sky into tiles, generate a seed by hashing the tile coordinates, and then draw, say, 100 stars per tile (see the sketch below). This lets me extend the starfield indefinitely in all directions while only considering the tiles that are visible - but it doesn't work with the 'depth' factor, as the parallax lets stars stray outside their tiles. I could simply layer several non-parallax starfields built this way, but that strikes me as cheating.
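By hashing the tile coordinates I mean something like this (a minimal sketch; the tile size and star count are arbitrary):

import random

TILE = 512            # tile edge length in world units
STARS_PER_TILE = 100

def stars_for_tile(tx, ty, universe=0):
    # Integer tuples hash deterministically, so a tile always
    # regenerates exactly the same stars on every visit.
    rng = random.Random(hash((tx, ty, universe)))
    return [(tx * TILE + rng.uniform(0, TILE),
             ty * TILE + rng.uniform(0, TILE))
            for _ in range(STARS_PER_TILE)]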
And, of course, I need to do all this every frame, so it's got to be fast.
What do you all reckon?
Have a few layers of stars.
For each layer, use a seeded random number generator (or just an array) to generate the amount of blank space between one star and the next (a Poisson distribution, if you want to be picky about it). You want the stars pretty sparse, so the blank space will often be more than a whole row. The back layers will be more dense than the front ones, obviously.
Use this to give yourself several tiles, each (say) two screens wide. Scroll the starfield by keeping track of where the "first" star is for each layer.
The player won't notice the tiling, because you scroll the tiles at different rates for each layer, especially if you use a few layers that are each fairly sparse.
Since stars in the background don't move as fast as those in the foreground, you could make multi-layer tiles for the background and replace them with single-layer ones when you have time to do so. Oh, and how about repeating patterns in the background layers? That would let you pregenerate all background tiles - you could still shift them in height and overlay multiple ones with random offsets to make them look random.
Is there anything wrong with wrapping the star field around in X and Y? Because of your depth factor, the wraparound distance should depend on the depth, but you can do that. Each recorded star at (x, y, depth) should appear at all points
[x + j * S * depth, y + k * S * depth]
for all integers j and k, where S is a wraparound parameter. If S is 1, wraparound happens immediately and all stars are always shown somewhere. If S is higher, wraparound doesn't happen immediately, and some stars fall off screen. You'll probably want S big enough to ensure there are no repeats at maximum zoom-out.
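Enumerating the wrapped copies that land inside the current view is then straightforward (a sketch; the view rectangle and all names are mine, given in the world units of the layer being drawn, with depth > 0):

import math

def visible_copies(x, y, depth, S, left, top, right, bottom):
    # All wrapped positions of the star (x, y, depth) inside the view.
    period = S * depth
    j0 = math.ceil((left - x) / period)
    j1 = math.floor((right - x) / period)
    k0 = math.ceil((top - y) / period)
    k1 = math.floor((bottom - y) / period)
    return [(x + j * period, y + k * period)
            for j in range(j0, j1 + 1)
            for k in range(k0, k1 + 1)]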
Each frame, render the stars onto one single bitmap/layer. They are only dots, so this will be faster than any algorithm with multiple layers.
Now you need an infinite 2D grid of 3D boxes, each filled with a finite number of stars. For each box, you can define an individual RANDOM_SEED value using its grid coordinates. The stars in each box can then be generated on the fly.
Remember to correct the perspective when you zoom: each 3D box has a near rectangle (front face) and a far rectangle. You will see more stars of neighbouring boxes whenever the far rectangle or near rectangle shrinks in your view.
Your far rectangles should never be smaller than half the width of the near rectangles, otherwise it might get troublesome: you might have to scan huge lists of stars where most of them are out of bounds. You can realize stars behind the far rectangles via additional 2D grids of 3D boxes with other sizes and depths.
Why not combine the coordinates of the starfield's 3D boxes to form the random-number seed? Use a global "adjustment" if you want to produce different universes. That way you don't need to track the boxes you can't see, because their contents are fixed by their location.
