Calculating isobars - algorithm

If I have a set of readings (e.g. pressures) taken at lat-long points, what algorithm(s) can I use to determine isobars? The recorded positions do not lie on any fixed pattern, and you can assume I have a few hundred readings over a reasonable area.

Related

How do I implement a genetic algorithm for placing 2 or more kinds of element with different (repeating) distances in a grid?

Please forgive me if the title does not explain my question clearly.
Let me show you two pictures as my example:
My question is as follows: I have 2 or more different objects (in the pictures, two objects: circle and cross), each placed repeatedly with a fixed row/column distance (in the pictures, the circle has a distance of 4 and the cross has a distance of 2) in a grid.
In the first picture, each of the two objects repeats correctly without any interruptions (here an interruption means one object occupying another's position), but the arrangement is non-uniformly distributed; on the contrary, in the second picture the two objects may have interruptions (the circle occupies some of the cross's positions) but the arrangement is uniformly distributed.
My target is to make the placement as uniform as possible (the objects are still placed with fixed distances, but some occupations may be allowed). Is there a potential algorithm for this question? Or are there any similar questions?
I have some preliminary thoughts on this problem: 1. occupation may be related to the least common multiple of the distances; 2. how do you define "uniformly distributed" mathematically? Maybe there is no general solution, but is there a solution for some special cases (for example, 3 objects with distances that are multiples of 2, or multiples of 3)?
Uniformity can be measured as the sum of squared inverse distances (or of squared deviations from the equilibrium distances). Because of the squared relation, any single piece that approaches the others incurs a large fitness penalty, so the system will not tolerate pieces that are too close and will prefer a better distribution.
If you use plain distances instead of squared (or higher-order) ones, the system starts tolerating even overlapping pieces.
If you want to measure uniformity by hand, compute the standard deviation of the distances. It is perfect with a single common distance and zero deviation, but a small enough deviation is also acceptable.
I tested this only on a problem of fitting 106 circles into a square 10x the size of a circle.
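As a minimal sketch of that penalty term (the function name and the point format are my own choices, not from the original answer):

    def uniformity_penalty(points):
        """Sum of squared inverse pairwise distances over (x, y) points.
        Pieces that drift close together dominate the total, so a lower
        value means a more uniform placement."""
        total = 0.0
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                dx = points[i][0] - points[j][0]
                dy = points[i][1] - points[j][1]
                d2 = dx * dx + dy * dy  # squared distance, so 1/d2 = (1/d)^2
                if d2 == 0.0:
                    return float("inf")  # overlapping pieces are never tolerated
                total += 1.0 / d2
        return total

A genetic algorithm (or any other optimizer) can then minimize this value as its fitness function.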

Choosing one point from each set of points in a map

In a map, there are some ten thousand points scattered all over the place.
Every point belongs to some set of points, and there are about a hundred sets of points.
Some sets have a lot of points, whereas some sets have a few.
The problem is to partition the map into zones such that each zone (1) is a proper size (not too large, that is), and more importantly, (2) includes various points. By various, I mean that I am picking only one point from each set in each zone, and a zone contains (representative) points from, say, about 20 different sets.
What would be a proper algorithm to do this?
What immediately comes to mind is some variant of a Minimum Spanning Tree algorithm, which starts from a point (node), considers the different edge weights (distances between points), and picks the nearest point that belongs to a set we have not picked from yet; it does this until the number of distinct sets reaches 20, which makes a zone, and then starts again from the nearest point outside the current zone.
However, I hope there is a more efficient algorithm for this job, and I am already having difficulty formalizing my design.
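A direct (if slow) sketch of the greedy procedure described in the question, assuming points come as (x, y, set_id) tuples; all names here are mine:

    import math

    def build_zones(points, sets_per_zone=20):
        """Grow a zone by repeatedly taking the nearest point whose set is
        not yet represented in it; once the zone has points from
        sets_per_zone different sets, seed the next zone from the nearest
        remaining point."""
        remaining = set(range(len(points)))

        def dist(i, j):
            return math.hypot(points[i][0] - points[j][0],
                              points[i][1] - points[j][1])

        zones = []
        seed = next(iter(remaining))
        while remaining:
            zone, used_sets = [seed], {points[seed][2]}
            remaining.discard(seed)
            while len(used_sets) < sets_per_zone:
                # nearest remaining point from a set not yet in this zone
                candidates = [i for i in remaining
                              if points[i][2] not in used_sets]
                if not candidates:
                    break
                nxt = min(candidates,
                          key=lambda i: min(dist(i, z) for z in zone))
                zone.append(nxt)
                used_sets.add(points[nxt][2])
                remaining.discard(nxt)
            zones.append(zone)
            if remaining:
                seed = min(remaining,
                           key=lambda i: min(dist(i, z) for z in zone))
        return zones

With ten thousand points this brute-force nearest-point search is quadratic or worse, so a spatial index (k-d tree or grid) would be the first thing to add.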

extract points which satisfy certain conditions

I have an array of points in a plane. They form some shape. I need to extract from this array only the points which form the straight parts of this shape.
At the moment I have an algorithm, but it does not work very well. I take the first two points, make a straight line, and then check whether the following points lie on it within some tolerance. But there is a problem: the points which form a straight part do not lie exactly on the line but have some deviation, and this deviation is quite large. If I make the tolerance in my algorithm large enough to capture the points from the straight part, then points which are on slightly bent parts but have a deviation less than the specified tolerance are also extracted.
I am looking for some ideas on how to perform such a task.
Here is the picture:
The circled parts are the ones I want to extract. The red points are the parts I could extract with my approach. If I increase the tolerance, then I miss the straight pieces too.
First, suppose you already have some candidate subset of points and want to check whether they lie on a straight line: use a form of linear regression to identify the best-fitting line, then check how well it fits, and accept or reject the hypothesis that this particular segment is linear based on that.
One of the most standard ways of doing that is the least squares method.
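For instance, a sketch of a least-squares fit with an RMS-residual test; the function name and the threshold parameter tol are mine, and the segment is assumed to be parameterizable by x (a caveat the answer raises below):

    import numpy as np

    def is_straight(xs, ys, tol):
        """Fit y = slope*x + intercept by least squares and accept the
        segment as straight if the RMS residual is below tol."""
        xs, ys = np.asarray(xs, float), np.asarray(ys, float)
        slope, intercept = np.polyfit(xs, ys, 1)
        residuals = ys - (slope * xs + intercept)
        return np.sqrt(np.mean(residuals ** 2)) < tol

The right value of tol has to come from the data, which is exactly the tolerance problem described in the question.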
Identifying the subset is a different problem, and the best solution will depend strongly on the kind of data you have and on the objective. If the amount of data is not extremely large, I suggest that enumerating all the segments is a good starting point; that should be doable in no more than cubic time, I gather.
There are certainly approximations one can apply, e.g. choosing a point in the sequence and building a subset by iteratively adding points on either side as long as the segment remains linear within the tolerance threshold, then accepting or rejecting it depending on whether the segment is long enough.
I assume here that the curve is parameterizable by one of the coordinates. If this is not the case, e.g. if the curve is closed, additional steps may be required to separate it into parameterizable segments.
EDIT: how to check whether a segment is straight
There are a number of options.
First, I would expect that for a straight line the average deviation stays roughly the same as you add new points, so you can simply find a reasonable threshold on that, given the data.
The second option is to further split the subset into a fixed number of parts (e.g. 2), find the best-fitting line for each one, and then compare these. For a straight line roughly the same line should be predicted for each part, but for a curve they would differ.
The third option is to perform nonlinear curve fitting, e.g. fit a quadratic curve and check the coefficient of the quadratic term: if the line is straight, it should be close to zero.
In each case, of course, there is a trade-off between the segment size and the deviation of the points from that segment. In the extreme cases, there would be either one huge linear segment with huge deviation or a whole bunch of 2-point segments with zero deviation. The actual threshold on the deviation, on the difference between the fitted lines, or on the magnitude of the quadratic term (depending on the option you prefer) has to be selected for the given dataset to suit your needs. Looking at the plot, I would say that the threshold should be picked so as to allow for segments of length 10 or so.
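The third option is the quickest to prototype. A sketch (the function name is mine; the acceptance threshold is dataset-specific, as the answer stresses):

    import numpy as np

    def quadratic_term(xs, ys):
        """Fit y = a*x^2 + b*x + c by least squares and return |a|;
        a value near zero indicates the segment is close to straight."""
        coeffs = np.polyfit(xs, ys, 2)
        return abs(coeffs[0])

    # e.g. accept the segment as straight if quadratic_term(xs, ys) < eps
    # for some eps chosen by inspecting the data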

What's the best method to compare an original trajectory with two compressed trajectories

Suppose you have a GPS trajectory, i.e. a series of spatio-temporal coordinates, where every coordinate is an (x, y, t) triple: x is longitude, y is latitude and t is the time stamp.
Suppose each trajectory is identified by 1000 (x, y) points; a compressed trajectory is a trajectory with fewer points than the original, for instance 300 points. A compression algorithm (Douglas-Peucker, Bellman, etc.) decides which points will be in the compressed trajectory and which points will be discarded.
Each algorithm makes its own choices. Better algorithms choose the points not only by their spatial characteristics (x, y) but by their spatio-temporal characteristics (x, y, t).
Now I need a way to compare two compressed trajectories against the original to understand which compression algorithm better reduces a spatio-temporal trajectory (the temporal component is really important).
I've thought of the DTW algorithm to check trajectory similarity, but it probably doesn't take the temporal component into account. What algorithm can I use to make this comparison?
Which compression algorithm is best depends to a large extent on what you are trying to achieve with it, and on other external variables. Typically, we are going to identify and remove spikes, and then remove redundant data. For example:
Known minimum and maximum velocity, acceleration and ability to turn will let you remove spikes. If we look at the distance between a pair of points divided by the elapsed time, where
velocity = sqrt((xb - xa)^2 + (yb - ya)^2) / (tb - ta)
we can eliminate points where the distance could not have been travelled in the elapsed time, given the speed constraint. We can do the same with acceleration constraints, and with change-of-direction constraints for a given velocity. These constraints change depending on whether the GPS receiver is static, hand-held, in a car, in an aeroplane, etc.
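A sketch of the velocity part of that spike filter (the names are mine; the acceleration and turn checks would follow the same pattern):

    import math

    def remove_spikes(track, v_max):
        """Drop points that would require travelling faster than v_max
        from the last accepted point; track is a time-sorted list of
        (x, y, t) tuples."""
        kept = [track[0]]
        for x, y, t in track[1:]:
            xa, ya, ta = kept[-1]
            dt = t - ta
            if dt <= 0:
                continue  # skip out-of-order or duplicated time stamps
            if math.hypot(x - xa, y - ya) / dt <= v_max:
                kept.append((x, y, t))
        return kept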
We can remove redundant points using a moving window over three points: an interpolated (x, y, t) position for the middle point is compared with the observed point, and the observed point is removed if it lies within a specified distance and time tolerance of the interpolated one. We can also fit a curve to the data and consider the distance to the curve rather than using a moving 3-point window.
The compression may also have differing goals based on the constraints given, e.g. to simply reduce the data size by removing redundant observations and spikes, or to smooth the data as well.
For the former, after checking for spikes based on the defined constraints, we simply check the 3D distance of each point to the polyline connecting the compressed points. This is achieved by finding the pair of kept points before and after the point that has been removed, interpolating a position on the line connecting those points based on the observed time, and comparing the interpolated position with the observed one. The number of points removed will increase as we allow this distance tolerance to increase.
For the latter we also have to consider how well the smoothed result models the data, the weights imposed by the constraints, and the design shape / curve parameters.
Hope this makes some sense.
Maybe you could use the mean square distance between the trajectories over time.
Probably simply looking at the distance at times 1 s, 2 s, ... will be enough, but you can also do it more precisely by integrating (x1(t) - x2(t))^2 + (y1(t) - y2(t))^2 between time stamps. Note that between two time stamps both trajectories are straight lines.
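A sampled version of this is straightforward; everything below (the names, the choice of sample times) is my own framing of the suggestion:

    def interpolate(traj, t):
        """Position on a piecewise-linear (x, y, t) trajectory at time t,
        assuming t lies within the trajectory's time range."""
        for (x0, y0, t0), (x1, y1, t1) in zip(traj, traj[1:]):
            if t0 <= t <= t1:
                f = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
                return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
        raise ValueError("t outside trajectory")

    def mean_square_distance(a, b, times):
        """Mean of (x1(t)-x2(t))^2 + (y1(t)-y2(t))^2 over sample times."""
        total = 0.0
        for t in times:
            xa, ya = interpolate(a, t)
            xb, yb = interpolate(b, t)
            total += (xa - xb) ** 2 + (ya - yb) ** 2
        return total / len(times)

Since the squared distance is quadratic in t on each interval where both trajectories are linear, the exact integral could also be computed in closed form per interval instead of sampling.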
I've found what I need to compute the spatio-temporal error.
As written in the paper "Compression and Mining of GPS Trace Data: New Techniques and Applications" by Lawson, Ravi & Hwang:
Synchronized Euclidean distance (SED) measures the distance between two points at identical time stamps. In Figure 1, five time steps (t1 through t5) are shown. The simplified line (which can be thought of as the compressed representation of the trace) is comprised of only two points (P't1 and P't5); thereby, it does not include points P't2, P't3 and P't4. To quantify the error introduced by these missing points, distance is measured at the identical time steps. Since three points were removed between P't1 and P't5, the line is divided into four equal-sized line segments using the three points P't2, P't3 and P't4 for the purposes of measuring the error. The total error is measured as the sum of the distances between all points at the synchronized time instants, as shown below. (In the following expression, n represents the total number of points considered.)

SED = sum for i = 1..n of sqrt((x_ti - x'_ti)^2 + (y_ti - y'_ti)^2)

(Here (x_ti, y_ti) is the original point at time step ti and (x'_ti, y'_ti) is the synchronized point on the simplified line.)
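A sketch of SED as defined in that quote (the helper and its names are mine; trajectories are time-sorted lists of (x, y, t) tuples):

    import math

    def interp_at(traj, t):
        """Position on the piecewise-linear trajectory at time t."""
        for (x0, y0, t0), (x1, y1, t1) in zip(traj, traj[1:]):
            if t0 <= t <= t1:
                f = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
                return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
        raise ValueError("t outside trajectory")

    def sed(original, compressed):
        """Sum of distances between each original point and the compressed
        trajectory evaluated at the same time stamp."""
        total = 0.0
        for x, y, t in original:
            cx, cy = interp_at(compressed, t)
            total += math.hypot(x - cx, y - cy)
        return total

A lower SED against the original then means the compression preserved the spatio-temporal shape better.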

Randomly dividing a 2d complex enclosed region

First I will define:
Region: the big stuff, manually created, that I want to divide.
Zone: the small stuff I want to generate.
I have a map, the world map in fact, and I want to divide it into small zones. The size of the zones will depend on what region the zone is in: for instance very small for Europe (Europe may have some 200 zones), but only a couple of huge ones for the Atlantic Ocean.
I can manually create points to enclose a region. I will create a region for each big space that should have a different zone size from the other spaces; for instance, I will create an enclosed region for Europe. So I have a bunch of (latitude, longitude) points defining the limits of the Europe region. The shape is of course not regular, and there are holes in the middle of it (I don't want to create small zones over the Mediterranean Sea, but one big one). So what we have is a huge 2D shape to be filled up with zones.
Zones themselves are n-sided polygons; the number of sides can be chosen randomly or be subject to other constraints. The area of each zone is also randomly bounded (like 50, plus or minus 40%), although this constraint can again be relaxed (as an exception, not as a rule). Zones cannot overlap and the whole region must be divided.
The obvious question: is there any algorithm that looks like it can be used to solve this problem?
I even have trouble determining whether a given point is inside or outside an enclosed region.
Me, I'd do it the other way round: put a point at the (approximate) centre of each zone and compute the Voronoi diagram of the resulting point set.
EDIT: in response to @Unreason's comments. I don't claim that computing the Voronoi diagram is an answer to the question asked. I do claim that computing the Voronoi diagram is a suitable method for dividing a planar map into zones which are defined by their closeness to a point. This may, or may not, satisfy the OP's underlying requirement, and the OP is free to use or ignore my suggestion.
I implied the following, but will now make it explicit: the OP, if taken with this suggestion, should define the points (lat, long) at the 'centres' of each required zone and run the algorithm. Voronoi diagrams are not computed iteratively; if the OP doesn't like the solution, the OP would have to shift the points around and re-compute. I guess it would be feasible to write a routine to do this; the hard part, as ever with computational cartography, is defining a computable rule about how well a trial solution fits (quasi-)aesthetic requirements.
I wouldn't bother, I'd use country capital cities as the points for my zones (relatively densely packed in Europe, relatively sparse in the Atlantic) and let the algorithm run. Job done.
Perhaps OP might use the locations of all cities with populations over 5 x 10^5 (there are probably about 200 of those in Europe). Or some other points.
Oh, and computing the Voronoi diagram isn't random either, it's entirely deterministic. Again, this may or may not satisfy the underlying requirement.
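A minimal sketch with SciPy (the seed coordinates below are placeholders, and latitude/longitude are treated as planar coordinates, which is a simplification):

    import numpy as np
    from scipy.spatial import Voronoi

    # One seed per intended zone, e.g. capital cities as (lon, lat).
    seeds = np.array([[2.35, 48.85],    # Paris
                      [13.40, 52.52],   # Berlin
                      [-3.70, 40.42],   # Madrid
                      [12.50, 41.90],   # Rome
                      [-0.13, 51.51]])  # London
    vor = Voronoi(seeds)

    # vor.vertices holds the corners of the zones; vor.point_region maps
    # each seed to the index of the region (zone) it owns.
    print(vor.vertices)
    print(vor.point_region)

The cells of the outermost seeds are unbounded, so in practice they would be clipped against the enclosing region polygon.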
To determine whether a point is inside a polygon, follow the point-in-polygon article on Wikipedia or use a geometry framework.
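For reference, a sketch of the standard ray-casting test from that article (the function name is mine; holes, as in the Mediterranean example, would be tested as separate polygons):

    def point_in_polygon(px, py, polygon):
        """Cast a horizontal ray from (px, py) and count edge crossings;
        an odd count means the point is inside. polygon is a list of
        (x, y) vertices in order."""
        inside = False
        n = len(polygon)
        for i in range(n):
            x0, y0 = polygon[i]
            x1, y1 = polygon[(i + 1) % n]
            if (y0 > py) != (y1 > py):  # edge spans the ray's y level
                x_cross = x0 + (py - y0) * (x1 - x0) / (y1 - y0)
                if px < x_cross:
                    inside = not inside
        return inside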
The restrictions for dividing a polygon into smaller polygons of roughly the same size are not very limiting at all. For example, if you cut the big polygons with vertical and horizontal lines spaced such that on land you get exactly the targeted area size, then for Europe you will satisfy your criteria for most of the zones.
Inspect them all, and for the ones that do not satisfy the criteria you can start modifying the borders shared with neighbouring zones so as to reach the desired size (as you allow +/- 40%, this should not be hard).
You can do this by moving the shared nodes or by adding points to the borders and moving only those lines.
Also, before the above, join the zones from the initial cut that are smaller than a certain percentage of the target size (for example 20%; these could be islands and other small pieces).
The algorithm would work well for a large number of small zones, but not as well for regions that need to be cut into only a few zones (though it would still work).
