Find if 2 polygons have an intersection (GeoServer)

I need to check if 2 polygons have at least one common point. What would be the proper CQL syntax for a WFS request?
And an alternative, if the first is not possible: how do I get a list of polygons that have at least one common point or intersection with a source polygon?

The answers to these two questions are related, so I'll answer the 2nd (easier) one first. You can test for at least one point in common using the Intersects test, so a CQL filter like:
intersects(POLYGON((.....)), the_geom)
This will return all the features that intersect the geometry stored in the the_geom attribute.
To limit the result to a specific feature, you can extend the filter with an ID check such as:
in (1) and intersects(POLYGON((.....)), the_geom)
which returns the feature only if its id is 1 and it intersects.
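For context, here is a minimal sketch of sending that filter through a WFS GetFeature request with Python's requests library; the GeoServer URL, layer name and polygon coordinates are placeholders to replace with your own:

import requests

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "workspace:layer",                # placeholder layer name
    "outputFormat": "application/json",
    "cql_filter": "in (1) and intersects(POLYGON((.....)), the_geom)",
}
# placeholder GeoServer endpoint
resp = requests.get("http://localhost:8080/geoserver/wfs", params=params)
print(resp.json()["features"])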

Related

How to choose the best algorithm for sorting

I'm a newbie here.
I am currently trying to solve a problem involving a sorting algorithm.
I will outline the situation:
We have 60 items. Variables of type A and B are written to these items and are stored in random order. Variables A and B have another parameter, X, which indicates their material (the material may change during storage). Items are then taken one by one to another unit with 10 elements, where we try to store 2 or 3 variables of the same type (A or B) and the same material on one element. Once the required number of variables with the same properties has been collected, they are removed from that element.
I tried to describe it as simply as possible, but maybe a real example helps.
It can be imagined as a warehouse that has 10 positions and takes goods from a conveyor with a capacity of 60 items. As soon as the warehouse has goods of the same type and the same material on one position, it dispatches the goods and frees that position.
So I want to remove the items from the conveyor as efficiently as possible and sort them into the warehouse according to these requirements.
It occurred to me to handle every option with a case-by-case approach.
Thank you for all your ideas and comments. If it's not very clear, I apologize and will try to explain it differently. :)

Calculate surrounding index keys

I'm attempting to retrieve H3 index keys directly adjacent to my current location. I'm wondering if this can be done by mutating/calculating the coordinate directly or if I have to use the library bindings to do this?
Take this example:
./bin/geoToH3 --resolution 6 --latitude 43.6533055 --longitude -79.4018915
This would return the key 862b9bc77ffffff. I now want to retrieve the keys of all six relevant neighbors (the kRing, I believe, is how to describe it?).
A tangential, though equally curious, question might render the above irrelevant: if I were attempting to query entries matching all 7 indexes, is there a better way than using an OR statement seeking out all 7 values? Since the index is numeric, I'm wondering if I could just check for a range within the numeric representation?
The short answer is that you need to use kRing (either through the bindings or the command-line tools) to get the neighbors. While there are some limited cases where you could get the neighbors through bit manipulation of the index, in many cases the numeric index of a neighbor might be distant. The basic rule is that while indexes that are numerically close are geographically close, the reverse is not necessarily true.
For the same reason, you generally can't use a range query to look for nearby hexagons. The general lookup pattern is to find the neighboring cells of interest in code, using kRing, then query for all of them in your database.
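For illustration, a small sketch with the h3 Python bindings (v3-style function names, geo_to_h3 / k_ring; newer releases rename these to latlng_to_cell / grid_disk):

import h3

# index the example coordinate at resolution 6
origin = h3.geo_to_h3(43.6533055, -79.4018915, 6)   # 862b9bc77ffffff
# k_ring with k=1 returns the origin cell plus its neighbors
neighbors = h3.k_ring(origin, 1) - {origin}
print(neighbors)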

Algorithm for partitioning large area of coordinate space into smaller ones on-the-fly when certain conditions are met

I am working on a project that partitions an area of coordinate space. Since I need your help, I've explained it in detail so the problem should not be difficult to understand.
The key is to divide the large area into smaller sections that fit the criteria described below. I can think of three approaches to the problem I'm trying to solve; if you think there is a better approach, please recommend it.
First, calling each intersection of lanes a node, the large area of coordinate space consists of numerous edges (= lanes) that join the nodes, and each edge has a specific value. I want to divide the large area into as many smaller sections as I like, where the edges within each segmented zone must be adjacent to each other and the sum of the edge values in each zone is similar to that of the other zones (= balancing).
<Data information>
[edge_id] : edge1, edge2, edge3 ,, etc
[edge_value] : 10, 5, 20,, etc
[edge_node1_id]: n1_1, n1_2, n1_3,, etc
[edge_node1_latitude] : 37.29188, 37.28342, 37.29563,, etc
[edge_node1_longitude] : 127.09838, 127.10327, 127.10221,, etc
[edge_node2_id]: n2_1, n2_2, n2_3,, etc
[edge_node2_latitude] : 37.29191, 37.28301, 37.29632,, etc
[edge_node2_longitude] : 127.09840, 127.10333, 127.10314,, etc
Second, this is similar to the method above, but if that method is difficult, the center point of each edge can be used instead of the edge. In other words, the value of an edge can be replaced by the value of its center point, and the rest of the process is similar to the above. Instead of the first condition mentioned above (adjacency between edges), I would like to divide the zones by grouping points that are close to each other. And I want to make sure that the sum of the point values within each divided zone is similar across zones (= balancing).
<Data information>
[center_node_id]: c_1, c_2, c_3 ,,etc
[center_node_value]: 10, 5, 20 ,,etc
[center_node_latitude]: 37.25116, 37.25143, 37.25184 ,,etc
[center_node_longitude]: 127.11383, 127.10511, 127.10003 ,,etc
The third approach is the so-called multi-cut approach from graph or network theory. It is much easier to understand if you look at the picture. By connecting the center points of adjacent regions with lines, we can create a graph, and by cutting the appropriate links we can form clusters, dividing the graph into as many parts as we want. The constraint is that the sum of the values of the nodes connected in one cluster should be similar to the sum in every other cluster (= balancing).
<Data information>
[node_id]: n_1, n_2, n_3 ,, etc
[node_value]: 10, 5, 20,, etc
[node_adjacency_data] : Please refer to the attached picture.
[node_latitude]: 37.25201, 37.25211, 37.25219,, etc
[node_longitude]: 127.10195, 127.11321, 127.11377,, etc
I would appreciate recommendations on what kinds of algorithms can be used to solve any of the three variants I described above, and the reasons for those recommendations. The final goal of the project is to solve the problem in code, so I would really appreciate source code or links for the algorithm you recommend.
(I can use Java or Python.) Can you please help me with this? If anything about the problem is unclear or you have questions, please tell me.
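As a rough starting point for the third (multi-cut) approach, here is a sketch using networkx's Kernighan-Lin bisection on a graph built from hypothetical edge rows like those above; note that it balances cluster sizes rather than value sums, so the edge values would still need to be folded into the objective (for example via node weights in a tool such as METIS):

import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# hypothetical rows: (node1_id, node2_id, edge_value)
edges = [("n1_1", "n2_1", 10), ("n1_2", "n2_2", 5), ("n1_3", "n2_3", 20)]

G = nx.Graph()
for u, v, value in edges:
    G.add_edge(u, v, weight=value)

# bisect into two size-balanced halves while minimising the weight of cut edges
part_a, part_b = kernighan_lin_bisection(G, weight="weight")
print(sorted(part_a), sorted(part_b))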

Matching data based on parameters and constraints

I've been looking into the k nearest neighbors algorithm as I might be developing an application that matches fighters (boxers) in the near future.
The reason for my question, is to figure out which would be the best approach/algorithm to use when matching fighters based on multiple parameters and constraints depending on the rule-set.
The relevant properties of each fighter are the following:
Age (fighters will be assigned to an age group: 15, 17, 19, elite)
Weight
Amount of fights
Now there are some rulesets for what can be allowed when matching fighters:
A maximum of 2 years between the fighters (unless they're elite)
A maximum of 3 kilos' difference in weight
Now obviously the perfect match would be one where every attendee gets matched with another boxer who fits within the ruleset.
And the main priority is to match as many fighters with each other as possible.
Is K-nn the way to go or is there a better approach?
If so which?
This is too long for a comment.
For best results with K-nn, I would suggest principal components. These allow you to use many more dimensions and do a pretty good job of spreading the data through the space, to get a good neighborhood.
As for incorporating existing rules, you have two choices. Probably the best way is to build them into your distance function. Alternatively, you can take a large neighborhood and build them into the combination function.
I would go with a k-Nearest Neighbor search. Since your dataset is in a low-dimensional space (i.e. 3 dimensions), I would use CGAL to perform the task.
Now, the only thing you have to do is create a distance function like this:
#include <cmath>
#include <limits>

float boxers_dist(const Boxer& a, const Boxer& b) {
    // pairs outside the ruleset (2 years, 3 kg) are "infinitely" far apart
    if (std::abs(a.year - b.year) > 2 || std::abs(a.weight - b.weight) > 3)
        return std::numeric_limits<float>::infinity();
    // think how you should use the 3 dimensions you have, to compute the distance
    return 0.0f; // placeholder
}
And you are done...now go fight!
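For comparison, here is a rough Python equivalent with scikit-learn, using a callable metric that encodes the ruleset; the field layout and sample data are made up (rows are [age, weight_kg, fights]):

import numpy as np
from sklearn.neighbors import NearestNeighbors

def boxer_distance(a, b):
    # pairs outside the ruleset (2 years, 3 kg) get a huge distance so they never match
    if abs(a[0] - b[0]) > 2 or abs(a[1] - b[1]) > 3:
        return 1e9
    return float(np.linalg.norm(a - b))

# hypothetical fighters: [age, weight_kg, number_of_fights]
fighters = np.array([[17, 60, 4], [17, 62, 6], [19, 70, 10], [15, 58, 1]], dtype=float)

nn = NearestNeighbors(n_neighbors=2, metric=boxer_distance, algorithm="brute").fit(fighters)
distances, indices = nn.kneighbors(fighters)
print(indices)  # column 0 is the fighter itself, column 1 its closest legal opponent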

How to group close addresses?

I need to group addresses given their distances.
Let's say I have a list of 8 addresses. 5 in NYC and 3 in New Jersey. From those 5 in NYC 3 are close to the MET and 2 to the WTC. Those 3 in NJ would form one group, those close to the MET another and also those close to the WTC.
I'd like to send this address list and get back the addresses grouped by proximity to each other. Is there any API from Google Maps or Bing Maps that would do that? If not, would you have any suggestions on how to solve this?
In the question below, lots of ways to calculate distance are mentioned, but I wonder if there is something already created (and available) from these big companies. I wouldn't like to recalculate every address in the list every time a new one is added.
How to group latitude/longitude points that are 'close' to each other?
Also, there's another problem that was not addressed in the aforementioned question... one address can be close to a group and to several other groups. For instance:
In this example I've highlighted at least 4 groups. B forms one "close group" with A/C, but also with C/F, A/E/G and E/F/D/H. So I'd also like to know those variables: to which group is the address closer? Or, at least, I thought about limiting groups by the number of members. In my example, using my suggested approach, B would be part of either the RED or BLACK group.
You can try a quadkey and exploit it to visit nearby points first, similar to a space-filling curve. Treat each point's coordinates as binary numbers and interleave the bits, treat the resulting index as a base-4 number, and then sort the numbers.
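Here is a tiny sketch of that interleaving (a Morton code, of which a quadkey is the base-4 reading), assuming latitude/longitude are first scaled onto a 16-bit integer grid; the sample addresses are made up:

def to_grid(lat, lon, bits=16):
    # scale lat/lon into [0, 2^bits - 1] integer grid coordinates
    scale = (1 << bits) - 1
    return int((lon + 180) / 360 * scale), int((lat + 90) / 180 * scale)

def interleave(x, y, bits=16):
    # weave the bits of x and y together so nearby cells usually get nearby keys
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

# hypothetical addresses as (lat, lon)
points = {"A": (40.7794, -73.9632), "B": (40.7484, -73.9857), "C": (40.7128, -74.0060)}
keys = sorted((interleave(*to_grid(lat, lon)), name) for name, (lat, lon) in points.items())
print(keys)  # sorting by key tends to place geographically close addresses next to each other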
