WMS accepts a time criterion ([1]). Does WFS also accept a time criterion (I don't see anything in [2])? There is some discussion on [3] but it is not clear. I am especially interested in whether GeoServer supports it (if possible).
[1] http://docs.geoserver.org/stable/en/user/services/wms/time.html#wms-time
[2] http://docs.geoserver.org/stable/en/user/services/wms/time.html
[3] https://web.archive.org/web/20180318045748/http://www.ogcnetwork.net/node/178
WFS doesn't support dimensions as a separate concept, but it does provide a well-documented filter capability that serves the same end.
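For example, against GeoServer you can usually filter on whatever attribute holds your timestamps, e.g. with the cql_filter vendor parameter. A minimal sketch (the server URL, layer name "topp:obs_layer" and attribute name "obs_time" are placeholders, not taken from the docs above):

```python
# Hypothetical example: ask a GeoServer WFS for the features whose time
# attribute falls within a given period, using the CQL_FILTER vendor parameter.
import requests

params = {
    "service": "WFS",
    "version": "2.0.0",
    "request": "GetFeature",
    "typeNames": "topp:obs_layer",        # placeholder layer name
    "outputFormat": "application/json",
    # temporal predicate on the attribute that holds the timestamps
    "cql_filter": "obs_time DURING 2010-01-01T00:00:00Z/2010-12-31T23:59:59Z",
}
resp = requests.get("http://localhost:8080/geoserver/wfs", params=params)
print(len(resp.json()["features"]), "features in that period")
```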
I'd like to do the following given a document:
create a summary using pre-existing topics
In the first scenario, documents are neatly organized in a uniform way.
For example, most Wikipedia movie articles have the following subtopics (ex: https://en.wikipedia.org/wiki/Between_Us_(2012_film))
Plot
Cast
Reception
other optional topics
In the second scenario, documents contain the same info as above; however, documents do NOT have clean organization. Documents may use the same or similar language but be organized differently.
In both cases, given the subtopics, I'd like to extract this info from a document.
Are there any machine learning/natural language processing strategies/algorithms that I can use? A combination of algorithms is fine. Algorithms that mostly work are also fine.
Update: It looks like what I want is Information Extraction.
A possible way to go about this is to assign those topics to the sentences in each section [1]. Since it seems you have annotated data, you can train a "sentence topic/section model" with that. According to [1], even a multinomial naïve Bayes classifier already does the job pretty well.
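A rough sketch of that sentence classifier, assuming scikit-learn and a handful of hand-labelled example sentences (the sentences below are made up):

```python
# Sketch of a multinomial naive Bayes "sentence topic/section" classifier,
# trained on (sentence, section-label) pairs taken from structured articles.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_sentences = [
    "John discovers the letter and confronts his brother.",  # Plot
    "Jane Doe stars as Sarah, the estranged sister.",        # Cast
    "The film received mixed reviews from critics.",         # Reception
]
train_labels = ["Plot", "Cast", "Reception"]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(train_sentences, train_labels)

# Predict the sub-topic of sentences taken from an unstructured document.
print(model.predict(["Critics praised the lead performance."]))
```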
As to the summarization aspect, unless you have training data, I would look into extractive summarization techniques [2] - that is, selecting the best sentences from the existing ones for the summary. The work of [2], LexRank, has a few implementations in the wild you can use. If you have summaries to learn from, you can look into abstractive techniques that generate new sentences from the existing ones [3], too. If you check [4], [3] has some sample implementations floating around.
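If you want to experiment before picking a LexRank package, here is a small graph-based sketch in the same spirit (TF-IDF sentence similarity plus PageRank); it is an illustration of the idea, not the reference implementation of [2]:

```python
# Graph-based extractive summarization sketch: build a sentence-similarity
# graph from TF-IDF cosine similarity and rank sentences with PageRank.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extractive_summary(sentences, n_keep=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)                 # sentence-to-sentence similarity
    graph = nx.from_numpy_array(sim)               # weighted similarity graph
    scores = nx.pagerank(graph)                    # centrality of each sentence
    ranked = sorted(range(len(sentences)), key=scores.get, reverse=True)
    return [sentences[i] for i in sorted(ranked[:n_keep])]  # keep document order

doc = [
    "The film follows two siblings reunited after a decade apart.",
    "It premiered at a small festival before a limited release.",
    "Critics praised the performances but found the pacing uneven.",
]
print(extractive_summary(doc))
```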
[1] http://bioinformatics.oxfordjournals.org/content/25/23/3174.full
[2] http://jair.org/papers/paper1523.html
[3] http://arxiv.org/abs/1509.00685
[4] http://gitxiv.com/
The simplest approach I can think of is to pose this as a sequence classification problem where the classes are the sub-topics. Given a sentence (or maybe a paragraph), the classifier outputs the sub-topic probabilities. Training an LSTM classifier should be possible, as you have a lot of labeled data (sentences, sub-topics).
The problem with this approach may be that the final output is incoherent. Using paragraphs can help, or maybe conditioning on the previous classification probabilities.
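A minimal sketch of such an LSTM classifier, assuming Keras/TensorFlow and that sentences have already been tokenized into padded integer sequences (the vocabulary size and the three classes are placeholders):

```python
# Sketch of an LSTM sub-topic classifier: each (padded) sentence is mapped to
# a probability distribution over the sub-topics.
import tensorflow as tf

VOCAB_SIZE, NUM_CLASSES = 20000, 3   # placeholders for your own data

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # sub-topic probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(padded_sentence_ids, subtopic_ids, epochs=5)  # with your labeled data
```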
I am working on a feature matching project and I am using OpenCV Python as the tool for developing the application.
According to the project requirements, my database has images of some objects like a glass, a ball, etc., with their descriptions. Users can send images to the back end of the application, and the back end is responsible for matching the sent image against the images that exist in the database and sending the image description back to the user.
I have done some research on the above scenario. Unfortunately, I still could not find an algorithm for matching two images and identifying whether they match or not.
If anybody has that kind of algorithm, please send it to me (I have to use OpenCV Python or JavaCV).
Thank you
This is a very common problem in computer vision nowadays. A simple solution really is simple, but there are many, many variants for more sophisticated solutions.
Simple Solution
Feature Detector and Descriptor based.
The idea here being that you get a bunch of keypoints and their descriptors (search for SIFT/SURF/ORB). You can then find matches easily with tools provided in OpenCV. You would match the keypoints in your query image against all keypoints in the training dataset. Because of typical outliers, you would like to add a robust matching technique, like RANSAC. All of this is part of OpenCV.
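A minimal OpenCV Python sketch of that pipeline (the file names are placeholders; ORB is used here, but SIFT/SURF work the same way):

```python
# ORB keypoints + descriptors, brute-force matching, then RANSAC
# (via findHomography) to reject outlier matches.
import cv2
import numpy as np

query = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
train = cv2.imread("train.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(query, None)
kp2, des2 = orb.detectAndCompute(train, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Robustify with RANSAC: estimate a homography and keep only the inliers
# (assumes there are at least 4 matches).
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("inlier matches:", int(mask.sum()))
```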
Bag-of-Words model
If you just want the image that is most similar to your query image, you can use Nearest-Neighbour search. Be aware that OpenCV comes with the much faster Approximate-Nearest-Neighbour (ANN) algorithm. Or you can use the BruteForceMatcher.
Advanced Solution
If you have many images (many == 1 million), you can look at Locality-Sensitive Hashing (see Dean et al., 100,000 Object Categories).
If you do use Bag-of-Visual-Words, then you should probably build an Inverted Index.
Have a look at Fisher Vectors for improved accuracy as compared to BOW.
Suggestion
Start by using Bag-Of-Visual-Words. There are tutorials on how to train the dictionary for this model; a minimal code sketch of these steps is given after the Querying list below.
Training:
Extract Local features (just pick SIFT, you can easily change this as OpenCV is very modular) from a subset of your training images. First detect features and then extract them. There are many tutorials on the web about this.
Train Dictionary. Helpful documentation with a reference to a sample implementation in Python (opencv_source_code/samples/python2/find_obj.py)!
Compute Histogram for each training image. (Also in the BOW documentation from previous step)
Put your image descriptors from the step above into a FLANN-Based-matcher.
Querying:
Compute features on your query image.
Use the dictionary from training to build a BOW histogram for your query image.
Use that feature to find the nearest neighbor(s).
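Here is a hedged sketch of those steps with OpenCV's built-in BOW helpers (cv2.BOWKMeansTrainer and cv2.BOWImgDescriptorExtractor). The file names, vocabulary size and the choice of SIFT are assumptions, and SIFT needs a reasonably recent OpenCV build:

```python
# Bag-of-Visual-Words pipeline: train the dictionary, compute per-image
# histograms, then answer a query with nearest-neighbour search.
import cv2
import numpy as np

sift = cv2.SIFT_create()
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))

# Training: build the dictionary (visual words) from training images.
train_paths = ["train1.jpg", "train2.jpg"]      # placeholder file names
bow_trainer = cv2.BOWKMeansTrainer(100)         # 100 visual words
for path in train_paths:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, descriptors = sift.detectAndCompute(img, None)
    bow_trainer.add(descriptors)
vocabulary = bow_trainer.cluster()

# Compute a BOW histogram for each training image.
bow_extractor = cv2.BOWImgDescriptorExtractor(sift, flann)
bow_extractor.setVocabulary(vocabulary)

def bow_histogram(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return bow_extractor.compute(img, sift.detect(img, None))

train_histograms = np.vstack([bow_histogram(p) for p in train_paths])

# Querying: match the query histogram against the training histograms.
nn_matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = nn_matcher.match(bow_histogram("query.jpg"), train_histograms)
best = min(matches, key=lambda m: m.distance)
print("closest training image:", train_paths[best.trainIdx])
```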
I think you are talking about Content-Based Image Retrieval.
There are many research papers available on the Internet. Get any one of them and implement the best of them according to your needs. Select criteria according to your application, like texture-based, color-based or shape-based image retrieval (this works best when you are working with image retrieval on the Internet, for speed).
Since you need a Python implementation, I would suggest you go through Chapters 7 and 8 of the book Computer Vision Book. It contains working examples, with code, of what you are looking for.
One question you may find useful: Are there any APIs that'll let me search by image?
Is it possible, using the ZBar API, to check whether an image contains a barcode or not?
This is a backup measure: if the application is unable to get a barcode value, it can check whether the image might contain a barcode; if so, the user can verify it manually later.
I have explored quite a bit but with no major success. If not ZBar, any other open source library that can do it well?
Thanks
What you need is a detector, i.e. the ability to locate the barcode (if any), and thus just return yes or no according to the detection result.
IMHO ZBar does not provide a versatile enough API to do so, since it exposes a high-level scanner interface (zbar_scan_image) that combines detection and decoding on one hand, and a pure decoder interface on the other hand.
You should definitely refer to this paper: Robust 1D Barcode Recognition on Mobile Devices. It contains an entire section related to the detection step including pseudo-algorithms [1] - see 4. Locating the barcode. But there is no ready-to-use open source library: you would have to implement your own detector based on the described techniques.
Lastly, more pragmatic/simple techniques may be used depending on the kind of input images you plan to work with (is there any rotation? blur? is it about processing still images or a video stream in real time?).
[1] In addition I would say that it's a good idea to use a different kind of algorithm within this fallback step than the one used within the first step.
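If it helps, here is one pragmatic, gradient-based sketch of a "does this image look like it contains a 1D barcode?" check in OpenCV Python. It illustrates the general idea (strong horizontal gradients, weak vertical ones) rather than implementing the paper above, and the file name and thresholds are assumptions:

```python
# Gradient-based candidate detection for 1D barcodes: emphasize regions where
# horizontal gradients dominate, close the gaps between bars, keep large blobs.
import cv2
import numpy as np

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

grad_x = cv2.Sobel(img, cv2.CV_32F, 1, 0)
grad_y = cv2.Sobel(img, cv2.CV_32F, 0, 1)
gradient = cv2.convertScaleAbs(cv2.subtract(np.abs(grad_x), np.abs(grad_y)))

blurred = cv2.blur(gradient, (9, 9))
_, thresh = cv2.threshold(blurred, 225, 255, cv2.THRESH_BINARY)

# Close the gaps between bars, then look for a sufficiently large blob.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

has_barcode_candidate = any(cv2.contourArea(c) > 1000 for c in contours)
print("barcode-like region found:", has_barcode_candidate)
```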
I have a list of tweets with their geo locations.
They are going to be displayed in a heatmap image transparently placed over Google Map.
The trick is to find groups of locations residing next to each other and display them as a single heatmap circle/figure of a certain heat/color, based on cluster size.
Is there some library ready to group locations on a map into clusters? Or should I rather decide on my clustering parameters and build a custom algorithm?
I don't know if there is a "library ready to group locations on a map into clusters"; maybe there is, maybe there isn't. Anyway, I don't recommend building your own custom clustering algorithm, since there are a lot of libraries already implemented for this.
#recursive sent you a link with PHP code for k-means (one clustering algorithm). There is also a huge Java library with other techniques (Java-ML) including k-means, hierarchical clustering, k-means++ (to select the centroids), etc.
Finally, I'd like to tell you that clustering is an unsupervised algorithm, which means it will effectively give you a set of clusters with data inside them, but at first glance you don't know how the algorithm clustered your data. I mean, it may be clustered by location as you want, but it could also be clustered by another characteristic you don't need, so it's all about playing with the parameters of the algorithm and tuning your solution.
I'm interested in the final solution you could find to this problem :) Maybe you can share it in a comment when you end this project!
K-means clustering is a technique often used for such problems.
The basic idea is this:
Given an initial set of k means m1, …, mk, the algorithm proceeds by alternating between two steps:
Assignment step: Assign each observation to the cluster with the closest mean
Update step: Calculate the new means to be the centroid of the observations in the cluster.
Here is some sample code for PHP.
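If Python is an option, a roughly equivalent sketch using scikit-learn, assuming tweet locations are plain (lat, lon) pairs and Euclidean distance is acceptable at your zoom level:

```python
# Cluster tweet coordinates with k-means; the cluster size can then drive
# the heat/color of each heatmap circle.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([
    [40.7128, -74.0060], [40.7306, -73.9866],    # around New York
    [34.0522, -118.2437], [34.0407, -118.2468],  # around Los Angeles
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
for center, size in zip(kmeans.cluster_centers_, np.bincount(kmeans.labels_)):
    print(f"cluster at {center} with {size} tweets")
```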
heatmap.js is an HTML5 library for rendering heatmaps, and has a sample for doing it on top of the Google Maps API. It's pretty robust, but only works in browsers that support canvas:
The heatmap.js library is currently supported in Firefox 3.6+, Chrome 10, Safari 5, Opera 11 and IE 9+.
You can try my PHP class Hilbert curve at phpclasses.org. It's a monster curve and reduces 2D complexity to 1D complexity. I use a quadkey to address a coordinate, and it has 21 zoom levels like Google Maps.
This isn't really a clustering problem. Heat maps don't work by creating clusters. Instead, they convolve the data with a Gaussian kernel. If you're not familiar with image processing, think of it as using a normal or Gaussian "stamp" and stamping it over each point. Since the overlays of the stamp add up on top of each other, areas of high density will have higher values.
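A small sketch of that idea with NumPy/SciPy: bin the points into a 2D histogram, then smooth it with a Gaussian kernel (the grid size and kernel width are arbitrary choices):

```python
# Kernel-density-style heatmap: 2D histogram of points smoothed with a
# Gaussian filter; dense areas end up with high values.
import numpy as np
from scipy.ndimage import gaussian_filter

lats = np.random.uniform(40.70, 40.75, 500)   # placeholder tweet coordinates
lons = np.random.uniform(-74.02, -73.97, 500)

counts, _, _ = np.histogram2d(lons, lats, bins=200)
heat = gaussian_filter(counts, sigma=3)       # the Gaussian "stamp"
print(heat.max(), heat.shape)                 # render `heat` as an image overlay
```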
One simple alternative for heatmaps is to just round the lat/long to some decimals and group by that.
See this explanation about lat/long decimal accuracy.
1 decimal - 11 km
2 decimals - 1.1 km
3 decimals - 110 m
etc.
For a low zoom level heatmap with lots of data, rounding to 1 or 2 decimals and grouping the results by that should do the trick.
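A tiny sketch of that grouping in Python (the coordinates are made up):

```python
# Round coordinates to 2 decimals (roughly 1.1 km cells) and count tweets per cell.
from collections import Counter

tweets = [(40.7128, -74.0060), (40.7133, -74.0062), (34.0522, -118.2437)]
cells = Counter((round(lat, 2), round(lon, 2)) for lat, lon in tweets)
print(cells)   # e.g. {(40.71, -74.01): 2, (34.05, -118.24): 1}
```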
I would like to know what algorithm is used to take an image, detect the objects present in it, and process (give information about) them. And also, how is this done?
I agree with Sid Farkus, there is no simple answer to this question.
Maybe you can get started by checking out the Open Computer Vision Library. There is a Wiki page on object detection with links to a How-To and to papers.
You may find other examples and approaches (i.e. algorithms); it's likely that the algorithms differ by application (i.e. depending on what you actually want to detect).
There are many ways to do object detection, and it is still an open problem.
You can start with template matching. It is probably the simplest approach, and it consists of convolving the known image (IA) over the new image (IB). It is a fairly simple idea because it is like applying a filter to a signal: the filter will generate a maximum point in the image when it finds the object, as shown in the video. But that technique has several cons: it does not handle variations in scale or rotation, so it has no real application.
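A minimal OpenCV sketch of template matching (the file names and the 0.8 threshold are placeholders):

```python
# Template matching: the response map peaks where the template matches.
import cv2

scene = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("object.jpg", cv2.IMREAD_GRAYSCALE)

response = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(response)

if max_val > 0.8:                          # peak strong enough -> object found
    print("object found at", max_loc)      # note: no scale/rotation invariance
```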
You can also find another, more robust option: feature matching, which consists of creating a dataset with features such as SIFT, SURF or ORB of different objects; with these you can train an SVM to recognize objects.
You can also check deformable part models. However, the state of the art in object detection is based on deep learning, such as Faster R-CNN and AlexNet, which learn the features that will be used to detect/recognize the objects.
Well, this is hardly an answerable question, but for most computer vision applications a good starting point is the Hough Transform.