Is it possible to get all postal codes around a given location, inside a given radius?
What Google API should I use?
Example: I have a latitude and a longitude, and my radius is 15 km. How do I get the postal codes of the areas inside the radius?
I'm kind of new to using APIs, and Google APIs in particular.
Thanks!
This is a process called "reverse geocoding". Google offers it here: https://developers.google.com/maps/documentation/javascript/examples/geocoding-reverse
However, Google's API will most likely give you the single closest result rather than many.
You can do this with APIs (often paid for); however, be warned that you have to be quite careful with the radius element: setting it too small in rural areas will bring back zero results, and setting it too big in urban areas can bring back thousands.
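For illustration, here is a minimal sketch (TypeScript, assuming the Google Geocoding web service; the function name and API key are placeholders) that reverse geocodes a point and collects any postal codes in the response. Note that a single reverse geocode only describes the point itself; covering a full 15 km radius would mean sampling several points or using a postal-code boundary dataset.

```typescript
// Sketch only: reverse-geocode a point and pull out postal codes via the
// Google Geocoding web service. YOUR_API_KEY is a placeholder.
async function postalCodesAt(lat: number, lng: number): Promise<string[]> {
  const url =
    `https://maps.googleapis.com/maps/api/geocode/json` +
    `?latlng=${lat},${lng}&result_type=postal_code&key=YOUR_API_KEY`;

  const response = await fetch(url);
  const data = await response.json();

  const codes = new Set<string>();
  for (const result of data.results ?? []) {
    for (const component of result.address_components ?? []) {
      if (component.types.includes("postal_code")) {
        codes.add(component.long_name);
      }
    }
  }
  return [...codes];
}

// Usage: postalCodesAt(51.5074, -0.1278).then(console.log);
```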
Is it possible to train Google Speech API with sample data to help the recognition in my application?
What I mean is an approach like the one provided by wit.ai and described here (even though the example applies to NLP processing). Basically, if you can predict the interactions your users will have with your bot, you can train it to perform better. For instance, I know the subset of cities that will be used; as an example, I cannot make the bot understand me when I say Zurich, it becomes Syria or Siberia, even though I already know those are not possible. So if I could, say, upload a list of preferred words to be tried first, and fall back to standard recognition if no match is found there, or some similar approach, I think it would achieve better results.
Any idea if this is possible, and how? I know those APIs are in the beta stage and subject to change, but I would still like to give it a try.
I can post a code sample of what I am currently doing, though so far it just sends audio and analyzes the result, so it is not really close to this problem.
In the recognition config you can specify how many alternatives to return with the maxAlternatives field (up to 30). Once you have 30 alternatives with confidence scores, you might get Syria with confidence 0.5, Siberia with confidence 0.01, and Zurich with confidence 0.1. The proper answer is usually present, although it might not be at the top. You can then select the best alternative according to your current state.
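As a rough sketch (assuming the v1 speech:recognize REST endpoint with an API-key placeholder, and a hypothetical list of expected cities), you could request many alternatives and pick the first one that matches something you actually expect:

```typescript
// Sketch: ask Speech-to-Text for up to 30 alternatives and pick the first
// one that matches a list of expected cities. YOUR_API_KEY and the base64
// audio payload are placeholders.
const expectedCities = ["Zurich", "Geneva", "Basel"];

async function recognizeCity(base64Audio: string): Promise<string | undefined> {
  const response = await fetch(
    "https://speech.googleapis.com/v1/speech:recognize?key=YOUR_API_KEY",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        config: {
          encoding: "LINEAR16",
          sampleRateHertz: 16000,
          languageCode: "en-US",
          maxAlternatives: 30, // return up to 30 candidate transcripts
        },
        audio: { content: base64Audio },
      }),
    }
  );
  const data = await response.json();

  // Alternatives are ordered by confidence; take the first one that matches
  // something we actually expect ("Zurich" beats "Syria").
  const alternatives = data.results?.[0]?.alternatives ?? [];
  const match = alternatives.find((alt: { transcript: string }) =>
    expectedCities.some((city) =>
      alt.transcript.toLowerCase().includes(city.toLowerCase())
    )
  );
  return match?.transcript;
}
```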
The current Google Cloud Speech-to-Text API allows the user to specify a list of words and phrases that provide hints to the speech recognition task.
From https://cloud.google.com/speech-to-text/docs/basics:
speechContext - (optional) contains additional contextual information for processing this audio. A context contains the following sub-field:
phrases - contains a list of words and phrases that provide hints to the speech recognition task.
For more details, see: https://cloud.google.com/speech-to-text/docs/basics#phrase-hints
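For example, a recognition config with phrase hints could look like the sketch below; the field names come from the documentation above, while the city list is only an illustration:

```typescript
// Sketch: a RecognitionConfig with phrase hints, so that names the bot
// expects are favoured during recognition.
const configWithHints = {
  encoding: "LINEAR16",
  sampleRateHertz: 16000,
  languageCode: "en-US",
  speechContexts: [
    {
      // words and phrases that are likely to be spoken in this app
      phrases: ["Zurich", "Geneva", "Basel", "Lausanne"],
    },
  ],
};
```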
I am trying to work through this example in Bing Maps v7 API:
Create Driving Route Example
What I would like to do is add a starting time to the example and have each direction show the time at which you should reach that point. For example: 08:00 Start at Airport; 08:15, 32 miles, Turn Right; 08:30, 64 miles, Finish at Hotel.
I have searched through the documentation but cannot find anything like this.
I have noticed that some steps in the directions get assigned times as well as distances. How is this controlled?
If this isn't possible, can you tell me where I can find the documentation as to how to format each direction and control what is shown?
The Directions module in the Bing Maps V7 control does not have a method to take in a future date/time for driving routes. However, the Bing Maps REST routing service does support this for driving routes. The REST services are documented here: https://msdn.microsoft.com/en-us/library/ff701713.aspx
You will want to use the dateTime parameter. The documentation says that this is required for transit, but doesn't highlight that it is also an option for driving. When set, predictive traffic data is used to approximate the travel time. It won't tell you what time you will arrive somewhere, but it will tell you how long it would take, which you can easily add to your start time.
If you want to use this with the JavaScript map control, information on how to use the REST routing service with Bing Maps V7 can be found here: https://msdn.microsoft.com/en-us/library/gg427607.aspx
The Bing Maps V8 map control was just released as a public preview a couple of weeks ago. The directions module in it will support the ability to provide future dates/times.
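As a sketch of the REST approach (the waypoints and key are placeholders, and the exact dateTime string format should be taken from the routing documentation linked above), you could add each step's travelDuration to your chosen start time to build the timed, cumulative-distance list:

```typescript
// Sketch only: request a driving route from the Bing Maps REST routing
// service with a departure dateTime, then add each step's travelDuration
// to the start time. YOUR_BING_MAPS_KEY is a placeholder.
async function timedDirections(from: string, to: string, departure: Date) {
  const url =
    `https://dev.virtualearth.net/REST/v1/Routes/Driving` +
    `?wp.0=${encodeURIComponent(from)}` +
    `&wp.1=${encodeURIComponent(to)}` +
    `&distanceUnit=mi` +
    `&dateTime=${encodeURIComponent(departure.toISOString())}` + // assumed format; see REST docs
    `&key=YOUR_BING_MAPS_KEY`;

  const data = await (await fetch(url)).json();
  const leg = data.resourceSets[0].resources[0].routeLegs[0];

  let clock = departure.getTime();
  let miles = 0;
  for (const item of leg.itineraryItems) {
    // Time at which this step is reached, cumulative distance, and instruction.
    console.log(
      `${new Date(clock).toLocaleTimeString()}  ${miles.toFixed(0)} mi  ` +
      `${item.instruction.text}`
    );
    clock += item.travelDuration * 1000; // travelDuration is in seconds
    miles += item.travelDistance;        // distance covered by this step
  }
}

// Usage (hypothetical waypoints):
// timedDirections("Seattle Airport", "Downtown Seattle Hotel", new Date("2016-05-01T08:00:00"));
```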
I'm in the concept development stage of an iOS app that is essentially a game. One of the things I want to do is to get information about the current location. I've not used the map kit so far, and after a quick read through various documentation, it looks like it is designed mainly as a display kit. What I'd like to do in addition to displaying a map is to query data that might be at the location. For example, if I provide latitude and longitude, I want to know whether that location represents land or water. If it's on land, how close is it to the nearest street? If it's not near a street, what other information might there be about the spot?
I realize there are vast amounts of data available that are geocoded, but is there any information that can be queried directly from the map kit? I would have thought things like elevation would be easily available, but I haven't seen anything like that yet. Am I just looking in the wrong place?
As far as I am aware, there is no data that you can query directly from MapKit; i.e. you cannot ask MapKit whether a location is on land or water.
You could use reverse geocoding with the current latitude/longitude to find out details about the location, for example the nearest street or town, or which country the location is in.
Check out the built-in Apple geocoding framework (CLGeocoder), or the Google Geocoding API.
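For illustration, a minimal sketch against the Google Geocoding web service (placeholder API key; the same HTTP request can be issued from an iOS app) that pulls out the nearest street and the country for a coordinate:

```typescript
// Sketch only: reverse-geocode a coordinate with the Google Geocoding web
// service to find the nearest street and the country. YOUR_API_KEY is a
// placeholder.
async function describeLocation(lat: number, lng: number) {
  const url =
    `https://maps.googleapis.com/maps/api/geocode/json` +
    `?latlng=${lat},${lng}&key=YOUR_API_KEY`;
  const data = await (await fetch(url)).json();

  const components = data.results?.[0]?.address_components ?? [];
  const street = components.find((c: { types: string[] }) =>
    c.types.includes("route"))?.long_name;
  const country = components.find((c: { types: string[] }) =>
    c.types.includes("country"))?.long_name;

  // Few or no street-level results is a rough hint (not a guarantee) that
  // the point is far from any road, e.g. open water or wilderness.
  return { street, country, resultCount: data.results?.length ?? 0 };
}

// Usage: describeLocation(47.6062, -122.3321).then(console.log);
```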
Hope this helps.
I am looking for some documents on how Google crawls and indexes content. I have read many "light" papers and articles on what you need to do to improve your ranking and make sure your content is properly indexed, but I am looking for some more advanced technical documents on how Google crawls and indexes content.
The things I would like to know more about:
What elements does Google look for when it crawls: page content, URL format, keywords, descriptions, etc.?
How is the index updated?
Basically, I am trying to understand why some pages are indexed but not others, even when their formats are similar. Why do only 10% of my site's pages appear when I do a search on the entire domain, even though I can see in my server logs that Google crawled every single link?
The answers to both questions are closely guarded trade secrets, ostensibly to prevent gaming the system.
Also keep in mind that Google makes over 400 algorithmic changes per year, making it close to impossible for an outsider to be accurate and up-to-date. Short of working for Google, you're likely not going to find an in-depth and accurate answer.
However, Matt Cutts, head of the web spam team, frequently provides the most accurate insights into how Google handles content, both on his blog and on the GoogleWebmasterHelp YouTube channel. It's worth going through his content to get a much better understanding of Google's methodology.
For a technical view of how a web crawler works, I suggest you take a deep look at the Apache Nutch project (nutch.apache.org).
A typical web crawler has the following components: a fetcher, a parser, an indexer, and a searcher. To put it briefly, a web crawler fetches all URLs available on a website and creates segments, where it stores up to 101 KB per page. Those pages are then parsed; common words such as "and", "or", and "the" are not stored, while the other words are analyzed using Bayesian calculations in order to produce a rank.
Search engine indexing collects, parses, and stores data to facilitate fast and accurate information retrieval. These tasks are mainly performed by storing a list of occurrences of each search term, typically in the form of a hash table or binary tree, using an inverted index.
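As a toy sketch of the inverted-index idea (not how Google actually stores its index), each term maps to the documents and positions where it occurs, and stop words are skipped:

```typescript
// Toy inverted index: for every term, store the documents (and positions)
// where it occurs, so lookups are fast.
type Posting = { docId: number; positions: number[] };

const STOP_WORDS = new Set(["and", "or", "the"]);
const index = new Map<string, Posting[]>();

function addDocument(docId: number, text: string): void {
  const words = text.toLowerCase().split(/\W+/).filter(Boolean);
  words.forEach((word, position) => {
    if (STOP_WORDS.has(word)) return; // stop words are not stored
    let postings = index.get(word);
    if (!postings) index.set(word, (postings = []));
    let posting = postings.find((p) => p.docId === docId);
    if (!posting) postings.push((posting = { docId, positions: [] }));
    posting.positions.push(position);
  });
}

function search(term: string): Posting[] {
  return index.get(term.toLowerCase()) ?? [];
}

// Usage:
// addDocument(1, "The crawler fetches pages and the indexer stores terms");
// search("crawler");  // -> [{ docId: 1, positions: [1] }]
```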
As Mark stated, Google's calculations are mainly trade secrets, but patents issued by Google could be a good start. PageRank (http://en.wikipedia.org/wiki/PageRank) mainly analyses backlinks and the importance that the websites pointing to your site have in people's preferences. In my experience it is important to offer an XML sitemap listing all the pages on your site; in that sitemap you can define the crawl frequency for each page. gsitecrawler.com/ is an interesting possibility.
Google Website Optimizer will give you the chance to see what Google is finding on your site. Logs are OK, but the robot probably runs into problems, and the best way to know about them is Google's tool, since it displays the errors.
Finally, most of your concerns are things that SEO specialists live for. I suggest you check sites like seomoz.com and their tools; you will learn how to position your website better in organic results on search engines.
Hope it helps! Sebastian.
"Yes" Google like fresh & unique content.
Follow the Google webmaster guidelines: use H1 and H2 heading tags in the HTML of your pages and put your business-related keywords in them and in your anchor text; this can help your site in search engines.
You can also use rich snippets markup alongside these tags.
Google scans your web page very precisely and sensitively. Factors such as whether your JavaScript is embedded or in a separate file, whether you use frames in your design, and whether you use heavy graphics can reduce the ranking of your page. Keywords are obviously rank-affecting entities, and broken links also bring your website's ranking down.
Basically, you can refer to http://www.tutorialspoint.com/seo/ to go through all the important points about Google's crawler. This will take a maximum of 40 minutes.
MapReduce: Simplified Data Processing on Large Clusters
I analysed the latest algorithm and found that Google now gives more importance to CONTENT than to LINKS.
So if your content is good enough, with the proper tags in place, Google will automatically index it for you. I would suggest using all of H1 through H6 in a sensible manner.
As a newbie programmer, I would like to know the benefits of using, for example, the Google Search API or the newer Buzz API for gathering data and content, instead of screen scraping; obviously apart from the legal aspects.
APIs are less likely to change than a screen layout.
One big downside of screen scraping is that the screen can change and break your scraper. So you end up having to continually adjust your code to match theirs, and since you don't know about changes ahead of time, you suffer downtime/outages as a result.
Also, you may be violating their TOS, and they won't like it. If you have paying customers for your service, you can find yourself between a rock and a hard place pretty quickly.
Also, if you're simulating many users, you'll produce an unanticipated drag on the servers. So using a published/permitted API would be much more efficient for you, and for the web site serving up the source material.