Google Places API - can I save place_id? [closed] - google-places-api

Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed last month.
We develop a platform for building travel itineraries.
The travel plan (= trip) is composed of places ordered in a user-defined flow.
We want to use the Google Places API for searching places. We would like to store a place_id and use it for retrieving trip info. The place_id will be used for fetching that place's details from Google.
A place_id will be saved for future use only if a user decides to include that place in his trip itinerary.
Is it permitted according to the terms of use?
Thanks!
Orit

Yes, you can.
Place IDs are exempt from the caching restrictions stated in Section 10.1.3 of the Google Maps APIs Terms of Service. You can therefore store place ID values indefinitely.
Referencing a Place with a Place ID
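Since place IDs may be stored indefinitely, a later lookup is just a Place Details request built from the saved ID. A minimal sketch of that pattern (the API key is a placeholder; the place_id below is the Sydney example from Google's own docs):

```python
from urllib.parse import urlencode

# Placeholders -- substitute your own stored place_id and a real API key.
PLACE_ID = "ChIJN1t_tDeuEmsRUsoyG83frY4"  # example place_id from Google's docs
API_KEY = "YOUR_API_KEY"

def place_details_url(place_id, api_key, fields="name,geometry,formatted_address"):
    """Build the Place Details web-service URL for a stored place_id."""
    base = "https://maps.googleapis.com/maps/api/place/details/json"
    return base + "?" + urlencode({"place_id": place_id, "fields": fields, "key": api_key})

url = place_details_url(PLACE_ID, API_KEY)
# Actually fetching requires network access and a valid key:
# import urllib.request, json
# details = json.load(urllib.request.urlopen(url))["result"]
```

Requesting only the fields you need (via the `fields` parameter) also keeps the billing category down.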

I am currently asking the exact same question to myself.
Reading through the Google Places API documentation (as you also did, I guess), I haven't found a clear, explicit answer to that question.
However, several parts of the documentation make me think that place_id can be saved and used later on to retrieve a place result.
In the "Place Details Results" section, it is said that the "id" property is deprecated and should be replaced by place_id. As "id" was said to be a stable identifier, I conclude that place_id is also a stable identifier.
Another interesting part is the one about the "alt_ids": it is said that a given place can see its place_id change over time: when the scope changes, a new place_id is attributed to the place. So I would say that:
a place_id is unique and stable for a given place and a given scope (APP|GOOGLE), as long as the place exists.
a given place will remain searchable using any of the place_ids previously attributed to it.
using an APP-scope place_id, there is no guarantee that the result sent in the response has the same place_id (this is not a problem, but it needs to be kept in mind from a development point of view).
At the end, unfortunately, I have no definitive answer. It is just my current assumptions. I hope somebody will help us with that question.
Regards
Philippe

Related

Gatsby + Shopify: query page from Shopify, is it possible? [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed last year.
I am trying to build a site from the Shopify API with Gatsby.
I manage to query information from products, but I can't find a way to query pages from Shopify. I can't find anything in GraphQL like allShopifyPages that would give me the description, title, or anything else about a page that has been created in Shopify. So my question is: is this possible, and if yes, how?
Below are examples of the page content I want to query in GraphQL.
I've opened a discussion on Gatsby, but nobody has answered in a few months.
https://github.com/gatsbyjs/gatsby/discussions/33394
If you want to get pages, then you have to query the right endpoints. If you examine the REST API, you'll find:
https://shopify.dev/api/admin-rest/2022-01/resources/page#top
That gives you exactly what you are looking for. I am not sure what the analog is for GraphQL; there might not be one at this time, but you are free to search.
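As a sketch of calling that Admin REST `pages` endpoint (the shop subdomain and access token below are placeholders, and the API version is the one from the linked docs):

```python
from urllib.request import Request

# Placeholders -- substitute your own store subdomain and access token.
SHOP = "your-store"
TOKEN = "shpat_example_token"
API_VERSION = "2022-01"  # version from the linked documentation

def pages_request(shop, token, api_version):
    """Build an authenticated request for all pages of a Shopify store."""
    url = f"https://{shop}.myshopify.com/admin/api/{api_version}/pages.json"
    return Request(url, headers={"X-Shopify-Access-Token": token})

req = pages_request(SHOP, TOKEN, API_VERSION)
# Sending it requires network access and real credentials:
# import json
# from urllib.request import urlopen
# pages = json.loads(urlopen(req).read())["pages"]
```

You could fetch the pages this way at build time and feed them into Gatsby's `createPages` yourself, bypassing the GraphQL layer entirely.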

Do Google SEO Content Keywords matter and should I remove sidebar content? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 6 years ago.
I run a Magento based store website.
At the side of every product page we have delivery information.
Because of this, Google Webmaster Tools picks up words such as 'delivery', 'orders', and 'returns' as significant keywords, rather than more relevant industry-specific keywords.
Does it matter that Google gives 'delivery' a higher significance rating?
Should I remove the delivery info from the side of each page?
Or is there a way to disavow keywords to tell Google that 'delivery' isn't relevant?
Or maybe turn the text info at the side into a graphic instead?
Many thanks!
Before SEO, you should always consider what is best for your user. If displaying shipping information in the sidebar is going to enhance the user's experience, leave it. If the information could be put on its own page and a link added to the sidebar, do that.
Having said that, I wouldn't worry about it. Unless you're trying to rank for the keywords 'return' or 'delivery', you're not likely to notice any sort of algorithm penalty that comes from having the words appear all over the website.
Furthermore, a keyword-stuffing penalty is applied to each page individually. You should still be careful about stuffing keywords into tags in the sidebar, as that increases the keyword density on every page of the site.

A generic algorithm for extracting product data from web pages [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 8 years ago.
Preface: this might seem to be a very beginner-level question, maybe stupid or ill-formulated. That's why I don't require a definitive answer, just a hint, a point I can start with.
I am thinking of a script that would allow me to parse product pages of different online retailers, such as Amazon, for instance. The following information is to be extracted from the product page:
product image
price
availability (in stock/out of stock)
The key point in the algorithm is that, once implemented, it should work for any retailer, for any product page. So it is pretty universal.
What techniques would allow implementation of such an algorithm? Is it even possible to write such a universal parser?
If the information on the product page is marked up in a structured, machine-readable way, e.g. using schema.org microdata, then you can just parse the page HTML into a DOM tree, traverse the tree to locate the microdata elements, and extract the data you want from them.
Unfortunately, many sites still don't use such structured data markup — they just present the information in a human-readable form, with no consideration given for machine parsing. In such cases, you'll need to customize your data extraction code for each site, so that it knows where the information you want is located on the page. Parsing the HTML and then working with the DOM is still often a good first step, but the rest will have to be site-specific (and may need to be updated whenever the site changes its design).
Of course, you can also try to come up with heuristic methods for locating relevant data, like, say, assuming that a number following a $ sign is probably a price. Of course, such methods are also likely to occasionally produce incorrect matches (like, say, mistaking the "$10" in "Order now and save $10!" for a price). You can adjust and refine your heuristics to be smarter about such things, but no matter how good you get at it, there will always be some new and unexpected cases that you haven't anticipated.
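A toy version of that price heuristic, including a context check to filter out false matches like "save $10" (the filter words and window size are arbitrary choices for illustration; a real extractor needs far more rules):

```python
import re

# Find "$<number>" tokens, then discard matches whose preceding text
# suggests they are not prices (e.g. "save $10", "10% off").
PRICE_RE = re.compile(r"\$(\d+(?:\.\d{2})?)")
NOT_A_PRICE = re.compile(r"\b(save|off|discount)\s*$", re.IGNORECASE)

def guess_prices(text):
    """Return numbers that look like prices, filtered by local context."""
    prices = []
    for m in PRICE_RE.finditer(text):
        context = text[max(0, m.start() - 20):m.start()]
        if NOT_A_PRICE.search(context):
            continue  # "$10" right after "save" is probably not the price
        prices.append(float(m.group(1)))
    return prices

sample = "Price: $24.99 -- Order now and save $10!"
print(guess_prices(sample))  # → [24.99]
```

This illustrates both halves of the argument above: the heuristic works on this sample, and it is easy to imagine pages where the same filter words would cause a miss.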

Ruby program to retrieve OpenStreetMap data using OSM API [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 9 years ago.
How can I retrieve data from OpenStreetMap (OSM) using the OSM API (http://wiki.openstreetmap.org/wiki/API) and Ruby? Is there any ruby gem available which serves my purpose? I have been searching for a good solution for my purpose but nothing served me exactly what I need.
For example: given a country name as input, I need to get the list of all streets in that country, etc.
Any kind of link/code sample or starting point is fine. I can then explore more to find out what I need exactly. Thanks!
As the question as posed is off topic for Stack Overflow, I will answer the question of "How to find something I can use" rather than give any kind of recommendation on a tool itself.
I am not familiar with any gems for OpenStreetMap.
So I do this command from the terminal:
gem list --remote | grep street
And my terminal answers me with this:
openstreetmap (0.2.1)
And then I pull up my trusty browser, and open up ruby-toolbox.org and search for openstreetmap.
This produces a page that shows 30 results. In there, I see the mentioned gem, but also I see Rosemary which seems promising, as it is an "OpenStreetMap API client for ruby" and it was last updated only 4 months ago.
So, hopefully this helps in future searches. You have a lot of tools available to get started on your search to get to the point you are asking for in this question, so that you can get down to the business of doing what you need.
The main API you want to use is not suitable for such queries. It is mainly for editing and retrieving small amounts of map data within a small region. For larger queries it is better to use the Overpass API, which is much faster and also allows very complex query conditions if needed.
The Overpass API uses XML as input and serves either XML or JSON as output format. So it should be rather easy to use in any common scripting language.
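As a sketch, here is the XML form of an Overpass query for named streets in a bounding box, built in Python (the bounding box is a small hypothetical area; querying a whole country this way would be far too large, so you would iterate over smaller regions):

```python
# Build an Overpass XML query for named highways in a bounding box.
# Overpass also accepts the terser Overpass QL syntax.

def overpass_streets_query(south, west, north, east):
    """Return an Overpass XML query selecting named ways tagged highway=*."""
    return f"""<osm-script output="json">
  <query type="way">
    <has-kv k="highway"/>
    <has-kv k="name"/>
    <bbox-query s="{south}" w="{west}" n="{north}" e="{east}"/>
  </query>
  <print mode="body"/>
</osm-script>"""

query = overpass_streets_query(48.85, 2.29, 48.87, 2.31)
# Sending it requires network access:
# import urllib.request, urllib.parse, json
# data = urllib.parse.urlencode({"data": query}).encode()
# result = json.load(urllib.request.urlopen(
#     "https://overpass-api.de/api/interpreter", data))
```

The same POST could of course be made from Ruby with Net::HTTP; the query string is the only Overpass-specific part.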

Auto-Completion [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 8 years ago.
How do Google or Amazon implement the auto-suggestion in their search boxes? I am looking for the algorithm along with the technology stack.
PS: I have searched over the net and found this and this and many many more. But I am more interested not in what they do but in how they do it. A NoSQL database to store the phrases? Is it sorted or hashed according to keywords? So to rephrase the question: given the list of different searches, ignoring personalization, geographic location, etc., how do they store, manage, and suggest it so well?
This comes under the domain of statistical language processing problems. Take a look at the spelling suggestion article by Norvig. Auto-completion will use a similar mechanism.
The idea is that from past searches, you know the probability of phrases (bigrams, trigrams, n-grams). For each candidate, the auto-complete selects the phrase having the maximum value of

P(phrase | word_typed) = P(word_typed | phrase) * P(phrase) / P(word_typed)

where P(phrase | word_typed) is the probability that phrase is the intended phrase given that word_typed is what has been typed so far.
Norvig's article is a very accessible and great explanation of this concept.
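A minimal version of that idea: rank candidate completions of a prefix by P(phrase), estimated from past-search counts (the corpus below is made up; a real system would also model P(word_typed | phrase) and serve candidates from a trie or similar index rather than a linear scan):

```python
from collections import Counter

# Hypothetical past-search log: phrase -> number of times it was searched.
past_searches = Counter({
    "how to tie a tie": 50,
    "how to train your dragon": 120,
    "how to cook rice": 80,
    "weather today": 200,
})

def suggest(prefix, k=4):
    """Return up to k past phrases starting with prefix, most probable first."""
    total = sum(past_searches.values())
    # P(phrase) estimated as count / total; the prefix filter plays the
    # role of P(word_typed | phrase) being 1 for matches and 0 otherwise.
    candidates = {p: c / total for p, c in past_searches.items()
                  if p.startswith(prefix)}
    return sorted(candidates, key=candidates.get, reverse=True)[:k]

print(suggest("how to"))
# → ['how to train your dragon', 'how to cook rice', 'how to tie a tie']
```

Scaling this up is mostly an indexing problem: the per-prefix top-k lists are typically precomputed and stored so that each keystroke is a single lookup.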
Google takes your input and returns the top 4 results according to rank IDs given to different keywords, which vary dynamically with hit and miss counts (if there are fewer results, the remaining parameters are returned as empty strings).
It then makes a search query and returns 4 fields with the URL, the title, and 2 more fields in JSON; the omnibox then populates the data using prepopulate functions in the Chrome trunk.