Where is the documentation for the Google Suggest API? [closed] - google-suggest

Is there any official documentation on the Google Suggest API?
All my searches for the Google Suggest API turn up pages with either outdated info or non-working scripts.
For example, at google.com, as soon as you type in "app", Google suggests Apple, Applebees, etc.

As you can imagine, it's changed.
The newer URL is now http://clients1.google.com/complete/search?hl=en&output=toolbar&q=YOURSEARCHTERM
Or even more recent:
http://suggestqueries.google.com/complete/search?output=toolbar&hl=en&q=YOURSEARCHTERM

Summary of working examples:
From this question, a working example:
http://suggestqueries.google.com/complete/search?output=toolbar&hl=en&q=theory
From this question, a working example:
http://suggestqueries.google.com/complete/search?output=firefox&q=theory
From mhawksey's comment above, a working example:
http://google.com/complete/search?client=chrome&q=theory
Here client=chrome can be changed to another browser client. For example, for Firefox it would be:
http://google.com/complete/search?client=firefox&q=theory
From mahoor13's comment above, a working example:
google.com/complete/search?output=toolbar&q=theory
From dhiraj-pandey's answer: "if you want country-specific suggestions, you need to add &gl= to the URL". That only works with the toolbar output!
So, for example, a working country-specific example for India would be:
google.com/complete/search?output=toolbar&q=theory&gl=in
To separate words, use %20 or + between them. For example:
http://suggestqueries.google.com/complete/search?output=toolbar&hl=en&q=a%20mykeyword
or
http://suggestqueries.google.com/complete/search?output=toolbar&hl=en&q=a+mykeyword
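
For anyone who wants to consume the toolbar endpoint from code, here is a minimal Go sketch. It assumes the undocumented XML response uses <CompleteSuggestion><suggestion data="..."/> elements, as the endpoint currently returns; since none of this is an official API, treat it as illustrative only.

package main

import (
	"encoding/xml"
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// topLevel mirrors the toolbar XML the endpoint above returns:
// <toplevel><CompleteSuggestion><suggestion data="..."/></CompleteSuggestion>...</toplevel>
type topLevel struct {
	Suggestions []struct {
		Suggestion struct {
			Data string `xml:"data,attr"`
		} `xml:"suggestion"`
	} `xml:"CompleteSuggestion"`
}

// suggest fetches suggestions for a search term and returns them as strings.
func suggest(term string) ([]string, error) {
	u := "http://suggestqueries.google.com/complete/search?output=toolbar&hl=en&q=" + url.QueryEscape(term)
	resp, err := http.Get(u)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	var parsed topLevel
	if err := xml.Unmarshal(body, &parsed); err != nil {
		return nil, err
	}

	var out []string
	for _, s := range parsed.Suggestions {
		out = append(out, s.Suggestion.Data)
	}
	return out, nil
}

func main() {
	suggestions, err := suggest("theory")
	if err != nil {
		panic(err)
	}
	for _, s := range suggestions {
		fmt.Println(s)
	}
}

Note that url.QueryEscape takes care of the %20/+ word separation mentioned above.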
Also, from here, it is possible to get two sets of suggestions with YQL (the first query is chuck norris, the second steven seagal):
select * from xml where url in (
'http://google.com/complete/search?output=toolbar&q=chuck+norris',
'http://google.com/complete/search?output=toolbar&q=steven+seagal'
)
Using the above code gives:
http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20xml%20where%20url%20in%20%28%27http%3A%2F%2Fgoogle.com%2Fcomplete%2Fsearch%3Foutput%3Dtoolbar%26q%3Dchuck%2Bnorris%27%2C%27http%3A%2F%2Fgoogle.com%2Fcomplete%2Fsearch%3Foutput%3Dtoolbar%26q%3Dsteven%2Bseagal%27%29&format=xml&diagnostics=false
Some info from Google about suggestions: http://www.google.com/support/enterprise/static/gsa/docs/admin/70/gsa_doc_set/xml_reference/query_suggestion.html

Try http://google.com/complete/search?output=json&q=YOURSEARCHEDTERM for JSON output, or http://google.com/complete/search?output=toolbar&q=YOURSEARCHEDTERM for XML output.
http://answers.oreilly.com/topic/1526-how-to-use-the-google-suggest-api-to-come-up-with-topics-for-answers/
I also found a very interesting tool that uses the Google Suggest API and is based on Python and Flask: ubersuggest. There is also keysuggest's Google "alphabet soup" method tool.

As Harvest316 said, you can use those URLs to get suggestions, but if you want country-specific suggestions, you need to add &gl= to the URL. For example, for India it would be:
http://suggestqueries.google.com/complete/search?output=toolbar&hl=en&q=YOURSEARCHTERM&gl=in

Hi, I'm the author of Übersuggest, the tool mentioned by JonnyPea. There is no official Google Suggest API: the URL that I and other people use is just something we have found by hacking around Google. Here are a couple of pieces of advice:
Have a look at my application's source code on Bitbucket (beware: I'm a hobbyist programmer, so my code could be improved a lot).
Do not call the API thousands of times from the same IP or you will be banned.
[UPDATE]
Sorry, the source code is no longer available.

There is a working API that pulls data from Google Suggest (along with YouTube, Bing and the App Store): http://keywordtool.io/api
Using this API, you wouldn't need to worry about the number of requests from the same IP, etc.
Google doesn't have an official API for sharing autocomplete data; moreover, it often hides keywords that appear in Google Suggest from Google Keyword Planner.
Note that this API requires a paid subscription starting at $280/month.

Related

How to develop RSS Feeder [closed]

I need to build an RSS feeder in Go, and I suspect I have not understood some key concepts, so I'm asking this question to clear them up.
Is there any standard for the number of most recent items in the XML file?
Does the RSS document need to be generated when requested? I mean, should the client always get the latest news?
Here is the Go part. I will use the https://github.com/gorilla/feeds library. It basically generates the RSS XML, but it does not provide a way to publish it.
Should I serve the RSS XML document from a REST endpoint? If I do, is that okay for RSS clients?
You may say that I should search the internet first, and I did. Most of the articles talk about parsing and fetching from an RSS feed.
Is there any standard for the number of most recent items in the XML file?
No, and it also varies between feeds. This makes sense, since some sites produce lots of new content and others only a little.
Does the RSS document need to be generated when requested? I mean, should the client always get the latest news?
That's completely up to the server. But in many cases it is likely more efficient if the server creates a static file whenever new content is added, instead of dynamically creating the same output again and again for each client. This also makes it easy to provide caching information (e.g. an ETag or similar) and let the client retrieve the full content only if it has changed.
Should I serve the RSS XML document from a REST endpoint? If I do, is that okay for RSS clients?
This does not really matter. The URL for the RSS feed can be anything you want, but you have to publish it so that RSS readers know where to get it.
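
To make this concrete, here is a minimal Go sketch that builds a feed with gorilla/feeds and serves it from a plain HTTP handler. The feed contents and the /feed.xml path are made up for illustration; a real feeder would pull the items from wherever the news is stored (and could just as well write the output to a static file).

package main

import (
	"log"
	"net/http"
	"time"

	"github.com/gorilla/feeds"
)

// buildFeed assembles an RSS document; the article data here is illustrative only.
func buildFeed() (string, error) {
	now := time.Now()
	feed := &feeds.Feed{
		Title:       "Example News",
		Link:        &feeds.Link{Href: "https://example.com/"},
		Description: "Latest articles from example.com",
		Created:     now,
	}
	feed.Items = []*feeds.Item{
		{
			Title:       "Hello RSS",
			Link:        &feeds.Link{Href: "https://example.com/hello-rss"},
			Description: "A first article",
			Created:     now,
		},
	}
	return feed.ToRss()
}

func main() {
	// Serve the feed from any URL you like; just publish that URL so readers can find it.
	http.HandleFunc("/feed.xml", func(w http.ResponseWriter, r *http.Request) {
		rss, err := buildFeed()
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/rss+xml; charset=utf-8")
		w.Write([]byte(rss))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}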

How to disable AMP caching from Google Search? [closed]

Some results on Google Search come with an AMP (Accelerated Mobile Pages) icon on their links, at least on mobile. As soon as you click on such a link, instead of loading the site, Google shows you a cached version of it.
I want to disable this behaviour in my results. I see at least two good reasons for it:
Sharing: it is a pain in the neck to have the huge Google URL in place of the shorter original one when sharing the link.
Security: when you access any site and see a URL other than the site you wanted to load, you should distrust it, even if it looks like Google (remember, you can get phished or even get caught in a trap hosted on gsites). Google should respect that instead of encouraging users to trust it just because the URL looks like Google's. Even worse if combined with the first reason, when you want to share the URL with a friend.
I have to remove the Google AMP prefix over and over. Is there no advanced search option or cookie that makes Google give the clean URL?
According to the AMP project FAQ you cannot:
By using the AMP format, content producers are making the content in AMP files available to be cached by third parties.
As a content producer, I dislike Google adding their own URL and branding around my content... From the consumer's perspective it looks like the content comes from Google. They say it is to improve speed, but you can see Google's intention behind this "free" technology.
A simple hack is to keep following the AMP guidelines for the speed they give the page, but violate one rule (like adding your own JavaScript that does nothing).
Once pages have a validation error, Google will not cache them.
By publishing AMP pages you let Google or any other AMP cache store and deliver your web page (which surprisingly seems to be legal):
Caching is a core part of the AMP ecosystem. Publishing a valid AMP document automatically opts it into cache delivery. (https://www.ampproject.org/docs/fundamentals/how_cached)
To stop AMP caching, the project recommends invalidating the format by removing the amp attribute from the <html> tag. I propose something else.
One thing I always disliked about AMP is that it requires you to embed the JavaScript code directly from their server (https://cdn.ampproject.org/v0.js), effectively telling AMP about every single visitor to every AMP page. Serving the code from your own server removes this privacy issue, disables caching, and still gives you the framework.
To do so you can build your own AMP framework using the source code:
https://github.com/ampproject/amphtml
But it's much simpler to just copy v0.js and all the scripts it fetches to your own server.
Odd, because Google says to remove the "amp" attribute from the tag to avoid caching.
It says nothing about loading the JS locally.
https://amp.dev/documentation/guides-and-tutorials/learn/amp-caches-and-cors/how_amp_pages_are_cached/
Is Google wrong?

Ajax / Deep linking and Google indexing / SEO - Is it a bad idea? [closed]

I'm about to embark on building a music oriented website for a friend's band and I want to build something like this template. It uses ajax and deep linking.
My worry is that this site will not be crawlable by Google. Is there anything I can do, or any code I can adjust, to make it crawlable?
Many thanks in advance!
That template doesn't look crawlable to me. Googlebot will never find your content. If I go to the page for the template and view source, then search for "Gigs schedule with filter", I can't find it in the page source. That is because that particular content is loaded with AJAX and not part of the page source.
That template does not use Google's crawlable AJAX scheme with #! in the URL (https://developers.google.com/webmasters/ajax-crawling/). Googlebot will not index the content on your site if you use that template.
Furthermore, there appear to be some URL issues. I see these two very similar URLs: http://radykal.de/themeforest/stylico/features.html and http://radykal.de/themeforest/stylico/?page=features.html. As a user, if I visit that second URL, I get the content but I don't see the navigation. It seems likely that if Googlebot were to find the content, it would index that second URL and use it as the landing page for your visitors. The missing navigation in that case would not be a good user experience, as users would not be able to navigate your site.

W3C validation for complete site [closed]

I am working on a project where I have to validate the complete site, which has around 150 pages, through W3C Markup Validation. Is there a way to check W3C Markup Validation of an entire website?
The W3C doesn't offer this on w3.org.
http://validator.w3.org/docs/help.html#faq-batchvalidation
But you can use this tool and check "Validate entire site" (w3.org also refers to this site!):
http://www.htmlhelp.com/tools/validator/
But there is a limit of 100 URLs per validation, and you will get this message when you reach it:
Batch validation is limited to 100 URLs at one time. The remaining URLs were not checked.
There is also a limit on the number of errors displayed for each URL.
The WDG offers two free solutions:
Validate entire site (select 'validate entire site')
Validate multiple URLs (batch)
You can run the validator yourself. As of 2018, the W3C uses v.Nu as its validator; the code is at https://github.com/validator/validator/releases/latest and usage instructions are at https://validator.github.io/validator/#usage
For example, the following command will run it on all html files under the public_html directory:
java -jar vnu.jar --skip-non-html public_html
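
If you prefer scripting the check over the web UI, the v.Nu checker also exposes a web service that accepts doc=<url> and out=json. Here is a minimal Go sketch that validates a list of pages and counts the errors; the page list is made up for illustration, and for a 150-page site you should point the endpoint at a locally running v.Nu instance rather than the public checker.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// result mirrors the JSON returned with out=json: a "messages" array whose
// entries have a "type" of "error", "info", etc.
type result struct {
	Messages []struct {
		Type    string `json:"type"`
		Message string `json:"message"`
	} `json:"messages"`
}

// check asks the checker to validate one page and returns the number of errors.
func check(endpoint, page string) (int, error) {
	resp, err := http.Get(endpoint + "?out=json&doc=" + url.QueryEscape(page))
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()

	var res result
	if err := json.NewDecoder(resp.Body).Decode(&res); err != nil {
		return 0, err
	}

	errors := 0
	for _, m := range res.Messages {
		if m.Type == "error" {
			errors++
		}
	}
	return errors, nil
}

func main() {
	// Use https://validator.w3.org/nu/ sparingly, or run v.Nu locally instead.
	endpoint := "https://validator.w3.org/nu/"
	pages := []string{"https://example.com/", "https://example.com/about"}
	for _, p := range pages {
		n, err := check(endpoint, p)
		if err != nil {
			fmt.Println(p, "request failed:", err)
			continue
		}
		fmt.Printf("%s: %d error(s)\n", p, n)
	}
}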
I use this tool, Bulk W3C HTML Validator, to validate my entire website:
http://www.bulkseotools.com/bulk-w3c-validator.php
This tool uses the W3C validator engine, and you can check 500 URLs at once.
I've used http://sitevalidator.com; I think it would be helpful to you.
I made this java app (Windows installer) in my spare time because I needed it at work:
https://gsoft.no/validator. It's free.
It uses either https://validator.w3.org/ or v.Nu running locally to validate an entire site.
It crawls a website and in the end makes a report with validator-links to all pages with warnings or errors. Because it crawls, all pages to be validated must be linked.
By running v.Nu locally you can validate an internal site (e.g. an intranet) which is not available online and therefore cannot be validated by online validators (unless you post the entire content of each page).

Hosting two sites within single Joomla cms [closed]

Is it possible to host multiple websites that all have one single/common CMS (Joomla)?
Thanks.
Joomla is a CMS for running a website. Joomla uses MySQL databases that just hold the information shown on the content pages at the front end. Used the way it is supposed to be used, you won't be able to run multiple sites on a single CMS.
You can't run two websites with different content on that single CMS, but you can create multiple front ends on one CMS. You could, for example, store your data using Joomla and display it at the front using your own code. This way you can have two interfaces/websites on one CMS, both running on the same data.
So, from what I read in your question, I think the answer is NO, unless you just want to apply another presentation to your data.
My own experience: I have used Joomla just to hold news articles that my webmaster adds. I used PHP to get those news articles out of the MySQL database so that I could apply my own presentation to the data.
I actually beg to differ with those people who were so quick to say "NO!!". As of Joomla 1.5.x there are some components that allow you to do just that, most of them commercial, but there is also http://www.janguo.de/lang-en/Downloads/func-finishdown/31/ which is free at the moment. As of Joomla 1.6.x, multiple sites will be integrated into Joomla.
If what you need is several domains that point to the same Joomla installation (and to the same content), the answer is YES (see S.Mark's answer).
If you want to use the same Joomla installation for two different websites (with different content), the answer is NO.
An alternative is to use some Joomla extension, such as:
http://extensions.joomla.org/extensions/core-enhancements/multiple-sites/5550
Yes, you can; we have done this before. What you need, though, is two databases. We have just written about running multiple Joomla websites on the same Joomla installation. I hope you'll find it useful...
With a CNAME record, you can mirror a website to two or more domains.
