When I check a URL in Google Webmaster Tools, I see this.
I have added the Google Places JavaScript API to my web page, but the calls are blocked when the page is fetched by Googlebot. How can I handle this?
Click through the "robots.txt" link, and see what it says.
I think you'll see:
User-agent: *
Allow: /maps/api/js?
Allow: /maps/api/js/DirectionsService.Route
Allow: /maps/api/js/DistanceMatrixService.GetDistanceMatrix
Allow: /maps/api/js/ElevationService.GetElevationForLine
Allow: /maps/api/js/GeocodeService.Search
Allow: /maps/api/js/KmlOverlayService.GetFeature
Allow: /maps/api/js/KmlOverlayService.GetOverlays
Allow: /maps/api/js/LayersService.GetFeature
Disallow: /
... which means that the /maps-api-v3/... paths you're trying are indeed disallowed.
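If you want to inspect that file from code rather than in the browser, here is a minimal Ruby sketch, assuming the file in question is the one at https://maps.googleapis.com/robots.txt (the host that serves the Maps/Places JavaScript API):
require "net/http"
require "uri"

# Fetch and print the robots.txt that governs the /maps/api/js paths,
# so you can see which endpoints are allowed or disallowed for crawlers.
puts Net::HTTP.get(URI("https://maps.googleapis.com/robots.txt"))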
Google released this blog post which says:
If you authorize download requests to the Drive API using the access
token in a query parameter, you will need to migrate your requests to
authenticate using an HTTP header instead. Starting January 1, 2020,
download calls to files.get, revisions.get and files.export endpoints
which authenticate using the access token in the query parameter will
no longer be supported, which means you’ll need to update your
authentication method.
and then says:
For file downloads, redirect to the webContentLink which will instruct
the browser to download the content. If the application wants to
display the file to the user, they can simply redirect to the
alternateLink in v2 or webViewLink in v3.
However, if we use webContentLink then we will hit the 100 MB virus-scan warning page mentioned here.
I can see that the migration has been delayed; however, sooner or later this will happen, and we want to future-proof the application.
How will we be able to download content without hitting the 100 MB virus-scan page after this change is implemented?
If you authorize download requests to the Drive API using the access token in a query parameter, you will need to migrate your requests to authenticate using an HTTP header instead.
Example using a query parameter:
GET https://www.googleapis.com/drive/v3/files/[FILEID]?access_token=[YOUR_ACCESS_TOKEN] HTTP/1.1
Accept: application/json
Example using a request header:
GET https://www.googleapis.com/drive/v3/files/[FILEID] HTTP/1.1
Authorization: Bearer [YOUR_ACCESS_TOKEN]
Accept: application/json
Assuming that you can use the HTTP header option, you should not have any issues with the download as mentioned. The download issues only come into play if you can't add the Authorization header, in which case I think you would need to go with the second option and export the files directly.
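As a rough illustration, here is a minimal Ruby sketch of a files.get download that authenticates with the Authorization header instead of the access_token query parameter (FILE_ID and ACCESS_TOKEN are placeholders; alt=media asks for the file content rather than its metadata):
require "net/http"
require "uri"

file_id      = "FILE_ID"        # placeholder
access_token = "ACCESS_TOKEN"   # placeholder

# Ask the Drive v3 files.get endpoint for the file content (alt=media).
uri = URI("https://www.googleapis.com/drive/v3/files/#{file_id}?alt=media")

request = Net::HTTP::Get.new(uri)
# The token goes in the Authorization header, not the query string.
request["Authorization"] = "Bearer #{access_token}"

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

if response.is_a?(Net::HTTPSuccess)
  File.binwrite("downloaded_file", response.body)
else
  puts "#{response.code}: #{response.body}"
end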
I am working on an Elixir Phoenix web project where I want to interact with Google's Indexing API.
Google uses OAuth 2.0 to authenticate API requests and actually has decent documentation on this.
But it only explains the process using one of the supported client libraries in Python, Java, PHP, or JS.
I would like to make the HTTP requests myself to retrieve that access token, but the request format (including headers and parameters) does not seem to be documented anywhere, and I cannot even figure it out from the libraries' source code.
I have tried requesting https://accounts.google.com/o/oauth2/token (and other plausible URLs) in Postman with the "OAuth 2.0" request type.
But it was all just guessing and trying, and my research did not help.
There are useful instructions, including HTTP/REST examples, at Using OAuth 2.0 for Web Server Applications. Each step has its individual parameters fully documented. Here are some useful excerpts.
Send the user to Google's OAuth 2.0 server. Example URL:
https://accounts.google.com/o/oauth2/v2/auth?
scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.metadata.readonly&
access_type=offline&
include_granted_scopes=true&
state=state_parameter_passthrough_value&
redirect_uri=http%3A%2F%2Foauth2.example.com%2Fcallback&
response_type=code&
client_id=client_id
Retrieve the authorization code (on your domain). Example:
https://oauth2.example.com/auth?code=4/P7q7W91a-oMsCeLvIaQm6bTrgtp7
Request access token. Example:
POST /oauth2/v4/token HTTP/1.1
Host: www.googleapis.com
Content-Type: application/x-www-form-urlencoded
code=4/P7q7W91a-oMsCeLvIaQm6bTrgtp7&
client_id=your_client_id&
client_secret=your_client_secret&
redirect_uri=https://oauth2.example.com/code&
grant_type=authorization_code
Use API. Example:
GET /drive/v2/files HTTP/1.1
Authorization: Bearer <access_token>
Host: www.googleapis.com
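As a sketch of what the "Request access token" step looks like outside of Postman, here is the same POST written with a plain HTTP client in Ruby; the request shape is the same whatever language you use, and every value below is a placeholder from the example above:
require "net/http"
require "uri"
require "json"

uri = URI("https://www.googleapis.com/oauth2/v4/token")

# Exchange the authorization code for tokens; all values are placeholders.
request = Net::HTTP::Post.new(uri)
request.set_form_data(
  "code"          => "4/P7q7W91a-oMsCeLvIaQm6bTrgtp7",
  "client_id"     => "your_client_id",
  "client_secret" => "your_client_secret",
  "redirect_uri"  => "https://oauth2.example.com/code",
  "grant_type"    => "authorization_code"
)

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

# On success the JSON body contains access_token, expires_in, token_type,
# and a refresh_token when access_type=offline was requested.
tokens = JSON.parse(response.body)
puts tokens["access_token"]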
Dear all,
I have invested quite some time in trying to post tweets to my own Twitter user account from my web site without using any ready-made solutions.
The web site has been given read/write access on app.twitter.com and I have regenerated all keys.
I am following the Twitter API instructions for the endpoint "POST /1.1/statuses/update.json".
I have double-checked all my objects and am still getting:
response =--- !ruby/object:Net::HTTPUnauthorized
http_version: '1.1'
code: '401'
message: Authorization Required
header:
connection:
- close
content-length:
- '89'
content-type:
- application/json; charset=utf-8
date:
- Wed, 18 Oct 2017 15:28:19 GMT
server:
- tsa_o
set-cookie:
- personalization_id="v1_pxDMvL5ZrViFDcn8AfFemw=="; Expires=Fri, 18 Oct 2019 15:28:19
UTC; Path=/; Domain=.twitter.com
- guest_id=v1%3A150834049962807489; Expires=Fri, 18 Oct 2019 15:28:19 UTC; Path=/;
Domain=.twitter.com
strict-transport-security:
- max-age=631138519
x-connection-hash:
- '05228a7a2026efc93a8a2d4b1a8c6460'
x-response-time:
- '142'
x-tsa-request-body-time:
- '1'
body: '{"errors":[{"code":32,"message":"Could not authenticate you."}]}'
read: true
uri:
decode_content: true
socket:
body_exist: true
I want to send a simple "Hello" from my web site to my Twitter account and have double-checked all the parts, which are presented below.
Also, the same logic is used to sign me in to my web site with my Twitter account, so I know the 3-legged authorization works properly.
For posting tweets with my Rails app, I have tried both 1) posting the tweet using my app's consumer and access token pairs without going through all the authorization steps, and 2) redirecting myself to Twitter to explicitly re-authorize my web site to post the tweet. Both scenarios lead to a 401 error. Everything works except the actual tweeting step.
Any help is very much appreciated. Please note that I am not interested in using a gem and that I have read the associated API documentation thoroughly.
Here are all the constituents of my POST request:
Parameter string:
include_entities=true&
oauth_consumer_key=Xffffffffffffffffffffffff&
oauth_nonce=1vGbvxCqsfGi47L7ecpRnwA33fEojFoy6J2hkRpa8&
oauth_signature_method=HMAC-SHA1&
oauth_timestamp=1508340584&
oauth_token=4444444444-GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG&
oauth_version=1.0&
status=Hello
Signature base string:
POST&
https%3A%2F%2Fapi.twitter.com%2F1.1%2Fstatuses%2Fupdate.json&
include_entities%3Dtrue%26
oauth_consumer_key%3DXffffffffffffffffffffffff%26
oauth_nonce%3D1vGbvxCqsfGi47L7ecpRnwA33fEojFoy6J2hkRpa8%26
oauth_signature_method%3DHMAC-SHA1%26
oauth_timestamp%3D1508340584%26
oauth_token%3D4444444444-GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG%26
oauth_version%3D1.0%26
status%3DHello
Signing key:
W6SzwsKSXwFpl8tb0UNJFoCTW6crf6p3JaS8GipJMErofZVLAA&ENxK6XHG8h2EI7dOeSL0fABJzqnzs7FhP6QirBbXvd0br
Signature:
0zx68mHx/SxhHkoRpaqZmO8iC2s=
Header string:
OAuth oauth_consumer_key="Xffffffffffffffffffffffff",
oauth_nonce="1vGbvxCqsfGi47L7ecpRnwA33fEojFoy6J2hkRpa8",
oauth_signature="0zx68mHx%2FSxhHkoRpaqZmO8iC2s%3D",
oauth_signature_method="HMAC-SHA1",
oauth_timestamp="1508340584",
oauth_token="4444444444-GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG",
oauth_version="1.0"
POST request:
request =--- !ruby/object:Net::HTTP::Post
method: POST
request_has_body: true
response_has_body: true
uri:
path: "/1.1/statuses/update.json?include_entities=true"
decode_content: true
header:
content-type:
- application/x-www-form-urlencoded
authorization:
- OAuth oauth_consumer_key="Xffffffffffffffffffffffff", oauth_nonce="1vGbvxCqsfGi47L7ecpRnwA33fEojFoy6J2hkRpa8",
oauth_signature="0zx68mHx%2FSxhHkoRpaqZmO8iC2s%3D", oauth_signature_method="HMAC-SHA1",
oauth_timestamp="1508340584", oauth_token="4444444444-GGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG",
oauth_version="1.0"
host:
- api.twitter.com
accept-encoding:
- gzip;q=1.0,deflate;q=0.6,identity;q=0.3
accept:
- "*/*"
user-agent:
- Ruby
body: '{"status":"Hello"}'
body_stream:
body_data:
I finally got it to work. Several issues existed, but not with the signatures or the authorization headers. Instead, the problem was the timestamp, which was not synchronized properly and was also not in GMT, which is what Twitter expects. I synchronized my system clock against time.google.com and that part was done. There was also an issue with the header, which needed sorting as well, contrary to Twitter's own docs, which talk about sorting only in the context of the signature base string. I found out that the extended header also needs sorting (extended because it contains the tweet itself, which is not part of the signature calculation). Once this was built in, posting the tweet was successful.
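For anyone reconstructing the same flow, here is a minimal Ruby sketch of the signature and header construction described in the question; all four credentials are placeholders, and the sketch only builds the Authorization header, not the full request:
require "openssl"
require "base64"
require "erb"
require "securerandom"

# RFC 3986 percent-encoding (spaces become %20, not +).
def percent_encode(value)
  ERB::Util.url_encode(value.to_s)
end

consumer_key    = "YOUR_CONSUMER_KEY"     # placeholder
consumer_secret = "YOUR_CONSUMER_SECRET"  # placeholder
access_token    = "YOUR_ACCESS_TOKEN"     # placeholder
token_secret    = "YOUR_TOKEN_SECRET"     # placeholder

url    = "https://api.twitter.com/1.1/statuses/update.json"
params = {
  "include_entities"       => "true",
  "oauth_consumer_key"     => consumer_key,
  "oauth_nonce"            => SecureRandom.hex(16),
  "oauth_signature_method" => "HMAC-SHA1",
  "oauth_timestamp"        => Time.now.utc.to_i.to_s,  # current UTC time
  "oauth_token"            => access_token,
  "oauth_version"          => "1.0",
  "status"                 => "Hello"
}

# Parameter string: percent-encoded keys/values, sorted, joined with '&'.
parameter_string = params
  .map { |k, v| [percent_encode(k), percent_encode(v)] }
  .sort
  .map { |k, v| "#{k}=#{v}" }
  .join("&")

# Signature base string and signing key, as laid out in the question.
base_string = ["POST", percent_encode(url), percent_encode(parameter_string)].join("&")
signing_key = "#{percent_encode(consumer_secret)}&#{percent_encode(token_secret)}"

signature = Base64.strict_encode64(
  OpenSSL::HMAC.digest(OpenSSL::Digest.new("SHA1"), signing_key, base_string)
)

# The oauth_* parameters plus the signature form the Authorization header.
header = "OAuth " + params
  .select { |k, _| k.start_with?("oauth_") }
  .merge("oauth_signature" => signature)
  .sort
  .map { |k, v| %(#{percent_encode(k)}="#{percent_encode(v)}") }
  .join(", ")

puts header
Note that oauth_timestamp has to be the current Unix time, which is why the clock synchronization described above mattered.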
I have been getting a lot of CPU spikes on my server recently, and I believe it's not real traffic, or at least some part of it isn't. So I want to allow only the Google, MSN, and Yahoo bots for now. Please tell me whether the following robots.txt file is correct for my requirement.
User-agent: Googlebot
User-agent: Slurp
User-agent: msnbot
User-agent: Mediapartners-Google*
User-agent: Googlebot-Image
User-agent: Yahoo-MMCrawler
Disallow:
User-agent: *
Disallow: /
Thanks.
Your robots.txt seems to be valid.
It is allowed to have several User-agent lines in a record.
An empty Disallow: allows crawling everything.
The record starting with User-agent: * only applies to bots not matched by the previous record.
Disallow: / forbids crawling anything.
But note: Only nice bots follow the rules in robots.txt -- and it’s likely that nice bots don’t overdo common crawling frequencies. So either you need to work on your performance, or not-so-nice bots are to blame.
That first Disallow: should probably be:
Allow: /
if you want to, in fact, allow all those user agents to index your site.
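Written out in full, that variant of the record would look like this (functionally equivalent to the empty Disallow: form, since both allow everything for the listed bots):
User-agent: Googlebot
User-agent: Slurp
User-agent: msnbot
User-agent: Mediapartners-Google*
User-agent: Googlebot-Image
User-agent: Yahoo-MMCrawler
Allow: /

User-agent: *
Disallow: /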
I need to upload HTML files to Google Docs with the Google Documents List API, but the server always responds with a "ServiceForbiddenException" error.
The header is:
POST /feeds/default/private/full HTTP/1.1
Host: docs.google.com
GData-Version: 3.0
Authorization: OAuth 1/VbdXxNS9HXN1Q3pe8D....
Content-Type: text/html; charset=UTF-8
Slug: test.html
Content-Length: 2109
....content.....
Any ideas?
Shouldn't that be an HTTPS URL? Can you show more of the URL that you are using?
I found the java/samples/DocumentList.java tutorial included in the gdata release very helpful in hunting down these things.
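To rule out a plain-HTTP problem, a minimal Ruby sketch of the same request sent explicitly over HTTPS could look like this (the token and file name are placeholders based on the question; nothing else about the request is changed):
require "net/http"
require "uri"

uri  = URI("https://docs.google.com/feeds/default/private/full")
html = File.read("test.html")   # the HTML file to upload

request = Net::HTTP::Post.new(uri.path)
request["GData-Version"] = "3.0"
request["Authorization"] = "OAuth YOUR_OAUTH_TOKEN"   # placeholder token
request["Content-Type"]  = "text/html; charset=UTF-8"
request["Slug"]          = "test.html"
request.body = html

response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end

puts response.code, response.body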