Fetch as Google: AJAX is blocked

I am new to Parse and I have a problem. I want to use Parse classes for dynamic content such as blog posts. Everything works as expected; but when I try to Fetch as Google in Google Webmaster Tools, it says the AJAX is blocked, so Google will not index this content.
When I follow the link, I see the following (screenshot of what the class URL returns).
So the Google crawler tries to fetch the AJAX content but receives a ConnectionFailed error (code 100). (I tested this by rendering whatever the Parse query error callback returns into a label on the page, so I can see exactly what Google renders.)
Am I doing something wrong, or is this expected behaviour?
Does anyone know how to solve this?
By the way: I am hosting this website on Heroku with a custom domain over HTTPS (DNS through Cloudflare with its free SSL).
I also deployed to Parse Cloud Hosting; unfortunately the result is the same :(
This is the full result of the Fetch as Google (screenshot of the full page result).

The page at https://api.parse.com/1/classes/GameScore is asking for authentication, and it returns a 401 Unauthorized status code for unauthenticated requests. That's already a problem.
Besides that, the page at https://api.parse.com/robots.txt is currently showing
User-Agent: *
Disallow: /
Googlebot can't access that page because it's disallowed for crawling in the first place, but even if it could access it, it would run into an authentication gate which it wouldn't be able to pass.
If the content from that URL (https://api.parse.com/1/classes/GameScore) is essential for the page where it's referenced/used, you would have to work with Parse to allow crawlers to access those URLs.
If it's not essential, then you can safely ignore that warning.
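You can reproduce what an unauthenticated crawler sees with a couple of lines of PHP. This is just a diagnostic sketch (no Parse credentials supplied, URLs as in the question):

    <?php
    // Status line an anonymous client (such as Googlebot) gets from the class URL
    $headers = get_headers('https://api.parse.com/1/classes/GameScore');
    echo $headers[0], "\n"; // expect something like "HTTP/1.1 401 Unauthorized"

    // The crawling rules Parse publishes
    echo file_get_contents('https://api.parse.com/robots.txt');
    // User-Agent: *
    // Disallow: /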

Related

GitHub Pages website is on HTTPS but REST API is on HTTP

I have a GitHub Pages website which makes requests to a hosted server that is HTTP-only, and my browser blocks them.
It's an assignment for university, so I don't really want to pay for HTTPS on the server, and I can't use something else for the front-end, as I have already sent the URL to my professor expecting that this is where my web app will be hosted.
Is there anything I can do which doesn't involve paying that much money?
I encountered this same issue when making API calls. Here's an easy workaround:
1. Get your API endpoint (for this example, let's say http://api.catphotos.io/).
2. Prepend the CORS API link https://cors-anywhere.herokuapp.com/ to your link. Make sure to leave the http:// on your endpoint.
3. Your new link should look like this: https://cors-anywhere.herokuapp.com/http://api.catphotos.io/
4. Make your API calls with your new link!
Source: johnalcher.me
According to the GitHub Pages help docs:
HTTPS enforcement is required for GitHub Pages sites created after June 15, 2016 and using a github.io domain.
That means you can't disable the https:// redirect. You could use a custom domain which then leaves the https:// stuff up to you.
Or you could move the stuff from the other server to GitHub Pages, assuming it's not too big.

Facebook sharer.php 500 error

At the moment I am attempting to share a link on Facebook without the use of JavaScript or a Facebook app id.
Previously I could have a hyperlink to: https://www.facebook.com/sharer/sharer.php?u=urlhere and Facebook would scrape for og:tags and allow me to share a site.
At the moment I'm encountering a 500 error when I attempt to submit a link that has not previously been crawled by Facebook.
How to reproduce the bug
1. Find a new link that you're certain hasn't been crawled by FB. Good examples of this are tweets.
2. Attempt to visit https://www.facebook.com/sharer/sharer.php and paste in your link.
3. Submit the form and see the post preview.
4. Attempt to submit the post.
If you've used a fresh URL there is a good chance you'll get a 500 error similar to: POST https://www.facebook.com/ajax/sharer/submit_page/ 500 (Internal Server Error)
If you refresh the page and attempt to submit the same URL again, it will post successfully.
Once the link has been crawled by Facebook, it works without problems
Anyone having similar problems with this method of sharing?
This is a valid Facebook bug, see https://developers.facebook.com/bugs/795945327148024/.

Google Maps Places service is giving REQUEST_DENIED

I am using the Google Places API for place suggestions.
https://maps.googleapis.com/maps/api/place/textsearch/json?query=ari&sensor=false&key=your_api_key
I have a valid API key, and this URL works fine when I execute it from the browser.
The API returns "OK" as the status along with place suggestions, but when I execute the same URL via cURL or file_get_contents, it returns "REQUEST_DENIED" as the status, and hence no place suggestions.
Why is it behaving like this?
Is there any setting I am missing?
Any suggestion would be a great help.
Thanks
Did you ever get your answer to this? As far as I am aware, this is due to "cross-site scripting" security limits. You can't call the Places API directly from your page even though you can from a browser's address bar. You have to make the call back to your server and have the server send the call to Google, then return those results back to your page/website.
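In other words, your page calls a script on your own server, and that script forwards the request to Google. A minimal sketch in PHP, assuming a hypothetical places_proxy.php endpoint and the same textsearch URL from the question (the key now stays on the server):

    <?php
    // places_proxy.php -- hypothetical endpoint your page calls via AJAX
    $query = isset($_GET['query']) ? $_GET['query'] : '';

    $url = 'https://maps.googleapis.com/maps/api/place/textsearch/json'
         . '?query=' . urlencode($query)
         . '&sensor=false'
         . '&key=your_api_key'; // substitute your real key; it never leaves the server

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    $body = curl_exec($ch);
    if ($body === false) {
        // Transport-level failure (DNS, SSL, ...) -- log it rather than guessing
        error_log('Places request failed: ' . curl_error($ch));
        $body = '{"status":"UNKNOWN_ERROR","results":[]}';
    }
    curl_close($ch);

    header('Content-Type: application/json');
    echo $body;

Your front-end then requests places_proxy.php?query=ari and receives the same JSON you saw in the browser test.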

Magento sending HTTP 500 instead of HTTP 404 in the header response

I am trying to optimise Magento for SEO. I have it included in Google Webmaster Tools, and I have noticed that there are a lot of pages labelled within GWT as server errors; that is, mostly HTTP 500 errors.
So I did some looking into it, and the URLs that Google says are HTTP 500 errors... well, I don't think they are supposed to be. The reason I say this is that Magento is rendering the "Whoops..." 404.phtml template, but Google is right: the header sent is HTTP 500.
How do I go about debugging this? Any ideas?
Many thanks for your help!
Ben
Try visiting the offending pages with a browser that can emulate Googlebot headers (e.g. Firefox with a User Agent switcher). This will confirm that the site is giving you the same response that Google is seeing. You will also need to enable developer mode and error reporting, which you may want to do conditionally based on your IP; this can all be done in index.php (see the sketch below).
Ensure that you are reviewing the server logs.
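For the conditional part, here is a sketch of what could go into a Magento 1 index.php right after Mage.php is required (the IP address is a placeholder for your own):

    // in index.php, after: require_once $mageFilename;
    $developerIps = array('203.0.113.10'); // placeholder: your own IP

    if (in_array($_SERVER['REMOTE_ADDR'], $developerIps)) {
        Mage::setIsDeveloperMode(true); // show real exceptions instead of the error page
        ini_set('display_errors', 1);   // surface the PHP error behind the 500
    }

With that in place, requesting one of the offending URLs from your own IP should reveal the exception that is turning the 404 page into a 500.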

Google Checkout callback can't seem to reach HTTPS server

I am trying to implement Google Checkout (GCO) on a new server; the process seemed to work fine on the old server.
The error from the GCO integration console is the timeout error you might expect if there is load on the server and/or the response takes longer than 3 seconds.
To perform a test (not integrating with my database), I set up some code to send an email to me instead. If I hit the HTTPS URL manually, I get the email and I can see output on the screen. If I leave it at that, Google still returns the timeout error and I don't get an email. So I have doubts as to whether Google is even able to hit the HTTPS URL.
I did temporarily attempt to use the insecure URL for testing, and indeed I received the email; however, this solution isn't the route we've developed for, so the problem is something to do with the secure URL specifically.
I have looked into the certificate, which is a UTN-USERFirst-Hardware certificate listed as accepted on http://checkout.google.com/support/sell/bin/answer.py?answer=57856. I have also tried temporarily disabling the firewall, with no joy. Does anyone have any suggestions?
Good to hear you figured out the problem.
I'm adding the links below to give a little more context for future readers about how Google Checkout uses HTTP Basic Authentication:
http://code.google.com/apis/checkout/developer/Google_Checkout_XML_API.html#urls_for_posting
http://code.google.com/apis/checkout/developer/Google_Checkout_XML_API.html#https_auth_scheme
http://code.google.com/apis/checkout/developer/Google_Checkout_HTML_API_Notification_API.html#Receiving_and_Processing_Notifications
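For context on what that authentication looks like from the merchant side: Google calls your HTTPS callback with HTTP Basic Auth, using your Merchant ID as the username and your Merchant Key as the password, and it expects a fast response (hence the 3-second timeout behaviour described above). A rough sketch of a receiving endpoint in PHP, with placeholder credentials and a hypothetical log path:

    <?php
    // Hypothetical notification endpoint, e.g. https://example.com/gco_callback.php
    $merchantId  = 'YOUR_MERCHANT_ID';  // placeholder
    $merchantKey = 'YOUR_MERCHANT_KEY'; // placeholder

    // Google authenticates itself with Basic Auth: Merchant ID / Merchant Key
    if (!isset($_SERVER['PHP_AUTH_USER'], $_SERVER['PHP_AUTH_PW'])
        || $_SERVER['PHP_AUTH_USER'] !== $merchantId
        || $_SERVER['PHP_AUTH_PW'] !== $merchantKey) {
        header('WWW-Authenticate: Basic realm="GCO"');
        header('HTTP/1.0 401 Unauthorized');
        exit;
    }

    // Read the XML notification and queue it; do the slow work (DB, email) later
    $notification = file_get_contents('php://input');
    file_put_contents('/tmp/gco-notifications.log', $notification . "\n", FILE_APPEND);

    header('HTTP/1.0 200 OK');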
