Facebook og:image gives different URL + wrong image

I've been having this problem for some days now and have spent two hours debugging a simple meta tag without result.
The problem I'm facing is that the Facebook debugger shows a completely different CloudFront URL and picks the wrong meta image.
The URL
The image it should show
I also filed a bug report on facebook.com: https://developers.facebook.com/bugs/807177749469671/

The object debugger from Facebook for your given URL mentions that some redirects are occurring on the page, and it also mentions where these redirects come from. It can either be an HTTP redirect, a meta tag with the og:url property, or a link tag. Your page does contain one of these tags, on line 91 to be specific:
<link rel="dns-prefetch" href="//yummygum.com">
This redirect is why Facebook is trying to load an image named "meta-img.png", since that is the og:image the homepage refers to. Try removing that link tag and see if it loads the right image.

Related

FB OpenGraph og:image is not pulling images even if URL is rendering correctly

FB OpenGraph og:image is not pulling images even if the URL is rendering correctly. But the next time we share the same URL, it picks up the correct image. To add to this, we have added multiple og:image tags (four tags, to be specific) to ensure that one of the images is picked up. It picks the last og:image tag, but if we share the same URL again, the correct image is shown, i.e. the first og:image tag. Please suggest.
For the multiple og:image meta tags we are using both http and https image URLs.
You can trigger an update to the scraped information by hitting
POST /?id={object-instance-id or object-url}&scrape=true
if the content was changed. You can also use the URL debugger to see what Facebook scrapes for your URL.
See
https://developers.facebook.com/docs/sharing/opengraph/using-objects#update
https://developers.facebook.com/tools/debug/
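As a minimal sketch of that call (the access token below is a placeholder; the Graph API requires a valid app or user token for this request), the re-scrape URL can be built like this:

```python
from urllib.parse import urlencode

GRAPH_ENDPOINT = "https://graph.facebook.com"

def build_rescrape_request(object_url, access_token):
    """Build the URL to POST when asking Facebook to re-scrape a page.

    access_token is a placeholder here; the real call needs a valid
    app or user token.
    """
    params = urlencode({
        "id": object_url,           # the page whose Open Graph data changed
        "scrape": "true",           # force a fresh scrape instead of the cache
        "access_token": access_token,
    })
    return f"{GRAPH_ENDPOINT}/?{params}"

url = build_rescrape_request("http://example.com/page", "YOUR_TOKEN")
# POST this URL (e.g. with urllib.request or curl) to refresh Facebook's cache.
```

Sending the actual POST is left out so the sketch stays network-free.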

Why is my AJAX content not being indexed by Google

I have tried to set my site up (http://www.diablo3values.com) according to the guidelines set out here: https://developers.google.com/webmasters/ajax-crawling/. However, it appears that Google has updated its indexes (because I see the revisions to the meta description tags), but the AJAX content does not show up in the index.
I am trying to use the “Handle pages without hash fragments” option.
If you view either of the following:
http://www.diablo3values.com/?_escaped_fragment_=
http://www.diablo3values.com/about?_escaped_fragment_=
you will correctly see the HTML snapshot with my content. (Those are the two pages I am most concerned about.)
Any ideas? Am I doing something wrong? How do you get Google to correctly recognize the tag?
I'm typing this as an answer, since it got a little too long to be a comment.
First of all, your links seem to point to localhost:8080/about, and not /about, which is probably why Google doesn't index it in the first place.
Second, here's my experience with pushstate urls and Google AJAX crawling:
My experience is that AJAX crawling with pushState URLs is handled a little differently by Google than with hashbang URLs. Since Google won't know that your URL is a pushState URL (it looks just like a regular URL), you need to add <meta name="fragment" content="!"> to all your pages, not only the "root" page. Google also doesn't seem to know that the pages are part of the same application, so it treats every page as a separate AJAX application. So the Googlebot will never actually create a navigation structure inside _escaped_fragment_, like _escaped_fragment_=/about, as it would with a hashbang URL (#!/about). Instead, it will request /about?_escaped_fragment_= (which you apparently already have set up). This goes for all your "deep links": instead of /?_escaped_fragment_=/thelink, Google will always request /thelink?_escaped_fragment_=.
But as I said initially, the reason it doesn't work for you is probably because you have localhost:8080 URLs in the HTML generated for _escaped_fragment_.
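To make the pushState mapping concrete, here's a small hypothetical helper (not part of any Google tooling) that produces the URL Googlebot requests for a page that opts in with the fragment meta tag:

```python
from urllib.parse import urlsplit, urlunsplit

def escaped_fragment_url(pushstate_url):
    """Return the URL Googlebot fetches for a pushState page that
    carries <meta name="fragment" content="!">."""
    parts = urlsplit(pushstate_url)
    # Googlebot appends an empty _escaped_fragment_ query parameter.
    query = (parts.query + "&" if parts.query else "") + "_escaped_fragment_="
    return urlunsplit((parts.scheme, parts.netloc, parts.path, query, ""))

escaped_fragment_url("http://www.diablo3values.com/about")
# -> "http://www.diablo3values.com/about?_escaped_fragment_="
```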
Googlebot only knows to crawl the escaped fragment if your URLs conform to the hashbang standard. As users navigate your site, your URLs need to be:
http://www.diablo3values.com/
http://www.diablo3values.com/#!contact
http://www.diablo3values.com/#!about
Googlebot actually needs to see these URLs in the source code so that it can follow them. Then it knows to download the following URLs:
http://www.diablo3values.com/?_escaped_fragment_=contact
http://www.diablo3values.com/?_escaped_fragment_=about
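The hashbang-to-crawlable rewrite described above can be sketched as a small helper (hypothetical, for illustration only):

```python
def crawlable_url(hashbang_url):
    """Translate a #! URL into the _escaped_fragment_ URL that
    Googlebot downloads in its place."""
    if "#!" not in hashbang_url:
        return hashbang_url  # no hashbang: crawled as a normal URL
    base, fragment = hashbang_url.split("#!", 1)
    return f"{base}?_escaped_fragment_={fragment}"

crawlable_url("http://www.diablo3values.com/#!about")
# -> "http://www.diablo3values.com/?_escaped_fragment_=about"
```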
On your site you appear to be loading a new page on each click, and then also loading the content of each page via AJAX. This is not how I would expect an AJAX site to work. Usually the point of using AJAX is that the user never has to load a whole new page: when the user clicks, the new content section is loaded and inserted into the page. You serve the navigation once, and then you only serve escaped fragments of the content.

Facebook linter reports og:image is too small, when it is larger than the image it chooses instead

Problem:
The linter reports that the specified og:image is too small. The image is 628x464.
The linter instead picks a random image from the page which is 380x214, smaller than the og:image!
What the linter shows me:
http://developers.facebook.com/tools/debug/og/object?q=futuremark.com
Background:
We have been happily using 130x110 og:images without problems for the last nine months. I noticed in the last couple of weeks that pages were no longer sharing the correct image. Using the linter, it seems that Facebook recently decided og:images should be at least 200x200, so I have been replacing our og:images with larger versions, but the linter still says they are too small.
Any ideas how I can fix this, or is it a Facebook problem? Thanks.
My guess is that Facebook does not find tags for the height and width and considers them null. In my case, the following tags fixed the issue:
<meta property="og:image:type" content="image/jpeg" />
<meta property="og:image:width" content="1280" />
<meta property="og:image:height" content="855" />
Did you change how big the image file at http://www.futuremark.com/images/facebook/futuremark-logo.png is without changing the URL specified in the og:image meta tag?
The image itself will be cached if the URL didn't change, so you need to change the URL (or add a cache-busting parameter like ?v=1 to the end).
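A cache-busting helper might look like this (a sketch; the parameter name `v` is arbitrary, since any change to the URL makes Facebook treat it as a new image):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def bust_image_cache(image_url, version):
    """Append (or update) a version query parameter so Facebook sees a
    new og:image URL and re-fetches the file."""
    parts = urlsplit(image_url)
    query = dict(parse_qsl(parts.query))
    query["v"] = str(version)  # bump this whenever the image changes
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(query), parts.fragment))

bust_image_cache("http://www.futuremark.com/images/facebook/futuremark-logo.png", 2)
# -> "http://www.futuremark.com/images/facebook/futuremark-logo.png?v=2"
```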
I ran into this same issue; for me the problem was that the URL defined in og:image didn't match the URL being checked.
For example, my og:image tag had
<meta property="og:image" content="http://www.soundfuse.co.uk/public/images/logo_300px.png"/>
And the URL I was actually checking against was
http://soundfuse.co.uk
Notice the missing www. in the domain? This caused a 301 redirect from soundfuse.co.uk to www.soundfuse.co.uk, but once I matched both primary URLs it worked as expected.
This issue is also triggered if you're enforcing no trailing slashes with .htaccess [301]. Facebook infers this slash if no og:url is present.
There is a useful workaround for this issue: if you use a URL shortener to create a new URL, the image seems to load without error.
For example, paste your YouTube URL into bitly.com's URL shortener, then paste the shortened URL into Facebook. The thumbnail image will then be displayed as intended.
Please see the link below regarding the Facebook Open Graph image cache problem:
https://developers.facebook.com/docs/sharing/webmasters/?locale=en_US#images
It says:
"URL for the image. To update an image after it's been published, use a new URL for the new image. Images are cached based on the URL and won't be updated unless the URL changes."
So, to sum up: if you already have an Open Graph image that is cached by Facebook and you would like to update it, you should change the URL.

og:image content is recognized as a valid URL but is unable to retrieve image

We recently moved servers and I've been having this problem since.
I tried parsing this url1 in the Facebook Debug tool, and the thumbnail retrieved using the content of the og:image tag displays fine, but it won't display when I 'like' the story and it appears in my profile. This was a story posted before switching servers.
Now when I try parsing this url2 in the Facebook Debug tool, the thumbnail is not retrieved, and as in the previous case the thumbnail won't display in my profile when I like the story. This is a story posted after switching servers.
The funny thing is that when I repeat these actions, the thumbnail does sometimes pop up.
How does Facebook retrieve the image from my server? Does the problem have something to do with how this is done?
At one point, I thought FB would not show a thumbnail on a second post, or something. It seemed to show the thumbnail again if I changed the URL a bit (put some fake parameters in the URL).
YMMV
Remove every og meta tag from the page, run it through the Facebook debugger tool, and when it shows nothing, put the correct og meta tags back and debug again with the Facebook debugger tool.
This is a weird problem with Facebook: it sometimes caches the old, incorrect values.

Ajax generated pages with different URLs

I couldn't really word the title very well, but here's my problem: I've got a webpage that reads from a database each time the user clicks a button; the content of part of the page is then replaced.
Because it is an AJAX load, everything is done in the background, and so the URL stays the same. This wasn't a problem at all until I realised that I will want a different Facebook comments box for each set of content that is loaded, so that if someone comments, it is posted to their Facebook profile, and people who click on the link are taken to the corresponding content.
So... what I need is some way of referencing each set of content, and I've found a site that does exactly that (I'm sure there are a lot of them).
Here's the link.
Each set of content has a different 'hash code' (I don't know the actual name for it) appended to the URL. In this case the code is "#1922934"; this allows people to post links to that specific set of content on Facebook etc., and also allows a different Facebook comment box for each set of content.
Does anyone know how such a set-up can be achieved or how these 'hash codes' work?
Here's the Wikipedia article on it:
http://en.wikipedia.org/wiki/Fragment_identifier
The main idea is that URI fragments are used because they don't cause a page reload. They can also be used to refer to anchors on a web page.
What I would do is, on page load, use JavaScript to read the URI fragment (location.hash), then make a request to your server to load the comments etc. The URI fragment cannot be read by the server; it is only available to the client (browser).
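For illustration, the parsing that the client-side script performs on location.hash can be sketched in Python (the fragment never reaches the server, so in practice this logic must live in the browser):

```python
from urllib.parse import urlsplit

def content_id_from_link(url):
    """Extract the fragment identifier (the part after '#'), i.e. what
    a browser script would read from location.hash (minus the '#')."""
    return urlsplit(url).fragment

content_id_from_link("http://example.com/page#1922934")
# -> "1922934"
```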
Sounds like you want something like SammyJS.
