So this is the error message I see on product pages next to the like button.
There were problems uploading
"http://www.palmercash.com/images/xxxxxxx.jpg" on behalf of your Open
Graph page. Here is the full error message our photo server reported:
"Error Processing File: Unable to process this photo. Please check
your photo's format and try again. We support these photo formats:
JPG, GIF, PNG, and TIFF."
The Like button works fine, but the image doesn't show on Facebook even though the URL is correct.
I've run the linter and I only get a warning about og:url,
but I've looked at other websites using the exact same code and their photos show fine.
Here is an example URL
http://www.palmercash.com/p-4440-mens-the-onion-mlk-t-shirt.aspx
I have checked the IIS 6.0 logs and it appears the Facebook bots come and pull the images fine, as there is no error message there. I'm just wondering what could cause this to happen. I'm at a loss right now.
This is because the image you are using as og:image is always returned by your server gzip-compressed, without respecting the Accept-Encoding HTTP header (even if an Accept-Encoding: identity header is passed with the request, your server still responds with Content-Encoding: gzip and compresses the body).
Facebook's crawler probably doesn't pass this header and tries to use the response as an image directly, without decompressing it first.
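You can verify this yourself by requesting the image with Accept-Encoding: identity and checking the response headers. A minimal sketch in Python (using the requests library; the image URL is the placeholder from the error message above):

import requests

url = "http://www.palmercash.com/images/xxxxxxx.jpg"  # placeholder URL from the error message

# Explicitly ask for an uncompressed response.
resp = requests.get(url, headers={"Accept-Encoding": "identity"})

# If the server honoured the request header, Content-Encoding should be absent
# (or "identity"); seeing "gzip" here means the server compressed it anyway.
print(resp.headers.get("Content-Encoding"))
print(resp.headers.get("Content-Type"))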
I feel like I should be able to figure this out, but I can't.
This image attempts to download (in the browsers I tested: Safari and Chrome):
https://d3i71xaburhd42.cloudfront.net/9470b0dc3daccafa53ebe8f54d5bfed00afce2ce/29-Figure13-1.png
while this one (and most other images) automatically appears in the browser:
https://i.imgur.com/TvqM9Gp.png
These two images are completely arbitrary examples; there are obviously many more one could point to. In my experience most images display automatically in the browser, but occasionally one downloads to the 'Downloads' folder or another familiar location instead, and I'm not quite sure why that is.
What causes some images to download while others are automatically opened by and are viewable in the browser?
This is usually caused by one of two HTTP response headers.
Content-Disposition
This header tells the browser whether the content of the response should be displayed inline (the default value) or treated as an attachment. The latter causes the browser to download the response.
Content-Type
This tells the browser what kind of content the response contains. Depending on the content type, the browser knows how the response should be handled. For example, text/html will cause the browser to treat the response as HTML and render it as such, text/plain will cause the response to be displayed as a simple text file, image/jpeg will cause the response to be displayed as an image, and binary/octet-stream will tell the browser, "this is binary data", which generally causes the browser to download the file. The list of MIME types goes on and on.
If an image is downloaded instead of displayed in the browser and it doesn't have a Content-Disposition response header set to attachment, it usually means that the Content-Type isn't set correctly. For the first image you provided, the Content-Type is set to binary/octet-stream, so the browser will not treat it like an image.
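You can confirm this by inspecting the response headers yourself. A minimal sketch in Python (using the requests library), purely to illustrate what to look for:

import requests

urls = [
    "https://d3i71xaburhd42.cloudfront.net/9470b0dc3daccafa53ebe8f54d5bfed00afce2ce/29-Figure13-1.png",
    "https://i.imgur.com/TvqM9Gp.png",
]

for url in urls:
    resp = requests.head(url, allow_redirects=True)
    # An image/* Content-Type with no "attachment" disposition is displayed inline;
    # binary/octet-stream or Content-Disposition: attachment triggers a download.
    print(url)
    print("  Content-Type:       ", resp.headers.get("Content-Type"))
    print("  Content-Disposition:", resp.headers.get("Content-Disposition"))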
In short, I'm using CKEditor with the Upload Image plugin (http://ckeditor.com/addon/uploadimage). The URL is configured properly (/services/api/ticket/3/upload), and when an image is dragged and dropped, the file is uploaded. My server handles the upload and then sends the response:
{uploaded:1,fileName:"steve.jpg",url:"/attachment/20.aspx"}
Which matches what's required by the documentation (http://docs.ckeditor.com/#!/guide/dev_file_upload).
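For reference, the handler only has to return JSON in that shape. A rough sketch of the idea (a Flask-style handler purely for illustration, not my actual server code; the storage path below is made up):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/services/api/ticket/3/upload", methods=["POST"])
def upload():
    f = request.files["upload"]            # CKEditor posts the file in a field named "upload"
    f.save("attachments/" + f.filename)    # hypothetical storage location
    return jsonify(uploaded=1, fileName=f.filename, url="/attachment/20.aspx")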
So, at the completion of the upload, a green message bar shows, saying 'File uploaded successfully!', but the image is a small black square - there's no subsequent request for the image URL. Now, as I was working on the server side, a few times I WAS able to get CKEditor to re-request the given URL and display the image, but when I got the final server-side code into place, it stopped doing that, and I'm not sure what I would have changed to stop CKEditor from re-requesting the file.
So I was wondering if maybe a response header is incorrect, or if I'm missing something in the return data. I can post the request/response headers if needed.
Thanks in advance, y'all.
--Mike
Well, I tracked it down.
Apparently the 'LoopIndex Track Changes' addon, when enabled, prevents CKEditor from re-loading the image and displaying it properly.
(Would have just deleted the question, but then realized that someone else may run into this, so..)
I have a page which outputs a PDF file to the browser, and sets the following headers:
Content-Type: application/pdf
Content-Disposition: inline; filename="myFile.pdf"
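For reference, the headers are produced roughly like this (a minimal Flask sketch purely for illustration, not the actual page code; the route name is made up):

from flask import Flask, send_file

app = Flask(__name__)

@app.route("/my-file")
def pdf():
    # send_file sets Content-Type: application/pdf from the mimetype;
    # the Content-Disposition header is then set to serve the file inline.
    resp = send_file("myFile.pdf", mimetype="application/pdf")
    resp.headers["Content-Disposition"] = 'inline; filename="myFile.pdf"'
    return resp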
So, the file should be viewed in the browser rather than downloading. This works as expected in Chrome for desktop, except that the "Save" button in the bottom right corner doesn't do anything.
Additionally, when opening on a mobile device (where the file is automatically downloaded), the download fails because the file is <Untitled> - despite the presence of filename="myFile.pdf" in the headers.
I thought this was a header issue, but have narrowed it down to the fact the page is under HTTPS. If I open the page under HTTP then everything works as expected (files saved successfully) on both desktop and mobile.
So, is there any way to get this working under HTTPS?
It seems that this issue is due to the website being hosted under an invalid SSL certificate.
See more here - Android 2.2 and 2.3 PDF download via HTTPS seems broken
When I use the Object Debugger, the scraper is not able to see my OG content on the page. The debugger says "Can't download: Could not retrieve data from URL.", even though it's a 200 OK and it shows the correct fetched and canonical URLs. I have a subdomain on the site, and it works fine, so I'm not sure what's happening to my main domain.
When I click on 'Scraped URL: See exactly what our scraper sees for your URL', it just shows a blank page.
Your site seems to have some HTML errors: http://validator.w3.org/check?uri=http%3A%2F%2Fspandooly.de
You should fix them before attempting to validate your site.
Funny thing: I created a copy of your page, and it seems to validate with no changes to the HTML. Your web server might be doing something weird (according to the headers, the charset is missing or none):
http://developers.facebook.com/tools/debug/og/object?q=http%3A%2F%2Fwww.webniraj.com%2Fspandooly.html
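A quick way to see what the scraper receives is to fetch the page yourself and look at the Content-Type header. A minimal sketch in Python (requests), purely for illustration:

import requests

# Fetch the page the way a crawler would and inspect the charset.
resp = requests.get("http://spandooly.de",
                    headers={"User-Agent": "facebookexternalhit/1.1"})

# A well-formed header looks like "text/html; charset=utf-8";
# a missing or "none" charset can make the scraper choke on the page.
print(resp.status_code)
print(resp.headers.get("Content-Type"))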
I'm making an application for Facebook with JavaScript, but I don't know of a method to turn my application's screen into a .jpg file.
So I would like to know how to capture my application's screen and post it.
Thank you for your help.
You cannot get the screenshot done client-side; however, you can grab the HTML code of the page being viewed, AJAX it up to your server, and have your server component transform that HTML into an image.
Use this to get the HTML content of the page at the moment they want the screen capture: document.getElementsByTagName('html')[0].innerHTML;
AJAX the HTML to your server
Have your server transform that HTML into an image (depending on the server-side technology you're using, there are solutions for this, e.g. http://www.converthtmltoimage.com/)
You have two choices: store the image on your server as its permanent location and send back the new URL for the image, or send the content back to the client.
Have the client HTTP POST the image content to Facebook for the post, or reference the URL.
It's a big project, but I commend you for tackling something like this.
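To make the server-side step concrete, here is a rough sketch of what such an endpoint could look like (Python with Flask and imgkit, a wrapper around wkhtmltoimage, chosen purely for illustration; the route and field names are made up, and any HTML-to-image tool would do):

import uuid
from flask import Flask, request, jsonify
import imgkit  # wrapper around wkhtmltoimage, which must be installed separately

app = Flask(__name__)

@app.route("/screenshot", methods=["POST"])
def screenshot():
    html = request.form["html"]                # the innerHTML sent up via AJAX
    filename = "shots/%s.png" % uuid.uuid4()   # hypothetical permanent storage path
    imgkit.from_string(html, filename)         # render the posted HTML to an image
    return jsonify(url="/" + filename)         # URL the client can then post to Facebook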