Images uploading, never being requested once finished

In short: I'm using CKEditor with the Upload Image plugin (http://ckeditor.com/addon/uploadimage). The upload URL is configured properly (/services/api/ticket/3/upload), and when an image is dragged and dropped, the file is uploaded. My server handles the upload and then sends the response:
{uploaded:1,fileName:"steve.jpg",url:"/attachment/20.aspx"}
which matches what's required by the documentation (http://docs.ckeditor.com/#!/guide/dev_file_upload).
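(As an aside, the body above uses unquoted keys; a strictly valid JSON version, which any parser will accept, is sketched below using the same example values from the question.)

```javascript
// A strictly valid JSON body for the CKEditor Upload Image plugin,
// built from the example values above. Some JSON parsers reject
// unquoted keys, so serving the quoted form is the safer choice.
const response = JSON.stringify({
  uploaded: 1,
  fileName: "steve.jpg",
  url: "/attachment/20.aspx"
});
console.log(response);
```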
So, at the completion of the upload, a green message bar shows, saying 'File uploaded successfully!', but the image is a small black square; there is no subsequent request for the image URL. Now, while I was working on the server side, a few times I WAS able to get CKEditor to re-request the given URL and display the image, but when I got the final server-side code into place, that stopped, and I'm not sure what I changed that would stop CKEditor from re-requesting the file.
So I was wondering if maybe a response header is incorrect, or if I am missing something in the return data. I can post request/response headers if needed.
Thanks in advance, y'all.
--Mike

Well, tracked it down.
Apparently the 'LoopIndex Track Changes' addon, when enabled, prevents CKEditor from re-loading the image and displaying it properly.
(Would have just deleted the question, but then realized that someone else may run into this, so..)

Related

Receiving old PDF when I upload a new one to the server

I made a piece of software that uploads a PDF to a different server from the one I am on (both servers are mine). Server2 receives the PDF that I send, does the calculations, and returns the result to server1 (which I display in the browser). I also display this PDF (the one I send to server2) in an iframe, to compare the result with what is in the body of that PDF.
When I upload a new PDF and go to the screen where the answer is, the old PDF is still showing. It takes some seconds to update to the new PDF, so this is probably Chrome caching the old PDF.
I don't want this caching to happen.
I tried many ways to prevent this, without any success:
1) I tried to delete the old PDF from the server, then put the new one in place and retrieve that new one.
2) I tried to put some random numbers after the request (it is an AJAX request to the second server), to force it to fetch a fresh resource.
3) I tried to put headers on send and on receive, inside the AJAX call, in the PHP running on server1 that sends to server2, and tried to resolve it on the Python side of server2...
NONE of them worked for me. I already looked on the internet for solutions; none of them worked.
I really think it's something related to the cache inside Chrome. So, how can I delete the cache that the browser has, in order to see the correct PDF?
The URL that I send is:
http://server/dir1/dir2/" + pdfName + "-.pdf
EDIT: when I take the URL of the PDF on server2, the one inside the iframe, and press Ctrl+F5 or Ctrl+Shift+R, the PDF is updated to the new one.
If I press Ctrl+F5 or Ctrl+Shift+R on the screen where the iframe is, the old PDF is still there.
EDIT2: I looked at the Network tab in the Chrome console and found that the request is served from disk cache. When I try to make the AJAX request to the server, it shows the message "Provisional headers are shown".
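For what it's worth, the cache-buster idea from item 2 usually does work when it is applied to the iframe's src as well, since the iframe load is a separate request from the AJAX call. A minimal sketch (the element id is hypothetical; the URL shape is the one from the question):

```javascript
// Append a unique query string so the browser treats each load as a
// distinct resource and cannot answer it from the disk cache.
function bustCache(url) {
  const sep = url.includes("?") ? "&" : "?";
  return url + sep + "t=" + Date.now();
}

// Usage sketch in the browser ("pdfFrame" is an invented id; pdfName
// is whatever the server-side code supplies):
// document.getElementById("pdfFrame").src =
//     bustCache("http://server/dir1/dir2/" + pdfName + "-.pdf");
```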

"Do you want to view only the webpage content that was delivered securely?" erroneously displayed

I have a site where all content is secured; everything is loaded using https. I have verified this using Fiddler2, the built-in debugger, and the DebugBar plugin. Nothing is loaded using http. Nonetheless, I am still getting the "Do you want to view only the webpage content that was delivered securely?" prompt when I try to load the page in IE8. My users are complaining and I don't have a clue how to fix this. They are not computer administrators and cannot change the security policy for IE on their machines.
I figured out the problem and figured I'd post it here in case anyone else ever comes across this issue. The problem is that IE8 was treating the CSS background property with a relative URL as unsecure. So I had something like this:
.SomeRule
{
background: url('/SomeFolder/SomeImage.png') 95% 50% no-repeat;
}
and I had to change it to this to make the warning go away:
.SomeRule
{
background: url('https://www.SomeSite.com/SomeFolder/SomeImage.png') 95% 50% no-repeat;
}
I had a similar problem with a WordPress site where I recently added SSL. Obviously, something was being loaded with HTTP protocol, but what?
First, I checked the obvious:
I checked embedded page and post images for fully qualified paths using the http protocol.
Then I checked links relative to the root, as #datadamnation suggested in his solution.
Next, I looked in my CSS to see if a background image URL used the http protocol.
I checked my plugins and my plugins' CSS.
I checked the content in the sidebar widgets.
I checked the images loaded in the carousel slider.
Finally, I checked the theme's header image. When I looked at it using Firebug, I could see that it was still using http. To correct it, I had to remove the WordPress header image, then add it back again and save. Refresh the page, and the mixed content warning message is gone! It would have saved me a couple of hours of trial and error if I had done this first, so maybe you'll read this and save yourself some time.
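Checking each of those places by hand is slow; a small helper run in any modern browser's console can do the scan in one pass (IE8 itself lacks the Performance API, but the same page can be inspected from another browser):

```javascript
// Return the subset of URLs that would trigger a mixed-content warning
// on an https page, i.e. anything loaded over plain http.
function findInsecureUrls(urls) {
  return urls.filter((u) => u.toLowerCase().startsWith("http://"));
}

// In a browser console on the affected page, feed it the list of
// resources the page actually loaded:
// findInsecureUrls(performance.getEntriesByType("resource").map(e => e.name));
```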

How can I capture a 404 response for a missing image in Coldfusion?

I'm using a custom-written auto uploader to import images from users to Amazon S3. I'm building up a parallel image library in my database, so I know which images I can access on S3 and don't waste any HTTP requests.
However, my uploader sometimes throws errors (e.g. a missing source image) and, although I'm validating, I sometimes end up with entries in my media table that have no matching image on S3.
To correct these, I'm thinking of creating a cfthread/cfschedule task which clears the faulty entries from my image database. What I'm not sure about is how to capture the 404 responses. Right now I have this on a page:
<img src="#variables.imageSrc#" alt="#some alt#" title="#some title#" class="ui-li-thumb ui-corner-all" />
which tries to load the image and returns a 404 if not successful.
Question
How would I capture this 404? I don't want to put anything in the markup, so I assume this should go in onRequestEnd or another ColdFusion event monitored in my Application.cfc. If so, is there an easy way to identify image requests? I would not want to run a big routine on every application request.
Thanks for insights!
EDIT:
I don't think running isDefined() on every image before displaying it is feasible, because it would mean a double request to S3 and there are a lot of images. I want to catch the 404 and then clean up my database, so that next time the image will not be accessed anymore.
If you don't want to use cfhttp and test each image as Matt suggested, why not trigger an AJAX call from the browser using the onError handler of the img tag? Something like this, but instead of showing a custom graphic, trigger your AJAX call to set your flag, or maybe even delete the image, since it all happens invisibly to the user: jQuery/JavaScript to replace broken images
More info on the image onerror event.
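A sketch of that onError approach, assuming a hypothetical cleanup endpoint (/media/reportMissing.cfm is an invented name) that flags or deletes the database row:

```javascript
// Report a broken image src so the server can clean up its media table.
// The `send` parameter abstracts the transport (fetch, jQuery.post, ...)
// so the reporting logic itself stays testable.
// "/media/reportMissing.cfm" is a hypothetical endpoint name.
function reportBrokenImage(src, send) {
  return send("/media/reportMissing.cfm", { src: src });
}

// Browser wiring sketch: fire the report whenever a thumbnail 404s.
// document.querySelectorAll("img.ui-li-thumb").forEach((img) => {
//   img.onerror = () => reportBrokenImage(img.src, (url, data) =>
//     fetch(url, { method: "POST", body: JSON.stringify(data) }));
// });
```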
I'm curious to see how you would use isDefined() on an image. From the code you posted, isDefined("variables.imageSrc") wouldn't help you much.
If you're running a scheduled task to do this, why not use cfhttp to perform a GET request on the image asset? You can then check the status code in the response to validate the file existence on the server, and then update the database accordingly.

How are images handled by the browser, and how can I save them without reloading?

Just to be sure: say you load a page that has 3 images. The first refers to "/images/1.jpg", the second to "/images/2.jpg", and the third to "/images/1.jpg" again. When the page is sent to the browser, will the browser make a new request to the server for each image? And if an image has already been requested (as in my example, which uses the same image twice), will the browser request it again, or will it know that this image/URL has already been loaded and just retrieve it from the temporary cache?
Which leads to my second question: is there a way, with JavaScript/jQuery, to save this image to the computer from the cache (with the download box opening as if you were downloading a file), without having to request it again from the server?
I don't know if I am being really clear, but in short: I want to save an image on the page from the browser cache, not request a download from the server.
Browsers generally cache what they can, according to what the HTTP response headers say. That is, servers ultimately control what browsers can (or should) cache, so it's server configuration that usually controls such things.
This applies not only to images but all content: HTML pages, CSS, JavaScript, etc.
It all depends on how the server sends the image the first time (with or without caching headers).
If you have caching enabled on your browser, the browser will usually check your cache before requesting the file from the server.
The browser should take care of it; it won't continually re-request the same file.
Typically, the browser will see that two images have the same source and therefore only download it once.
However, if the same image is requested again later, the browser will send an If-Modified-Since header to the server. The server can then respond with 304 Not Modified, at which point the browser uses the local copy and the "download" is instant.
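The server side of that exchange reduces to a date comparison. A minimal sketch of the decision (the function name is illustrative, not any particular framework's API):

```javascript
// Decide whether a conditional GET can be answered with 304 Not Modified:
// true when the client's If-Modified-Since date is at least as new as the
// resource's last modification time.
function canSend304(ifModifiedSince, lastModified) {
  if (!ifModifiedSince) return false; // unconditional request: send the body
  const clientDate = Date.parse(ifModifiedSince);
  const resourceDate = Date.parse(lastModified);
  // An unparseable date means the condition cannot be evaluated safely.
  if (Number.isNaN(clientDate) || Number.isNaN(resourceDate)) return false;
  return resourceDate <= clientDate;
}
```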

Facebook Like button can't upload images. Error processing file

So this is the error message I see on product pages next to the like button.
There were problems uploading
"http://www.palmercash.com/images/xxxxxxx.jpg" on behalf of your Open
Graph page. Here is the full error message our photo server reported:
"Error Processing File: Unable to process this photo. Please check
your photo's format and try again. We support these photo formats:
JPG, GIF, PNG, and TIFF."
The Like button works fine, but the image doesn't show on Facebook even though the URL is correct.
I've run the page through the linter and I just have a warning about og:url,
but I've looked at other websites using the exact same code and their photos show fine.
Here is an example URL
http://www.palmercash.com/p-4440-mens-the-onion-mlk-t-shirt.aspx
I have checked the IIS 6.0 logs and it appears the Facebook bots come and pull the images fine, as there is no error message there. I'm just wondering what could cause this to happen. I'm at a loss right now.
This is because the image you are using as og:image is always returned by your server gzip-compressed, without respecting the Accept-Encoding HTTP header (even if an Accept-Encoding: identity header is passed with the request, your server still responds with Content-Encoding: gzip and applies compression).
Facebook's crawler probably doesn't handle this and tries to use the response body as an image directly, without decompressing it first.
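One way to confirm this diagnosis is to request the image with Accept-Encoding: identity and inspect the response headers; a compliant server must not answer with Content-Encoding: gzip. A sketch of the header check (the live fetch is left commented out because it needs network access):

```javascript
// True when a response is gzip-encoded even though the client asked for
// the identity encoding -- the misconfiguration described above.
// Header names are assumed to be lowercased, as the Fetch API returns them.
function isForcedGzip(responseHeaders) {
  const enc = (responseHeaders["content-encoding"] || "").toLowerCase();
  return enc.includes("gzip");
}

// Live check sketch (Node 18+ or a browser):
// const res = await fetch("http://www.palmercash.com/images/xxxxxxx.jpg",
//     { headers: { "Accept-Encoding": "identity" } });
// console.log(isForcedGzip(Object.fromEntries(res.headers)));
```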
