We need to display ~40 images on a page and prevent users from hotlinking those images. We are currently using <img src="..."> pointing to a handler that checks cgi.http_referer and serves the image using cfcontent. However, some images fail to load (~6 out of 40), and if I refresh the page, a different set of images fails to load.
The problem only seems to appear when I display more than 10 images. I suspect this is because I'm using cfcontent? If so, what should I use instead?
To find out exactly why those images are failing, you'll need to do a little more work. Use something like Firebug in Firefox, or the console in Safari or Chrome, to see what's happening with the requests that fail. You can also use a proxy like Fiddler on Windows, or Charles on Mac, Windows, or Linux, to capture the full HTTP requests happening in the background, along with the full responses from your ColdFusion app server. Until you know exactly why they're failing, nobody can offer a real solution.
The other thing to remember is that if you do this via ColdFusion, every page load hits your CF server with 40 more requests. So one page results in 41 hits to your CF server for processing. Make sure that handler code is as tight as it can possibly be.
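For what it's worth, a referer-checking image handler doesn't need to be more than a few lines. Here's a rough sketch of the idea in PHP, since I don't have your CF code in front of me (the allowed domain, file paths, and the f parameter are all made up; a cfcontent-based handler would follow the same shape):

    <?php
    // img.php?f=photo1.jpg - minimal referer-checked image handler (sketch)
    $allowed = 'mysite.com';  // hypothetical: your own domain
    $referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
    $host    = $referer !== '' ? (string) parse_url($referer, PHP_URL_HOST) : '';

    // Refuse requests whose referer is present but points somewhere else
    if ($referer !== '' && strpos($host, $allowed) === false) {
        header('HTTP/1.1 403 Forbidden');
        exit;
    }

    // Serve only bare file names from one fixed directory - no path tricks
    $file = '/var/www/protected/' . basename(isset($_GET['f']) ? $_GET['f'] : '');
    if (!is_file($file)) {
        header('HTTP/1.1 404 Not Found');
        exit;
    }

    header('Content-Type: image/jpeg');
    header('Content-Length: ' . filesize($file));
    readfile($file);  // stream the bytes straight out, nothing else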
If I were going to go this route, I'd do it at the server level (IIS or Apache) using some sort of server-level filter to prevent the hotlinking. But just remember that there will always be a way around it.
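In Apache, for instance, the usual mod_rewrite approach looks something like this (example.com stands in for your own domain):

    RewriteEngine On
    # Let empty referers through (direct visits; some proxies strip the header)
    RewriteCond %{HTTP_REFERER} !^$
    # Anything whose referer is not your own site...
    RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
    # ...gets a 403 when it asks for an image
    RewriteRule \.(gif|jpe?g|png)$ - [F,NC]

This takes the per-image load off your CF server entirely, since the web server rejects the hotlinked requests before they ever reach the application.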
So, there's this blog I visit often, and it has a lot of pictures.
But the thing is, they are all broken. The image files clearly exist, and I found that if I access the blog through a proxy server, some pictures show up. Some are still broken, so then I have to right-click the broken image, copy the image URL, open the URL in a different tab/window, and once that picture finishes loading, I refresh the blog and the picture I loaded in the other window now shows up....
Does anyone know why this happens, and how I can fix it? Thanks.
In case you don't understand what I mean, this is the blog where it happens:
http://wersierre.tumblr.com/
This applies to Firefox, Chrome, and IE in my case.
It is very likely that the target server is configured to prevent hotlinking. When the images are accessed via the page you posted, their HTTP response code is 403, which indicates the server can be reached and understood the request, but refuses to fulfill it.
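You can confirm this kind of setup with curl by requesting an image with and without a Referer header (both URLs below are placeholders):

    # No Referer header - the server refuses the image
    curl -I http://media.example.com/photo.jpg
    # -> HTTP/1.1 403 Forbidden

    # Referer set to the hosting page - the image is served
    curl -I -e "http://blog.example.com/" http://media.example.com/photo.jpg
    # -> HTTP/1.1 200 OK

That pattern also explains your proxy/extra-tab workaround: loading the image in its own tab, or through a different route, can change the Referer the server sees.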
I have a Magento website and I have been noticing an increase in warnings from Catchpoint that various images, CSS files, and javascript files are taking longer than usual to load. We use Edgecast for our CDN and have all images, CSS, and JS files hosted there. I have been in contact with them and they determined that the delays happen when the cache for the resource has expired and it must contact the origin for an updated file. The problem is that I can't figure out why it would take longer than a second to return a small image file. If I load the offending image off our server (not from the CDN) in my browser it always returns quickly. I assume that if you call up an image file directly using the full URL to the image file (say a product image, for example), that would bypass any Magento logic or database access and simply return the image to you. This should happen quickly, and it normally does, but sometimes it doesn't.
We have a number of things in play that may have an effect. There are API calls to the server for various integrations, though they are directed at a secondary server and not the web frontend. We may also have a large number of stale images since Magento doesn't delete any images even if you replace them or delete the product.
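(For reference, the same CDN-versus-origin comparison can be scripted from the command line; the URLs below are placeholders for one of our product images.)

    # Time the same image from the CDN and straight from the origin
    curl -o /dev/null -s -w "first byte: %{time_starttransfer}s  total: %{time_total}s\n" \
        http://cdn.example.com/media/catalog/product/sample.jpg
    curl -o /dev/null -s -w "first byte: %{time_starttransfer}s  total: %{time_total}s\n" \
        http://www.example.com/media/catalog/product/sample.jpg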
I realize this is a fairly open-ended question, and I'm sorry if it breaks SO protocol, but I'm grasping at straws here. If anyone has any ideas on where to look, or what could cause small resource files like images to take upwards of 8 seconds to load, I'm all ears. As an eCommerce site, we're getting close to peak season, and I can feel the hot breath of management on my neck. Any help would be greatly appreciated.
Thanks!
Turns out we had stumbled upon some problems with the CDN that they were somewhat aware of but not quick to admit. They made some changes to our account to work around the issues and things are much better now.
We have deployed an MVC 3 website on an IIS6 box.
Everything runs fine, but the performance is abysmal.
Can anyone help me understand
why am I getting 20 second response times to get a script bundle?
why bundled scripts are not cached by IE even if the Expires header is set?
The site is several times faster in Chrome (I have noticed the cache behaviour is correct), but we cannot force customers to use it.
Any help would be great. I'm kind of wondering if it's a server-side setting that forces the bundle to recompile on each request, or if it's just IE acting up as usual.
Edit: as requested in the comments, I'm also including the bundle request headers:
If the two browsers show different download times for a full reload, it could be that you are doing intense computations with a client-side framework like AngularJS (I have seen big performance differences between the two browsers in highly complex AngularJS apps).
If both your browsers show the same download time, it is either a network issue or a server issue.
The IE caching could be a separate issue, break your problem into two parts - look for the cause of the slow downloads first.
All I can do now is suggest an approach to finding the issue.
Summary of what you know
It looks like you have:
The server sends an Expires header set one year in the future
When you reload the page (i.e. you don't force a full refresh using Ctrl+F5),
IE takes no notice of the cache headers, and its new request carries neither If-Modified-Since nor If-None-Match
Chrome behaves differently and respects the Expires and/or ETag response headers (it doesn't even make the request again for the bundle).
EDIT 1: You also seem to be saying (though it would be good to see a timeline from Chrome) that Chrome downloads the files faster, implying it is not a server-side problem. Your latest comment states that Chrome's downloads are also slow. (end edit)
And you also seem to be saying that this behaviour is consistent (i.e. 100 requests in IE, and 100 requests in Chrome show the above behaviour with no deviations).
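For reference, a well-behaved revalidation exchange looks roughly like this (header values are illustrative only) - this is what IE should be sending but apparently isn't:

    GET /bundles/scripts HTTP/1.1
    Host: www.example.com
    If-None-Match: "abc123"
    If-Modified-Since: Tue, 15 Jan 2013 10:00:00 GMT

    HTTP/1.1 304 Not Modified
    ETag: "abc123"
    Expires: Wed, 15 Jan 2014 10:00:00 GMT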
Approach
You should break this problem into two parts:
Why is the download so slow?
Is there a server-side performance problem? Look for common download times in IE and Chrome, and Firefox (it could be due to bundling/minification/compression on the server).
Is there a network connectivity issue (dropped packets, for instance)? Look for inconsistent download times, Start times, Request times, between requests in a given browser and the same behaviour across all browsers.
Is a script slowing down IE, but not Chrome? (This is not uncommon; I maintain legacy sites where the scripts don't run well in IE but do in Chrome.) Compare profiler results between browsers.
Why is the javascript not being cached in IE? Troubleshoot (1) first, then worry about this.
It is possible that the two are related, but approaching them separately will be a start. Number 1 is far easier to diagnose than number 2; most of the top web references on caching javascript in IE are about preventing caching in order to help with development.
Root cause diagnosis
EDIT 1 The first thing to do is try the site from a browser on the server, or very close to the server to see if you have a network issue. (end edit)
Tools like Fiddler, the browser developer tools, timeline and script profiler, and YSlow are your friend. Compare each of the following between Chrome and IE (and see what happens in Firefox as well) and spot the difference. Note: you may need to clear the browser cache between tests.
browser developer tools -> script profile: see if you have a slow running script in IE compared to Chrome
similar analysis in a tool like YSlow (look for comparisons between the two browsers, not script improvements)
request and response headers, and timeline from a normal (i.e. not full reload) page load
request and response headers, and timeline from a full page reload (Ctrl+F5)
Start and Request durations for every js file for a given browser, and between browsers (this may point to network issues). I note that the Start and Request alone are taking 0.6s and 1s each in IE - that is very, very poor performance.
5 requests, and 5 full reloads with cache clearing between (that is, don't chase a ghost - be consistent in your test methodology)
Download times should be no different between Chrome and IE with no scripts actually running, so also add a control test. Assuming that your bundle files don't "do anything" (i.e. they contain functions that the page calls rather than kicking off long processes by themselves), create a blank page in your site which references exactly the same javascript files - not just the bundle, but every single js reference.
With the control test you can compare pure download times and caching behaviour in IE to Chrome, without any client side javascript running (use the developer tools profiler to verify no scripts are running). If your bundle files do kick off long running things, just temporarily disable those things by putting return statements at the top of the script and concentrate only on the download into the browser.
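A control page can be as bare as this (the script paths are placeholders - reference exactly the files your real page uses):

    <!DOCTYPE html>
    <html>
    <head>
        <title>JS download control test</title>
        <!-- Same script references as the real page, and nothing else -->
        <script src="/bundles/scripts"></script>
        <script src="/Scripts/jquery-1.8.2.min.js"></script>
    </head>
    <body>
        <!-- Intentionally empty: nothing here for the scripts to act on -->
    </body>
    </html>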
What do I have to do to add ?_escaped_fragment_= support to my server? I want Google to be able to crawl my AJAX site. My hashes are already in #! form.
But I have no idea how to tell my server that when mywebsite.com/?_escaped_fragment_=section is requested, it should serve the same content as mywebsite.com/#!section.
thanks
Simple answer - my method (soon to be used for a site with ca. 50,000 AJAX-generated URLs) is to have a node.js server use a headless environment (try zombie, phantomjs, or any other) to load the site, making sure it's able to execute javascript and read the DOM. Then at runtime, if it's Google requesting the fragment, fire a request to the node.js server, which loads the site, executes the javascript, waits for the response, and delivers back the HTML, which is output to the browser.
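At runtime the check can be as small as something like this on the PHP side (the snapshot server URL and its query parameter are made-up names):

    <?php
    // If a crawler is asking for the escaped fragment, serve a rendered snapshot
    if (isset($_GET['_escaped_fragment_'])) {
        $fragment = $_GET['_escaped_fragment_'];  // "section" for #!section
        $target   = 'http://' . $_SERVER['HTTP_HOST'] . '/#!' . $fragment;

        // Ask the headless node.js server to load the page and execute the JS
        $snapshot = file_get_contents(
            'http://render.example.com/?url=' . urlencode($target)
        );

        if ($snapshot !== false) {
            echo $snapshot;  // hand the post-JavaScript HTML to the crawler
            exit;
        }
        // Otherwise fall through and serve the normal page
    }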
If that sounds like a lot of work - I'm about 90% finished on the code that does it all for you, where you'd simply drop one line of (PHP) code at the top of your site/app, and it does the rest, using a remote node.js server.
The code will be open source so if you want to set it up yourself on a node server, you can - or if it's a PITA to set it up yourself, I'll probably have a live server up and running which your app/website would fire ?_escaped_fragment_ requests to, and get the html snapshot back. It also implements caching so that these are only requested once every X days.
Watch this space - just got a few kinks to work out, and it'll be on my site (josscrowcroft.com) and I'll put it in a github repo too.
I use Kohana 3's Profiler class and its profiler/stats template to time my website. On a very clean page (no AJAX, no jQuery, etc. - it only loads a template and shows a text message, no database access), it shows the request time as 0.070682 s (the "Requests" item in the profiler/stats template). Then I used two microtime() calls to time the duration from the first line of index.php to the last line, and got a similarly fast result: 0.12622809410095 s. Very nice.
But if I time the request from the browser's point of view, it's totally different. Using Firefox plus the Tamper Data add-on, the duration of the request is 3.345 sec! And I noticed that from the time I click the link to enter the website (Firefox starts the animated loading icon) to when the browser finishes its work (the icon animation stops), it really takes 3-4 seconds!
On another website of mine, built with WikkaWiki, the time measured by Tamper Data is only 2190 ms - 2432 ms, including several accesses to the MySQL database.
I tried a clean installation of Kohana, and the default plain hello-world page also takes 3025 ms to load.
All the websites I mentioned here are tested on the same localhost PC with the same settings. Actually, they are just hosted in different directories on the same machine. Only the Database module is enabled in bootstrap.php for the Kohana website.
I'm wondering why the Kohana website's overall response is so slow when the PHP code execution time is just 0.126 seconds. Is there anything I should look into?
== Edit for additional information ==
Test result on a standard phpinfo() page is 1100-1200 ms (Tamper Data).
Profiler shows you execution time from Kohana initialization to the Profiler render call. So it's not the full Kohana time. Some actions (Kohana::shutdown_handler(), Session::_destroy(), etc.) may take a long time.
Since your post confirms Kohana is finishing in a tenth of a second or less, it's probably something else:
Have you tested something other than Kohana? It sounds like the server is at fault, but you can't be sure unless you compare the response times with something else. Try a plain HTML page and a pure PHP page (see the sketch after this list).
The Firefox profiler could be taking external media into consideration. So if you have a slow connection and you load Google Analytics, that could be another problem.
Maybe there is something related to this issue: Firefox and Chrome slow on localhost; known fix doesn't work on Windows 7
Although the issue happens in Windows 7, maybe it can help...
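For the first point above, a pure PHP baseline can be as simple as this - drop it next to the Kohana install and time it with Tamper Data the same way:

    <?php
    // baseline.php - no framework, no database; if this is also slow,
    // the problem is the server or the network, not Kohana
    $start = microtime(true);
    echo 'Hello from plain PHP';
    printf("\n<!-- generated in %.6f s -->", microtime(true) - $start);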