I have one specific domain that this issue is connected with. I have 10+ more domains from the same registrar. This one domain is on a different webhosting account than the rest of the domains (the same webhosting company, though).
Whenever I make changes to the CSS, the changes are not reflected until I change my IP address via a VPN. Even then, it only refreshes once; I then need to change the IP again to see the next change. Sometimes not even that helps.
This happens on different internet networks.
The website runs on WordPress, but I have tested it with a separate set of files outside of WordPress as well.
Does anyone have a clue what it may be and how it could be resolved? Thank you!
I have tried both broadband and a mobile network, but the scenario is the same on both. This makes me believe that it's not a router or device issue (local cache). It goes without saying that I have cleared the browser cache and flushed DNS multiple times.
One thing to mention is that all of my domains run through Cloudflare - yet only one is affected.
My webhosting company is not very helpful this time and has only checked whether my IP is blocked, which I think is useless given the above scenario.
All of my other 10+ domains reflect the changes immediately, even without clearing the cache.
Just in case anyone is experiencing something similar: it was due to Cloudflare. I set the nameservers to point directly to the hosting provider, and that fixed the issue.
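In case it helps anyone hitting the same thing: before bypassing Cloudflare entirely, you can check whether the stale CSS is actually coming out of Cloudflare's edge cache by looking at the response headers. A minimal sketch using only the Python standard library (the stylesheet URL is a placeholder):

    import urllib.request

    # Placeholder URL - substitute the stylesheet that refuses to update.
    url = "https://example.com/wp-content/themes/mytheme/style.css"

    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        # cf-cache-status: HIT means Cloudflare served a cached copy;
        # cache-control / age show how long it may keep doing so.
        for name in ("cf-cache-status", "cache-control", "age", "etag"):
            print(name, "=", resp.headers.get(name))

If cf-cache-status reports HIT with a large age, purging the cache from the Cloudflare dashboard or versioning the stylesheet URL (e.g. style.css?v=2) is usually enough to get changes showing without bypassing Cloudflare.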
Does a web browser cache files based on what is shown in the URL bar, or by where the file actually comes from?
Consider the following two CloudFront distributions:
distro1.cloudfront.net
distro2.cloudfront.net
A CNAME record points www.foo.com to distro1.cloudfront.net.
Say I change the CNAME to point to distro2.cloudfront.net instead. The source is changing, but the address is not.
Will browsers notice the different source and request a new file, or will they just load the cached version (assuming they have one)?
Thank You!!
-C
A browser should not notice that the IP address is different and decide the locally-cached object needs to be refreshed. If it does notice... that is a broken implementation.
A web site can have many, many different IP addresses, all at the same time, all with the same content... and, conversely, a single IP address can have many, many different web sites behind it. Either way, the underlying IP address, and any intermediate CNAME targets, are implementation details that the browser has to remain unaware of for caching purposes.
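In other words, the cache key a browser uses is essentially the URL (plus anything named in Vary), not the IP address or CNAME chain behind it. A toy sketch just to illustrate the keying (not how any real browser is implemented):

    # Toy illustration: a cache keyed by URL. Re-pointing the CNAME
    # (distro1 -> distro2) changes where the bytes come from, but not the key,
    # so an existing entry is still used until it expires.
    cache = {}

    def get(url, fetch):
        if url in cache:
            return cache[url]          # served from cache, origin never consulted
        body = fetch(url)              # only here does DNS/CNAME resolution matter
        cache[url] = body
        return body

    body1 = get("http://www.foo.com/logo.png", lambda u: "bytes via distro1")
    # ... CNAME flipped to distro2.cloudfront.net ...
    body2 = get("http://www.foo.com/logo.png", lambda u: "bytes via distro2")
    assert body1 == body2 == "bytes via distro1"   # cached copy still wins

The practical consequence: if you need clients to pick up new content after flipping the CNAME, you have to rely on normal cache expiry (Cache-Control/Expires) or change the object URLs.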
How would I go about setting up a backup, for Heroku downtime, on a VPS like Linode (using nginx/Unicorn)?
Essentially very simply, but also with a whole world of hurt.
Simply create an instance of your application on said VPS.
Then you need to ensure that you're able to flip your DNS from Heroku to said VPS without waiting for a TTL to expire, or find some other way of letting the world know your application has moved.
Then figure out a reliable way of ensuring that the code in both environments is exactly the same and works on both server setups.
Then figure out how you can keep the data up to date in both environments so that, when you do need to flip, the data will be the same in both.
Then you need to figure out a way to remind yourself to keep this secondary VPS up to date from a server management point of view: software updates, security patches, etc.
Then you need to figure out a way to be notified, 24/7, when Heroku is down (a rough sketch follows below).
Then you need to hope that when Heroku is down, Linode isn't.
... or just accept that any host will go down, and that it can cost a hell of a lot of money to ensure that your site doesn't. To be honest, it's probably better for you to look at some sort of hosting setup that allows redundancy and failover across several locations (which won't be cheap).
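For the "know when Heroku is down" step referenced above, the check itself can be very small. A rough sketch - the health-check URL and the alert hook are placeholders, and it should obviously run somewhere other than Heroku or the VPS it's meant to protect:

    import time
    import urllib.request

    APP_URL = "https://myapp.herokuapp.com/health"   # placeholder endpoint

    def app_is_up(url, timeout=10):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except Exception:
            return False

    def alert(message):
        # Placeholder: wire this up to email, SMS, PagerDuty, etc.
        print("ALERT:", message)

    while True:
        if not app_is_up(APP_URL):
            alert("Heroku app unreachable - consider flipping DNS to the VPS")
        time.sleep(60)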
There are third-party services which can keep (parts of) your site up if your server goes down - at least it appears to the user that your site is up, even though it's not working properly behind the scenes. CloudFlare is one such service. It sits in front of your site/application and, quite simply, performs magic. It works with static and dynamic sites, and if your server goes offline it can serve the static parts of your site. See http://support.cloudflare.com/kb/what-do-the-various-cloudflare-settings-do/what-does-enabling-cloudflare-offline-browsing-do
I'm having intermittent 503 errors, out of the blue, on a Magento install. Occasionally the page will half load, without JS or CSS, and sometimes the images will not load but the rest of the page will.
I'm running Magento 1.5, and no settings were changed before it started going awry.
My hosting guys (it's on shared hosting) have said:
Basically every time someone hits the site, you spawn about 10-20+ connections for every hit to:
/media/catalog/product/cache/1/....
For example:
/media/catalog/product/cache/1/image/9df78eab33525d08d6e5fb8d27136e95/2/0/200_2_series-max.jpg
And this is maxing out the capacity. I've disabled the cache, and the problem persists. Is there a way I can check why this is happening, or am I at the whim of my hosting chaps?
Thanks in advance.
That's not the cache you've disabled - turn it back on, as disabling it only causes more severe system load, since everything that would normally be cached has to be recompiled on every request.
/media/catalog/product/cache/1/
This is the image cache. Your system is being overloaded serving images to your customers, which means your server is probably underpowered and not able to properly handle the load being placed on it. This is a typical symptom of running Magento on a shared hosting plan, and it usually causes your website to fall flat on its face the first time Bing, Yahoo and Google decide to index your website simultaneously.
The first thing to do when you start noticing the website getting boggy is to go into "Customer - Online Customers" and see how many visitors Magento reports as online. Sort by IP to see who's hogging resources, and take that IP over to Bots VS Browsers to see if it's a known web indexer. The next step is to get your web server access log and review what's being requested, to make sure you don't have some script kiddie on your site trying to break it.
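If you want to do that log review a bit more systematically, something along these lines will tally requests to the image cache path per IP and user agent. This is only a sketch: the log path is a placeholder, and it assumes the common Apache/nginx combined log format:

    import re
    from collections import Counter

    LOG_PATH = "access.log"   # placeholder: wherever your host exposes the access log
    # Rough parse of the combined log format: IP ... "REQUEST" ... "USER-AGENT"
    LINE_RE = re.compile(r'^(\S+) .* "(?:GET|POST|HEAD) (\S+) [^"]*" \d+ \S+ "[^"]*" "([^"]*)"')

    hits = Counter()
    with open(LOG_PATH) as log:
        for line in log:
            m = LINE_RE.match(line)
            if not m:
                continue
            ip, path, agent = m.groups()
            if path.startswith("/media/catalog/product/cache/"):
                hits[(ip, agent)] += 1

    # Top offenders hammering the image cache - check them against Bots VS Browsers.
    for (ip, agent), count in hits.most_common(10):
        print(count, ip, agent)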
One way of eliminating your image-loading problem is to go over to Amazon Web Services, sign up for CloudFront, and serve your images through that CDN. It basically acts as a proxy, so each image only gets requested from your server on the initial view and is then served by CloudFront. Your server will probably still have overloading problems, but image serving won't be the main cause of them.
Your "shared hosting guys" guys don't have the decency to tell you that they're not capable of hosting a Magento installation for you. This,
you spawn about 10 - 20+ connections for every hit to:
/media/catalog/product/cache/1/....
while I'm sure describes something they've uncovered, doesn't accurately describe it.
Whatever the specifics of the problem you're running into are, the servers you're on right now are designed to host a different type of application. You'll continue to run into problems. Move to a host that supports Magento.
I'm usually a great debugger when it comes to helping family members with their computer problems, I also would normally post this type of question here, but I'm hoping this community can help me get to the bottom of this.
A family member is having problems with certain websites not loading all of their resources - primarily images, it appears. I have disabled her Symantec protection in case it was scanning or blocking things from loading, and have also uninstalled and disabled startup applications she doesn't need.
One example of a file that is not loading on her system is:
http://static.ak.fbcdn.net/rsrc.php/v1/yp/r/kk8dc2UJYJ4.png
I'm assuming this loads for everyone else here.
Any thoughts would be much appreciated. Also, she gets the same issue in IE, Chrome, and Firefox.
The first place I'd look is whether there's a commercial ad-blocker installed; I'm guessing it can't be a browser add-in/extension, since the different browsers have their own separate settings.
And it may sound silly, but did you check the hosts file (system32/drivers/etc/hosts)? Is it possible static.ak.fbcdn.net is just being redirected? You might want to open the command prompt, run ping static.ak.fbcdn.net, and confirm her computer's exact behavior.
In my case FB redirects me to a749.g.akamai.net (or 125.56.208.11) and everything works fine.
Minor edit: I'm a bit skeptical that's the cause, as FB serves other stuff from that domain (CSS, JS), while photos and profile pictures seem to come from a different domain. But I'd still be interested in whether the problem occurs when connecting to the resource or when displaying it.
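To separate "can't resolve or connect" from "can't display", you could run a quick check like this on her machine (Python purely for illustration; ping and the browser's network panel would tell you much the same - the URL is the one from the question):

    import socket
    import urllib.request

    host = "static.ak.fbcdn.net"
    url = "http://static.ak.fbcdn.net/rsrc.php/v1/yp/r/kk8dc2UJYJ4.png"

    # Step 1: what does her resolver (or the hosts file) say this name points to?
    try:
        addrs = sorted({info[4][0] for info in socket.getaddrinfo(host, 80)})
        print("resolves to:", addrs)
    except socket.gaierror as err:
        print("DNS resolution failed:", err)

    # Step 2: can the file actually be fetched from that address?
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print("HTTP", resp.status, "-", len(resp.read()), "bytes received")
    except Exception as err:
        print("fetch failed:", err)

If the name resolves to something odd (say 127.0.0.1), the hosts file or a filtering tool is the likely culprit; if it resolves normally but the fetch times out, that points more towards a blocked or unreachable CDN address.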
That's probably because your DNS resolves the Akamai CDN hostname, used by Facebook to serve images, to an IP address that is not reachable from your network. You may want to note the IP addresses of the Facebook CDN hosts your computer is using at the time this happens and contact your network administrator to find the reason behind the blockage (it may be a firewall). Other than that, you can try changing the DNS servers in your system settings, which might give you an IP address that does work for your network.
PS: I ran into this issue a few weeks ago and have found my findings to be correct.
Most solutions I've read here for supporting subdomain-per-user at the DNS level point everything to one IP using a wildcard record (*.domain.com).
It is an easy and simple solution, but what if I want to point the first 1000 registered users to serverA and the next 1000 registered users to serverB? This is our preferred approach, to keep our software and hardware clustering costs down.
(diagram quoted from the MS IIS site: http://learn.iis.net/file.axd?i=1101)
The most logical solution seems to be one A record per subdomain in the zone data files. BIND doesn't seem to have any size limit on zone files; it's only restricted by available memory.
However, my team is worried about the latency of getting a new subdomain up and ready, since creating a new subdomain consists of inserting a new A record and restarting the DNS server.
Is the performance impact of restarting the DNS server something we should worry about?
Thank you in advance.
UPDATE:
It seems most of you suggest using a reverse proxy setup instead:
(diagram from the MS IIS site: http://learn.iis.net/file.axd?i=1102 - ARR is IIS7's reverse proxy solution)
However, here are the CONS I can see:
single point of failure
cannot strategically set up servers in different locations based on IP geolocation.
Use the wildcard DNS entry, then use load balancing to distribute the load between servers, regardless of what client they are.
While you're at it, skip the URL rewriting step and have your application determine which account it is based on the URL as entered (you can just as easily determine what X is in X.domain.com as in domain.com?user=X).
EDIT:
Based on your additional info, you may want to develop a "broker" that stores which clients are to access which servers. Make that public-facing, then draw from the resources associated with the client as stored with the broker. Your front end can be load balanced, and then you can pull from the file/db servers based on who the client is.
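To make the broker idea concrete, here is a minimal sketch. The hostnames and the user-to-server mapping are made up, and in practice the mapping would live in a shared database rather than a dict:

    # Hypothetical mapping maintained by the "broker": which backend serves which user.
    USER_TO_BACKEND = {
        "alice": "serverA.internal",   # e.g. first 1000 registered users
        "bob": "serverB.internal",     # next 1000, and so on
    }
    DEFAULT_BACKEND = "serverA.internal"
    BASE_DOMAIN = ".domain.com"

    def user_from_host(host):
        """Extract X from a Host header of the form X.domain.com."""
        host = host.split(":")[0].lower()        # drop any :port
        if host.endswith(BASE_DOMAIN):
            return host[: -len(BASE_DOMAIN)]
        return None

    def backend_for(host):
        user = user_from_host(host)
        return USER_TO_BACKEND.get(user, DEFAULT_BACKEND)

    assert user_from_host("alice.domain.com") == "alice"
    assert backend_for("bob.domain.com:80") == "serverB.internal"

The load-balanced front end then only needs a lookup like backend_for() per request, and moving a user to another server becomes a data change rather than a DNS change.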
The front-end proxy with a wild-card DNS entry really is the way to go with this. It's how big sites like LiveJournal work.
Note that this is not just a TCP-layer load balancer - there are plenty of solutions that will examine the host part of the URL to figure out which back-end server to forward the query to. You can easily do it with Apache running on a low-spec server with a suitable configuration.
The proxy ensures that each user's session always goes to the right back-end server and most any session handling methods will just keep on working.
Also the proxy needn't be a single point of failure. It's perfectly possible and pretty easy to run two or more front-end proxies in a redundant configuration (to avoid failure) or even to have them share the load (to avoid stress).
I'd also second John Sheehan's suggestion that the application just look at the left-hand part of the URL to determine which user's content to display.
If using Apache for the back-end, see this post too for info about how to configure it.
If you use tinydns, you don't need to restart the nameserver when you modify its database, and it should not be a bottleneck because it is generally very fast. I don't know whether it performs well with 10,000+ entries, though it would surprise me if it didn't.
http://cr.yp.to/djbdns.html
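For what it's worth, adding a subdomain under tinydns amounts to appending one line to its data file and rebuilding data.cdb, which is easy to script. A rough sketch, assuming the default djbdns layout under /service/tinydns (adjust the path for your install):

    import subprocess

    TINYDNS_ROOT = "/service/tinydns/root"   # assumption: default djbdns layout

    def add_subdomain(name, ip, ttl=300):
        # "+fqdn:ip:ttl" adds an A record in tinydns-data format.
        with open(f"{TINYDNS_ROOT}/data", "a") as data:
            data.write(f"+{name}:{ip}:{ttl}\n")
        # Rebuild data.cdb; tinydns picks up the new file without a restart.
        subprocess.run(["make"], cwd=TINYDNS_ROOT, check=True)

    add_subdomain("user1001.domain.com", "192.0.2.10")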