I want to clear the DNS cache

I have a problem with clearing the DNS cache.
Whenever we published a post on our website, it would also get published on the Dailyhunt news portal.
But for the past few days, our posts have not been appearing on Dailyhunt. When we asked their technical team, they told us to clear the DNS cache on a regular basis; because of the cache, they said, our posts are not getting published on Dailyhunt.
So how do I clear the DNS cache on a regular basis?

The response you received seems a little bit superficial. I can't see an immediate correlation between the DNS cache and your posts not being published on a portal, especially since you are publishing content, not provisioning new hosts.
If you need to clear the DNS cache locally, you can follow these instructions on how to flush the DNS cache on various operating systems; the usual commands are summarized below. However, while these commands will produce the expected outcome on your system, I do not expect them to solve your problem.
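For reference, these are the standard flush commands per OS (run them with administrator or root privileges; the Linux one assumes systemd-resolved is the local resolver):

```
Windows:                   ipconfig /flushdns
macOS:                     sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
Linux (systemd-resolved):  sudo systemd-resolve --flush-caches
```

To do this "on a regular basis" you would put the relevant command in a scheduled task (Task Scheduler on Windows, cron on Linux/macOS), though again, I doubt this is your real problem.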
From what I read, I think the cache they referred to was not the DNS cache but the site cache. It is possible your site has a caching mechanism that doesn't immediately publish new content. If you use a CMS or publishing platform, such as WordPress, check your settings and plugins; it is very plausible that you have a setting that caches your published pages for a period of time to increase performance.
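If it is WordPress and you have shell access with WP-CLI installed, for example, the persistent object cache can be flushed from the command line; whether this also clears rendered page caches depends on the specific caching plugin you run:

```
wp cache flush
```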

Related

Azure web role with co-located cache giving slow response

I have a web role with co-located cache. There are two instances of this role.
Even when there is a cache hit, the turnaround time for our requests measures a few seconds. Upon analysis we found that the time taken by the cache to get back with data is 1 second on average. However, IIS logs suggest that the overall servicing of the request takes about 4 seconds. There is no intermediate operation before or after data retrieval from the cache.
What could be wrong here? What would be a good way to analyse the problem?
For what it's worth, we were having a similar problem with caching in Redis in Azure and a RESTful API.
The problem turned out to be the serialization of data; see the sketch after the list below.
Some ways to debug the problem:
Download the ANTS profiler (it has a free trial) and profile your worker role locally.
Enable profiling for your worker role, deploy it, run it for a bit, then download the profile file in Visual Studio. (You can use Server Explorer to find your instance and download the log).
Download the Azure toolkit (http://blogs.msdn.com/b/kwill/archive/2013/08/26/azuretools-the-diagnostic-utility-used-by-the-windows-azure-developer-support-team.aspx) on your instance. It has tools like Process Explorer that can tell you how much memory your role is taking, how much CPU, what it's doing on the network, etc.
You can contact Azure support and have them help you profile your application. We did that and got absolutely amazing support. They talked with us on the phone for hours and helped us profile our code.
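As a cheap first check before reaching for a profiler, time the serialization of a representative payload by itself. Our stack was .NET, but the idea carries over anywhere; here is a minimal Node/TypeScript sketch, with a made-up payload shape standing in for whatever you actually cache:

```typescript
// Minimal sketch: measure serialization cost separately from the cache round trip.
// The payload below is a stand-in; use an object representative of what you cache.
const payload = {
  items: Array.from({ length: 100_000 }, (_, i) => ({ id: i, name: `item-${i}` })),
};

const start = process.hrtime.bigint();
const bytes = Buffer.byteLength(JSON.stringify(payload));
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;

console.log(`serialized ${bytes} bytes in ${elapsedMs.toFixed(1)} ms`);
```

If serialization alone eats a large share of your observed latency, the cache itself is not the bottleneck.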
You really should increase the log level for the client and the server (refer to In-Role Cache Troubleshooting and Diagnostics (Windows Azure Cache)) and take a look at the performance counters. If read operations (GET) are taking a long time, there may be paging on one of the instances, or the server may be overloaded. If you see any performance issue on the cache instances, you should reassess the capacity using Capacity Planning Considerations for In-Role Cache (Windows Azure Cache).
If this doesn't help then please open a support ticket.

Windows Azure website load time

Sometimes when I access my Windows Azure website, the initial response time is very slow. After the first page load the website is fast. Some background: the website is not visited that often at the moment. Further, I am using a keepalivecontroller to keep the website running, and the website is running in shared mode. I am wondering: are websites that are not that active removed from memory in Windows Azure? Or is it just that background tasks on the operational level of Windows Azure are interfering sometimes? It is not transparent to me what is happening, so is there some SLA or something for Windows Azure websites?
There is now a new feature available for Windows Azure Websites in 'Reserved' mode that will keep your website warm. You can now turn on "Always On" under the "Configuration" tab of your Azure Website. As explained in this blog post:
When the new "Always On" feature is enabled on a site, "Windows Azure will automatically ping your website regularly to ensure that the website is always active and in a warm/running state," Guthrie writes. "This is useful to ensure that a site is always responsive (and that the app domain or worker process has not paged out due to lack of external HTTP requests)."
The easiest way to keep a website warm is to call it regularly using the Scheduler feature in Windows Azure Mobile Services.
You simply write a script in the Scheduler that pings your website every x minutes; a minimal sketch follows the link below.
Here's a post covering how to do that: http://fabriccontroller.net/blog/posts/job-scheduling-in-windows-azure/
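Mobile Services scheduler scripts are plain Node-style JavaScript; a minimal sketch of such a ping job might look like this (written as TypeScript here, and the URL is a placeholder for your own site):

```typescript
import * as https from "https";

// Hypothetical keep-warm job: request the site so the worker process stays loaded.
function pingSite(url: string): void {
  https
    .get(url, (res) => {
      console.log(`ping ${url} -> HTTP ${res.statusCode}`);
      res.resume(); // drain the body so the socket is released
    })
    .on("error", (err) => console.error(`ping failed: ${err.message}`));
}

pingSite("https://example.com/");
```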
The Windows Azure Web Sites are still in preview, so there is currently no SLA with that service.
The Web Sites do idle out when in Free or Shared mode, which is likely what you are seeing. When the site idles out it is actually removed from memory, and indeed the IIS process host running the site is shut down. This is how they can achieve the density of hosting 100 sites on the same VM.
You can find a lot of info on the Channel9 site about why this is the case, or, as a shameless plug, here is an article that talks about how the process is handled.
Now, you mentioned that you were using a keepalivecontroller, but what exactly do you mean by that? I use pingdom.com to constantly request data from one of my websites, and that seems to do pretty well. It is still possible that a request doesn't come in and the idle timeout is met, which then cycles the site. It is also possible that even if you always have the site running, the VM the site sits on needs to have the underlying OS updated, in which case Azure would move the site process to another VM, which could also cause the slow start-up on the next request.
I'd start logging your application startups and then look through your logs to see how often that is happening.
If you only need to warm it up once (vs keeping it warm) and are mostly trying to prevent your customers experience page cold starts, I believe the correct tool is IIS Application Initialization. You can configure it with a list of urls to hit before it deems the app ready for action.
My site is suffering from page cold starts and that is severely magnified in Azure Websites (even on an S3), but it is absolutely speedy after it's served that first time, thanks to several layers of caching (our inefficient use of Umbraco's dynamic nodes query language creates a lot of database churn, which we're cleaning up opportunistically).
From what I've read and my own web.config attempts this is still not available in Azure Websites. I've asked Microsoft for it here: MS IDEA: Application Initialization to warm up specific pages when app pool starts. Please consider voting for it.
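For reference, on IIS 8 (or IIS 7.5 with the Application Initialization module installed) this is configured in web.config roughly as follows; the initialization paths are placeholders, and per the above, Azure Websites may simply ignore the section:

```xml
<system.webServer>
  <!-- Sketch: warm these URLs before the app pool serves real traffic. -->
  <applicationInitialization doAppInitAfterRestart="true">
    <add initializationPage="/" />
    <add initializationPage="/products" />
  </applicationInitialization>
</system.webServer>
```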
For each service/site you need to go to "Configure", then switch "Always On" to ON. Also make sure you click Save; it took about 2 minutes before my website noticed the change.
Why this is not the default is kind of mind-boggling, because my setup on HostGator was running much faster than Azure. I guess Microsoft figures that if nobody is accessing your site, it's okay for it to have a long load time.

Joomla 1.5 site keeps getting blocked by host

I have a Joomla 1.5.26 site which I have had since August 2012. It has been in a stable condition since then, with no changes to components etc. It is firewalled with RS Firewall and all the other security precautions have been taken.
During the past few weeks the site has started to be blocked by the hosting company that hosts the site, who claim that there are too many active connections. I have hunted through the site, disabled various components, etc., and am still getting the same problems.
Has anyone experienced any similar issues? I am thinking of moving the site to a more reputable host for Joomla sites to see if it is more robust elsewhere. I just can't understand why this keeps happening. The hosts are blocking the IP address of any machine we use to administer the site when the connections get too many, and then we are locked out for about fifteen minutes. As I said previously, nothing has changed on the site, and I cannot find any evidence of the files or database being hacked.
Any ideas?
Much appreciated
James
If your host is telling you that there have been too many connections, it most likely means one of the following three things:
Your site has reached the maximum monthly bandwidth allowance
Your site has too much traffic for your current host and might need to be moved to a VPS or something more powerful
Your host is plain crap
Check whether you have reached your maximum monthly bandwidth allowance (if you have one); if not, and your site isn't an extremely popular one generating thousands of users a day, I would probably recommend transferring to a different host.

Windows Azure Caching (Preview) ErrorCode<ERRCA0017>:SubStatus<ES0006>:

I'm using the role-based caching feature for a Windows Azure web role, configured as co-located. I've followed the steps given by the Windows Azure docs for Caching (Preview), but I get the following error:
ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.). Additional Information: The client was trying to communicate with the server: net.tcp://127.255.0.4:20010/.
I'm running everything as localhost, using the local development storage, and my cache client is in the same role as the server. I've changed many configuration attributes, but I always get that exception or something similar, like "cannot connect to tcp...".
I'd appreciate some help. Thanks.
There are a couple of things which could be going wrong in your application.
First, make sure that you have SDK 1.7 on your machine along with Windows Azure Caching Services, and then verify that your references are set from Windows Azure Cache (not from the Windows Server AppFabric SDK). I have seen such misconfiguration lead to errors like this in the past.
Next, have you changed your dataCacheClient identifier to your role name, as described in the documentation linked here? If you follow the documentation as described, you should not hit any error, so for the sake of checking what could be wrong, you can create the exact same application as described in that link and see whether it works.
To get a more detailed error, be sure to increase the DataCacheFactoryConfiguration.ChannelOpenTimeout value to something longer, e.g. 2 minutes instead of the default 20 seconds, as described here. This will surface details about the inner exception, which may lead to the actual root cause of your problem.
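If I remember the Caching (Preview) client schema correctly, the same timeout can also be set declaratively on the dataCacheClient element in web.config; treat this as an approximate sketch (the attribute name, its unit of milliseconds, and the role name are my assumptions; verify against the linked docs):

```xml
<dataCacheClients>
  <!-- Assumption: channelOpenTimeout is in milliseconds (2 minutes here). -->
  <dataCacheClient name="default" channelOpenTimeout="120000">
    <!-- identifier should be your cache role name, per the step above. -->
    <autoDiscover isEnabled="true" identifier="YourCacheRoleName" />
  </dataCacheClient>
</dataCacheClients>
```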
We use Azure co-located caching (not in preview anymore) as our session backer and have fairly regular outages. About once a month.
We tried using the Enterprise library Transient Fault Handling but our instances still hang when caching experiences problems. I think that the transient fault code would work for data caching, but for session backing there is some activity closer to the metal that we can't seem to code against.
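For plain data caching, the transient-fault pattern is easy enough to hand-roll; this generic sketch (TypeScript, not the .NET Enterprise Library itself) shows the retry-with-backoff idea we were aiming for:

```typescript
// Generic retry-with-exponential-backoff around a flaky cache call.
async function withRetry<T>(
  op: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      if (attempt >= attempts - 1) throw err; // out of retries, rethrow
      // Back off 200ms, 400ms, 800ms, ... before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```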
The error codes have become more informative over the last year and go something like...
ErrorCode:SubStatus:The request timed out. Additional Information: The client was trying to communicate with the server: net.tcp://10.xx.xxx.xx:xxxxx/.
Our best guess so far, from experimenting and MS support, is that each (or at least one) co-located cache role instance needs to know about all the other instances' IPs; since Azure can destroy and re-provision instances whenever it wants, this sometimes fails to update the dependent instances. This is secret sauce for Azure, but it is not a secret when our site goes down. I'm looking for any more information on this and to see how others are working around this issue.
One possible workaround: one of our talented platform administrators found that resetting IIS on the instances and scaling up by two more instances seems to help the problem. This makes sense to me because it gives caching another chance to gather all the required info about the other instances. This is NOT CONFIRMED to solve the problem, but if we repeat it during the next outage it could prove a valid workaround.

How to do Continuous Integration with a live website without affecting users?

I have implemented Continuous Integration using TFS Version Control and TFS Build 2010. The compiled website project gets dropped in a shared folder with a version number.
Now I have a very basic, maybe even stupid, question. When we normally deploy a website project from VS 2010 to a web server, it uploads an App_Offline.htm file to the website folder so that no requests are served to users. After the publish is completed, the App_Offline.htm file is removed. During that period, users see an outage.
If we use CI on a live website, how can we eliminate the outage that users see? I believe the whole point of CI is that users get to see newer features and the site is never down.
How is this accomplished? If we deploy the website project to the root folder, existing users will be affected, and that is certainly not advisable.
I want to know the recommended practice with VS 2010, TFS 2010 Build & Version Control.
There's no real foolproof method for this; service uptime is never 100%, which is why people usually define it in 'nines'.
But if you had multiple web servers (backup, fail-over, mirror, etc.), you could roll out the update across them, so that as you update some servers, others will still be online (albeit with the old version) to serve users.
In general, only some of the largest websites have to worry so meticulously about being down for a few short minutes, so make sure you're focusing your energy in the right place ; )
Regarding taking down the site for the shortest time possible, the only way I've seen this done successfully is using multiple sites: either load balancing, or two sites on the same machine plus swapping host headers after the release/warm-up. But in most cases it's not worth the effort; releases shouldn't take down the site for more than a few seconds, in which time there should be relatively few requests. You're better off trying a few things that can help your users live through a site release.
Move session out of proc.
If the user's session lives in the app pool, it will be lost when a new version is released; change the config to move it into a session state server or the database.
Specify a machine key for the website.
Viewstate (and cookies?) are encrypted using a key that is generated when a site starts; if the site restarts due to a release, any users filling out a form will receive an invalid viewstate exception on postback. (Note: this may have other security implications.)
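Both changes live in web.config; here is a hedged sketch (the connection string is a placeholder, and the machine key values must be generated yourself, never copied):

```xml
<system.web>
  <!-- Move session out of proc so it survives a release (SQL Server shown; StateServer also works). -->
  <sessionState mode="SQLServer"
                sqlConnectionString="data source=YOUR_DB_SERVER;user id=USER;password=PASSWORD"
                cookieless="false" timeout="20" />
  <!-- Fixed machine key so viewstate survives an app-domain recycle. Placeholder values! -->
  <machineKey validationKey="GENERATE_YOUR_OWN_VALIDATION_KEY"
              decryptionKey="GENERATE_YOUR_OWN_DECRYPTION_KEY"
              validation="SHA1" decryption="AES" />
</system.web>
```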
