How to automatically clear cache? - caching

So, I just want to ask how to handle caching properly.
I'm having a problem when I deploy the application/site to our client's server.
Once I've successfully deployed it, the end users on the other side don't seem to
get the latest changes. We did some investigating and found that it happens because the copy cached in their browsers was still being served...
The quick solution is to have them (the client's end users) manually clear the cache on their respective machines...
Unfortunately, that kind of solution is a little inconvenient, given that the cache has to be cleared manually on each machine...
So, is there any way to have the cache cleared automatically, or something like that?
How does deployment happen at your own company/clients? Do you do the same thing?
Happy to hear your thoughts about this, thanks!
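
For what it's worth, the usual way around this is cache-busting rather than trying to clear anything on the users' machines: give every static asset a versioned URL at deploy time, so browsers treat the changed files as brand-new resources and fetch them again. A rough HTML sketch (the file names and the ?v= value are made up; any string that changes per deployment works):

    <!-- bump the ?v= value on every deployment so cached copies are bypassed -->
    <link rel="stylesheet" href="/css/site.css?v=20240115" />
    <script src="/js/app.js?v=20240115"></script>

Most server-side frameworks can append such a version or hash automatically, which avoids editing the markup by hand.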

Related

Stopping a JS AJAX call from a specific user

I have done something silly and written a script for a website that does an AJAX check every 2 seconds. In this case it's WordPress, hitting its admin-ajax.php file every 2 seconds. This essentially burned up all the CPU power of the server and made every site on the server run really slowly.
After a lot of detective work, I finally found the script and stopped it, so it doesn't happen on new loads of that website. But looking at my Apache log, I can see that it is still running in one browser somewhere.
Is there a way for me to stop that browser from requesting that AJAX call, or perhaps block it from my server? Or will I just have to wait until that browser is refreshed or closed?
Try using netstat or something similar over SSH to detect the IP and port of the unknown browser. You could also try rebooting the server so it loses the connection.
PS: It's pretty hard to point you in the right direction without any logs or other evidence to go on.
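If the Apache access log shows which IP address the stray requests are coming from, another option is to block just that client at the web-server level until the offending tab is finally closed. A sketch assuming Apache 2.4 syntax; 203.0.113.45 is a placeholder for whatever address you find in the log:

    # In the site's vhost config or .htaccess: deny one client access to admin-ajax.php
    <Files "admin-ajax.php">
        <RequireAll>
            Require all granted
            Require not ip 203.0.113.45
        </RequireAll>
    </Files>

The rest of the site stays reachable for that visitor; only the runaway AJAX endpoint starts returning 403 for them.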

Close connection in LoadRunner

Practical Challenge:
I have an LR script that runs against an app which is being mocked and does not have a logout button (yet).
The test runs fine with stable response times for about 10 minutes, but after that the response time peaks, the server goes to 99% memory usage and transactions start to fail.
I suspect this is because the script does not terminate the vusers after each run, so it builds up a lot of running sessions against the server which are never terminated. But I might be wrong.
Anyway, I want to programmatically close each run after it has completed the business process.
I have read somewhere that web_set_sockets_option("SHUTDOWN_MODE", "ABRUPT") could be used for this, but I want to be sure that this function actually does what I want, and what does 'ABRUPT' mean?
Are there better ways of closing sessions? Clicking the close-browser button during recording does not result in anything being captured in the script.
This is a server-side issue with session aging. Your website's server admin can adjust the timeout for sessions on which no activity has taken place. By default most places have this set to 30 minutes; trim it to what you need rather than taking the server's default value.
Also, you may have hit a leak situation if resources constantly accumulate on the server side but are never released.
Based on your question I assume you're using the Web/HTML protocol. I agree that the core issue is that your app's sessions should expire more elegantly and probably sooner. But in order to get beyond this while testing, you can try the following. It isn't a guarantee, but it has worked for me in the past when dealing with similar situations. Try changing your Run-time Settings for the script:
Run-time Settings > Browser > Browser Emulation
Make sure the box "Simulate a new user on each iteration" is checked. You can also try playing with the other settings here, such as clearing the cache on each iteration. Depending on the server's session settings, this should give you a fresh connection to the web server for each iteration. Again, this isn't 100%, but it has worked for me from time to time.
Try this:
web_set_sockets_option("CLOSE_KEEPALIVE_CONNECTIONS", "1");
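
For context, a minimal sketch of where such calls might sit in the script; the CLOSE_KEEPALIVE_CONNECTIONS option is the one from the line above, SHUTDOWN_MODE/ABRUPT is the one mentioned in the question, and the transaction and URL names are made up:

    Action()
    {
        // Ask the replay engine not to keep connections alive between iterations,
        // so each iteration's sockets are closed when it finishes.
        web_set_sockets_option("CLOSE_KEEPALIVE_CONNECTIONS", "1");

        // Optionally drop connections abruptly instead of a graceful shutdown.
        web_set_sockets_option("SHUTDOWN_MODE", "ABRUPT");

        lr_start_transaction("business_process");
        web_url("home", "URL=http://myapp.example/", LAST);
        /* ... the rest of the recorded business process ... */
        lr_end_transaction("business_process", LR_AUTO);

        return 0;
    }

Combined with "Simulate a new user on each iteration" in the run-time settings, this keeps each iteration from inheriting the previous one's open connections.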

How to avoid passing slow Application_Start times to the end users in ASP.NET

I have quite a slow Application_Start due to a lot of IoC work happening at start-up.
The problem I'm trying to solve is: how do I avoid passing that start-up time on to the end user?
Assumptions
My apps are hosted on AppHarbor, so I have no access to IIS. However, even if I did, my understanding is that it's best practice to let the app pool recycle, so there's no way to avoid having Application_Start run regularly (I think it's every 20 minutes on AppHarbor).
My idea to solve it
Initially I thought I'd hit it every minute or something, but that seems too brute-force and it may not even stop a user from experiencing the slow start-up.
My current solution is to handle the Application_End event and then immediately hit the app so that it starts up again, hopefully without impacting any users.
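Concretely, the Application_End idea looks something like this in Global.asax.cs; it's only a sketch, and the warm-up URL is a placeholder for the app's public address:

    protected void Application_End()
    {
        // When the app domain shuts down (e.g. an app pool recycle), fire one
        // best-effort request at the site so a fresh worker process spins up
        // and runs Application_Start before a real user arrives.
        try
        {
            using (var client = new System.Net.WebClient())
            {
                client.DownloadString("http://myapp.example/");
            }
        }
        catch
        {
            // Best effort only - the site may be briefly unreachable while recycling.
        }
    }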
Is there a better way to solve this issue?
Unfortunately, a longer session timeout will not prevent an IIS app pool recycle when you're using InProcess session state.
Have you considered lazy loading (some of) your dependencies? SimpleInjector has documentation on how to do this, which should be adaptable to most other IoCs:
Simple Injector \ Documentation \ How To \ Register Factory Delegates \ Working With Lazy Factories
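A rough sketch of the idea with Simple Injector; IExpensiveService and ExpensiveService are hypothetical names standing in for whichever registrations are slow to build:

    // Composition root, e.g. in Application_Start (requires: using SimpleInjector;)
    var container = new Container();

    // Register the slow-to-construct service as usual...
    container.Register<IExpensiveService, ExpensiveService>(Lifestyle.Singleton);

    // ...and also expose it as Lazy<T>, so consumers that take a
    // Lazy<IExpensiveService> only pay the construction cost on first use.
    container.Register<Lazy<IExpensiveService>>(
        () => new Lazy<IExpensiveService>(container.GetInstance<IExpensiveService>),
        Lifestyle.Singleton);

Application_Start then only wires up the delegates, and the expensive work happens on the first request that actually needs the service.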
In my understanding, to stop the startup time from being passed on to users, you should avoid recycling the app pool in the first place. The IIS app pool timeout settings can be used for this, and they can be tuned through web.config, not just through the IIS console. You can also read more about it in this SO question. You might not need Application_End hacks to achieve this.
Update:
I found another interesting thing that may help you with this: check out the IIS Application Initialization extension, which can be used to preload your dependencies as soon as the worker process starts. It may help you improve the customer experience. Check it out.
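For reference, once the Application Initialization module is available (built in from IIS 8, an extension for IIS 7.5), the web.config side of it is roughly this; the warm-up path "/" is just an example:

    <system.webServer>
      <!-- Issue a warm-up request as soon as the worker process starts,
           so Application_Start has already run before a real user arrives. -->
      <applicationInitialization doAppInitAfterRestart="true">
        <add initializationPage="/" />
      </applicationInitialization>
    </system.webServer>

Whether this is usable on AppHarbor depends on the host, since installing the module and setting the app pool to start automatically need server-level access.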

Ability to reload changes in a Magento site's configuration without clearing the cache

Today I dealt with the task of loading a module's configuration into a running Magento site under heavy load. I copied the new module's config.xml file and everything else needed to fix an issue.
Our Magento runs with a memcached caching backend.
To get the module running I had to clear the cache completely, and that had an impact on the performance of the site; we had 500 concurrent users. So I'm looking for a way to deploy configuration changes without clearing the cache.
Is there any?
Thanks for any thoughts and ideas.
Jaro.
Here is a method of updating the config cache rather than clearing it, thus avoiding race conditions.
https://gist.github.com/2715268
You don't have to clear the entire cache to load a module's configuration. You can install the module by using the Flush Magento Cache* option. Eventually you'll need to clear the cache to see your front-end changes if any were made. The best thing to do to minimize performance impact is to clear it during off-peak or low-usage times.
*edited - Thanks Fiasco Labs
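If you'd rather do that refresh from code or a deployment script instead of the admin panel, here is a sketch for Magento 1 (assuming the standard cache model; run it from the Magento root):

    <?php
    // Hypothetical one-off script: refresh only the 'config' cache type,
    // leaving block/HTML, layout and other cache types untouched.
    require 'app/Mage.php';
    Mage::app();
    Mage::app()->getCacheInstance()->cleanType('config');

This is roughly what the refresh action in Admin > Cache Management triggers for a selected cache type.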
You will always have to flush the cache when installing a module or changing its configuration. This is necessary to force a re-read of the configuration, empty out incompatible opcode, and force Magento to re-read application code and templates for the changes you have just made.
Yes, it has a momentary impact on your site's performance, but skipping it can cause some really interesting issues.
I've had situations where using the button in Admin wasn't enough. For module installs, it's probably best practice to put the system in maintenance mode, make sure all admin sessions are logged out, check that everyone's out, and then manually delete the var/cache/mage--? folders. You then log back in with one admin session, let it run until you see that an admin session has started, log back out, and then log back into Admin to start checking the site for full function of the freshly installed module.
This is of course overkill for simple config changes where a cache flush is sufficient.
More info on clearing the cache in Magento

AppFabric velocity as a state server

Is anybody using Windows AppFabric Server for out of process state management?
Any feedback, advice would be appreciated.
We are using AppFabric Caching. We tried it and it appeared to work, and it was easy to set up, etc. There are some very strange settings about persistence when setting up the cache which need to be read carefully.
Our issue: on two servers we installed IIS and AppFabric Caching and told the app to try the local one first. When we went into production it just started to fail. It appears that with only two servers there is a lead server, and if it goes down things stop working; we read that we needed to scale to 3 or more servers to get the behaviour we wanted. That was not an option when we had just gone live and things weren't working, so we switched to SQL Server for now while we look at NCache, ScaleOut and Memcached.
The other issue is that caching and session state are not the same animal. If you lose your cache it should not be the end of the world; you just put it back together. We need to keep session state for the allotted time period at all costs.
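For anyone evaluating it, pointing ASP.NET session state at an AppFabric cache is done with the session state provider that ships with the AppFabric client libraries. A sketch of the web.config wiring, where the host name, port and cacheName are assumptions for your own environment:

    <configuration>
      <configSections>
        <section name="dataCacheClient"
                 type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core" />
      </configSections>
      <!-- the cache hosts the client should talk to -->
      <dataCacheClient>
        <hosts>
          <host name="cachehost1" cachePort="22233" />
        </hosts>
      </dataCacheClient>
      <system.web>
        <!-- store session state out of process, in the AppFabric cache -->
        <sessionState mode="Custom" customProvider="AppFabricCacheSessionProvider">
          <providers>
            <add name="AppFabricCacheSessionProvider"
                 type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
                 cacheName="default" />
          </providers>
        </sessionState>
      </system.web>
    </configuration>

As the answer above notes, treat the cache cluster's availability (lead hosts, number of nodes) as part of your session-state durability story, not just as a cache.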
