Our application is using JCS for caching frequently used data.
I wanted to know whether JCS maintains (or can generate) statistics such as cache usage, cache misses, etc.?
We recently decided to parallelise some jobs using multithreading. Although the threads will be working on different data, they will be sharing the same cache (this I figured since they run in the same JVM and the JCS cache is a singleton class; please correct me if I am missing something here).
So I need to analyse if we need to change the cache configuration for the added load.
Thanks for your help!
The following web site gives a detailed introduction; scroll down to the bottom of the page:
Click here
If you want everything in a single call, use compCache.getStatistics(); refer to the API documentation for more information.
There is a JSP available that shows you some of the information you are looking for like cache hits, misses, total memory used, etc. You may have to edit it a little to fit into your project, but it's handy.
http://svn.apache.org/repos/asf/commons/proper/jcs/trunk/src/java/org/apache/jcs/admin/JCSAdmin.jsp
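To give a feel for what such statistics boil down to, here is a minimal, illustrative sketch of hit/miss counters with a derived hit ratio. The class and method names are made up for this example and are not part of the JCS API; JCS exposes this kind of data through its own statistics objects and the admin JSP.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative stand-in for the counters a cache statistics API exposes.
class CacheStats {
    private final AtomicLong hits = new AtomicLong();
    private final AtomicLong misses = new AtomicLong();

    void recordHit()  { hits.incrementAndGet(); }
    void recordMiss() { misses.incrementAndGet(); }

    long getHitCount()  { return hits.get(); }
    long getMissCount() { return misses.get(); }

    // Hit ratio = hits / (hits + misses); 0.0 when nothing recorded yet.
    double getHitRatio() {
        long total = hits.get() + misses.get();
        return total == 0 ? 0.0 : (double) hits.get() / total;
    }
}
```

Because the counters are atomic, the same statistics object can be shared by the multiple threads mentioned in the question without extra locking.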
This question is directed towards Jeroen and is a follow-up to this answer: https://stackoverflow.com/a/12482918/177984
Jeroen wrote "the server does caching" .. "so if enough memory is available it will automatically be available from memory."
How can I confirm if an object is cached 'in-memory' or not? From what I can tell (by performance) all of my objects are being read from disk. I'd like to have things read from memory to speed up data load times. Is there a way to view what's in the in-memory cache? Is there a way to force caching objects in-memory?
Thanks for your help.
The OpenCPU project is rapidly evolving. Things have changed in OpenCPU 1.0. Have a look at the website for the latest information: http://www.opencpu.org.
The answer that you cited is outdated. Currently indeed all the caching is done on disk. In a previous version, OpenCPU used Varnish to do caching, which is completely in-memory. However this turned out to make things more complicated (especially https), and performance was a bit disappointing (especially in comparison with fast disks these days). So now we're back at nginx which caches on disk, but is much more mature and configurable as a web server, and has other performance benefits.
I'm working on ASP.net MVC3 Web application that is facing scalability issue.
To improve performance I want to store dynamically generated pages as HTML and serve them directly from the generated HTML, rather than querying the database for each page request.
I'm sure this will dramatically increase performance.
Can any one share any hint / example / tutorial on how to do it? And what are challenges?
I would also like to know how others handle performance for large e-commerce sites with at least a thousand categories, 200k products, and 200-500 concurrent visitors. What are the best approaches?
Thanks in advance.
You should have a look at Improving Performance with Output Caching.
It provides several ways to cache the output of your controllers like this:
[OutputCache(Duration=10, VaryByParam="none")]
public ActionResult Index()
{
return View();
}
You don't need to do that, just enable the output cache. The effect is the same: instead of running all your page-generation logic, you will be retrieving a static copy, but from the cache rather than from disk.
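Conceptually, what the [OutputCache] attribute does is cache the rendered page per key (for example, the URL) and reuse it until the duration expires. The sketch below illustrates that idea in plain Java; the names are invented for the example and are not ASP.NET's internals.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative output cache: rendered pages keyed by URL, each with an expiry.
class OutputCache {
    private static class Entry {
        final String html;
        final long expiresAt;
        Entry(String html, long expiresAt) { this.html = html; this.expiresAt = expiresAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long durationMillis;

    OutputCache(long durationMillis) { this.durationMillis = durationMillis; }

    // Return the cached page if still fresh; otherwise render and cache it.
    String get(String url, Supplier<String> render) {
        Entry e = cache.get(url);
        long now = System.currentTimeMillis();
        if (e != null && now < e.expiresAt) {
            return e.html;               // served from cache, no rendering
        }
        String html = render.get();      // the expensive part: DB queries, view rendering
        cache.put(url, new Entry(html, now + durationMillis));
        return html;
    }
}
```

The second request for the same URL within the duration skips the render entirely, which is exactly the saving the question is after.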
I would look at implementing a cache system for commonly viewed pages. The .Net platform has some really nice cache libraries that can be used that would allow you to manage the cache in real time.
Cache Best Practices
msdn.microsoft.com/en-us/library/aa478965.aspx
I might also take a look at writing a RESTful API that can be load-balanced across multiple nodes of a cluster.
Do you know where your capacity bottleneck is?
My guess is your DB is the bottleneck, but unless you measure to find out where the bottleneck is you're likely to spend a lot of time optimising things that may not make much difference.
The first thing to do is get hold of a copy of PAL and monitor the web server and DB to see what it tells you.
Also run SQL queries to diagnose the most expensive and frequent queries.
Measure your actual page generation times and follow them down the stack to see which calls are being made, which are expensive, and which can be cached.
Rather than output caching, I'd generally introduce a caching layer in front of the web servers and a caching layer between the app server and the DB, but without measuring it's very hard to judge.
If you want to know more about caching in general, I'd read this answer I wrote a while back: How to get started with web caching, CDNs, and proxy servers?
I am new to caching. What should I cache?
E.g., do I cache user info, since it is used frequently throughout the application (like in the header saying "welcome {username}")?
But most things are used quite frequently anyway. E.g., users have projects; these projects don't belong to everyone, but they will be used frequently by specific users. Do I cache them too? Won't I end up caching nearly everything then?
Also, regarding CRUD: with Doctrine queries I can just use $query->useResultCache(true), but what happens when I update or delete an entity? I need to somehow update my cache too. How?
The basic principle of caching is to hold frequently used data that doesn't change often in memory, to reduce database work.
It's more convenient to use PHP session variables to hold basic things like the username.
In the case of projects, if they don't change often and are retrieved frequently by users, it would be a good idea to cache them. How long project info stays cached depends on how frequently it changes.
Also note that if the info you present to users is vital or time-sensitive, you should use caching cautiously.
Check this reference page for basic information on caching http://www.doctrine-project.org/docs/orm/2.0/en/reference/dql-doctrine-query-language.html#cache-related-api
Or check http://www.doctrine-project.org/docs/orm/2.0/en/reference/caching.html for detailed explanation.
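On the update/delete part of the question: the usual pattern is to invalidate (or overwrite) the cached entry whenever the entity changes, so stale data is never served. With Doctrine's result cache the equivalent is deleting the cached result by its cache id. The class below is an illustrative stand-in, not Doctrine's API, just to show the shape of the pattern.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative entity cache with explicit invalidation on write.
class EntityCache {
    private final Map<Long, String> byId = new ConcurrentHashMap<>();

    String get(Long id) { return byId.get(id); }

    void put(Long id, String entity) { byId.put(id, entity); }

    // Called from the update/delete path, alongside the database write,
    // so the next read repopulates the cache with fresh data.
    void invalidate(Long id) { byId.remove(id); }
}
```

The key point is that every code path that writes to the database must also call the invalidation; if even one path forgets, readers can see stale data until the entry expires.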
We have 3 front-end servers each running multiple web applications. Each web application has an in memory cache.
Recreating the cache is very expensive (>1 min), so we repopulate it with a web service call to each web application on each front-end server every 5 minutes.
The main problem with this setup is maintaining the target list for updating and the cost of creating the cache several times every few minutes.
We are considering using AppFabric or something similar but I am unsure how time consuming it is to get up and running. Also we really need the easiest solution.
How would you update an expensive in memory cache across multiple front-end servers?
The problem with memory caching is that it's unique to the server. I'm going with the idea that this is why you want to use AppFabric. I'm also assuming that you're re-creating the cache every few minutes to keep the in memory caches in sync across all servers. With all this work, I can well appreciate that caching is expensive for you.
It sounds like you're doing a lot of work that probably isn't necessary. This article has some detail about the caching mechanisms available within SharePoint. You may be interested in the output cache discussed near the top of the article. You may also want to read the linked TechNet article and the linked article called "Custom Caching Overview".
The only SharePoint way to do that is to use the Service Application infrastructure. The only problem is that it takes some time to understand how it works, and it's too complicated to build from scratch. You might consider downloading one of the existing sample applications and renaming classes/GUIDs to match your naming conventions. I used this one: http://www.parago.de/2011/09/paragoservices-a-sharepoint-2010-service-application-sample/. In this case you can have a single cache per N front-end servers.
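Whatever infrastructure ends up holding the cache, one way to make the periodic repopulation cheap for readers is to build the new contents off to the side and swap them in atomically, so requests never see a half-built cache and never block on the rebuild. A minimal sketch, with invented names, under the assumption that a full-map swap fits your data size:

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Illustrative cache whose contents are replaced wholesale by a
// background rebuild; readers always see a complete, consistent map.
class SwappableCache<K, V> {
    private final AtomicReference<Map<K, V>> current =
        new AtomicReference<>(Map.of());

    V get(K key) { return current.get().get(key); }

    // The expensive rebuild runs entirely outside the published
    // reference; readers keep using the old map until the single
    // atomic swap below.
    void rebuild(Supplier<Map<K, V>> loader) {
        current.set(Map.copyOf(loader.get()));
    }
}
```

A scheduled task (e.g. every 5 minutes, matching the setup above) would call rebuild(); the swap itself is one reference assignment, so the >1 min rebuild cost never appears on the request path.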
I'm currently storing generated HTML pages in a memcached in-memory cache. This works great, however I am wanting to increase the storage capacity of the cache beyond available memory. What I would really like is:
memcached semantics (i.e. not reliable, just a cache)
memcached api preferred (but not required)
large in-memory first level cache (MRU)
huge on-disk second level cache (main)
evicted from on-disk cache at maximum storage using LRU or LFU
proven implementation
In searching for a solution I've found the following candidates, but they all miss the mark in some way. Does anyone know of either:
other options that I haven't considered
a way to make memcachedb do evictions
Already considered are:
memcachedb
best fit but doesn't do evictions: explicitly "not a cache"
can't see any way to do evictions (either manual or automatic)
tugela cache
abandoned, no support
don't want to recommend it to customers
nmdb
doesn't use memcache api
new and unproven
don't want to recommend it to customers
Tokyo Cabinet/Tokyo Tyrant?
It seems that later versions of memcachedb can be cleaned up manually, if desired, using the rget command and storing the expiry time in the data record. Of course, this means that I pound both the server and network with requests for the entire data block even though I only want the expiry time. Not the best solution, but seemingly the only one currently available.
I worked with EhCache and it works very well. It has an in-memory cache and disk storage with different eviction policies. It's a mature library with good support. There is a memcached API that wraps EhCache, developed specifically for GAE support.
Regards,
Jonathan.
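The two-level layout the question asks for (a small in-memory MRU/LRU front cache that evicts into a much larger second-level store, itself LRU-evicted) can be sketched as below. The second level here is just another bounded map standing in for the on-disk tier; EhCache's memory store plus disk store works along these lines. All names are invented for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative two-level cache: entries evicted from the small L1
// (memory) tier are demoted into the larger L2 ("disk") tier, and
// promoted back to L1 when hit there. L2 overflow is dropped for good.
class TwoLevelCache<K, V> {
    private final Map<K, V> l2;   // larger second-level tier, LRU-evicted
    private final Map<K, V> l1;   // small memory tier, demotes into l2

    TwoLevelCache(int l1Size, int l2Size) {
        // access-order LinkedHashMaps give LRU behaviour via removeEldestEntry
        l2 = new LinkedHashMap<>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > l2Size;               // evicted permanently
            }
        };
        l1 = new LinkedHashMap<>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > l1Size) {
                    l2.put(eldest.getKey(), eldest.getValue()); // demote to L2
                    return true;
                }
                return false;
            }
        };
    }

    void put(K key, V value) { l1.put(key, value); }

    V get(K key) {
        V v = l1.get(key);
        if (v == null) {
            v = l2.remove(key);          // promote back to memory on an L2 hit
            if (v != null) l1.put(key, v);
        }
        return v;
    }
}
```

Note this sketch is not thread-safe and keeps both tiers in memory; a real implementation would serialize the L2 tier to disk and synchronize access, which is exactly the machinery a proven library like EhCache already provides.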