I've set up an Ehcache cache that uses a disk store to cache some files. This is working, and I can see the cache file created on disk, but I want the same behaviour with a Terracotta server, so the cache can be accessed by multiple clients.
I've set up the Terracotta server and tweaked the Ehcache configuration, and I can see that the cache is working, but I'm not sure whether it is using memory or disk. I only want to use disk for this cache.
I also get some warnings like this one: WARN - Asking for a large amount of memory: 26179740 bytes
Terracotta config:
<servers>
  <mirror-group>
    <server host="localhost" name="localhost">
      <data>/opt/terracotta/data</data>
      <tsa-port>9510</tsa-port>
      <management-port>9540</management-port>
      <tsa-group-port>9530</tsa-group-port>
      <dataStorage size="2g">
        <offheap size="100m"/>
        <hybrid/>
      </dataStorage>
      <logs>stdout:</logs>
    </server>
  </mirror-group>
I'm configuring ehcache programmatically, and I'm certain the following config is wrong, but maybe close to what is needed.
TerracottaConfiguration config = new TerracottaConfiguration()
        .clustered(true)
        .compressionEnabled(true);

Cache httpCache = new Cache(new CacheConfiguration()
        .name(HTTP_CACHE)
        .maxEntriesLocalHeap(1)
        .memoryStoreEvictionPolicy(MemoryStoreEvictionPolicy.LRU)
        .diskExpiryThreadIntervalSeconds(Properties.CACHE_HTTP_EXPIRY)
        .persistence(new PersistenceConfiguration().strategy(PersistenceConfiguration.Strategy.DISTRIBUTED))
        .terracotta(config));
Given the configuration and the version information provided in the comments:
The open source Terracotta server only uses in-memory storage.
<dataStorage size="2g">
<offheap size="2g"/>
</dataStorage>
In this example, you have 2 GB of data storage, all of it served from offheap memory.
And of course there will be no on-disk content.
This means that if the server is shut down, all data is lost.
Of course you can have two servers in a single mirror group to gain high availability.
With the Enterprise features, you can have data persisted on disk to enable restartability.
<dataStorage size="2g">
<offheap size="200m"/>
<hybrid/>
</dataStorage>
The example above declares 2 GB of storage, of which 200 MB will be served from memory and the rest from disk.
Note that in order to have full server restartability, you also need to enable it with <restartable enabled="true"/> in the Terracotta server configuration (see the sketch below).
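As a rough sketch, combining the hybrid storage example with restartability could look like the following; the placement of the <restartable> element can vary between config schema versions, so verify it against the tc-config reference for your kit:
<servers>
  <server host="localhost" name="localhost">
    <data>/opt/terracotta/data</data>
    <dataStorage size="2g">
      <offheap size="200m"/>
      <hybrid/>
    </dataStorage>
  </server>
  <!-- shown here directly under <servers>; check the exact placement for your version -->
  <restartable enabled="true"/>
</servers>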
For more details on all this, please refer to the product documentation.
Note also that you should use the same versions for the client and the server. While the 4.3 line supports different client and server versions, that is aimed at rolling upgrades and is not a recommended long-running setup.
Related
I have an ActiveMQ Artemis cluster (2 nodes) with active/backup HA (shared-store mode), version 2.17.0.
The shared store is set up with NFS, mounted on $ARTEMIS_INSTANCE/data. In broker.xml I specified the following settings, which are pretty standard:
<paging-directory>data/paging</paging-directory>
<bindings-directory>data/bindings</bindings-directory>
<journal-directory>data/journal</journal-directory>
<large-messages-directory>data/large-messages</large-messages-directory>
According to this official documentation page, there is a login.conf file in the etc directory which specifies the users & roles files. I have the following contents:
activemq {
org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule required
debug=false
reload=true
org.apache.activemq.jaas.properties.user="artemis-users.properties"
org.apache.activemq.jaas.properties.role="artemis-roles.properties";
};
Well, everything seems to work fine with it, but I noticed that every time I want to create a new user/role, I have to create it twice, once on each node. If I had replication HA mode and 6 nodes, I would need to create the same user/role 6 times (once for each node).
Am I missing something here?
Then I came up with the idea of simply moving artemis-users.properties and artemis-roles.properties to the $ARTEMIS_INSTANCE/data directory and modifying the login.conf file accordingly, so that I only have to create a user/role once and the created entries are accessible from the other node(s):
activemq {
org.apache.activemq.artemis.spi.core.security.jaas.PropertiesLoginModule required
debug=false
reload=true
org.apache.activemq.jaas.properties.user="../data/artemis-users.properties"
org.apache.activemq.jaas.properties.role="../data/artemis-roles.properties";
};
Since this is a shared store, it kind of makes sense to me to keep the files there. I did quite a bit of testing and everything seems to work fine; I don't think there will be any race conditions this way.
Again, am I missing anything? Any suggestions or better workarounds?
The PropertiesLoginModule is provided by default because it is simple and straightforward to configure for basic use-cases. However, it's not really designed for production use across a cluster. Typically you'd use an LDAP server (or some equivalent), which is a central store for all your user & role data. As the documentation states:
In general, using properties files and broker-centric user management for anything other than very basic use-cases is not recommended. The broker is designed to deal with messages. It's not in the business of managing users, although that functionality is provided at a limited level for convenience. LDAP is recommended for enterprise level production use-cases.
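If you do go the LDAP route, the JAAS setup is analogous: you swap the PropertiesLoginModule for the LDAPLoginModule in login.conf. A rough sketch, where the connection URL, credentials, DNs, and search filters are placeholders you'd adapt to your directory:
activemq {
   org.apache.activemq.artemis.spi.core.security.jaas.LDAPLoginModule required
      debug=false
      initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory
      connectionURL="ldap://ldap.example.com:389"
      connectionUsername="cn=admin,dc=example,dc=com"
      connectionPassword=secret
      authentication=simple
      userBase="ou=users,dc=example,dc=com"
      userSearchMatching="(uid={0})"
      roleBase="ou=roles,dc=example,dc=com"
      roleName=cn
      roleSearchMatching="(member={0})";
};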
You are, of course, free to use the PropertiesLoginModule in more complex use-cases (e.g. like yours). I think putting the properties files on shared storage is fine and not likely to lead to problems.
Is there a need to have Redis snapshots when only using it as a session replication service with Spring Session? I read about Redis Persistence but it seems "persistence" only means backups, and that it is not fully required.
I have a problem in my application that no matter how many times I will call FLUSHALL, it will keep reloading old sessions somehow. I suspect from my RDB file.
Can I just define everything as memory only? Is there any reliability/performance benefits to use an RDB file at all?
I have a problem in my application that no matter how many times I will call FLUSHALL, it will keep reloading old sessions somehow. I suspect from my RDB file.
FLUSHALL will also remove the data from the RDB file. I think the data might be written by other processes after you called FLUSHALL.
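If you want to confirm that, you can watch what hits the server right after a flush with the MONITOR command (run it only briefly, since it has a real performance cost; the spring:session prefix below is just the Spring Session default key prefix and may differ in your setup):
redis-cli flushall
redis-cli monitor | grep -i "spring:session"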
Can I just define everything as memory only?
Yes, you can disable saving the DB to disk in the config file. By default, Redis saves data to disk according to the save directives in the config file. For example:
save 900 1
save 300 10
save 60 10000
To disable saving, just comment out these save directives or add an empty save directive (i.e. save ""):
# save 900 1
# save 300 10
# save 60 10000
You can also disable the AOF persistence log in the config file:
appendonly no
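If you prefer not to edit the config file, the same settings can be changed at runtime (a quick sketch; runtime changes are lost on restart unless you also persist them with CONFIG REWRITE, which requires Redis to have been started with a config file):
redis-cli config set save ""
redis-cli config set appendonly no
redis-cli config rewrite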
Is there any reliability/performance benefits to use an RDB file at all?
Reliability: Since Redis saves your data to disk, you can recover the data when you restart Redis. However, if Redis goes down unexpectedly, you might lose some data, depending on the save directives in your config file.
Performance: There's a performance penalty, especially when you are using RDB persistence with a lot of data in memory.
You can get more details on Redis Persistence from its website.
I am using Redis for pub/sub as well as for a server-side cache. I mean my app server has a Redis server running as one process (functioning as a cache as well). I have several thin clients (running a Redis client) connected to this app server in pub/sub mode. I would like to know where Redis stores the cache data: in the server alone, or will there be a copy in the clients as well? Also, is it a good idea to use Redis in this fashion if there are close to 100 Redis clients connected to the server through a pub/sub channel?
Thanks
Redis is a (sort of) in-memory NoSQL database, but I found that my copy (running on Linux) dumps to /var/lib/redis/dump.rdb.
Redis can manage a really large number of connections, and by default it is an in-memory store (keeping everything in RAM is what makes it so fast).
At the same time it can be configured as a persistent store, dumping the cached data to disk (every X seconds or every X updated keys).
So it can be configured depending on your needs; have a look here.
All the cache data will be stored in the memory of the server specified in the config of the running Redis server.
The clients do not hold any data; they only access the data stored by the Redis server.
I just installed Redis on a Mac via Homebrew. Without any configuration, I found the dump.rdb in my working directory (where I launched redis-server).
You can figure that out with the config command.
redis-cli config get dir
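The output looks something like this (the directory shown is just an example from one machine; CONFIG GET dbfilename gives you the name of the dump file itself):
$ redis-cli config get dir
1) "dir"
2) "/var/lib/redis"
$ redis-cli config get dbfilename
1) "dbfilename"
2) "dump.rdb"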
However, as far as I know, pub/sub data is volatile and not stored or cached in Redis at all. If you need that, you should look at a dedicated message broker such as RabbitMQ.
On my Ubuntu, it was at /var/lib/redis/dump.rdb. On my macOS (installed via brew), it was at /usr/local/var/db/redis/dump.rdb.
Default location
/var/lib/redis/
Redis keeps all data in the memory of the server and only periodically saves data to disk.
For the server<>client flow, all data goes through the server.
Redis can handle a large number of clients; the default limit is 10,000 connections.
If you need more, you must reconfigure the OS, server settings, etc. - http://redis.io/topics/clients
As I understand the question, your concern is about the Redis server memory and the client (application) memory.
I would like to know where Redis stores the cache data: in the server alone, or will there be a copy in the clients as well?
Redis 6's client-side caching is what you are actually looking for. The server and the application both store copies of the data and keep them in sync through the protocol. Although there are a few ways to accomplish this, the following example (taken from the docs) should help you understand the mechanism.
Client 1 -> Server: CLIENT TRACKING ON
Client 1 -> Server: GET foo
(The server remembers that Client 1 may have the key "foo" cached)
(Client 1 may remember the value of "foo" inside its local memory)
Client 2 -> Server: SET foo SomeOtherValue
Server -> Client 1: INVALIDATE "foo"
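You can reproduce this with two redis-cli sessions against a Redis 6+ server. Tracking relies on the RESP3 protocol, which redis-cli enables with the -3 flag (the key foo and its value are just examples):
# terminal 1
redis-cli -3
> CLIENT TRACKING ON
OK
> GET foo
"SomeValue"
# terminal 2
redis-cli SET foo SomeOtherValue
Back in terminal 1 an invalidation message for "foo" is printed; note that redis-cli may only display the push message the next time it reads from the connection, e.g. when you run another command.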
Hope this helps. See the docs for more details.
I am using memcache.
I want to understand what is stored in the Magento cache and how.
Does Magento store cache variables with website scope or store scope?
I have googled and grepped the code but couldn't conclude anything.
Could someone please direct me to the relevant links and paths?
Thanks & Regards,
Saurabh
If you go to the Cache Management section of the admin area you can see what it caches (configuration, layout configuration, block HTML output, translations, EAV types, etc.). I am no expert on Magento's caching mechanisms but here are a few random tidbits that might be helpful (maybe). (Also note that I am only familiar with Magento 1.3.x, not 1.4.x, so things could have changed.)
The caching is actually stored in the var/cache directory. There are a ton of directories in there (mage--0, mage--1, mage--2) and each directory has the cache files. Do a ls var/cache/mage*/* to see all the files.
Configuration - The source for the configuration is varied. Your app/etc/local.xml and all of the config.xml files (in each module's etc dir) are combined into one big configuration object. Then Magento reads from the core_config_data table to update the configuration object. Then the configuration is written to a cache file so that the next time a request is made it doesn't need to open a ton of config files and hit the database. This info gets stored in a bunch of files under var/cache. For some insight do an ls var/cache/mage*/*CONF*.
Layout - This is a lot like the configuration... there are a bunch of xml files in the app/design/frontendOrAdminhtml/yournamespace/layout/ directory and all these are merged into one layout configuration object, then cached in the cache directory.
Block HTML - The actual html generated by a block is cached. Each block is able to decide how long it is going to be cached.
Lastly, to (not really) answer your question about whether the cache is per website or per store: I can't really say, since I haven't had the need to set up a multi-website/multi-store shop yet. It looks like there may be some store/website-specific files, but I can't see that they are organized in a logical way. For example, in one of my instances I see a var/cache/mage--f/mage---LAYOUT_FRONTEND_STORE0_DEFAULT_BLANK_SEO file and a var/cache/mage--f/mage---LAYOUT_FRONTEND_STORE1_DEFAULT_BLANK_SEO... but then again, I only have one store configured and those two files have the same contents. Good luck with that!
You could also use some of the great memcached analysis and reporting tools available:
http://code.google.com/p/memcached/wiki/Tools
The best solution I have come up with is to use a two level cache.
Consult app/etc/local.xml.additional to see how to put memcached server nodes in there. Note that within the <servers> tag you will have to have tags like <server1> and <server2> encapsulating each memcached node's settings.
<cache>
<backend>memcached</backend>
<slow_backend>database</slow_backend>
</cache>
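For reference, here is a sketch of what the full <cache> section could look like once the memcached nodes from local.xml.additional are added (the hosts and ports are placeholders; check the local.xml.additional shipped with your Magento version for the exact node names and any extra options):
<cache>
  <backend>memcached</backend>
  <slow_backend>database</slow_backend>
  <memcached>
    <servers>
      <server1>
        <host><![CDATA[10.0.0.1]]></host>
        <port><![CDATA[11211]]></port>
        <persistent><![CDATA[1]]></persistent>
      </server1>
      <server2>
        <host><![CDATA[10.0.0.2]]></host>
        <port><![CDATA[11211]]></port>
        <persistent><![CDATA[1]]></persistent>
      </server2>
    </servers>
  </memcached>
</cache>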
In this way all cache is shared.
To clear it, the way I do it is:
1. Shut down Apache.
2. Connect to MySQL, switch to the Magento DB, and run truncate core_cache; truncate core_cache_tag;.
3. Bounce the memcached nodes.
4. Restart Apache, but keep it out of the load balancer until I have hit it at least once to generate the APC opcode cache. Otherwise the load can shoot through the roof.
This all seems extreme but I have found it works for me. Clearing the cache through the backend is REALLY slow. I have around 100k entries in the core_cache table and close to 1 million entries in core_cache_tag. If I don't do it this way I sometimes get strange behavior.
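For concreteness, those steps might look roughly like this on a typical LAMP box (the service names, database credentials and shop URL are placeholders):
sudo service apache2 stop
mysql -u magento -p magento_db -e "TRUNCATE core_cache; TRUNCATE core_cache_tag;"
sudo service memcached restart
sudo service apache2 start
curl -s http://shop.example.com/ > /dev/null    # hit the site once to warm the APC opcode cache before re-adding the node to the load balancer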
Your Memcache configuration in ./app/etc/local.xml will dictate what Memcache is actually caching.
If you are only using the single-level cache (without <slow_backend>), then Magento will store its cache (in its entirety) in Memcache.
HOWEVER, without the slow_backend defined, it is caching content without cache_tags - i.e. without the ability to differentiate cache items (e.g. configuration, blocks, layouts, translations etc.).
So, without the <slow_backend> defined, you cannot refresh caches individually; in fact, you'll almost always have to rely on "Flush Cache Storage" to actually see updates take effect.
We wrote a nice article here which covers your very issue - http://www.sonassi.com/knowledge-base/magento-knowledge-base/what-is-memcache-actually-caching-in-magento/
Memcached is a distributed memory caching system. It speeds up websites having large dynamic databases by storing database objects in Dynamic Memory to reduce the pressure on a server whenever an external data source requests a read. A Memcached layer reduces the number of times database requests are made.
Configuring Memcached in Magento 2
Magento 2 also supports Memcached for caching objects, but it isn't enabled by default. You need to make some simple changes to the $Magento2Root/app/etc/env.php file to enable it.
In env.php, you will see a large number of PHP arrays with different settings and configurations. Open the file in your favorite code editor and locate the following code:
'session' =>
array (
  'save' => 'files',
),
Modify this chunk as:
'session' =>
array (
  'save' => 'memcached',
  'save_path' => '<memcache ip or host>:<memcache port>'
),
Note that the default value for the memcache host is 127.0.0.1 and the default port is 11211, so the default save_path is 127.0.0.1:11211.
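With those defaults, the block ends up looking like this:
'session' =>
array (
  'save' => 'memcached',
  'save_path' => '127.0.0.1:11211'
),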
For the complete manual, please see:
https://www.cloudways.com/blog/magento-2-memcached/
https://devdocs.magento.com/guides/v2.4/config-guide/memcache/memcache_magento.html
Let's say I have a bunch of users who all access the same set of files, that have permission system:anyuser. User1 logs in and accesses some files, and then logs out. When User2 logs in and tries to access the same files, will the cache serve the files, or will it be cleared between users?
The cache should serve the files (in the example above).
How long a file persists in the OpenAFS cache manager depends on how the client is configured; variables include the configured size of the cache, whether or not the memcache feature is enabled, and how "busy" the client is.
If OpenAFS memcache (cache chunks stored in RAM) is enabled, then the cache is cleared upon reboot. With the more traditional disk cache, the cache can persist across reboots. Aside from that key difference files persist in the cache following the same basic rules. The cache is a fixed size stack, recently accessed files stay in the cache and older files are purged as needed when newer files are requested.
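If you want to inspect or clear the client cache yourself, the fs utility has a few relevant subcommands (a quick sketch; the paths are just examples and option behaviour can vary slightly between OpenAFS releases):
fs getcacheparms                            # show how much of the configured cache is currently in use
fs flush /afs/example.com/some/file         # drop a single file from the cache
fs flushvolume /afs/example.com/some/dir    # drop everything cached from the volume containing that path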
More details are available in the OpenAFS wiki:
http://wiki.openafs.org/