Using Spring Session Redis without RDB snapshots

Is there a need to have Redis snapshots when only using it as a session replication service with Spring Session? I read about Redis Persistence, but it seems "persistence" only means backups and is not strictly required.
I have a problem in my application: no matter how many times I call FLUSHALL, it keeps reloading old sessions somehow. I suspect they come from my RDB file.
Can I just define everything as memory only? Are there any reliability/performance benefits to using an RDB file at all?

I have a problem in my application: no matter how many times I call FLUSHALL, it keeps reloading old sessions somehow. I suspect they come from my RDB file.
FLUSHALL also removes the data from the RDB file. I think the data might be written back by other processes after you call FLUSHALL.
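If you want to confirm that, redis-cli monitor prints every command the server receives in real time, so you can watch which client writes the session keys back after the flush.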
Can I just define everything as memory only?
Yes, you can disable saving the database to disk in the config file. By default, Redis saves data to disk according to the save directives in the config file. For example:
save 900 1
save 300 10
save 60 10000
To disable saving, just comment out these save directives or add an empty save directive (i.e. save ""):
# save 900 1
# save 300 10
# save 60 10000
You can also disable the AOF persistence log in the config file:
appendonly no
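On recent Redis versions you can also apply the same settings at runtime instead of editing the file, e.g. redis-cli config set save "" and redis-cli config set appendonly no (followed by config rewrite if you want the change persisted back into the config file).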
Are there any reliability/performance benefits to using an RDB file at all?
Reliability: since Redis saves your data to disk, you can recover the data when you restart Redis. However, if Redis goes down unexpectedly, you might lose some data, depending on the save directives in your config file.
Performance: there is a performance penalty, especially when you are using RDB persistence with a large dataset in memory.
You can get more details on Redis Persistence from its website.
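For context, here is a minimal sketch of the Spring side (assuming spring-session-data-redis and Lettuce are on the classpath; the class name, host and timeout are illustrative). Nothing session-related has to change when snapshots are disabled, since Spring Session only works against the live keyspace:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

// Sessions live only in the Redis keyspace; expiry is handled by key TTLs, not by RDB/AOF.
@Configuration
@EnableRedisHttpSession(maxInactiveIntervalInSeconds = 1800)
public class SessionConfig {

    @Bean
    public LettuceConnectionFactory redisConnectionFactory() {
        // Points at the memory-only Redis instance configured above
        return new LettuceConnectionFactory("localhost", 6379);
    }
}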

Related

Why would GlobalKTable with in-memory key-value store not be filled out at restart?

I am trying to figure out how GlobalKTable works and noticed that my in-memory key-value store is not filled after a restart. However, the documentation suggests that it should be filled after a restart, since the whole topic data is duplicated on the clients.
When I debug my application I see that there is a file at /tmp/kafka-streams/category-client-1/global/.checkpoint and it contains an offset for my topic. This is probably needed for stores that persist their data, to improve restarts; however, since there is an offset in this file, my application skips restoring its state.
How can I make sure that each restart or fresh start loads the whole data of my topic?
Because you are using the in-memory store, I assume you are hitting this bug: https://issues.apache.org/jira/browse/KAFKA-6711
As a workaround, you can delete the local checkpoint file for the global store -- this will trigger bootstrapping on restart. Or you can switch back to the default RocksDB store, as sketched below.
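For the second workaround, a minimal sketch (assuming Kafka Streams 2.x with String serdes; the topic and store names are illustrative): materializing the global table without an explicit in-memory supplier falls back to the default RocksDB-backed store, which restores correctly across restarts.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class CategoryTopology {

    public static StreamsBuilder build() {
        StreamsBuilder builder = new StreamsBuilder();

        // Default (RocksDB-backed) global store: not affected by KAFKA-6711.
        GlobalKTable<String, String> categories = builder.globalTable(
                "category-topic",
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("category-store"));

        // The in-memory variant that triggers the bug would instead use:
        // Materialized.as(Stores.inMemoryKeyValueStore("category-store"))
        return builder;
    }
}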

Where does Redis store the data

I am using Redis for pub/sub as well as for a server-side cache. My app server has the Redis server running as one process (functioning as a cache as well). I have several thin clients (running a Redis client) connected to this app server in pub/sub mode. I would like to know where Redis stores the cache data: on the server alone, or will there be a copy on the clients as well? Also, is it a good idea to use Redis in this fashion if there are close to 100 Redis clients connected to the server through a pub/sub channel?
Thanks
Redis is a (sort of) in-memory NoSQL database, but I found that my copy (running on Linux) dumps to /var/lib/redis/dump.rdb.
Redis can manage a really big number of connections; by default it is an in-memory store (it is so fast because it keeps everything in RAM).
But at the same time it can be configured as a persistent store, dumping cached data to disk (every x seconds or every x updated keys).
So it can be configured depending on your needs; have a look here.
All the cache data will be stored in the memory of the server specified in the config of the running Redis server.
The clients do not hold any data; they only access the data stored by the Redis server.
I just installed Redis on macOS via Homebrew. Without any configuration, I found that dump.rdb is in my working directory (where I launched redis-server).
You can figure that out with the config command.
redis-cli config get dir
However, as far as I know, pub/sub data is volatile and is not stored or cached in Redis at all. If you need that, you should look at a dedicated message broker such as RabbitMQ.
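To illustrate that fire-and-forget behaviour, here is a small sketch using the Jedis client (the client library, host and channel name are assumptions; any Redis client behaves the same way): a subscriber only receives messages published while it is connected, and the server keeps no copy afterwards.

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class PubSubDemo {

    public static void main(String[] args) {
        // Published before anyone subscribes: delivered to zero clients, then gone for good.
        try (Jedis publisher = new Jedis("localhost", 6379)) {
            publisher.publish("updates", "nobody is listening, so this message is lost");
        }

        // Subscriber: blocks and receives only messages published from this point on.
        try (Jedis subscriber = new Jedis("localhost", 6379)) {
            subscriber.subscribe(new JedisPubSub() {
                @Override
                public void onMessage(String channel, String message) {
                    System.out.println(channel + ": " + message);
                }
            }, "updates");
        }
    }
}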
On my Ubuntu, it was at /var/lib/redis/dump.rdb. On my macOS (installed via brew), it was at /usr/local/var/db/redis/dump.rdb.
Default location
/var/lib/redis/
Redis keeps all data in the server's memory and only occasionally saves data to disk.
For the server<->client flow, all data travels through the server.
Redis can handle a large number of clients; the default limit is 10,000.
If you need more, you must reconfigure the OS, server settings, etc. - http://redis.io/topics/clients
As I understood the question, your concern is about the Redis server memory and the client (application) memory.
I would like to know where Redis stores the cache data: on the server alone, or will there be a copy on the clients as well?
Redis 6's client-side caching is what you are actually looking for. The server and the application both store copies and keep them in sync through a protocol. Even though there are a few ways to accomplish this, the following example mechanism (picked from the docs) will help you understand it:
Client 1 -> Server: CLIENT TRACKING ON
Client 1 -> Server: GET foo
(The server remembers that Client 1 may have the key "foo" cached)
(Client 1 may remember the value of "foo" inside its local memory)
Client 2 -> Server: SET foo SomeOtherValue
Server -> Client 1: INVALIDATE "foo"
Hope this helps. See the docs for more elaboration.

Grails why is my cache getting invalidated?

I have cached some domain instances and queries in my Grails application. I would expect that after the first time the queries are fired against the DB, subsequent calls will only hit the cache.
But I am seeing that the queries periodically hit the DB (every 5-6 calls). No data is being updated or inserted into the database in the meantime. I am using p6spy to check all the queries that are logged and do not see any updates or inserts happening.
Are there any additional settings I need to tweak?
Currently the domain class has:
static mapping = {
cache true
}
Queries like findBy* have the [cache:true] set.
If you don't configure Ehcache, your caches will use the default timeout of 120 seconds. See http://ehcache.org/ehcache.xml for a well-commented example file that's the same as the default file that's in the Ehcache jar.
You can configure timeouts, max elements in memory, whether to spill over to disk, etc. by creating an ehcache.xml in your application. Put it in src/java and it will be copied into the classpath, and Ehcache will see it and use yours instead of its defaults.
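For example, an entry along the lines of <cache name="com.example.Book" maxElementsInMemory="10000" eternal="false" timeToLiveSeconds="3600" overflowToDisk="false"/> (the class name and values are only illustrative) keeps that domain class's cache region for an hour instead of the 120-second default.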

Magento Cache

I am using memcache.
I want to understand what is stored in the Magento cache and how.
Does Magento store cache variables with website scope or store scope?
I have googled and grepped the code but couldn't conclude anything.
Please can someone direct me to the correct links and paths?
Thanks & Regards,
Saurabh
If you go to the Cache Management section of the admin area you can see what it caches (configuration, layout configuration, block html output, translations, eav types, etc). I am no expert on Magento's caching mechanisms but here are a few random tidbits that might be helpful (maybe). (Also note that I am only familiar with Magento 1.3.x, not 1.4.x so things could have changed).
The caching is actually stored in the var/cache directory. There are a ton of directories in there (mage--0, mage--1, mage--2) and each directory has the cache files. Do a ls var/cache/mage*/* to see all the files.
Configuration - This source for the configuration is varied. Your app/etc/local.xml, and all of the config.xml files (that are in each module's etc dir) are combined together to make one big configuration object. Then Magento reads from the core_config_data table to update the configuration object. Then the configuration is written to a cache file so that next time a request is made it doesn't need to open a ton of config files and hit the database. Somehow this info gets stored in a bunch of files under var/cache. For some insight do a ls var/cache/mage*/*CONF*.
Layout - This is a lot like the configuration... there are a bunch of xml files in the app/design/frontendOrAdminhtml/yournamespace/layout/ directory and all these are merged into one layout configuration object, then cached in the cache directory.
Block HTML - The actual html generated by a block is cached. Each block is able to decide how long it is going to be cached.
Lastly, to (not really) answer your question about whether the cache is per website or per store, I can't really say, since I haven't had the need to set up a multi-website/multi-store shop yet. It looks like there may be some store/website-specific files, but I can't see that they are really organized in a logical way. For example, in one of my instances I see a var/cache/mage--f/mage---LAYOUT_FRONTEND_STORE0_DEFAULT_BLANK_SEO file and a var/cache/mage--f/mage---LAYOUT_FRONTEND_STORE1_DEFAULT_BLANK_SEO... but then again, I only have one store configured and those two files have the same contents. Good luck with that!
You could also use some of the excellent memcached analysis and reporting tools available:
http://code.google.com/p/memcached/wiki/Tools
The best solution I have come up with is to use a two-level cache.
Consult app/etc/local.xml.additional to see how to put memcached server nodes in there. Note that within the <servers> tag you will have to have tags like <server1> and <server2> encapsulating each memcached node's settings.
<cache>
<backend>memcached</backend>
<slow_backend>database</slow_backend>
</cache>
In this way all cache is shared.
To clear it, the way I do it is:
1. Shut down Apache.
2. Connect to MySQL, select the Magento DB, and run truncate core_cache; truncate core_cache_tag;.
3. I then bounce the memcached nodes.
4. I restart Apache but keep it out of the load balancer until I have hit it at least once to generate the APC opcode cache. Otherwise the load can shoot up through the roof.
This all seems extreme but I have found it works for me. Clearing cache using the backend is REALLY slow. I have around 100k entries in the core_cache table and close to 1 million entries in core_cache_tag. If I don't do it this way sometimes I get strange behavior.
Your Memcache configuration in ./app/etc/local.xml will dictate what Memcache is actually caching.
If you are only using the single-level cache (without <slow_backend>), then Magento will store its cache (in its entirety) in Memcache.
HOWEVER, without the <slow_backend> defined, it is caching content without cache_tags - i.e. without the ability to differentiate cache items.
E.g. configuration, blocks, layouts, translations etc.
So, without the <slow_backend> defined, you cannot refresh caches individually; in fact, you'll almost always have to rely on "Flush Cache Storage" to actually see updates take effect.
We wrote a nice article here which covers your very issue - http://www.sonassi.com/knowledge-base/magento-knowledge-base/what-is-memcache-actually-caching-in-magento/
Memcached is a distributed memory caching system. It speeds up websites with large dynamic databases by storing database objects in RAM, reducing the pressure on the server whenever an external data source requests a read. A Memcached layer reduces the number of database requests made.
Configure Memcached in Magento 2
Magento 2 also supports Memcached for caching objects but it isn’t enabled by default. You need to make simple changes to the $Magento2Root/app/etc/env.php file to enable it.
In env.php, you will see a large number of PHP arrays with different settings and configurations. Open the file in your favorite code editor and locate the following code:
'session' =>
array (
'save' => 'files',
),
Modify this chunk as:
'session' =>
array (
'save' => 'memcached',
'save_path' => '<memcache ip or host>:<memcache port>'
),
Note that the default value for the memcache host is 127.0.0.1 and the default value for the memcache port is 11211.
For the complete manual, please look at:
https://www.cloudways.com/blog/magento-2-memcached/
https://devdocs.magento.com/guides/v2.4/config-guide/memcache/memcache_magento.html

When is OpenAFS cache cleared?

Let's say I have a bunch of users who all access the same set of files, which have the system:anyuser permission. User1 logs in and accesses some files, and then logs out. When User2 logs in and tries to access the same files, will the cache serve the files, or will it be cleared between users?
The cache should serve the files (in the example above).
How long a file persists in the OpenAFS cache manager depends on how the client is configured; variables include the configured size of the cache, whether or not the memcache feature is enabled, and how "busy" the client is.
If OpenAFS memcache (cache chunks stored in RAM) is enabled, then the cache is cleared upon reboot. With the more traditional disk cache, the cache can persist across reboots. Aside from that key difference files persist in the cache following the same basic rules. The cache is a fixed size stack, recently accessed files stay in the cache and older files are purged as needed when newer files are requested.
More details are available in the OpenAFS wiki:
http://wiki.openafs.org/
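On a client you can typically check the configured cache size and current usage with fs getcacheparms; the cache location and size are set in the cacheinfo file (commonly /usr/vice/etc/cacheinfo) or via afsd options such as -blocks and -memcache.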
