I am developing the Django backend of an iOS app. I will use cached sessions backed by Redis. Once a user logs in, I save the session in the Redis cache (backed up by MySQL). I just want to know, for the long run: can I use Redis replication to keep a copy of the cached sessions, in case I scale Redis into a master/slave setup in the future? Or should I always access cached values from one particular Redis server?
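For reference, a minimal sketch of what such a setup looks like in Django settings, assuming the django-redis package (the Redis location and database name are placeholders):

```python
# settings.py fragment (illustrative; assumes the django-redis package is installed;
# the Redis location and database name are placeholders)
SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"  # cache first, MySQL as backup
SESSION_CACHE_ALIAS = "default"

CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    }
}

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "myapp",  # hypothetical database name
    }
}
```

With the cached_db engine, Django writes each session to both the cache and the database and falls back to MySQL on a cache miss, so losing the Redis cache does not log users out.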
It makes sense to keep a copy via Redis replication in a master/slave setup, since Redis does not yet offer sharding the way MongoDB does (AFAIK). So you have to read your sessions from one particular Redis server, unless you want to manage several Redis servers manually.
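If you do go the master/slave route later, pointing a replica at the master is a one-line configuration change. A minimal sketch of the replica's redis.conf, with a hypothetical master address:

```
# replica's redis.conf (10.0.0.1 is a hypothetical master address)
replicaof 10.0.0.1 6379   # on Redis versions before 5.0 the directive is "slaveof"
replica-read-only yes     # replicas serve reads; writes still go to the master
```

The replica performs an initial sync from the master and then streams subsequent writes, so session reads can be served from either node.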
We currently have a Java project on Spring Boot 2 with a PostgreSQL database and Spring Data JPA (Hibernate).
Requirements for the new architecture:
On each of N computers we have a workplace. Each workplace runs the same program with a different configuration (a configured client for the distributed database).
The number of computers is not big - around 10-20 PCs. The database must be scalable (a lot of data can be stored on disk, ~1-2 TB).
Every day, up to 1 million rows can be inserted into the database from a single workplace.
Each workplace works with the distributed database - that is, each node must be able to read and write data modified by the others, and make decisions at runtime based on data modified by another workplace (transactionally).
The datastore (the on-disk database archive) must be archivable and copyable as a backup snapshot.
The project must be portable to the new architecture with Spring Data JPA 2 and database migrations managed by Liquibase. It must work on Windows and Linux.
A quick overview shows me that the most popular free distributed databases right now are:
1) Redis
2) Apache Ignite
3) Hazelcast
I need help understanding how to architect the described system.
First of all, I tried Redis and Ignite. Redis starts easily, but it works like a simple IMDG (in-memory data grid), while I need to store all the data in a persistent on-disk database (like Ignite's native persistence). Is there a way to use Redis with the existing PostgreSQL database? Postgres would be synchronized with all nodes, and Redis would serve as an in-memory cache for the fresh data produced by each workplace, flushed to disk every 10 minutes.
1) Is this possible? How?
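The flow described above (an in-memory cache over a persistent store, flushed periodically) can be sketched in a few lines. This is a write-behind sketch with plain dicts standing in for Redis and PostgreSQL, not real client code:

```python
# Write-behind sketch: plain dicts stand in for Redis (cache) and PostgreSQL (store).
class WriteBehindCache:
    def __init__(self, store):
        self.store = store   # persistent store (PostgreSQL in the real system)
        self.cache = {}      # hot in-memory data (Redis in the real system)
        self.dirty = set()   # keys changed since the last flush

    def get(self, key):
        # Cache-aside read: fall back to the store and populate the cache on a miss.
        if key not in self.cache:
            self.cache[key] = self.store.get(key)
        return self.cache[key]

    def put(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)  # deferred: not written to the store yet

    def flush(self):
        # What the "every 10 minutes" timer would run.
        for key in self.dirty:
            self.store[key] = self.cache[key]
        self.dirty.clear()

db = {"a": 1}
cache = WriteBehindCache(db)
cache.put("b", 2)   # buffered in memory only
cache.flush()       # now persisted to the store
```

The trade-off is the usual one for write-behind: reads and writes are fast, but data changed since the last flush is lost if the cache node crashes.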
I also tried Ignite, but my project runs on Spring Boot 2 with Spring Data 2, and the latest released Ignite version is 2.6 - Spring Data 2 support will only appear in Apache Ignite 2.7!
2) I could download a 2.7 nightly build, but how can I use it in my project? (Do I need to install it into my local Maven repository?)
3) And finally, what would be the best architecture in this case? A datastore provider that keeps persistent data on disk, synchronized with each workplace's in-memory cache, and persists the in-memory data to disk on a timeout?
What would be the best solution, and which database should I choose?
(Maybe something that works with the existing PostgreSQL?)
Thanks!
Your use case sounds like a common one with Hazelcast. You can store your data in memory (i.e. in a Hazelcast IMap) and use a MapStore/MapLoader to persist changes to your database or read from it. Persisting changes can be done in a write-through or write-behind manner, based on your configuration. Spring Boot and Spring Data JPA integration is also available.
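Conceptually, a MapStore is a pair of callbacks that the map invokes on writes and on cache misses. The real interface is Java; the sketch below only mimics the idea in Python, with a dict standing in for the database:

```python
# Conceptual sketch of Hazelcast's MapStore contract (store on write, load on miss).
# The real API is Java; a dict stands in for the database here.
class MapStore:
    def __init__(self, db):
        self.db = db
    def store(self, key, value):   # called by the map when an entry is written
        self.db[key] = value
    def load(self, key):           # called by the map on a cache miss
        return self.db.get(key)

class IMapLike:
    """Write-through map: every put is forwarded to the MapStore synchronously."""
    def __init__(self, map_store):
        self.map_store = map_store
        self.data = {}
    def put(self, key, value):
        self.data[key] = value
        self.map_store.store(key, value)   # write-through; write-behind would batch this
    def get(self, key):
        if key not in self.data:
            self.data[key] = self.map_store.load(key)
        return self.data[key]

db = {}
imap = IMapLike(MapStore(db))
imap.put("user:1", {"name": "alice"})
```

In a real deployment the MapStore implementation would wrap your Spring Data JPA repository, so the IMap stays in sync with PostgreSQL.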
Also, the amount of data you want to store is pretty big for 10-20 machines, so you might want to look into Hazelcast's High-Density Memory Store option, which lets you store large amounts of data on commodity hardware without GC problems.
The following links should give you a better idea:
https://opencredo.com/spring-booting-hazelcast/
https://docs.hazelcast.org//docs/3.11/manual/html-single/index.html#loading-and-storing-persistent-data
https://hazelcast.com/products/high-density-memory-store/
Ignite is not suitable for these requirements, because it supports only Spring Data JPA 1.
Redis does not support SQL queries.
Our choice is a plain PostgreSQL master with slave replication. CockroachDB might also be applicable.
Thanks for the help!
We have a memcached cluster running in production as a cache layer on top of MySQL. We are now considering replacing memcached with Couchbase, to avoid the cold-cache issue (in case of a crash) and to get the nice feature of a managed cache cluster.
At the same time, we want to minimize the changes needed to migrate to Couchbase. One option is to keep the libmemcached API and set up a proxy that directs all requests to Couchbase; that way nothing changes in the application code. If I understand correctly, Couchbase then basically acts as a managed memcached cluster, and we don't take advantage of persistent cache items. We can't do something like flagging a certain cached item as persistent:
# connect to Couchbase the same way we connect to memcached
$ telnet localhost 11211
set foo 0 0 9    # How can we make this item persistent in Couchbase?
foo value
STORED
I assume this is because all items are stored in a memcached bucket. So the questions become:
Can we control whether an item is stored in a Couchbase bucket or a memcached bucket? And to do so, do we have to change the libmemcached API and all the application code related to it?
Thanks!
I think you should look into running Moxi, a memcached proxy for Couchbase. You can configure Moxi with the destination Couchbase bucket.
A couchbase cluster automatically spins up a cluster-aware moxi gateway, which you could point your web/application servers to. This is what couchbase calls "server-side moxi".
Alternatively, you can install Moxi on each of your web/app servers, so they simply connect to localhost:11211. Moxi then handles the persistent connections to the Couchbase cluster. This is what Couchbase calls "client-side moxi".
Problem: I want to cache user information such that all my applications can read the data quickly, but I want only one specific application to be able to write to this cache.
I am on AWS, so one solution that occurred to me was a version of memcached with two ports: one port that accepts read commands only and one that accepts reads and writes. I could then use security groups to control access.
Since I'm on AWS, if there are solutions that use out-of-the box memcached or redis, that'd be great.
I suggest you use ElastiCache with one open port at 11211 (memcached), then create an EC2 instance and set your security group so that only this server can access the ElastiCache cluster. Use this server to filter your applications, so that only one specific application can write to the cache. You control access with security groups, scripts, or iptables. If you are not using a VPC, you can use a cache security group instead.
I believe you can accomplish this using Redis (instead of memcached), which is also available via ElastiCache. Once the instance has been created, you will want to create a replication group and associate it with the cache cluster you already launched.
You can then add instances to the replication group. Instances within the replication group are replicated from the master cache cluster (a single Redis instance) and so are read-only by default.
So, in this setup, you have a master node (single endpoint) that you can write to and as many read nodes (multiple endpoints) as you would like.
You can take security a step further and assign different routing rules to the replication group (via the VPC), so that the applications reading data do not have access to the master node (the only one that can write data).
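The resulting read/write split can be sketched as a tiny router. The endpoint names below are hypothetical placeholders for the ElastiCache primary and replica endpoints:

```python
import itertools

# Sketch of the read/write split; endpoint names are hypothetical stand-ins
# for the ElastiCache primary and replica endpoints.
class RedisRouter:
    WRITE_COMMANDS = {"SET", "DEL", "EXPIRE", "INCR"}  # illustrative subset

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = itertools.cycle(replicas)  # naive round-robin over replicas

    def endpoint_for(self, command):
        # Writes must hit the master; reads can go to any read-only replica.
        if command.upper() in self.WRITE_COMMANDS:
            return self.primary
        return next(self.replicas)

router = RedisRouter(
    "master.example.cache.amazonaws.com",
    ["replica-1.example.cache.amazonaws.com",
     "replica-2.example.cache.amazonaws.com"],
)
```

The write-capable application is configured with the primary endpoint; the read-only applications only ever see the replica endpoints, and the security groups enforce that split at the network level.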
I'm deploying my Grails (2.3.6) app with the Grails Standalone App Runner plugin, like so:
grails -Dgrails.env=prod build-standalone myapp.jar --tomcat
Then, my CI build places myapp.jar onto my app server, say, myapp01.
I now want to cluster app sessions when myapp runs on multiple nodes. So if myapp gets deployed to myapp01, myapp02 and myapp03, and one of those instances starts a new session with a user, I want all three to be aware of the same session. This is so I can put all the nodes behind a load-balanced URL (http://myapp.example.com, etc.) and it doesn't matter which node you get routed to: all nodes share the same sessions.
I googled "grails session clustering" and see a bunch of articles that seem to require Terracotta, but I have also heard that Grails has built-in session clustering facilities. Any searches I do, however, come back empty-handed.
So I ask: How can I achieve this kind of session clustering with an embedded Tomcat?
Besides the session-cookie plugin that @injecteer proposed, there are several other plugins that keep sessions in shared storage (DB, MongoDB, Redis, memcached) accessible by any of your Tomcat instances. Take a look at these:
http://grails.org/plugin/database-session
http://grails.org/plugin/mongodb-session
http://grails.org/plugin/redis-database-session
http://grails.org/plugin/standalone-tomcat-redis
http://grails.org/plugin/standalone-tomcat-memcached
I have never heard of anything like this working out of the box. I would give two options a try:
Use a session-cookie plugin, which stores the session on the client so it is not tied to session storage in Tomcat.
Use or implement persistent sessions, which are stored in some sort of DB and are not bound to any Tomcat instance.
You can achieve this with Tomcat's built-in functionality: each Tomcat node replicates sessions from the others, so all sessions are shared between the nodes.
You can do this in at least three ways:
Session replication using multicast between instance nodes.
Session replication just between a primary node and a secondary backup node.
Session replication between static members; this is useful when multicast cannot be enabled or is not supported, such as in an AWS EC2 environment.
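For the simplest variant (multicast, all-to-all replication), the cluster-howto's minimal configuration is a single element in server.xml; the application must also be marked distributable, and its session attributes must be Serializable:

```xml
<!-- server.xml: place inside <Engine> or <Host> for all-to-all
     in-memory session replication with multicast discovery -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

<!-- web.xml: mark the webapp's sessions as replicable -->
<distributable/>
```

All further settings (membership, receiver/sender channels, valves) have workable defaults, which is why the one-line form is enough to start with.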
Reference:
http://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html
http://khaidoan.wikidot.com/tomcat-cluster-session-replication
I am considering using Amazon ElastiCache Redis. However, I would like to be in control of my replication, so I would like to know whether it is possible to set up redis-server on a VPS (non-Amazon) or on an Amazon EC2 instance as a slave of the ElastiCache Redis instance.
If not, is ElastiCache Redis worth using when you want Redis as an in-memory data store with reliable persistence, and not merely for caching data?
Thank you,
As of Amazon's updates for Redis 2.8.22 you can no longer use non-ElastiCache replication nodes. The SYNC and PSYNC commands will be unrecognized. This change appears to affect all Redis versions, so you can't circumvent it by using a pre-2.8.22 Redis instance.
An alternative would be to use an EC2 instance as the master node; however, you would lose the management benefits ElastiCache provides and would need to set up and maintain everything yourself.
Yes, it is possible. The replication protocol works over the same Redis connection, so if you can connect to ElastiCache from the VPS or EC2 instance, you will also be able to run a slave on that machine.