I am using Laravel 5.5 in my application hosted on AWS; for caching I'm using Redis on ElastiCache. After some research I was able to configure it (in cluster mode), and it works fine, but Laravel is unable to flush on redis-cluster:
Cannot use 'FLUSHDB' with redis-cluster
After some digging I learned there is a bug in Laravel that does not allow flushing on redis-cluster. I'm wondering: is there a way to use Redis on ElastiCache in a "non-cluster" way?
When I created the Redis instance I did not select "Cluster Mode enabled", but apparently it was still created as a cluster.
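For reference, this is the kind of call that hits the error; a minimal sketch (php artisan cache:clear ends up calling the same flush):

// Laravel 5.5 with the "redis" cache driver: flushing the store issues FLUSHDB
use Illuminate\Support\Facades\Cache;

Cache::flush();   // throws: Cannot use 'FLUSHDB' with redis-cluster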
If you don't want to use clusters, configure your config/database.php file so that there is no clusters key in the redis connection.
Check out the docs to learn how: https://laravel.com/docs/5.5/redis#configuration
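For example, a minimal non-cluster connection in config/database.php could look like the sketch below (the values are placeholders; point REDIS_HOST at your ElastiCache endpoint):

// config/database.php — single-node Redis connection, no 'clusters' key
'redis' => [
    'client' => 'predis',

    'default' => [
        'host'     => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port'     => env('REDIS_PORT', 6379),
        'database' => 0,
    ],
],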
We are currently setting up AWS hosting for our Web Application.
This Laravel web application will have a schema per company that registers, meaning it will need a large MySQL server.
I have gone through the motions of setting up a VPC with EC2 instances and an RDS instance for this MySQL server.
However, we are currently looking at using Laravel Forge as a tool to host.
What Forge does differently is that it puts the MySQL server on the EC2 instance rather than on RDS.
The question I have come to ask here is: what are the implications, if any, of having the MySQL server on the EC2 instance rather than on RDS?
Would there be performance issues?
Is it better practice to have an RDS?
Or is Forge's out-of-the-box way of packaging this all together on an EC2 server fine?
By running this on an EC2 instance you will be taking on more of the responsibility of managing the database: not just installation, but also patching, backups, and recovery. Harder-to-maintain functionality such as replication and HA will also be on you to implement and monitor.
By running on RDS, AWS takes on the heavy lifting and implements a best-practice MySQL deployment, which gives you the flexibility of running a MySQL stack in the cloud without really having to think about the implementation details under the hood, other than deciding whether you want it to be HA and how many replicas you want.
That said, by using RDS you also give up the ability to run it however you want: you are limited to the database versions RDS supports (although these are now available quite soon after release). In addition, not all plugins or extensions will be available, so check this functionality before deciding.
I am wondering: in the future I would like to handle high-volume traffic by putting my Laravel application behind a load balancer. Would the process be as follows?
1 load balancer to distribute traffic to:
2 VPSes, each with an identical Laravel application.
Each web server connects to:
1 VPS for MySQL
And here is my doubt: should I also separate Redis, i.e. have 1 VPS for Redis that also keeps my jobs for Laravel queues?
Or should Redis and the Laravel queue worker daemon still run on each of the 2 identical web servers?
I think the better way is to have Redis installed and set up on one of your ECS instances.
After that, you should set your Redis config (REDIS_HOST, REDIS_PORT and REDIS_PASSWORD) in the .env on both ECS instances so that they connect to that instance. So it will become as below:
A ECS -- A ECS redis service
B ECS -- A ECS redis service
This way, you do not have to set up an extra ECS instance, which might cost you more, and at the same time you can achieve what you want. A drawback of this method is that you might need to maintain 2 images for your ECS instances, one with Redis configured and one without.
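Concretely, the .env on both app servers would point at the same Redis host, something like the sketch below (the host value is only a placeholder for the private address of the instance running Redis):

# .env on both app servers (placeholder values)
REDIS_HOST=10.0.1.25
REDIS_PASSWORD=your-redis-password
REDIS_PORT=6379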
We have a memcached cluster running in production as a cache layer on top of MySQL. Now we are considering replacing memcached with Couchbase to avoid the cold-cache issue (in case of a crash) and to get the nice feature of a managed cache cluster.
At the same time, we want to minimize the changes needed to migrate to Couchbase. One way we can try is to keep the libmemcached API and set up an HTTP proxy to direct all requests to Couchbase. This way nothing changes in the application code. If I understand correctly, Couchbase is then basically a managed memcached cluster. We wouldn't take advantage of persistent cache items; we can't do something like flagging a certain cached item to be persistent:
# connect to Couchbase the same way as connecting to memcached
$ telnet localhost 11211
set foo 0 0 9       # How can we make this item persistent in Couchbase?
foo value
STORED
I assume this is because all items are stored in a memcached bucket. So the question becomes:
Can we control which items are stored in a Couchbase bucket and which in a memcached bucket? To do so, do we have to move away from the libmemcached API and change all the application code related to it?
Thanks!
I think you should look into running Moxi, which is a memcached-protocol proxy for Couchbase. You can configure Moxi with the destination Couchbase bucket.
A Couchbase cluster automatically spins up a cluster-aware Moxi gateway, which you can point your web/application servers to. This is what Couchbase calls "server-side moxi".
Alternatively, you can install Moxi on each of your web/app servers, so they simply connect to localhost:11211. Moxi handles the persistent connection to the Couchbase cluster. This is what Couchbase calls "client-side moxi".
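With client-side Moxi the application keeps using its existing memcached client untouched. A minimal sketch, assuming a PHP app using the Memcached extension (which wraps libmemcached) and a local Moxi on the default port:

// The app talks to the local Moxi exactly as if it were memcached;
// Moxi forwards the operations to the configured Couchbase bucket.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);   // client-side Moxi on this web/app server

$mc->set('foo', 'foo value');         // persisted according to the Couchbase bucket type
var_dump($mc->get('foo'));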
I am considering using Amazon ElastiCache Redis. However, I would like to be in control of my replication, so I would like to know if it's possible to set up redis-server on a non-Amazon VPS or on an Amazon EC2 instance as a slave of the ElastiCache Redis instance.
If not, is ElastiCache Redis worth using when you want Redis as an in-memory data store with reliable persistence, and not only for mere "caching" of data?
Thank you,
As of Amazon's update for Redis 2.8.22, you can no longer use non-ElastiCache replication nodes: the SYNC and PSYNC commands are unrecognized. This change appears to affect all Redis versions, so you can't circumvent it by using a pre-2.8.22 Redis instance.
An alternative would be to use an EC2 instance as the master node; however, you would lose the management benefits ElastiCache provides and would need to set up and maintain everything yourself.
Yes, it is possible to do so. The replication protocol works over the same Redis connection, so if you can connect to ElastiCache from the VPS or EC2 instance, you will also be able to set up a slave on that machine.
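A sketch of what the replica side could look like on that VPS/EC2 box (the endpoint is a placeholder; note the answer above about SYNC/PSYNC being disabled on ElastiCache 2.8.22+, which would break this):

# redis.conf on the external slave
slaveof my-cache.abc123.0001.use1.cache.amazonaws.com 6379

# or at runtime via redis-cli
redis-cli SLAVEOF my-cache.abc123.0001.use1.cache.amazonaws.com 6379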
I am developing the Django backend of an iOS app. I will use cached sessions with Redis. Once a user logs in, I will save their session in the Redis cache (backed by MySQL). I just want to know (for the long run): can I use Redis replication to keep a copy of the cached sessions in case I scale the Redis server into a master-slave setup in the future? Or should I always access the cached values from one particular Redis server?
It makes sense to keep a copy via Redis replication in a master/slave setup, since there isn't (AFAIK) built-in sharding for Redis yet the way there is in MongoDB. So you have to read your sessions from one particular Redis server, unless you want to manage several Redis servers manually.