Where does Redis store the data - caching

I am using Redis for pub/sub as well as for a server-side cache. My app server has the Redis server running as one process (functioning as a cache as well), and I have several thin clients (running a Redis client) connected to this app server in pub/sub mode. I would like to know where Redis stores the cache data: on the server alone, or will there be a copy in the clients as well? Also, is it a good idea to use Redis in this fashion if there are close to 100 Redis clients connected to the server through pub/sub channels?
Thanks

Redis is a (sort of) in-memory NoSQL database; but I found that my copy (running on Linux) dumps to /var/lib/redis/dump.rdb

Redis can handle a really large number of connections. By default it is an in-memory store (keeping everything in RAM is what makes it so fast).
At the same time it can be configured as a persistent store, dumping cached data to disk (every X seconds or after every X updated keys).
So it can be configured depending on your needs; have a look here.
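As a rough illustration (host, port and the thresholds below are just assumptions), the same snapshot policy that redis.conf controls can be inspected and changed at runtime with redis-py:

import redis

r = redis.Redis(host="localhost", port=6379)

# Current RDB snapshot policy, e.g. {'save': '3600 1 300 100 60 10000'}
print(r.config_get("save"))

# Snapshot if at least 1 key changed in 900 s, or 10 keys changed in 300 s.
r.config_set("save", "900 1 300 10")

# Or trigger a background dump right away.
r.bgsave()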

All the cache data is stored in the memory of the server, as configured for the running Redis server.
The clients do not hold any data; they only access the data stored by the Redis server.

I just installed Redis on a Mac via Homebrew. Without any configuration, I found that the dump.rdb ends up in my working directory (where I launched redis-server).

You can figure that out with the config command.
redis-cli config get dir
However, as far as I know, pub/sub messages are volatile: they are not stored or cached by Redis at all. If you need that, you should look at a dedicated message broker such as RabbitMQ.
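To illustrate both points, here is a small redis-py sketch (localhost and the channel/key names are assumptions): it asks the server where the dump file lives, and shows that a published message that reaches no subscriber is simply dropped rather than stored:

import redis

r = redis.Redis(host="localhost", port=6379)

# Where the RDB dump is written: directory + file name.
print(r.config_get("dir"))         # e.g. {'dir': '/var/lib/redis'}
print(r.config_get("dbfilename"))  # e.g. {'dbfilename': 'dump.rdb'}

# PUBLISH returns the number of clients that received the message; if nobody
# is subscribed at that instant, the message is gone - nothing is kept.
receivers = r.publish("some-channel", "hello")
print(receivers)  # 0 when no subscriber was listening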

On my Ubuntu, it was at /var/lib/redis/dump.rdb. On my macOS (installed via brew), it was at /usr/local/var/db/redis/dump.rdb.

Default location
/var/lib/redis/

Redis keeps all data in the server's memory and only periodically saves it to disk.
For server<->client flow, all data goes through the server.
Redis can handle a large number of clients; the default limit is 10,000.
If you need more, you must reconfigure the OS, server settings, etc. - http://redis.io/topics/clients
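A quick way to check the limit and the current usage (a sketch with redis-py; raising maxclients is only an example and works only if the server's file-descriptor limit allows it):

import redis

r = redis.Redis(host="localhost", port=6379)

print(r.config_get("maxclients"))               # e.g. {'maxclients': '10000'}
print(r.info("clients")["connected_clients"])   # clients connected right now

# Raising the limit needs enough file descriptors (ulimit -n) on the server.
r.config_set("maxclients", "20000")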

As I understand it, your concern is about the Redis server's memory versus the client (application) memory.
I would like to know where redis stores the cache data ? in server alone or there will be a copy in the clients as well.
Redis 6's client-side caching is what you are actually looking for. There, the server and the application both store copies and keep them in sync through a protocol. Even though there are a few ways to accomplish this, the following example (taken from the docs) will help you understand the mechanism.
Client 1 -> Server: CLIENT TRACKING ON
Client 1 -> Server: GET foo
(The server remembers that Client 1 may have the key "foo" cached)
(Client 1 may remember the value of "foo" inside its local memory)
Client 2 -> Server: SET foo SomeOtherValue
Server -> Client 1: INVALIDATE "foo"
Hope this helps. See the docs for more details.
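For a concrete feel of the same flow from redis-py, here is a rough sketch of the RESP2 "redirect" flavour. The host, port and key names are assumptions, and it uses redis-py's low-level Connection object directly to keep the two connections explicit; newer redis-py releases also ship a higher-level client-side cache.

import redis

data = redis.Redis(host="localhost", port=6379)

# One dedicated raw connection: remember its client ID, then subscribe it to
# the special invalidation channel.
inval = redis.Connection(host="localhost", port=6379)
inval.connect()
inval.send_command("CLIENT", "ID")
redirect_id = inval.read_response()
inval.send_command("SUBSCRIBE", "__redis__:invalidate")
inval.read_response()  # subscribe confirmation

# Enable tracking on the data connection and redirect invalidations there.
data.execute_command("CLIENT", "TRACKING", "ON", "REDIRECT", redirect_id)

local_cache = {"foo": data.get("foo")}  # the server now tracks "foo" for us

# When some other client runs SET foo ..., a pub/sub message arrives on the
# invalidation connection listing the keys to drop from the local cache.
msg = inval.read_response()   # ['message', '__redis__:invalidate', [keys]]
for key in (msg[2] or []):
    local_cache.pop(key.decode(), None)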

Related

Mark standalone redis as read-only

I want to mark a standalone Redis server (not a Redis Cluster, not Redis Sentinel) as read-only. I have been googling this for quite some time, but I don't seem to find a definite answer (almost all answers point to clustering or Sentinel). I was looking for some config modification (CONFIG SET something).
NOTE: config set replica-read-only yes does not make the current redis-server read-only, only its replicas.
My use case is basically that I am doing a migration, and at some point I want to make the redis-server read-only. My application code can handle failures whenever a write call happens, so that's not an issue.
Also, if this is not directly possible from the Redis server, is there something I can do in the client code that has the same effect? (I am using redis-py as the client library.) Although this is less than ideal.
Things that I've tried
Played around with config set replica-read-only yes and other configs. They don't seem to apply to the current redis-server.
Tried marking a redis-server as a replica of itself (this was illogical, but I just wanted to see if it worked); it turns out this deleted all keys in my local Redis, so it's not something I can do.
Once the writes are done and you want to switch the node to read-only, there are a couple of ways to do that:
Modify redis.conf to set min-replicas-to-write 3. Since you don't have 3 replicas, your node will stop accepting writes but will continue to serve reads; a sketch of what this looks like from a client follows below.
However, please note that after modifying redis.conf, you will have to restart your Redis node for the changes to take effect.
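For illustration only (host, port and the key name are assumptions; the same setting can also be applied at runtime with CONFIG SET instead of editing redis.conf), this is roughly what a redis-py client would then see:

import redis

r = redis.Redis(host="localhost", port=6379)

# Same effect as putting "min-replicas-to-write 3" in redis.conf and restarting.
r.config_set("min-replicas-to-write", "3")

r.get("some-key")            # reads keep working
try:
    r.set("some-key", "x")   # writes are refused
except redis.exceptions.ResponseError as exc:
    print(exc)               # e.g. "NOREPLICAS Not enough good replicas to write."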
Another way is to create a replica when you want to switch to read-only mode, let it sync with the master, and then kill the master node. The replica will then exist as read-only.
There are several solutions you can try:
You can use the rename-command config to disable write commands. If you only want to disable a small number of commands, that's a good solution. However, since there are so many write commands, you might end up with a lot of configuration entries, and it's easy to miss some of them.
If you're using Redis 6.0, you can use Redis ACL to disable write commands for specific users (see the sketch after this list).
You can set up a read-only Redis replica of your master and have clients read from the replica.
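A minimal sketch of the ACL option with redis-py (the user name, password and key are made up):

import redis

admin = redis.Redis(host="localhost", port=6379)

# Create a user that may read every key but has no write-category commands.
admin.execute_command(
    "ACL", "SETUSER", "readonly_app",
    "on", ">s3cret",       # enable the user and set a password
    "~*",                  # allow all key patterns
    "+@read", "-@write",   # allow read commands, deny write commands
    "+@connection",        # allow basic connection housekeeping (PING, SELECT, ...)
)

app = redis.Redis(host="localhost", port=6379,
                  username="readonly_app", password="s3cret")
app.get("foo")             # works
app.set("foo", "bar")      # rejected with a NOPERM error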

Redis server log file rotation on Windows

We are using Redis 3.2 (64 bit) (https://github.com/MSOpenTech/redis/releases) on Windows Server 2012 R2 Standard, for in-memory data caching.
We have been able to write redis-server logs by setting the logfile parameter in the redis.conf file, but could not specify a maximum log file size or subsequent rollover.
I would like to know whether there is a way to specify log file rotation in the Redis conf, or perhaps to pass that as a parameter when starting the Redis server daemon process.
It would really help to get any suggestions on this front.
Thanks & regards,
Surjit
Although this is an old question... the MS Open Tech Redis project has been abandoned.
You can check out Memurai, which derives from it (link).
Disclaimer: I work at Memurai.

Postgres: After importing production database (with replication) to my local machine, I notice network packets being sent and received from macbook

I've been a MySQL guy, and now I'm working with Postgres, so I am learning. I'm wondering if someone can tell me why the postgres process on my MacBook is sending and receiving data over my network. I am just noticing this for the first time, so maybe it has been going on before and I just never noticed that Postgres does this.
What has me a bit nervous is that I pulled down a production data dump from our server, which is set up with replication, and imported it into my local Postgres DB. The settings in my postgresql.conf don't indicate that replication is turned on, so it shouldn't be streaming out to anything, right?
If someone has some insight into what may be happening, or why postgres is sending/receiving packets, I'd love to hear the easy answer (and the complex one if there's more to what's happening).
This is a postgres install via Homebrew on MacOSX.
Thanks in advance!
Some final thoughts: It's entirely possible, I guess, that Mac's Activity Monitor also shows local 'network' traffic stats. Maybe this isn't going out to the internet after all...
In short, I would not expect replication to be enabled for a DB that was dumped from a server that had it if the server to which it was restored had no replication configured at all.
More detail:
Normally, to get a local copy of a database in Postgres, one would do a pg_dump of the remote database (this could be done from your laptop, pointing at your server), followed by a createdb on your laptop to create the database stub, and then a pg_restore pointed at the dump to populate its contents. [Edit: Re-reading your post, it seems like you may perhaps have done this, but meant that the dump you used had replication enabled.]
That would be entirely local (assuming no connections into the DB from off-box), so long as you didn't explicitly set up any replication or anything else that would go off-box. Can you elaborate on what exactly you mean by importing with replication?
Also, if you're concerned about remote traffic coming from Postgres, try running this command a few times over the period of a minute or two (when you are seeing the traffic):
netstat | grep postgres
In general, replication in Postgres is configured at the server level, and has to do with things such as the master server shipping WAL files to the standby server (for streaming replication). You would almost certainly have had to set up entries in postgresql.conf and pg_hba.conf to ensure that the standby server had access (such as a replication entry in the latter conf file). Assuming you didn't do steps such as this, I think it can pretty safely be concluded that there's no replication going on (especially in conjunction with double-checking via netstat).
You might also double-check the Postgres log to see if it's doing anything replication related. In a default install, that'd probably be in /var/log/postgresql (although I'm not 100% sure where Homebrew installs put it).
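In the same spirit, here is a quick sanity check from Python (a sketch; the connection parameters and psycopg2 as the driver are assumptions) that nothing replication-related is active on the local instance:

import psycopg2

conn = psycopg2.connect("dbname=postgres host=localhost")
cur = conn.cursor()

cur.execute("SHOW wal_level;")                     # 'minimal', 'replica' or 'logical'
print(cur.fetchone())

cur.execute("SELECT * FROM pg_stat_replication;")  # standbys streaming from us
print(cur.fetchall())                              # an empty list means none

cur.execute("SELECT client_addr, state FROM pg_stat_activity "
            "WHERE client_addr IS NOT NULL;")      # who is connected over TCP
print(cur.fetchall())

conn.close()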
If it's UDP traffic to and from a high port, it's likely PostgreSQL's internal statistics collector.
Those sockets are pre-bound to prevent interference and should not be accessible from outside PostgreSQL.

Local mongo server with mongolab mirror & fallback

How do I set up a local MongoDB with a mirror on mongolab (propagating all writes from local to mongolab so they are always synchronized; I don't care about atomicity, just that it syncs within a reasonable time frame)?
How do I use mongolab as a fallback if the local server stops working (Ruby/Rails, mongo driver and Mongoid)?
Background: I used to have a local mongo server, but it kept crashing occasionally, all my apps stopped working, and I had to "repair" the DB to restart it. Then I switched to mongolab, which I am very satisfied with, but it generates a lot of traffic that I'd like to avoid by having a local "cache", without having to worry about my local cache crashing and causing all my apps to stop working. The DBs are relatively small, so size is not an issue. I'm not trying to eliminate the traffic overhead of communicating with mongolab, just lower it a bit.
I'm assuming you don't want to have the mongolab instance just be part of a replica set (or perhaps that is not offered). The easiest way would be to add the remote mongod instance as a hidden member (priority 0) and just have it replicate data from your local instance; a rough sketch of that reconfiguration follows below.
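For what it's worth, a rough pymongo sketch of that reconfiguration (the hostnames are made up, your local instance would already need to be running as a replica set, and this assumes the hosted instance can actually join one):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# Fetch the current replica set configuration and bump its version.
cfg = client.admin.command("replSetGetConfig")["config"]
cfg["version"] += 1

# Add the remote instance as a hidden, priority-0 member: it replicates
# everything but is never elected primary and is invisible to normal clients.
cfg["members"].append({
    "_id": max(m["_id"] for m in cfg["members"]) + 1,
    "host": "ds012345.mongolab.com:31337",  # hypothetical remote host:port
    "priority": 0,
    "hidden": True,
})

client.admin.command("replSetReconfig", cfg)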
An alternative immediate solution is mongooplog, which can be used to poll the oplog on one server and then apply it to another - essentially replication on demand (you would need to seed one instance appropriately, etc., and would need to manage any failures). More information here:
http://docs.mongodb.org/manual/reference/mongooplog/
The last option would be to write something yourself using a tailable cursor in your language of choice to feed the oplog data into the remote instance.

Why is Symfony Session data encrypted on my production server?

I want to share a single authentication method between two Symfony websites sharing the same top-level domain.
I use a cookie valid on all subdomains and sfPDOSessionStorage for keeping session data.
factories.yml is set up like this on both projects:
all:
  storage:
    class: sfPDOSessionStorage
    param:
      database: doctrine
      db_table: sessions
      session_name: myauth
      db_id_col: id
      db_data_col: sess_data
      db_time_col: time
      session_cookie_domain: ".mydomain.net"
      session_cookie_lifetime: 86400
      session_cookie_path: /
On my development machine and on my co-worker's machine this mechanism works fine, but on the server it does not (I'm asked for credentials when I switch sub-domains). The only difference I see between the two environments is the format in which the data is stored: the data seems to be encrypted on the production server but appears in clear text on my machine. There's no sensitive data here, so I can post an example:
Dev environment sess_data:
symfony/user/sfUser/lastRequest|i:1295349567;symfony/user/sfUser/authenticated|b:0;symfony/user/sfUser/credentials|a:0:{}symfony/user/sfUser/attributes|a:1:{s:30:"symfony/user/sfUser/attributes";a:1:{s:7:"referer";s:0:"";}}symfony/user/sfUser/culture|s:2:"fr";
Production server sess_data:
BB7HBTsQg75NNGvb9Z8sexldqbS79YzDgrztQzSFhsUpEk2EeCOtKw8FQbm31vLIRyr3ZP_klwZFXywnkdem27naIWjIVBP_WwpwNRg4IMj1J0fIfxJN_UOw2RbCWh91L5ryCD_7_ynN2UtxfuJwUWnxoGuUvqD8YQxNdczQipmktPVFk1mVfKE1-BsrdHHLIXH_gi44-Bos3f-EshE5skuQpachnY1FkgvvvOuXEj7zxPflgA3xtGoqJxkDijT-uKnQCH4TrimhvkIRGCt0oVuOdsAJzuWW6ijgPCD3X767mSIzm_lQmJoSGxDB7fAgFihB7Ljoq0tsysC62BqTYFB6dTnuZoj3KON8lXlyNJZVyLgTWZ3EYoObtc8jCKYNDonSjEqzTvwg4NJRVoB5ePx61iTqbDd9qFlkryzj9J8.
I haven't got a clue which encryption is used to store the information in the database, nor am I sure that this is the root of my problem, but as it is the only difference I can spot, I don't see any other explanation. (PHP and MySQL versions are identical, with Ubuntu 10.10 on my side and Debian Squeeze server-side.)
I think there's a module installed on your production server that is responsible for encrypting the session data.
For example, the Suhosin patch adds such a feature to PHP: http://www.hardened-php.net/suhosin/configuration.html
It's activated by the suhosin.session.encrypt configuration option in php.ini.
