Recently I've been using Redis to cache tokens for OpenStack Keystone. Functionally it works fine, but some expired cache data is still sitting in Redis.
My Keystone config:
[cache]
enabled=true
backend=dogpile.cache.redis
backend_argument=url:redis://127.0.0.1:6379
[token]
provider = uuid
caching=true
cache_time= 3600
driver = kvs
expiration = 3600
But some expired data remains in Redis:
The data is past its expiration time but is still there, because its TTL is -1 (no expiry set).
My questions:
How can I change my settings to stop this stale data from being created?
Is there a graceful way to clean it up?
I tried the command 'keystone-manage token_flush', but after reading the code I realized this command only cleans up the expired tokens in MySQL.
I hope this question is still relevant.
I'm trying to do the same thing as you, and so far the only option I've found that works is the redis_expiration_time argument on dogpile.cache.redis.
Check out the dogpile.cache.redis backend API or source code:
http://dogpilecache.readthedocs.io/en/latest/api.html#dogpile.cache.backends.redis.RedisBackend.params.redis_expiration_time
The only problem with this argument is that it does not let you choose a different TTL per category; for example, you might want tokens cached for 10 minutes and the catalog for 24 hours. In my experience, the other parameters in keystone.conf (expiration_time and cache_time on each category) just don't work. Anyway, this isn't an issue if you are using Redis to store only Keystone tokens.
[cache]
enabled=true
backend=dogpile.cache.redis
backend_argument=url:redis://127.0.0.1:6379
# Add this line
backend_argument=redis_expiration_time:[TTL]
Just replace [TTL] with the TTL you want, in seconds. You'll start noticing keys with a TTL in Redis, and after a while you will see that they are gone.
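To verify that the keys now carry a TTL, a quick check with redis-py might look like this (just a sketch; it assumes the cache lives in database 0 on a local Redis):
import redis

r = redis.Redis(host="127.0.0.1", port=6379, db=0)

# Print the TTL of every key: -1 means no expiry was set,
# a positive number means redis_expiration_time took effect.
for key in r.scan_iter(count=1000):
    print(key, r.ttl(key))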
About the second question:
This is maybe not the best answer you'll see, but you can use the OBJECT IDLETIME <key> command in redis-cli to see how long a specific key has gone unused (even a GET resets the idle time). You can delete the keys whose idle time is greater than your token revocation period using a simple script, like the sketch below.
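A minimal cleanup sketch with redis-py (assumptions: a local Redis that holds only Keystone cache entries, and a 3600-second threshold matching the token expiration from the question):
import redis

MAX_IDLE = 3600  # seconds; set this to your token expiration time
r = redis.Redis(host="127.0.0.1", port=6379, db=0)

for key in r.scan_iter(count=1000):
    # OBJECT IDLETIME returns the seconds since the key was last read or written
    idle = r.object("idletime", key)
    if idle is not None and idle > MAX_IDLE:
        r.delete(key)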
Remember that the data in Redis is just a cache, not persistent data, meaning you can always run FLUSHALL and OpenStack/Keystone will keep working as usual, though of course the first authentications will take longer.
Related
I am using AWS S3 with Rails 7 to store images via Active Storage. I'm presenting my data to the view by querying Elasticsearch (using the elasticsearch-model gem).
While this works great for my other data, the expiration of the signed AWS URL becomes an issue after a little while and the images are of course no longer accessible.
class MyClass
  has_one_attached :image
end
I'd like to be able to have a fresh URL and still use Elasticsearch so that I don't need to make a trip to the database every time I want to see the image.
I have looked into whether I can just remove the expiration, but I've read that it's unsafe and mostly unsupported. I know that Elasticsearch::Model callbacks exist, but I'm not clear on whether they could be applied to ActiveStorage::Blob, especially since nothing changes in the DB when the expiration occurs.
I've also thought about changing the URLs to expire after 1 week by passing the expires_in param to the url method on the attachment and then running a cron job to update the image once a week. That seems hacky, though.
I'm sure there are many ways to approach this, but what worked for me was using a save callback on the model that includes Elasticsearch::Model to enqueue an async job. When this particular attribute was updated, I scheduled a job with a delay set to just under the maximum signed URL lifetime allowed by S3, which is 7 days.
after_save :set_refresh_url_job, if: Proc.new { logo_url? }

def set_refresh_url_job
  RefreshLogoUrlJob.
    set(wait: MyModel::LOGO_EXPIRTY_REFRESH).
    perform_later(self)
end
I have a question about Redis caching.
I wrote some code to cache some information and it works fine, but is there a way to check what's being stored inside of it through redis-cli? I just want to make sure that the value gets deleted after 10 minutes.
How I store stuff:
Cache::store('redis')->put('inventory_' . $this->user->steamid64, $items, 15);
Since you are using Laravel's Cache class rather than Redis directly, you need to check for the prefix. By default it ships like this in config/cache.php:
'prefix' => env('CACHE_PREFIX', 'laravel')
If there is a prefix (and by default it is laravel), the keys will look like this (if there is no prefix, just drop laravel: from the key names):
127.0.0.1:6379> get laravel:inventory_1
"somevalue"
127.0.0.1:6379> ttl laravel:inventory_1
(integer) 885
127.0.0.1:6379>
For development purposes you can also use the monitor command to see which commands are executed in Redis. It will print something like this:
127.0.0.1:6379> monitor
OK
1591091085.487083 [0 127.0.0.1:51828] "set" "myinventory_1" "myvalue" "EX" "900"
1591091091.143395 [0 127.0.0.1:51828] "get" "myinventory_1"
1591091095.280724 [0 127.0.0.1:51828] "ttl" "myinventory_1"
Side note: you don't need to call the store method if your default cache driver is already redis.
Enter redis-cli and use:
keys * to see the list of all keys
TTL my_key_name to see the remaining expiration time of a key
You can execute any Redis command inside redis-cli. It's a good tool for development.
I am trying to implement session management, where we store a JWT token in Redis. Now I want to remove the key if the object idle time is more than 8 hours. Please help.
I can't think of a good reason to use OBJECT IDLETIME instead of the much simpler pattern of issuing a GET followed by an EXPIRE, apart from the very small memory overhead that key expiry adds.
Recommended Way: GET and EXPIRE
GET the key you want.
Issue an EXPIRE <key> 28800.
Alternative way using OBJECT IDLETIME, DEL, and some application logic:
GET the key you want.
Call OBJECT IDLETIME <key>.
Check in your application code if the idletime > 8h.
If the check in step 3 passes, issue a DEL command.
The second way is more cumbersome and introduces extra network latency: it needs three round trips to your Redis server, while the first solution does it in a single round trip if you use a pipeline, or two round trips at worst, with no application-side logic.
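As an illustration, the recommended GET plus EXPIRE in one pipelined round trip with redis-py could look like this (a sketch; the key name, client setup, and TTL are assumptions):
import redis

SESSION_TTL = 8 * 60 * 60  # 8 hours, in seconds
r = redis.Redis(host="127.0.0.1", port=6379, db=0)

# Fetch the token and refresh its expiry in a single round trip;
# the key then expires 8 hours after its last access.
pipe = r.pipeline()
pipe.get("session:jwt:some-user-id")
pipe.expire("session:jwt:some-user-id", SESSION_TTL)
token, _ = pipe.execute()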
This is what I did using Jedis. I am fetching 1,000 keys at a time; you can add a loop over the scan cursor to process all keys in batches.
Jedis jedis = new Jedis("addURLHere");
ScanParams scanParams = new ScanParams().count(1000);
ScanResult<String> scanResult = jedis.scan(ScanParams.SCAN_POINTER_START, scanParams);
List<String> result = scanResult.getResult();
result.stream().forEach((key) -> {
    if (jedis.objectIdletime(key) > 8 * 60 * 60) { // idle for more than 8 hours
        // your functionality here
    }
});
I have a Neo4j server running inside a virtual machine (Ubuntu 13.10), and I am accessing it via REST using Cypher queries. The virtual machine has 4 GB of memory allocated to it.
I've changed the open file limit to 40000, set the initial JVM heap to 1G, and my neo4j.properties file is as follows:
neostore.nodestore.db.mapped_memory=250M
neostore.relationshipstore.db.mapped_memory=100M
neostore.propertystore.db.mapped_memory=100M
neostore.propertystore.db.strings.mapped_memory=100M
neostore.propertystore.db.arrays.mapped_memory=100M
keep_logical_logs=3 days
node_auto_indexing=true
node_keys_indexable=id
I've also updated sysctl based on the Neo4j Linux tuning guide:
vm.dirty_background_ratio = 50
vm.dirty_ratio = 80
Since I am testing queries, the basic routine is to run my suite of tests, then delete all of the nodes and run them all again. At the start of each test run, the database has 0 nodes in it. My suite of about 100 queries takes 22 seconds to run. Basic parameterized creates such as:
CREATE (x:user { email: {param0},
name: {param1},
displayname: {param2},
id: {param3},
href: {param4},
object: {param5} })
CREATE (x)-[:LOGIN]->(:login { password: {param6},
                               salt: {param7} } )
are currently taking over 170 ms to execute (and that's the average; the first query takes 700 ms). During a test run, the CPU in the VM never exceeds 50% and memory usage holds steady at 1.4 GB.
Why would creating a single node in an empty database take 170ms? At this point unit testing is becoming almost impossible since it is so slow. This is my first time trying to tune Neo4j so I'm not really sure how to figure out where the problem is or what changes should be made.
Additional Details
I'm using Go 1.2 to make REST calls to the cypher endpoint (http://localhost:7474/db/data/cypher) of a locally installed Neo4j instance. I'm setting the request headers for content-type to "application/json", accept to "application/json" and "X-Stream" to true. I always return either an array of maps or nothing depending on the query.
It seems like the creates are the problem and are taking forever. For example:
2014/01/15 11:35:51 NewUser took 123.314938ms
2014/01/15 11:35:51 NewUser took 156.101784ms
2014/01/15 11:35:52 NewUser took 167.439442ms
2014/01/15 11:35:52 ValidatePassword took 4.287416ms
NewUser creates two new nodes and one relationship and is taking 167ms, while ValidatePassword is a read-only operation and it completes in 4ms. Also note that the three calls to NewUser are identical parameterized queries. While the creates are the big problem, I'm also a little concerned that Neo4j is taking 4ms to just find a labeled node when there are only 100 nodes in the database.
I do not restart the server in between test runs or delete the database. I issue a single delete all nodes query MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n,r at the end of the test run. Running the same test suite multiple times back to back does not improve the query times.
Are your 100 queries all the same, just with different parameters, or are they actually 100 different queries?
What you see is actually setup work. The parser has to load its parsing rules initially, which takes a few ms. Also, new queries that have not been seen before are compiled, planned, and put into the query cache.
So the first query always takes a bit longer. But since you parameterize, all subsequent ones should be fast.
Can you confirm that?
I think you are seeing the transactional overhead of flushing each transaction to disk.
Did you try to batch more requests into one, i.e. with the transactional endpoint? Or /db/data/batch (but I'd rather use the new tx endpoint, /db/data/transaction)? There's a sketch of this below.
Did you create an index for your lookup property for your validate query?
Can you do me a favor and test your create query without a label? I found some perf issues when testing that myself earlier this week.
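To illustrate the batching suggestion above, posting several parameterized statements in one request to the transactional endpoint could look roughly like this (a sketch in Python using the requests library; the statement and parameter values are placeholders):
import requests

# One HTTP request, one transaction, several parameterized statements
payload = {
    "statements": [
        {
            "statement": "CREATE (x:user { email: {email}, name: {name} }) "
                         "CREATE (x)-[:LOGIN]->(:login { password: {password}, salt: {salt} })",
            "parameters": {"email": "a@example.com", "name": "A", "password": "hash", "salt": "salt"},
        },
        # ...append more statements here; they are committed together
    ]
}

resp = requests.post(
    "http://localhost:7474/db/data/transaction/commit",
    json=payload,
    headers={"Accept": "application/json", "X-Stream": "true"},
)
print(resp.json())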
Just ran a test with curl
for i in `seq 1 10`; do time curl -i -H content-type:application/json -H accept:application/json -H X-Stream:true -d #perf_test.json http://localhost:7474/db/data/cypher; done
I'm getting between 16 and 30 ms per request externally, including the time to start curl:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8; stream=true
Access-Control-Allow-Origin: *
Transfer-Encoding: chunked
Server: Jetty(9.0.5.v20130815)
{"columns":[],"data":[]}
real 0m0.016s
user 0m0.005s
sys 0m0.005s
Perhaps it is rather the VM (disk or network) or the cross-VM communication?
I did another test with ab and 1000 requests against both endpoints, and got a mean of about 5 ms in both cases:
https://gist.github.com/jexp/8452037
Using the devel module, I can see a lot of calls to cache_get() and cache_set(). How long until a cached value needs to be refreshed? Does the cache get invalidated every few minutes?
The module that calls cache_set() sets the expiration in the call itself. Some things have explicit durations; others have permanent or semi-permanent lifetimes, depending on the situation.
Caches are explicitly cleared when you trigger clearing through the admin interface (or drush), or programmatically via drupal_flush_all_caches() or cache_clear_all().
Lately, I have been using a hook_cron() implementation to clear certain cache tables each night.
EDIT to answer comment:
To see which modules implement hook_cron, I usually put this in a separate script somewhere:
<?php
// Bootstrap Drupal from the site root so the full API is available.
require_once './includes/bootstrap.inc';
drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL);
header("Content-Type: text/plain; charset=utf-8");
$user = user_load(1);
print "Modules implementing hook_cron:\n" . implode("\n", module_implements('cron'));
To see expirations, examine the various cache tables in the database and look at the expire column. Modules can set an expiration on each individual call to cache_set(), so it can vary entry by entry.