Adding keys to Spring Boot vault - spring-boot

I am implementing Spring Boot Vault integration. Whenever I try to add more than one key, only the last one is saved. For example, this page, https://www.javainuse.com/spring/cloud-vault, shows an example of adding keys one at a time. But when I then query the vault, I see:
c:\vault>vault kv get secret/javainuseapp
======= Data =======
Key           Value
---           -----
dbpassword    root
If I set both keys at the same time, it seems to work
c:\vault>vault kv put secret/javainuseapp dbusername=root dbpassword=root
Success! Data written to: secret/javainuseapp
c:\vault>vault kv get secret/javainuseapp
======= Data =======
Key           Value
---           -----
dbpassword    root
dbusername    root
How does one add additional keys?

This is standard behaviour for the Vault API, and therefore also for the CLI, which wraps the Go bindings for the REST API. If you want to overwrite a key-value pair with the Vault CLI and retain the former key-value pairs, then you must specify them as well, as you did in the final example:
vault kv put secret/javainuseapp dbusername=root dbpassword=root
All key-value pairs specified in a single command for a given path are stored together as a new secret version (the version number is an integer that increments with each write to that path, unless previous versions are deleted). The earlier key-value pairs are still stored, but at the previous secret version. When you execute vault kv get secret/javainuseapp, you retrieve the secret at the current version, i.e. the most recent write.
However, if the applicable Vault policy or policies allow patch operations on the secret path for the associated role/user/etc., then you can also use the patch subcommand to update a single key-value pair while retaining the others in the newest version of the secret:
vault kv patch secret/javainuseapp dbusername=root
In that situation the dbpassword key is retained in the newest secret version.
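Because earlier versions are kept, the overwritten pairs remain retrievable. A sketch, assuming the KV version 2 secrets engine mounted at secret/ (which is what the versioning and patch behaviour above describes; the actual version numbers depend on your own write history):

```
c:\vault>vault kv get -version=1 secret/javainuseapp
c:\vault>vault kv metadata get secret/javainuseapp
```

The first command returns the secret as it was after the first write; the second lists all versions of the secret along with their creation timestamps.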

Related

How are sorted sets stored in redis when set using Spring + Jedis?

I have a Spring 4.3.9 app w/ spring-data-redis (1.8.7) and jedis (2.9.0). I can easily set and retrieve a ZSET using the code as below:
// Commented out -- but below line works fine too
// redisTemplate.opsForZSet().remove("score", userId);
Double scoreInRedis = redisTemplate.opsForZSet().score("score", userId);
redisTemplate.opsForZSet().add("score", userId, (double) score);
However, when I go to the redis CLI and try to retrieve the ZSET using the "score" key, I get nothing returned. So I've tried the following commands:
ZCARD "score" <-- this should give the number of items within the set
(integer) 0
ZSCORE "score" userId <-- I use the actual number here for the userId
(nil)
Other commands like ZREVRANGE or ZREVRANGEBYSCORE all return (nil).
I know that my key is being set because "info keyspace" shows a difference between keys and expires of exactly 1 -- which is my score ZSET. If I delete my ZSET from my Spring app, the number of keys and expiring keys is the same. So I know that my key is there somewhere.
Dude, where's my ZSET?? And how can I access it via the CLI? I can easily keep developing w/o accessing via the CLI but I'd like to understand where I'm off.
It turns out I was incorrectly using RedisTemplate<String, Long>. I switched to a StringRedisTemplate-based bean and magically my key is now visible in the CLI.
I'm still not sure where my key was hiding when using the other bean.
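One likely explanation (an assumption, since it depends on how the RedisTemplate<String, Long> bean was configured): without an explicit serializer, RedisTemplate defaults to JdkSerializationRedisSerializer for keys, so "score" is stored under its Java-serialized byte form rather than the plain string. In redis-cli the hidden key would look something like:

```
127.0.0.1:6379> SCAN 0
1) "0"
2) 1) "\xac\xed\x00\x05t\x00\x05score"
```

That byte prefix is the Java serialization stream header followed by the string "score". StringRedisTemplate uses StringRedisSerializer for keys and values, which is why the key became visible as plain "score" after switching.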
Incidentally, I was following some of the guidance here when working on this: https://medium.com/@smoothed9/redis-sorted-sets-with-spring-boot-and-redistemplate-66931e2e1b86

What is the difference between the key and hash key parameters used in a Redis put() statement?

I have a Java Spring Boot app that uses Redis for storage. I have done a fair amount of web searching but I can't find an easy-to-digest text that explains in detail the ramifications of what values to use/choose for the key parameter vs. the hash key parameter in a Redis put(key, hash key, object) statement. I am using the Redis store to store short-lived session management objects that are particular to a specific user ID, and that user ID is guaranteed to be unique. The object value is a JSON-encoded string for a particular class object:
// String format template for storing objects of this class.
public static final String STORE_MULTI_SELECT_CHOICES = "%s:store:multi_select_choices";
// Store it in Redis for access during the next interaction with the user.
// The key is the user ID followed by the storage suffix for the
// MultiSelectChoices class.
String key = String.format(MultiSelectChoices.STORE_MULTI_SELECT_CHOICES, userId);
// The hash key is just the user ID for now.
String hashKey = userId;
// Serialize the multi-select session management object to a JSON string.
Gson gson = new Gson();
String jsonRedisValue = gson.toJson(multiSelect);
redisTemplate.opsForHash().put(key, hashKey, jsonRedisValue);
What is the difference between these two parameters and do you know of a document that explains the performance and value collision ramifications of different choices? Also, given the nature of my storage operation, do I need to worry about Redis shards or other expert level details or can I reasonably ignore them for now? The app once put in production, will face a high traffic load.
Basically, in your scenario:
your key is: userid:store:multi_select_choices
your hash key is: userid
and your choices object is serialized into jsonRedisValue.
In this case you don't need to use:
redisTemplate.opsForHash().put(key, hashKey, jsonRedisValue);
Instead you should use:
redisTemplate.opsForValue().set(key, jsonRedisValue);
Here is a good example to help you understand the scenario where opsForHash makes sense:
First, understand that hashes in Redis are a natural representation of objects, so you don't need to serialize the object; you just store it as multiple field-value pairs. For example, for userid=1000, where the object has the properties username/password/age, you can simply store it in Redis like this:
redisTemplate.opsForHash().put("userid:1000", "username", "Liu Yue")
redisTemplate.opsForHash().put("userid:1000", "password", "123456")
redisTemplate.opsForHash().put("userid:1000", "age", "32")
later on if you want to change the password, just do this:
redisTemplate.opsForHash().put("userid:1000", "password", "654321")
And the corresponding commands using redis-cli:
HMSET userid:1000 username 'Liu Yue' password '123456' age 32
HGETALL userid:1000
1) "username"
2) "Liu Yue"
3) "password"
4) "123456"
5) "age"
6) "32"
HSET userid:1000 password '654321'
HGETALL userid:1000
1) "username"
2) "Liu Yue"
3) "password"
4) "654321"
5) "age"
6) "32"
I haven't explored the internals of how the hash operations are implemented, but the difference between key and hash key is quite obvious from the documentation: the key is just like any other Redis key, a normal string, while the hash key exists to optimize the storage of multiple field-value pairs under a single key. I would guess there is some hashing scheme behind it to ensure compact memory storage and fast queries and updates.
And it's well documented here:
https://redis.io/topics/data-types
https://redis.io/topics/data-types-intro
You are basically talking about two different Redis operations. I don't know the specific answer for Spring Boot, but speaking in Redis terms, the hash key is needed for an HMSET operation, which is essentially a two-keyed key-value store, while the regular SET operation is the single-keyed key-value store.
Check the operations in the Redis commands reference.

How to collect a HashMap using jmx fetchlet

I'm using OEM Cloud Control 12.1c. I have a Java process which is instrumented to collect some metrics in my application. One of the JMX attributes is a Map (java.util.Map).
Now I want to create an OEM plugin which collects this Map periodically. I tried using the jmxcli utility to generate the target metadata, but the tool asks me to enter the keys of the map. The keys are dynamically generated, so they can't be entered while creating the target metadata.
Has any of you faced this problem? How do I solve it? I don't want to hardcode the keys; I need the complete Map to be displayed on my plugin home page.
Here is a snippet from the console when I selected the Map:
JavaBean (of type Map) is : TypeDistributionMap
0: empty
1: ** User defined Name Values **
Select one or more items as comma separated indices: 1
*** Getting values for User Defined properties
Looping through all user defined Keys. Enter '..' to exit loop.
Enter the key: [This key is dynamic, what should I enter here?]
Enter the DATATYPE of the value: [java.lang.String]
Instead of using a HashMap (or any Map), it is better to return TabularData; exposing open types is a JMX best practice (and I suspect the JMX fetchlet doesn't support Maps). I was able to generate the metadata using the jmxcli utility once I used TabularData.
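A minimal sketch of the conversion, using only the standard javax.management.openmbean API (the class and attribute names here are mine, not from the original MBean): each map entry becomes a table row with a "key" column serving as the row index, so a generic JMX client can iterate the rows without knowing the keys in advance.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.openmbean.OpenDataException;
import javax.management.openmbean.OpenType;
import javax.management.openmbean.SimpleType;
import javax.management.openmbean.TabularData;
import javax.management.openmbean.TabularDataSupport;
import javax.management.openmbean.TabularType;

public class MapToTabular {

    // Expose a Map<String, String> as TabularData: each entry becomes a row
    // with a "key" column (the unique row index) and a "value" column.
    public static TabularData toTabularData(Map<String, String> map)
            throws OpenDataException {
        String[] itemNames = {"key", "value"};
        CompositeType rowType = new CompositeType(
                "MapEntry", "one entry of the map",
                itemNames,
                new String[] {"map key", "map value"},
                new OpenType<?>[] {SimpleType.STRING, SimpleType.STRING});
        TabularType tableType = new TabularType(
                "MapTable", "the map as a table", rowType,
                new String[] {"key"}); // "key" is the unique row index
        TabularDataSupport table = new TabularDataSupport(tableType);
        for (Map.Entry<String, String> e : map.entrySet()) {
            table.put(new CompositeDataSupport(
                    rowType, itemNames,
                    new Object[] {e.getKey(), e.getValue()}));
        }
        return table;
    }

    public static void main(String[] args) throws OpenDataException {
        // Hypothetical dynamic map, standing in for TypeDistributionMap.
        Map<String, String> typeDistribution = new LinkedHashMap<>();
        typeDistribution.put("typeA", "10");
        typeDistribution.put("typeB", "3");
        System.out.println(toTabularData(typeDistribution));
    }
}
```

The MBean attribute would then be declared as TabularData instead of Map, and the fetchlet can consume the rows generically.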

g-wan kv store KV_INCR_KEY

How do you use KV_INCR_KEY?
I found this useful feature in the G-WAN API, but without any sample.
I want to add items to the KV store with this as the primary key.
Also, how do you get the value of this key?
The KV_INCR_KEY value is a flag intended to be passed to kv_add().
You get the newly inserted key's value by checking the return value of kv_add(). The documentation states:
kv_add(): add/update a value associated to a key
return: 0:out of memory, else:pointer on existing/inserted kv_item struct
This was derived from an idea discussed on the G-WAN forum. And, like some other flags (timestamp or persistence, for example), it has not been implemented yet (KV_NO_UPDATE is functional).
Since what follows the next version (focused on new scripted languages) is a kind of zero-configuration mapReduce, the KV store will get more attention soon.

Memcached dependent items

I'm using memcached (specifically the Enyim memcached client) and I would like to be able to make keys in the cache dependent on other keys, i.e. if Key A is dependent on Key B, then whenever Key B is deleted or changed, Key A is also invalidated.
If possible I would also like to make sure that data integrity is maintained if a node in the cluster fails, i.e. even if Key B is temporarily unavailable, Key A should still be invalidated when Key B becomes invalid.
Based on this post I believe that this is possible, but I'm struggling to understand the algorithm enough to convince myself how / why this works.
Can anyone help me out?
I've been using memcached quite a bit lately and I'm sure what you're trying to do with dependencies isn't possible with memcached "as is"; it would need to be handled client-side. Also, data replication should happen server-side, not from the client; these are two different domains. (With memcached at least, given its lack of data-storage logic. The point of memcached, though, is just that: extreme minimalism for better performance.)
For data replication (protection against a failing physical cluster node) you should check out Membase http://www.couchbase.org/get/couchbase/current instead.
For the deps algorithm, I could see something like this in a client: for any given key there is an additional key holding the list/array of dependant keys.
# delete a key and, recursively, every key that depends on it:
function deleteKey( keyname ):
    deps = client.getDeps( keyname )
    foreach ( deps as dep ):
        deleteKey( dep )
    endeach
    memcached.delete( keyname + "_deps" )
    memcached.delete( keyname )
endfunction
# return the list of dependant key names, or an empty list if the key doesn't exist
function client.getDeps( keyname ):
    return memcached.get( keyname + "_deps" ) or array()
endfunction
# Key "demokey1" and its counterpart "demokey1_deps". The list of keys stored in
# "demokey1_deps" contains "demokey2" and "demokey3".
deleteKey( "demokey1" )
# This first performs a memcached get on "demokey1_deps", then, with the value
# returned as a list of keys ("demokey2" and "demokey3"), runs deleteKey() on
# each of them before finally deleting "demokey1" itself.
Cheers
I don't think it's a direct solution, but try creating a system of namespaces in your memcached keys, e.g. http://www.cakemail.com/namespacing-in-memcached/. In short, keys are generated so that they contain the current values of other memcached keys. In the namespacing problem the idea is to invalidate a whole range of keys that are within a certain namespace. This is achieved by something like incrementing the value of the namespace key, so that any key referencing the previous namespace value will no longer match when the key is regenerated.
Your problem looks a little different, but I think that by setting up Key A to be in Key B's "namespace", then if node B is unavailable, computing Key A's full namespaced key, e.g.
"Key A|Key B:<whatever Key B's value is>"
will return false, thus allowing you to determine that B is unavailable and invalidate the cache lookup for Key A.
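The namespacing pattern can be sketched as follows. This is a minimal illustration, not Enyim API code: a plain HashMap stands in for the memcached client, and all class and method names are mine. With a real client the get/put/increment calls would go over the wire, but the key construction is identical.

```java
import java.util.HashMap;
import java.util.Map;

public class NamespacedCache {

    // Stand-in for the memcached client.
    private final Map<String, String> cache = new HashMap<>();

    // Current generation counter for a namespace (Key B's role).
    private long generation(String namespaceKey) {
        return Long.parseLong(cache.getOrDefault(namespaceKey + ":gen", "0"));
    }

    // A dependant key (Key A) embeds the namespace's current generation,
    // so bumping the generation invalidates every dependant key at once.
    private String fullKey(String namespaceKey, String key) {
        return namespaceKey + ":" + generation(namespaceKey) + ":" + key;
    }

    public void put(String namespaceKey, String key, String value) {
        cache.put(fullKey(namespaceKey, key), value);
    }

    public String get(String namespaceKey, String key) {
        return cache.get(fullKey(namespaceKey, key)); // null == cache miss
    }

    // "Invalidating" Key B: increment the generation. Old entries are never
    // touched; in real memcached they simply age out via LRU eviction.
    public void invalidateNamespace(String namespaceKey) {
        cache.put(namespaceKey + ":gen",
                String.valueOf(generation(namespaceKey) + 1));
    }

    public static void main(String[] args) {
        NamespacedCache c = new NamespacedCache();
        c.put("keyB", "keyA", "cached result");
        System.out.println(c.get("keyB", "keyA"));
        c.invalidateNamespace("keyB");
        System.out.println(c.get("keyB", "keyA"));
    }
}
```

Note that nothing is ever deleted eagerly: invalidation is just a counter bump, which is what makes the pattern robust even when individual dependant keys live on different nodes.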
