How to check if keyname already exists in EC2 - amazon-ec2

I need to check whether a key pair already exists. Currently I am using create_key_pair, but it throws an error if the key already exists. What is the function to check whether a key name already exists? Similarly with security groups.

Use describe_key_pairs. You can optionally filter by your key name in the call, then check whether your key pair appears in the result list.
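A minimal sketch of that check, with the AWS call stubbed out so the logic is self-contained (the stubbed response mirrors the shape of the EC2 DescribeKeyPairs output; in real code it would come from the SDK):

```python
def key_pair_exists(response, key_name):
    """Return True if key_name appears in a DescribeKeyPairs-style response."""
    return any(kp.get("KeyName") == key_name for kp in response.get("KeyPairs", []))

# In real code the response would come from the SDK, e.g. with boto3:
#   response = boto3.client("ec2").describe_key_pairs()
# Here a stubbed response stands in to show the check itself.
stub = {"KeyPairs": [{"KeyName": "deploy-key"}, {"KeyName": "test-key"}]}
print(key_pair_exists(stub, "deploy-key"))  # True
print(key_pair_exists(stub, "missing"))     # False
```

Note that calling describe_key_pairs with an explicit key-name argument raises an error for a missing key, so catching that client error is the other common pattern.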

Adding keys to Spring Boot vault

I am implementing Spring Boot vault. Whenever I try to add more than one key, only the last one is saved. For example, at this page, https://www.javainuse.com/spring/cloud-vault, they have this example
But when I then query the vault, I see
c:\vault>vault kv get secret/javainuseapp
======= Data =======
Key           Value
---           -----
dbpassword    root
If I set both keys at the same time, it seems to work
c:\vault>vault kv put secret/javainuseapp dbusername=root dbpassword=root
Success! Data written to: secret/javainuseapp
c:\vault>vault kv get secret/javainuseapp
======= Data =======
Key           Value
---           -----
dbpassword    root
dbusername    root
How does one add additional keys?
This is standard usage for the Vault API, and therefore also for the CLI, which is a wrapper around the Go bindings for the REST API. If you want to overwrite a key-value pair with the Vault CLI and retain the former key-value pairs, then you must additionally specify them, as you did in the final example:
vault kv put secret/javainuseapp dbusername=root dbpassword=root
All key-value pairs specified in a single command for a given path are stored together as a new secret version (the version number is an integer equal to the number of writes at that path, unless previous versions are deleted). The earlier key-value pairs are still stored, but at the previous secret version. When you execute vault kv get secret/javainuseapp, you retrieve the secret at the current version, i.e. the one corresponding to the most recent write.
However, note that if the Vault policy or policies for the associated role/user allow patch operations on the secret path, then you can also use the patch subcommand to update a single key-value pair while retaining the others in the newest version of the secret:
vault kv patch secret/javainuseapp dbusername=root
and in that situation the dbpassword key will be retained in the newest secret version.
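The versioning semantics above can be modeled with plain Python dicts; this is purely illustrative (a list of dicts stands in for the KV v2 version history):

```python
# Minimal model of KV v2 write semantics: each put stores a whole new
# version; patch merges into the latest version. Illustrative only.
versions = []

def kv_put(data):
    versions.append(dict(data))          # replace: new version holds only these pairs

def kv_patch(data):
    latest = dict(versions[-1]) if versions else {}
    latest.update(data)                  # merge into the previous pairs
    versions.append(latest)

def kv_get():
    return versions[-1]                  # like "vault kv get": newest version wins

kv_put({"dbusername": "root", "dbpassword": "root"})
kv_put({"dbpassword": "root"})           # as in the question: only one pair survives
print(kv_get())                          # {'dbpassword': 'root'}
kv_patch({"dbusername": "root"})         # patch keeps the existing pair
print(kv_get())                          # {'dbpassword': 'root', 'dbusername': 'root'}
```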

GetStateByPartialCompositeKey by a specific key not working

Currently I'm working with Hyperledger chaincode,
and I have a problem with the method "GetStateByPartialCompositeKey".
The index consists of 3 parts (key1~key2~key3).
If I try GetStateByPartialCompositeKey(index, key1), it works perfectly.
But if I try to search by another key, like GetStateByPartialCompositeKey(index, key3), nothing is returned, although the key is actually saved. How do I solve this problem?
Refer: https://godoc.org/github.com/hyperledger/fabric/core/chaincode/shim#ChaincodeStub.GetStateByPartialCompositeKey
As mentioned in the description of the method, "This function returns an iterator which can be used to iterate over all composite keys whose prefix matches the given partial composite key."
This method needs the prefix, i.e. the leading attributes of the composite key, to match. Even though the method name says "partial" key, it only matches a prefix of the composite key, not an arbitrary part of it.
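The prefix-only behaviour can be seen in a simplified model of composite keys: attributes are joined into one string with a separator (Fabric uses a zero byte), and the query is just a string-prefix scan over the key space. This sketch is an assumption-laden simplification, not the shim implementation:

```python
SEP = "\x00"  # Fabric joins composite-key attributes with a zero-byte separator

def make_composite_key(index, attrs):
    return SEP + index + SEP + SEP.join(attrs) + SEP

def get_state_by_partial_composite_key(store, index, partial):
    # A partial key can only match from the left: the lookup is a
    # string-prefix range query over the sorted key space.
    prefix = SEP + index + SEP + SEP.join(partial) + SEP
    return sorted(k for k in store if k.startswith(prefix))

store = {make_composite_key("idx", ["k1", "k2", "k3"]): b"v"}
print(get_state_by_partial_composite_key(store, "idx", ["k1"]))  # one match
print(get_state_by_partial_composite_key(store, "idx", ["k3"]))  # [] - k3 is not a prefix
```

The usual workaround is to maintain a second composite key (a second index) whose leading attribute is the one you want to query by.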

dexie.js difference between Table.bulkAdd() and Table.bulkPut()

The Dexie.js documentation for Table.bulkAdd()
https://dexie.org/docs/Table/Table.bulkAdd()#remarks
says: Add all given objects to the store.
The Dexie.js documentation for Table.bulkPut()
https://dexie.org/docs/Table/Table.bulkPut()#remarks
says: Add all given objects to the store.
Why are there two functions if they both do the same thing, i.e. create new records? I would have expected bulkPut() to execute updates on existing records.
Am I missing something?
It's a documentation issue; the docs will be updated. The difference between add and put is better described in the docs of Table.put() at https://dexie.org/docs/Table/Table.put(), which explain: "Adds new or replaces existing object in the object store." and "If an object with the same primary key already exists, it will be replaced with the given object. If it does not exist, it will be added."
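The add/put distinction can be sketched with a dict-backed object store; this models the semantics only, not Dexie's API:

```python
# Dict-backed model of an object store keyed by primary key: add fails on an
# existing key, put inserts or replaces (upsert). Illustrative only.
class Store:
    def __init__(self):
        self.rows = {}

    def add(self, key, obj):
        if key in self.rows:
            raise KeyError(f"ConstraintError: key {key!r} already exists")
        self.rows[key] = obj

    def put(self, key, obj):
        self.rows[key] = obj   # upsert: replaces the whole object if present

s = Store()
s.add(1, {"name": "a"})
s.put(1, {"name": "b"})        # replaces the existing record
print(s.rows[1])               # {'name': 'b'}
try:
    s.add(1, {"name": "c"})    # add on an existing key raises
except KeyError:
    print("add failed")
```

bulkAdd() and bulkPut() apply the same rule per object across a whole array.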

ruby aws sdk s3 deletion of objects in folders

I'm using the AWS SDK to delete an object (or objects) from a bucket. The problem is that keys that don't exist still get counted as successfully deleted; shouldn't the SDK raise an error that the key doesn't exist?
The other problem is that an object corresponding to a key that does exist isn't being removed, yet it is reported as successfully deleted.
EDIT:
The second problem only seems to be when the object to be deleted is inside of a folder, in the root it gets deleted fine.
The DELETE object operation for Amazon S3 intentionally returns a success response even when the target object did not exist, because the operation is idempotent by design. For this reason, the aws-sdk gem returns a successful response in the same situation.
A quick clarification on the forward-slash. You can have any number of '/' characters at the beginning of your key, but an object with a preceding '/' is different from the object without. For example:
# public urls for two different objects
http://bucket-name.s3.amazonaws.com/key
http://bucket-name.s3.amazonaws.com//key
Just be consistent on whether you choose to use a slash or not.
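Both points can be illustrated with a dict standing in for the bucket (keys in S3 are plain strings, not paths, so "key" and "/key" are distinct objects); this is a model of the semantics, not the SDK:

```python
# Dict-backed model of a bucket: keys are plain strings, so "key" and "/key"
# are two different objects, and delete succeeds whether or not the key exists.
bucket = {"key": b"data", "/key": b"other"}

def delete_object(bucket, key):
    bucket.pop(key, None)      # idempotent: no error for a missing key
    return {"status": "success"}

print(delete_object(bucket, "folder/key"))  # "succeeds" even though the key is absent
print(delete_object(bucket, "/key"))        # removes only the slash-prefixed object
print(sorted(bucket))                       # ['key']
```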
Turns out the problem was a '/' at the beginning of the key, which I didn't realise was there; I'm not sure why it was there, but it was screwing up the key.

Memcached dependent items

I'm using memcached (specifically the Enyim memcached client) and I would like to be able to make keys in the cache dependent on other keys, i.e. if Key A is dependent on Key B, then whenever Key B is deleted or changed, Key A is also invalidated.
If possible I would also like to make sure that data integrity is maintained if a node in the cluster fails, i.e. if Key B is at some point unavailable, Key A should still be invalid if Key B should become invalid.
Based on this post I believe that this is possible, but I'm struggling to understand the algorithm enough to convince myself how / why this works.
Can anyone help me out?
I've been using memcached quite a bit lately, and I'm sure what you're trying to do with dependencies isn't possible with memcached "as is"; it would need to be handled client-side. Also, data replication should happen server-side and not from the client; these are two different domains. (With memcached at least, given its lack of data-storage logic. The point of memcached is just that: extreme minimalism for better performance.)
For the data replication (protection against a failing physical cluster node) you should check out Membase, http://www.couchbase.org/get/couchbase/current, instead.
For the deps algorithm, I could see something like this in a client: for any given key there is an additional key holding the list/array of dependent keys.
# delete a key, recursing through its dependants first:
function deleteKey( keyname ):
    deps = client.getDeps( keyname )
    foreach ( deps as dep ):
        deleteKey( dep )
    endeach
    memcached.delete( keyname + "_deps" )
    memcached.delete( keyname )
endfunction

# return the list of keynames, or an empty list if the key doesn't exist
function client.getDeps( keyname ):
    return memcached.get( keyname + "_deps" ) or array()
endfunction

# Key "demokey1" and its counterpart "demokey1_deps". In the list of keys stored in
# "demokey1_deps" there are "demokey2" and "demokey3".
deleteKey( "demokey1" )
# This would first perform a memcached get on "demokey1_deps", then, with the
# value returned as a list of keys ("demokey2" and "demokey3"), run deleteKey()
# on each of them.
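A runnable version of this sketch, with a plain dict standing in for memcached (the "_deps" suffix convention follows the pseudocode; illustrative only):

```python
cache = {}  # stands in for memcached

def get_deps(keyname):
    # the "<key>_deps" convention: a sibling key holds the dependants list
    return cache.get(keyname + "_deps", [])

def delete_key(keyname):
    for dep in get_deps(keyname):
        delete_key(dep)                  # recurse into dependants first
    cache.pop(keyname + "_deps", None)   # drop the bookkeeping key too
    cache.pop(keyname, None)

cache.update({
    "demokey1": "v1", "demokey1_deps": ["demokey2", "demokey3"],
    "demokey2": "v2", "demokey3": "v3",
})
delete_key("demokey1")
print(cache)   # {} - demokey1 and both dependants are gone
```

Note that this is not atomic: another client can read a dependant between the recursive deletes, which is one reason the namespacing approach below is often preferred.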
Cheers
It's not a direct solution, but try building a system of namespaces into your memcached keys, e.g. http://www.cakemail.com/namespacing-in-memcached/. In short, keys are generated so that they contain the current values of other memcached keys. In the namespacing scheme, the idea is to invalidate a whole range of keys within a certain namespace: you increment the value of the namespace key, and any keys built against the previous namespace value will no longer match when the key is regenerated.
Your problem looks a little different, but I think that by setting up Key A to be in the Key B "namespace", if the node holding Key B is unavailable, then computing Key A's full namespaced key, e.g.
"Key A|Key B:<whatever Key B's value is>"
will fail, allowing you to determine that B is unavailable and to invalidate the cache lookup for Key A.
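A dict-backed sketch of that namespacing idea (the key format "A|ns:B=<version>" is made up for illustration):

```python
# Namespacing sketch: a key's full name embeds the current value of a
# namespace-version key; bumping that version invalidates every dependent key.
cache = {"ns:B": 1, "A|ns:B=1": "cached value for A"}

def namespaced_key(name, ns):
    version = cache.get("ns:" + ns)
    if version is None:
        return None                # namespace key missing -> treat A as invalid
    return f"{name}|ns:{ns}={version}"

print(cache.get(namespaced_key("A", "B")))   # 'cached value for A'

cache["ns:B"] += 1                 # "change" Key B by bumping its namespace version
print(cache.get(namespaced_key("A", "B")))   # None - the old entry no longer matches
```

The stale "A|ns:B=1" entry is never matched again and simply ages out of the cache, which is why this pattern avoids the atomicity problems of recursive deletion.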
