From the Consul UI, I can see the Vault sealed status changing pretty frequently, which is strange. It appears as Sealed in the Consul GUI, because of which it shows the node in a critical state, when it should actually be Unsealed.
But on the Vault node itself, the sealed status appears as false, as expected; somehow that status is not being reflected the same way in Consul.
Can someone let me know what the possible issue could be and also help me fix it?
I've hit this before, and the reason is that the unseal hasn't taken place on all nodes of the Vault cluster. If you SSH into another node in the Vault cluster and run vault status, you will see that it is still sealed.
This is why the 'Sealed Status' check keeps flapping in Consul instead of staying green.
For the initial unseal, all 3 nodes must be unsealed. If Vault becomes sealed again after this initial unseal, the unseal only needs to take place on 1 node of the cluster.
Steps to solve:

vault0:

    export VAULT_ADDR=http://127.0.0.1:8200   # assuming HTTPS isn't active; this points the Vault CLI at the local Vault API
    vault unseal {unseal key here}            # run xN (once per unseal key) for the initial unseal

vault1: repeat the above.

vault2: repeat the above.
The initial unseal is important; after that, it's not required to unseal all nodes. Also, think long and hard before automating the unseal: the unseal keys are the keys to everything, so handle them with care.
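To confirm which node is still sealed, a quick check like the following helps (a sketch, assuming the nodes are reachable over SSH as vault0/vault1/vault2 and Vault is listening on 8200 without TLS):

    for node in vault0 vault1 vault2; do
      echo "--- $node ---"
      # print only the seal state reported by each node's local Vault API
      ssh "$node" 'VAULT_ADDR=http://127.0.0.1:8200 vault status | grep -i sealed'
    done

Any node still reporting sealed is the one making the Consul health check flap.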
I'm running JanusGraph Server backed by Cassandra. It doesn't allow me to use custom vertex IDs.
I see the following log message when the JanusGraph Gremlin Server is starting:
Local setting graph.set-vertex-id=true (Type: FIXED) is overridden by globally managed value (false). Use the ManagementSystem interface instead of the local configuration to control this setting
I even tried to set this property via the management API, still with no luck:
gremlin> mgmt = graph.openManagement()
gremlin> mgmt.set('graph.set-vertex-id', true)
As the log message already states, this config option has the mutability FIXED which means that it is a global configuration option. Global configuration is described in this section of the JanusGraph documentation.
It states that:
Global configuration options apply to all instances in a cluster.
JanusGraph stores these configuration options in its storage backend which is Cassandra in your case. This ensures that all JanusGraph instances have the same values for these configuration values. Any changes that are made to these options in a local file are ignored because of this. Instead, you have to use the management API to change them which will update them in the storage backend.
But that is already what you tried with mgmt.set(). This doesn't work in this case however because this specific config option has the mutability level FIXED. The JanusGraph documentation describes this as:
FIXED: Like GLOBAL, but the value cannot be changed once the JanusGraph cluster is initialized.
So, this value really cannot be changed in an existing JanusGraph cluster. Your only option is to start with a new cluster if you really need to change this value.
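For reference, the option has to be present in the configuration used when the graph is created for the very first time, so a fresh cluster would need something like this in its graph properties file (a sketch; the file name and the Cassandra connection settings are assumptions):

    # conf/janusgraph-cql.properties (hypothetical) -- only honored when the graph is first created
    storage.backend=cql
    storage.hostname=127.0.0.1
    graph.set-vertex-id=true

Once a graph has been initialized without it, the value stored in the backend wins, which is why editing the local file or calling mgmt.set() afterwards has no effect.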
It is of course unfortunate that the error message suggested to use the management API even though it doesn't work in this case. I have created an issue with the JanusGraph project to improve this error message to avoid such confusion in the future: https://github.com/JanusGraph/janusgraph/issues/3206
I have a simple etcd server running, and I am using a GitHub project called etcd-keeper to visualize the data in etcd.
You can find the etcd-keeper project here: https://github.com/evildecay/etcdkeeper
I have created the root user using etcdctl and everything works fine.
I then needed to create another user with limited view access. So I created another user, test-user, and added a read-only role with the relevant permissions.
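For reference, the users and roles were created along these lines (a rough reconstruction assuming the etcdctl v3 API; the key prefix is only a placeholder):

    etcdctl user add root
    etcdctl auth enable
    etcdctl role add read-only
    # grant read-only access to a key prefix (placeholder prefix)
    etcdctl role grant-permission read-only --prefix=true read /some/prefix/
    etcdctl user add test-user
    etcdctl user grant-role test-user read-only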
Everything is good, but when I try to access the etcd server using etcd-keeper, it doesn't allow me to log in with the test-user credentials unless I have signed in with the root user first.
I don't want to share the root user credentials with the person who logs in as test-user; otherwise there is no point in creating a new user.
I get this warning as below:
Can someone please help me fix this problem? Is this error from the etcd server side? Has anyone used etcd-keeper before?
Thank you.
I have a Spring Boot app which loads a YAML file at startup containing an encryption key that it needs to decrypt properties it receives from Spring config.
Said YAML file is mounted as a Kubernetes secret file at etc/config/springconfig.yaml.
While my Spring Boot app is running, I can still get a shell and view the YAML file with "docker exec -it 123456 sh". How can I prevent anyone from being able to view the encryption key?
You need to restrict access to the Docker daemon. If you are running a Kubernetes cluster, access to the nodes where one could execute docker exec ... should be heavily restricted.
You can delete that file once your process has fully started, provided your app doesn't need to read it again.
OR,
You can set those properties via --env-file, and your app should then read them from the environment. But if someone can still log in to that container, they can read the environment variables too.
OR,
Set those properties as JVM system properties using -D rather than in the system environment; Spring can read properties from the JVM environment too (see the sketch below).
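As a rough illustration of the last option (a sketch; the property name encryption.key, the environment variable, and the jar name are assumptions):

    # pass the key as a JVM system property instead of writing it to a mounted file
    java -Dencryption.key="${ENCRYPTION_KEY}" -jar app.jar

Spring picks up JVM system properties through its standard environment, so a value set with -D can be injected with @Value("${encryption.key}") just like one from a config file; note it is still visible to anyone who can inspect the container's process list.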
In general, the problem is even worse than simple access to the Docker daemon. Even if you prohibit SSH to worker nodes and no one can use the Docker daemon directly, there is still a possibility of reading the secret.
If anyone in the namespace has access to create pods (which includes the ability to create deployments/statefulsets/daemonsets/jobs/cronjobs and so on), they can easily create a pod, mount the secret inside it, and simply read it. Even someone with only the ability to patch pods/deployments and so on can potentially read all secrets in the namespace. There is no way to escape that.
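To make that concrete, anyone with pod-creation rights in the namespace could do something like this (a sketch; the secret name spring-config and the key springconfig.yaml are assumptions based on the question):

    # create a throwaway pod that mounts the secret and prints it to its logs
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-reader
    spec:
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox
        command: ["cat", "/stolen/springconfig.yaml"]
        volumeMounts:
        - name: s
          mountPath: /stolen
      volumes:
      - name: s
        secret:
          secretName: spring-config
    EOF
    kubectl logs secret-reader

No docker exec or node access is involved at all, which is why RBAC around pod creation matters so much.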
For me, that's the biggest security flaw in Kubernetes, and that's why you must be very careful about granting the ability to create and patch pods/deployments and so on. Always limit access per namespace, always exclude secrets from RBAC rules, and always try to avoid handing out pod-creation capability.
A possibility is to use Sysdig Falco (https://sysdig.com/opensource/falco/). This tool watches pod events and can take action when a shell is started in your container. A typical action would be to kill the container immediately, so the secret cannot be read, and Kubernetes will restart the container to avoid service interruption.
Note that you must also forbid access to the node itself to prevent direct Docker daemon access.
You can try mounting the secret as an environment variable. Once your application grabs the secret on startup, it can then unset that variable, rendering the secret inaccessible from then on.
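Wiring the secret in as environment variables can be done without hand-editing the manifest, for example (a sketch; the deployment and secret names are assumptions):

    # inject every key of the secret into the deployment's containers as environment variables
    kubectl set env deployment/my-spring-app --from=secret/spring-config

The same thing can be declared directly in the pod spec with envFrom and a secretRef.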
In the consul ui demo (https://demo.consul.io/ui/) each datacenter has a key called "global/time" which appears to show the current time, and is automatically updated.
Is this a standard feature of consul (couldn't find it in the docs)? If not, how is it implemented?
Doesn't seem like a standard feature of Consul. It can be implemented with a cron script that updates the K/V store in a local Consul agent every few seconds.
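For example, something as small as this, run every minute from cron (or every few seconds from a loop), would produce that behaviour (a sketch; the key name comes from the demo, the timestamp format is an assumption):

    # write the current UTC time into the K/V store via the local Consul agent
    consul kv put global/time "$(date -u +%Y-%m-%dT%H:%M:%SZ)"

The same write can also be made against the HTTP API, e.g. curl -X PUT --data "$(date -u)" http://127.0.0.1:8500/v1/kv/global/time.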
I have been looking into Ansible Vault but want to check something in case I have missed a crucial point.
Do you have to provide the password when running the playbook? Encrypting the data seems a great idea, but if I share the playbook, the person running it will need the password. If they have the password, then they can decrypt the file and see the data.
I would like to use it to set passwords for files, but would like non-admins to be able to run the playbook.
Have I missed something? I am struggling to see its worth if this is the case.
Thanks
The purpose of the vault is to keep secrets encrypted "at rest" (e.g., in your source control repo, on disk), so that someone can't learn the secrets by getting hold of the content. As others have mentioned, if you want to delegate use of the secrets without divulging them, you'll need an intermediary like Tower.
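For completeness, the basic flow looks roughly like this (a sketch; the file and playbook names are assumptions):

    # encrypt the secrets so they are safe at rest / in source control
    ansible-vault encrypt group_vars/all/secrets.yml

    # whoever runs the playbook still has to supply the vault password
    ansible-playbook site.yml --ask-vault-pass
    # or: ansible-playbook site.yml --vault-password-file ~/.vault_pass.txt

Which is exactly the point raised in the question: anyone who can run the playbook unaided can also run ansible-vault view on the file.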
In your case you need something that brokers the Ansible execution, because, as you've said, the encryption would be useless if you share the password.
As mentioned in the comment, you can use Ansible Tower, or you can try setting up a simple HTTP endpoint that triggers Ansible based on specified parameters.