Please tell me how I can do the following:
I create a secret in Vault.
That secret is automatically added to Consul.
When I go to Consul, I see the keys from Vault there.
Please point me to documentation or an article.
What permissions does my Lambda function need in order to retrieve secrets from AWS Secrets Manager and also update them?
You need the secretsmanager:GetSecretValue permission to retrieve secrets and the secretsmanager:UpdateSecret permission to update them.
Note that if you are using a customer-managed AWS KMS key for encryption, you will also need some KMS permissions:
kms:Decrypt for retrieving the secret.
kms:Decrypt and kms:GenerateDataKey for updating the secret.
https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/get-secret-value.html
https://docs.aws.amazon.com/cli/latest/reference/secretsmanager/update-secret.html
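For reference, a minimal sketch of an IAM policy attached to the Lambda's execution role might look like this (the ARNs are placeholders; drop the KMS statement if you use the default AWS-managed key):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["secretsmanager:GetSecretValue", "secretsmanager:UpdateSecret"],
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-app-secret-*"
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Decrypt", "kms:GenerateDataKey"],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/your-key-id"
    }
  ]
}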
If you are using the Lambda functions provided by AWS, then (as described in the docs) you will need: DescribeSecret, GetSecretValue, PutSecretValue, UpdateSecretVersionStage, and GetRandomPassword. If you are using a customer-managed KMS key (CMK), you will also need Decrypt and GenerateDataKey permissions for that CMK (both in the Lambda policy and in the KMS key policy).
If you are seeing Task timed out errors, it is likely that your Lambda either cannot reach the Secrets Manager endpoint (try using a VPC endpoint) or cannot connect to the database (check your security group settings).
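As a sketch, creating an interface VPC endpoint for Secrets Manager with the AWS CLI looks roughly like this (all IDs and the region are placeholders):
$ aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.secretsmanager \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0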
So basically, I have set up an architecture with Consul/Vault in a Kubernetes cluster on AWS. My Vault auto-unseals with AWS KMS when the pods start.
Recently I've done some testing around backing up Vault using consul snapshot.
The scenario I tested is:
1. Take a snapshot of Vault: consul snapshot save vault.prod.snap
2. Remove the Vault data: consul kv delete -recurse vault/
3. Remove the Vault StatefulSets and pods
4. Restore the snapshot: consul snapshot restore vault.prod.snap
5. Finally, re-create the Vault StatefulSets
Result:
I got a 500 error on the third key during the auto-unseal that says:
body {"errors":["failed to decrypt encrypted stored keys: cipher: message authentication failed"]}
I tried another test where I don't clean Vault with consul kv delete -recurse vault/.
I basically just remove a couple of policies in the UI and then restore. That scenario works correctly; it's only when I restore from "scratch" that my Vault cannot unseal anymore.
Could somebody give me a hint, please?
I finally fixed the problem after 3 days.
After fixing the following, my Vault was able to unseal again:
Duplicate node IDs: https://github.com/hashicorp/consul/issues/4741
The Consul server has to be given an ACL token with sufficient rights
For the first issue, I reused a Ruby script and added it to the Consul Docker image to generate a permanent node ID for each Consul agent.
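For illustration, Consul also lets you pin the node ID directly in the agent configuration, so a generated UUID can be persisted per agent (the UUID below is a placeholder):
# /etc/consul.d/node-id.json
{
  "node_id": "d92f2e49-0001-4a6c-b1a5-6f2b1e3c0000"
}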
Regarding the second issue, my ACL policy for the agent was not granting the permissions the agent needs to make the necessary updates.
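For reference, a sketch of an ACL policy along the lines of what Vault's Consul storage backend documents as required (the vault/ prefix matches the default storage path; adjust names to your setup):
key_prefix "vault/" {
  policy = "write"
}
service "vault" {
  policy = "write"
}
node_prefix "" {
  policy = "write"
}
session_prefix "" {
  policy = "write"
}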
I'm deploying my Spring Boot application to Heroku via git deployment. There are passwords and API secrets in my application.yml, and those properties are encrypted with Jasypt. One thing I don't understand is: how do I pass the Jasypt decryption password to the deployed application at startup?
Heroku has Config Vars, but they do not seem secure, considering that all of them can be revealed on the dashboard.
Is there a secure way to pass a password into a deployment?
Config Vars are the accepted mechanism for passing runtime information to apps upon deployment.
They are reasonably secure provided that access to the dashboard is controlled, of course (the settings are never exposed or logged); only the owner can reveal the values.
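For example, assuming you use jasypt-spring-boot, which reads the jasypt.encryptor.password property (Spring Boot's relaxed binding maps the JASYPT_ENCRYPTOR_PASSWORD environment variable onto it), you could set the decryption password as a Config Var:
$ heroku config:set JASYPT_ENCRYPTOR_PASSWORD=your-decryption-password
The dyno then sees it as an ordinary environment variable at startup, and the password never has to live in the repository.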
I have a Minio server running on Debian using systemd, proxied with NGINX and secured with Let's Encrypt. The docs suggest the service is comparable to Amazon S3, but I can't figure out how to actually use it.
Version: 2019-03-27T22:35:21Z
Release-Tag: RELEASE.2019-03-27T22-35-21Z
Commit-ID: 6df05e489dc789cf26e82810cf5cfeefb1d90761
It looks like in order to create a bucket or use the Minio CLI mc, there needs to be a registered TARGET along with an accessKey and secretKey. I can't find that information anywhere on the server, and it's not clear to me how to create a new target.
Here is the /etc/default/minio file:
MINIO_VOLUMES="/usr/local/share/minio"
MINIO_OPTS="-C /etc/minio --address :9000"
There are no files in /etc/minio.
It's running and set up, but how can I start actually using the minio server?
Edit: Config JSON
I tried creating a new config file and entering a new accessKey and secretKey in the credential field. I was not able to sign in to the Minio Browser app using those keys.
Edit: Key Files
I tried entering a new access key and secret key into the files /etc/minio/access_key and /etc/minio/secret_key and adding the following lines to the /etc/default/minio environment file:
MINIO_ACCESS_KEY_FILE="/etc/minio/access_key"
MINIO_SECRET_KEY_FILE="/etc/minio/secret_key"
I restarted the service with systemctl restart minio, but I still can't log in to the Minio Browser app.
The only thing that worked was providing MINIO_ACCESS_KEY and MINIO_SECRET_KEY in the /etc/default/minio environment file. Every other method failed.
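As a sketch, the environment file then looks something like this (the key values here are placeholders), followed by a systemctl restart minio:
# /etc/default/minio
MINIO_VOLUMES="/usr/local/share/minio"
MINIO_OPTS="-C /etc/minio --address :9000"
MINIO_ACCESS_KEY="AKIAIOSFODNN7EXAMPLE"
MINIO_SECRET_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"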
I used the following Ruby call to generate a secret key that resembles the AWS access keys in the examples. Judging by the CLI help text, though, any access key and secret key should work:
SecureRandom.urlsafe_base64(30)
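With the server keys set, registering the server as an mc target (a host alias) and creating a first bucket should look roughly like this (the URL and keys are placeholders):
$ mc config host add myminio https://minio.example.com AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ mc mb myminio/my-first-bucket
$ mc ls myminio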
I want to write a cloud-init script which initializes the REX-Ray Docker plugin (a service which uses AWS credentials in its configuration).
I have considered the following methods. However, these methods have some disadvantages.
Hard-code the access key/secret key in the cloud-init script.
Problem: this is not secure.
Create an IAM role, then obtain the access key and secret key from the instance metadata.
Problem: the access key expires after a certain period, so I would need to restart the REX-Ray daemon process, which would make the service temporarily unavailable.
Please tell me which is the better way to obtain the access key/secret key, or another way if one exists.
Thanks in advance.
The Docker plugin should get the credentials automatically; you don't have to do anything. Do not set any environment variables for AWS credentials.
The AWS CLI and AWS SDKs obtain credentials automatically from the metadata server.
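As a sketch, assuming the rexray/ebs plugin and an instance profile attached to the instance, the cloud-init fragment can install the plugin without embedding any keys (the region is a placeholder):
#cloud-config
runcmd:
  # No EBS_ACCESSKEY/EBS_SECRETKEY set: the plugin falls back to the
  # instance profile credentials served by the metadata service.
  - docker plugin install --grant-all-permissions rexray/ebs EBS_REGION=us-east-1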
You can use the following methods of authentication:
Environment variables
Export both the access and secret keys into the environment as follows:
$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
Shared credentials file
You can use an AWS credentials file to specify your credentials. The default location is $HOME/.aws/credentials on Linux and macOS, or %USERPROFILE%\.aws\credentials for Windows users. If Terraform fails to detect credentials inline or in the environment, it will check this location.
You can optionally specify a different location in the configuration by providing the shared_credentials_file attribute as follows:
provider "aws" {
region = "us-west-2"
shared_credentials_file = "/Users/tf_user/.aws/creds"
profile = "customprofile"
}
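For reference, the matching profile entry inside that credentials file would look something like this (keys are placeholders):
[customprofile]
aws_access_key_id = anaccesskey
aws_secret_access_key = asecretkey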
https://www.terraform.io/docs/providers/aws/