Fetch secrets from HC Vault from a location other than /secrets - Spring

I have to fetch some secrets from HC Vault from a location other than /secrets in my Spring application. These are currently set as PCF environment variables, for example "SPRING_CLOUD_VAULT_KV_APPLICATION-NAME": "secrets", and my new location is secrets/somefolder : secretkeyvalue. Can someone please help me figure out how to achieve this?
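One hedged possibility, assuming the default Spring Cloud Vault key-value backend layout where the secret path is built as {backend}/{application-name}: point the application name at the nested folder instead, via the same PCF environment variable. Whether a nested path is accepted here depends on your Spring Cloud Vault version, so treat the exact value as an assumption to verify:

"SPRING_CLOUD_VAULT_KV_APPLICATION-NAME": "secrets/somefolder"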

Related

How can I get secrets from Azure Key Vault from Databricks without creating a Secret Scope?

I'm trying to find a way to get secrets from Key Vault without creating a secret scope,
OR
create the secret scope automatically using the Databricks CLI (following https://learn.microsoft.com/en-us/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope-using-the-databricks-cli).
For the second option, I'm confused about where to run those commands.
Ideally, can the Databricks CLI be used to retrieve secrets instead of creating the secret scope?
If you want to use dbutils.secrets.get or the Databricks CLI, then you need to have a secret scope created. To create a secret scope using the CLI, you need to run it from a machine that has the Databricks CLI installed, for example your personal computer. Please note that if you're creating a secret scope backed by Key Vault using the CLI, you need to provide an AAD token, not a Databricks PAT. The simplest way to do that is to set environment variables and then use the CLI:
export DATABRICKS_HOST=https://adb-....azuredatabricks.net
export DATABRICKS_TOKEN=$(az account get-access-token -o tsv \
  --query accessToken --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d)
databricks secrets create-scope ...
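For illustration, a Key Vault-backed scope creation with the legacy Databricks CLI looks roughly like this; the scope name, Key Vault resource ID, and DNS name below are placeholders, not values from the original question:

databricks secrets create-scope --scope my-kv-scope \
  --scope-backend-type AZURE_KEYVAULT \
  --resource-id /subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.KeyVault/vaults/<kv-name> \
  --dns-name https://<kv-name>.vault.azure.net/

Once the scope exists, notebooks can read values with dbutils.secrets.get(scope="my-kv-scope", key="<secret-name>").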

Cassandra Astra securely deploying to heroku

I am developing an app using Python and Cassandra (Astra) and trying to deploy it on Heroku.
The problem is that connecting to the database requires the credentials zip file to be present locally - https://docs.datastax.com/en/astra/aws/doc/dscloud/astra/dscloudConnectPythonDriver.html
'/path/to/secure-connect-database_name.zip'
and Heroku does not support uploading credentials files.
I can configure the username and password as environment variables, but the credentials zip file can't be configured as an environment variable.
heroku config:set CASSANDRA_USERNAME=cassandra
heroku config:set CASSANDRA_PASSWORD=cassandra
heroku config:set CASSANDRA_KEYSPACE=mykeyspace
Is there any way I can use the zip file as an environment variable? I thought of extracting all the files and configuring each file as an environment variable in Heroku,
but I am not sure what to specify instead of Cluster(cloud=cloud_config, auth_provider=auth_provider) if I start using the extracted files from environment variables.
I know I can check the credentials zip into my private git repo, and that way it works, but checking in credentials does not seem secure.
Another idea that came to my mind was to store it in S3 and get the file during deployment and extract it inside the temp directory for usage.
Any pointers or help is really appreciated.
If you can check the secure bundle into the repo, then it should be easy: you just need to point to it from the cloud config map, and take the username/password from the configured secrets via environment variables:
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
import os

cloud_config = {
    'secure_connect_bundle': '/path/to/secure-connect-dbname.zip'
}
auth_provider = PlainTextAuthProvider(
    username=os.environ['CASSANDRA_USERNAME'],
    password=os.environ['CASSANDRA_PASSWORD'])
cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider)
session = cluster.connect()
The idea of storing the file on S3 and downloading it isn't bad either (see the sketch below). You can implement it in the script itself to fetch the file, and you can use environment variables to pass the S3 credentials as well, so the file won't be accessible in the repository, plus it would be easier to swap out the secure bundle if necessary.
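A minimal sketch of that approach, assuming boto3 is installed and that SECURE_BUNDLE_BUCKET and SECURE_BUNDLE_KEY are hypothetical environment variables you define yourself alongside the AWS credentials:

import os
import boto3

# Download the secure connect bundle from S3 into a temp location at startup;
# boto3 picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment.
bundle_path = '/tmp/secure-connect-dbname.zip'
s3 = boto3.client('s3')
s3.download_file(os.environ['SECURE_BUNDLE_BUCKET'],
                 os.environ['SECURE_BUNDLE_KEY'],
                 bundle_path)

# Then point the cloud config at the downloaded bundle and build the
# auth provider and Cluster exactly as in the snippet above.
cloud_config = {'secure_connect_bundle': bundle_path}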

Access environment variables stored in Google Secret Manager from Google Cloud Build

How can I access the variables I define in Google Secret Manager from my Google Cloud Build pipeline?
You can access secrets from Cloud Build by using the standard gcloud Cloud Builder.
But there are two issues:
If you want to use the secret value in another Cloud Build step, you have to store the secret in a file; that's the only way to reuse a value from one step in another one.
The current gcloud Cloud Builder isn't up to date (as of today, 03 Feb 2020). You have to add a gcloud components update to get the correct version. I opened an issue for this.
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: "bash"
    args:
      - "-c"
      - |
        gcloud components update
        # Store the secret in a temporary file
        gcloud beta secrets versions access --secret=MySecretName latest > my-secret-file.txt
  - name: AnotherCloudBuildStepImage
    entrypoint: "bash"
    args:
      - "-c"
      - |
        # Read the secret and pass it to a command/script
        ./my-script.sh $(cat my-secret-file.txt)
Remember to grant the Secret Manager Secret Accessor role (roles/secretmanager.secretAccessor) to the Cloud Build default service account <PROJECT_ID>@cloudbuild.gserviceaccount.com.
EDIT
You can access the secret from anywhere: either with the gcloud CLI installed (and initialized with a service account authorized to access the secrets), or via an API call:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://secretmanager.googleapis.com/v1beta1/projects/PROJECT_ID/secrets/MySecretName/versions/latest:access
Note: you receive the secret in the data field, in base64-encoded format. Don't forget to decode it before using it!
You have to generate an access token for a service account with the correct role granted. Here I again use gcloud because it's easier, but depending on your platform, use the most appropriate method. A Python script can also do the job.
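For instance, a short sketch using the google-cloud-secret-manager Python client; the library choice is an assumption about your environment, and the project and secret names simply mirror the curl example above:

from google.cloud import secretmanager

# Uses Application Default Credentials, e.g. a service account that has
# roles/secretmanager.secretAccessor on the secret.
client = secretmanager.SecretManagerServiceClient()
name = "projects/PROJECT_ID/secrets/MySecretName/versions/latest"
response = client.access_secret_version(request={"name": name})
secret_value = response.payload.data.decode("UTF-8")  # payload.data is raw bytes here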
EDIT 2
A new way to get secrets now exists in Cloud Build, based on the availableSecrets field of the build config. Less boilerplate, safer. Have a look and use this way now.
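For reference, a minimal sketch of that approach; the secret and script names are reused from the example above, and the exact layout is something to double-check against the current Cloud Build documentation:

steps:
  - name: AnotherCloudBuildStepImage
    entrypoint: "bash"
    secretEnv: ["MY_SECRET"]
    args:
      - "-c"
      - |
        # The secret is exposed to this step as the MY_SECRET environment variable
        ./my-script.sh "$$MY_SECRET"
availableSecrets:
  secretManager:
    - versionName: projects/PROJECT_ID/secrets/MySecretName/versions/latest
      env: "MY_SECRET"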

AWS S3 authentication from Windows

I am using Pentaho (8.1) from a Windows environment (remote desktop).
To upload files to S3 I am using config & credentials files.
When I use the default file locations %USERPROFILE%\.aws\config and %USERPROFILE%\.aws\credentials it works fine.
I don't want every user to manually handle the credentials file, so I would like to use the same location for all users.
I have set environment variables:
AWS_SHARED_CREDENTIALS_FILE D:\data.aws\credentials
AWS_CONFIG_FILE D:\data.aws\config
But it looks like it doesn't pick up this location correctly.
I am sure that the files in %USERPROFILE% are actually used. I have also done a full restart after changing the variables, but it doesn't help.
Is there something I am missing in the configuration?
If you are willing to set environment variables, then you can simply put the credentials in environment variables for each user:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
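For example, on Windows these could be set machine-wide with setx so every user picks them up. The key values below are placeholders, the /M switch needs an elevated prompt, and you should verify this fits your security policy before baking keys into system variables:

setx AWS_ACCESS_KEY_ID "AKIAXXXXXXXXXXXXXXXX" /M
setx AWS_SECRET_ACCESS_KEY "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" /M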

How to switch between multiple AWS accounts with Zappa

I am experimenting with how to deploy Lambdas into different AWS accounts in a continuous delivery environment, and at the moment I am stuck with that. Can you please give me a clue about this? As an example, with the AWS CLI we can define which profile to use:
Ex: aws s3 ls --profile account2
In the AWS config file, we define the profile as follows.
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
[account2]
aws_access_key_id = XXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Can we use the same approach with Zappa deployments?
I'd highly appreciate any clue to solving this issue.
There is an option to nominate the profile name; did you try it?
"profile_name": "your-profile-name", // AWS profile credentials to use. Default 'default'. Removing this setting will use the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables instead.
https://github.com/Miserlou/Zappa/blob/b12bc67aac00b1302a7f9b18444a51f21deac46a/README.md
You can define which profile to use on your own using Zappa's setting:
"profile_name": "your-profile-name", // AWS profile credentials to use. Default 'default'. Removing this setting will use the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables instead.
But in your CI you first have to create your AWS config file and populate it with your profile from environment variables that are set in your CI's web interface.
In CircleCI (same would be done for TravisCI) I'm doing it like this for my mislavcimpersak profile:
mkdir -p ~/.aws
echo -e "[mislavcimpersak]" >> ~/.aws/credentials
echo -e "aws_access_key_id = "$AWS_ACCESS_KEY_ID >> ~/.aws/credentials
echo -e "aws_secret_access_key = "$AWS_SECRET_ACCESS_KEY >> ~/.aws/credentials
Complete working CircleCI config file is available in my repo:
https://github.com/mislavcimpersak/xkcd-excuse-generator/blob/master/.circleci/config.yml#L58-L60
And also complete working TravisCI config file:
https://github.com/mislavcimpersak/xkcd-excuse-generator/blob/feature/travis-ci/.travis.yml#L25-L29
Also, as it says in Zappa's docs:
Removing this setting will use the AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY environment variables instead
So you can remove "profile_name": "default" from your zappa_settings.json and set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in your CI's web interface. Zappa should be able to use those.
