Source Azure Service principal yaml - bash

I am writing a shell script to delete virtual machines in Azure. As part of that, I need to access a YAML file (shown below) that contains Azure service principals for different subscriptions. I am not sure how to load this YAML file in my script.
123456-5897-1223357-7889:
  subscription_id: "123456789"
  client_id: "123456789"
  secret: "123456789"
  tenant: "1234567899"
  azure_cloud_environment: "AzureCloud"
578945-5897-1223357-7889:
  subscription_id: "987456123"
  client_id: "987456123"
  secret: "987456123"
  tenant: "987456123"
  azure_cloud_environment: "AzureCloud"
Is there a way to source this file the way we do in GCP (below), or some other way to load the values from the YAML file?
gcloud auth activate-service-account --key-file="/tmp/project1.json"
gcloud config set project project1

I don't think it's possible to do that. The only auth option you have with a file is replacing .azure/azureProfile.json under the user's profile (I can't recall for sure right now; it might be another file under the .azure folder), but its format is completely different (I'm not sure where you got these files).
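If the end goal is just to log in per subscription using the values in that YAML, one possible workaround is to parse the file yourself and call the Azure CLI. A minimal sketch in Python, assuming PyYAML and the az CLI are available (the file name and top-level key below are placeholders taken from the example):

import subprocess
import yaml

# Load the service principals file (path and key are placeholders)
with open("service_principals.yaml") as f:
    principals = yaml.safe_load(f)
sp = principals["123456-5897-1223357-7889"]

# Log in with the service principal, then select its subscription
subprocess.run(["az", "login", "--service-principal",
                "-u", sp["client_id"], "-p", sp["secret"],
                "--tenant", sp["tenant"]], check=True)
subprocess.run(["az", "account", "set",
                "--subscription", sp["subscription_id"]], check=True)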

Related

Cassandra Astra: securely deploying to Heroku

I am developing an app using Python and Cassandra (Astra provider) and trying to deploy it on Heroku.
The problem is that connecting to the database requires the credentials zip file to be present locally: https://docs.datastax.com/en/astra/aws/doc/dscloud/astra/dscloudConnectPythonDriver.html
'/path/to/secure-connect-database_name.zip'
and Heroku does not have support for uploading credentials files.
I can configure the username and password as environment variables, but the credentials zip file can't be configured as an environment variable.
heroku config:set CASSANDRA_USERNAME=cassandra
heroku config:set CASSANDRA_PASSWORD=cassandra
heroku config:set CASSANDRA_KEYSPACE=mykeyspace
Is there any way I can use the zip file as an environment variable? I thought of extracting all the files and configuring each one as an environment variable in Heroku,
but I am not sure what to specify instead of Cluster(cloud=cloud_config, auth_provider=auth_provider) if I start using the extracted files from environment variables.
I know I can check the credentials zip into my private git repo and it works that way, but checking in credentials does not seem secure.
Another idea that came to my mind was to store it in S3, fetch the file during deployment, and extract it into a temp directory for use.
Any pointers or help is really appreciated.
If you can check the secure bundle into the repo, then it should be easy - you just need to point to it from the cloud config map and take the username/password from the configured secrets via environment variables:
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
import os

# Point the driver at the secure bundle checked into the repo
cloud_config = {
    'secure_connect_bundle': '/path/to/secure-connect-dbname.zip'
}
# Username/password come from the Heroku config vars
auth_provider = PlainTextAuthProvider(
    username=os.environ['CASSANDRA_USERNAME'],
    password=os.environ['CASSANDRA_PASSWORD'])
cluster = Cluster(cloud=cloud_config, auth_provider=auth_provider)
session = cluster.connect()
The idea of storing the file on S3 and downloading it isn't bad either. You can implement it in the script itself: fetch the file at startup, and use environment variables to pass the S3 credentials as well, so the file won't be kept in the repository, plus it would be easier to swap out secure bundles if necessary.
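A rough sketch of that S3 approach with boto3 (the bucket env var, object key and local path below are placeholders; AWS credentials are expected in the usual AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY environment variables):

import os
import boto3
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

# Download the secure bundle at startup instead of shipping it in the repo
bundle_path = '/tmp/secure-connect-dbname.zip'
boto3.client('s3').download_file(
    os.environ['BUNDLE_BUCKET'], 'secure-connect-dbname.zip', bundle_path)

cloud_config = {'secure_connect_bundle': bundle_path}
auth_provider = PlainTextAuthProvider(
    username=os.environ['CASSANDRA_USERNAME'],
    password=os.environ['CASSANDRA_PASSWORD'])
session = Cluster(cloud=cloud_config, auth_provider=auth_provider).connect()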

Access environment variables stored in Google Secret Manager from Google Cloud Build

How can I access the variables I define in Google Secret Manager from my Google Cloud Build pipeline?
You can access secrets from Cloud Build by using the standard gcloud Cloud Builder.
But there are two issues:
If you want to use the secret value in another Cloud Build step, you have to store the secret in a file; that is the only way to reuse a value from one step in another.
The current gcloud Cloud Builder isn't up to date (today, 3 Feb 2020). You have to add a gcloud components update to get the correct version. I opened an issue for this.
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: "bash"
    args:
      - "-c"
      - |
        gcloud components update
        # Store the secret in a temporary file
        gcloud beta secrets versions access --secret=MySecretName latest > my-secret-file.txt
  - name: AnotherCloudBuildStepImage
    entrypoint: "bash"
    args:
      - "-c"
      - |
        # Read the secret and pass it to a command/script
        ./my-script.sh $(cat my-secret-file.txt)
Remember to grant the role Secret Manager Secret Accessor (roles/secretmanager.secretAccessor) to the Cloud Build default service account <PROJECT_ID>@cloudbuild.gserviceaccount.com
EDIT
You can access the secret from anywhere, either with the gcloud CLI installed (and initialized with a service account authorized to access secrets) or via an API call:
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
https://secretmanager.googleapis.com/v1beta1/projects/PROJECT_ID/secrets/MySecretName/versions/latest:access
Note: You receive the secret in the data field, base64 encoded. Don't forget to decode it before using it!
You have to generate an access token for a service account with the correct role granted. Here I use gcloud again, because it's easier, but depending on your platform, use the most appropriate method. A Python script can also do the job.
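For example, a minimal sketch with the google-cloud-secret-manager Python client (PROJECT_ID and MySecretName are placeholders, and the exact request signature can differ slightly between library versions):

from google.cloud import secretmanager

# Uses Application Default Credentials; the caller needs roles/secretmanager.secretAccessor
client = secretmanager.SecretManagerServiceClient()
name = "projects/PROJECT_ID/secrets/MySecretName/versions/latest"
response = client.access_secret_version(request={"name": name})
secret_value = response.payload.data.decode("utf-8")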
EDIT 2
A new way to get secrets now exists in Cloud Build. Less boilerplate, safer. Have a look and use this approach now.

How to store a small confidential file in PCF so that a Spring Boot app can access it

I have to store my Kafka Keystore.jks and Truststore.jks files in PCF so that my Spring Boot app running there can use them to access the cluster.
The solution that I have in my mind is as follows.
My Jenkins pipeline is hooked up to HashiCorp Vault, so I can keep the Base64-encoded content in Vault and read it from there during deployment. But I don't know how to dump that content as a file on the PCF VM before my Java app starts. I tried to pursue the .profile route; unfortunately, the Java buildpack doesn't support .profile. Any pointers will be greatly appreciated. Thank you!
A few options:
Put your JKS files, which are password protected, inside your app as files. Pass in the password to read them using an environment variable, a user-provided service, Spring Config Server, or the CredHub service broker.
Same as #1, but create a small custom buildpack that installs your JKS files. Use the platform's multi-buildpack functionality to run your custom buildpack first and the Java buildpack second. This option is a little more convenient if you have lots of apps using the same JKS files.
Base64 encode your JKS files and stuff them into an environment variable, a user-provided service, Spring Config Server, or the CredHub service broker. Retrieve and decode them as your app starts, either in a .profile file or in the app itself.
When building your JAR file, you can run jar uf <path/to/file.jar> .profile and it will add the .profile file to the root of your JAR.
You can confirm it's in the right place by running jar tf <path/to/file.jar>. The output should look like this...
...
BOOT-INF/lib/jackson-annotations-2.9.0.jar
BOOT-INF/lib/jackson-core-2.9.6.jar
BOOT-INF/lib/reactive-streams-1.0.2.jar
BOOT-INF/lib/logback-core-1.2.3.jar
BOOT-INF/lib/log4j-api-2.10.0.jar
<rest of your files here>
...
.profile
Note how there is no path in front of .profile. That's where it needs to be to work properly.
Hope that helps!
You can use the spring-cloud-vault plugin to let Spring Boot bootstrap your secrets from Vault on application startup.
Spring-cloud-vault plugin
But I don't understand why you need to put your JKS files in Vault.
These JKS files are password protected, which means you can bundle them as a resource inside your JAR and just fetch the password needed to open them from Vault on application startup.
I've created a new supply buildpack to help solve this
https://github.com/starkandwayne/dot-profile-buildpack
cf push javaapp --path build/jibs/myapp-1.0.0.jar \
-b https://github.com/starkandwayne/dot-profile-buildpack \
-b java_buildpack \
--no-start
cf set-env javaapp PROFILED "$(cat <<SHELL
#!/bin/bash
cat > config.json <<JSON
{
"some": "config"
}
JSON
SHELL
)"
cf start javaapp
Of course, you could also set the $PROFILED env var within your cf push -f manifest.yml.

Symlink Secret in Kubernetes

I'm trying to use the Google Sheets and Gmail APIs, and I'd like to access the credentials file as a K8s Secret (Secrets seem to be mounted as symlinks).
However, the google oauth2 python client specifically says that credential files cannot be symbolic links.
Is there a workaround for this?
Is there a workaround for this?
There are at least two that I can think of off-hand: environment variables, or an initialization mechanism through which the symlinks are copied to regular files.
Hopefully the first one is straightforward, using env: valueFrom: secretKeyRef: etc.
And for the second approach, I lumped them into "initialization mechanism" because it will depend on your preference between the 3 ways I can immediately think of to do this trick.
Using an initContainer: and a Pod-scoped volume: emptyDir: would enable you to copy the secret to a volume that is shared amongst your containers, and that directory will be cleaned up by Kubernetes when your Pod is destroyed.
Using an explicit command: to run some shell before launching your actual application:
command:
  - bash
  - -ec
  - |
    cp /path/to/my/secret/* ./my-secret-directory/
    ./bin/launch-my-actual-server
Or, finally (and I would guess you have already considered this), have the application actually read in the contents and then write them back to a file of your choice.
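A tiny sketch of that last approach in Python (both paths below are placeholders for wherever the secret is mounted and wherever you want the plain file):

import shutil

# The mounted secret is a symlink; copy it to a regular file first,
# then point the Google client libraries at the copy
shutil.copyfile("/var/secrets/google/credentials.json", "/tmp/credentials.json")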

Parse Server S3 Adapter Deprecated

The Parse S3 Adapter's requirement of S3_ACCESS_KEY and S3_SECRET_KEY is now deprecated. It says to use the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. We have set up an AWS user with an Access Key ID and we have our secret key as well. We have updated to the latest version of the adapter and removed our old S3_X_Key variables. Unfortunately, as soon as we do this we are unable to access, upload or change files on our S3 bucket. The user does have access to our bucket's properties, and if we change it back to use the explicit S3_ACCESS_KEY and secret, everything works.
We are hosting on Heroku and haven't had any issues until now.
What else needs to be done to set this up?
This deprecation notice is very vague on how to fix this.
(link to notice: https://github.com/parse-server-modules/parse-server-s3-adapter#deprecation-notice----aws-credentials)
I did the following steps and it's working now:
Installed Amazon's CLI
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
Configured the CLI by creating a user and then creating a key ID and secret
http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
Set the S3_BUCKET env variable
export S3_BUCKET=
Installed the files adapter using the command:
npm install --save @parse/s3-files-adapter
In my parse-server's index.js, I added the files adapter:
var S3Adapter = require('@parse/s3-files-adapter');
var s3Adapter = new S3Adapter();

var api = new ParseServer({
  appId: 'my_app',
  masterKey: 'master_key',
  filesAdapter: s3Adapter
})
Arjav Dave's answer below is best if you are using AWS or a hosting solution where you can log in to the server and run the aws configure command there, or if you are running everything locally.
However, I was asking about Heroku and this goes for any server environment where you can set ENV variables.
Really it comes down to just a few steps. If you have a previous version set up, you are going to switch your files adapter to just read:
filesAdapter: 'parse-server-s3-adapter',
(or whatever your npm-installed package is called; some are using the @parse/... one)
Take out the require statement and don't create any instance variables of S3Adapter or anything like that in your index.js.
Then on Heroku.com create config vars, or use the CLI: heroku config:set AWS_ACCESS_KEY_ID=abc and heroku config:set AWS_SECRET_ACCESS_KEY=abc
Now run and test your uploading. All should be good.
The new adapter uses the environment variables for access and you just have to tell it what file adapter is installed in the index.js file. It will handle the rest. If this isn't working it'll be worth testing the IAM profile setup and making sure it's all working before coming back to this part. See below:
Still not working? Try running this example (edit sample.js to be your bucket when testing):
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-started-nodejs.html
Completely lost and no idea where to start?
1. Get Your AWS Credentials:
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/getting-your-credentials.html
2. Set Up Your Bucket:
https://transloadit.com/docs/faq/how-to-set-up-an-amazon-s3-bucket/
(follow the part on IAM users as well)
3. Follow IAM Best Practices:
https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Then go back to the top of this posting.
Hope that helps anyone else that was confused by this.
